Commit a5c6da9

Author: Dhruva Shaw
Commit message: methodology
Parent: 378643c


_projects/mcba.md

Lines changed: 47 additions & 41 deletions
## Abstract
Advancements in bionic technology are transforming the possibilities for restoring hand function in individuals with amputations or paralysis. This paper introduces a **cost-effective bionic arm** design that leverages **mind-controlled functionality** and integrates a **sense of touch** to replicate natural hand movements. The system utilizes a **non-invasive EEG-based control mechanism**, enabling users to operate the arm with brain signals that are processed into PWM commands driving the arm's servo motors. Additionally, the design incorporates a touch sensor (tactile feedback) in the gripper, offering sensory feedback to enhance user safety and dexterity.
The proposed bionic arm prioritizes three essential features:
1. **Integrated Sensory Feedback**: Providing users with a tactile experience to mimic the sense of touch (signals going directly to the brain). This capability is crucial for safe object manipulation and injury prevention.
2. **Mind-Control Potential**: Harnessing EEG signals for seamless, thought-driven operation.
3. **Non-Invasive Nature**: Ensuring user comfort by avoiding invasive surgical procedures.
This novel approach aims to deliver an intuitive, natural, and efficient solution for restoring complex hand functions.
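
As a minimal illustration of the control path described above, the sketch below maps a classified intent to a hobby-servo PWM pulse width in Python. The label names, angle choices, and the 50 Hz/1.0–2.0 ms servo convention are assumptions for illustration, not the project's firmware.

```python
# Hypothetical mapping from a classified EEG intent to a gripper angle.
INTENT_TO_ANGLE = {"CLH": 0.0, "CRH": 180.0, "Rest": 90.0}

def angle_to_pulse_ms(angle_deg: float) -> float:
    """Linearly map 0-180 degrees onto a standard 1.0-2.0 ms servo pulse."""
    return 1.0 + (angle_deg / 180.0) * 1.0

def pwm_command(intent: str) -> float:
    """Return the pulse width (ms) for a classified intent label."""
    angle = INTENT_TO_ANGLE.get(intent, 90.0)  # unknown intent -> neutral
    return angle_to_pulse_ms(angle)

print(pwm_command("CLH"))  # -> 1.0 ms pulse (fully closed, per this mapping)
```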
---
## Methodology
### 1. Data Collection and Dataset Overview
The model development utilized a publicly available EEG dataset comprising data from **60 volunteers** performing **8 distinct activities** [3]. The dataset includes a total of **8,680 four-second EEG recordings**, collected using **16 dry electrodes** configured according to the **international 10-10 system** [3].
* **Electrode Configuration**: Monopolar configuration, where each electrode's potential was measured relative to neutral electrodes placed on both earlobes (ground references).
* **Signal Sampling**: EEG signals were sampled at **125 Hz** and preprocessed using the following filters (a Python sketch follows this list):
  - **A bandpass filter (5–50 Hz)** to isolate relevant frequencies [3].
  - **A notch filter (60 Hz)** to remove powerline interference [3].
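
A minimal Python sketch of this filtering stage, assuming fourth-order Butterworth and Q = 30 notch settings (the dataset specifies the cutoffs, but not these exact filter parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 125.0  # EEG sampling rate in Hz

def preprocess_channel(x: np.ndarray) -> np.ndarray:
    # Bandpass 5-50 Hz to isolate the relevant EEG frequencies [3].
    b, a = butter(4, [5.0, 50.0], btype="bandpass", fs=FS)
    x = filtfilt(b, a, x)
    # Notch at 60 Hz to suppress powerline interference [3].
    bn, an = iirnotch(w0=60.0, Q=30.0, fs=FS)
    return filtfilt(bn, an, x)

# One four-second recording is 4 s * 125 Hz = 500 samples.
clean = preprocess_channel(np.random.randn(500))
```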
### 2. Data Preprocessing
The dataset, originally provided in **CSV format**, underwent a comprehensive preprocessing workflow:
* The data was split into individual CSV files for each of the 16 channels, resulting in an increase from **74,441** files to **1,191,056** files.
* Each individual channel's EEG data was converted into **audio signals** and saved in **.wav format**, allowing the brain signals to be analyzed as audio (see the conversion sketch after this list).
* The entire preprocessing workflow was implemented in **Python** to ensure scalability and accuracy.
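
A sketch of the per-channel CSV-to-.wav step, written at the recording rate of 125 Hz; the file layout and column names here are assumptions for illustration, not the project's actual scripts:

```python
import numpy as np
import pandas as pd
from scipy.io import wavfile

FS = 125  # write the audio at the EEG sampling rate

def csv_channel_to_wav(csv_path: str, channel: str, wav_path: str) -> None:
    signal = pd.read_csv(csv_path)[channel].to_numpy(dtype=np.float64)
    peak = np.max(np.abs(signal)) or 1.0   # avoid dividing by zero
    pcm = np.int16(signal / peak * 32767)  # normalize to 16-bit PCM
    wavfile.write(wav_path, FS, pcm)

# Splitting each of the 74,441 recordings into 16 single-channel files
# gives 74,441 * 16 = 1,191,056 files, matching the count above.
```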

The dataset captured brainwave signals corresponding to the following activities (an illustrative label encoding follows the list):
1) **BEO** (Baseline with Eyes Open): One-time recording at the beginning of each run [3].
2) **CLH** (Closing Left Hand): Five recordings per run [3].
3) **CRH** (Closing Right Hand): Five recordings per run [3].
4) **DLF** (Dorsal Flexion of Left Foot): Five recordings per run [3].
5) **PLF** (Plantar Flexion of Left Foot): Five recordings per run [3].
6) **DRF** (Dorsal Flexion of Right Foot): Five recordings per run [3].
7) **PRF** (Plantar Flexion of Right Foot): Five recordings per run [3].
8) **Rest**: Recorded between each task to capture the resting state [3] [4].
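
For reference, an illustrative label encoding for these eight classes; the index assignment is an assumption, not taken from the dataset documentation:

```python
ACTIVITIES = ["BEO", "CLH", "CRH", "DLF", "PLF", "DRF", "PRF", "Rest"]
LABEL_TO_INDEX = {name: i for i, name in enumerate(ACTIVITIES)}
assert len(LABEL_TO_INDEX) == 8  # one index per activity class
```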
### 3. Feature Extraction and Classification
Feature extraction and activity classification were performed using **transfer learning** with **YamNet** [5], a deep neural network model.
* **Audio Representation**: Audio files were imported into **MATLAB** using an **Audio Datastore** [6]. Mel-spectrograms, a time-frequency representation of the audio signals, were extracted using the yamnetPreprocess [7] function [8] (an analogous Python sketch follows this list).
* **Dataset Split**: The data was divided into **training (70%)**, **validation (20%)**, and **testing (10%)** sets.
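
The original pipeline uses MATLAB's Audio Datastore and yamnetPreprocess; the following is an analogous Python sketch (librosa mel-spectrograms plus a stratified 70/20/10 split), not the authors' code, and its parameter values are assumptions:

```python
import librosa
import numpy as np
from sklearn.model_selection import train_test_split

def to_log_mel(wav_path: str) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=None)  # keep the file's own rate
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    return librosa.power_to_db(mel)  # log-mel time-frequency representation

def split_70_20_10(files, labels):
    # Hold out 30%, then split that 30% into 20% + 10% of the total.
    f_tr, f_rest, y_tr, y_rest = train_test_split(
        files, labels, test_size=0.30, stratify=labels, random_state=0)
    f_val, f_te, y_val, y_te = train_test_split(
        f_rest, y_rest, test_size=1 / 3, stratify=y_rest, random_state=0)
    return (f_tr, y_tr), (f_val, y_val), (f_te, y_te)
```
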
Transfer Learning with YamNet [5] [8]:
- The **pre-trained YamNet model** (86 layers) was adapted for an 8-class classification task:
  + The initial layers of YamNet [5] were **frozen** to retain previously learned representations [8].
  + A **new classification layer** was added to the model [8].
- Training details (see the sketch after this list):
  + **Learning Rate**: Initial rate of **3e-4**, with an exponential learning rate decay schedule [8].
  + **Mini-Batch Size**: 128 samples per batch.
  + **Validation**: Performed every **651 iterations**.
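
A hypothetical Keras re-creation of this setup, shown only to make the steps concrete: freeze the backbone's early layers, attach a fresh 8-class head, and train with an exponentially decaying learning rate from 3e-4 in mini-batches of 128. Here `backbone` stands in for the 86-layer YamNet network, and the decay parameters are assumptions:

```python
import tensorflow as tf

NUM_CLASSES = 8

def build_classifier(backbone: tf.keras.Model) -> tf.keras.Model:
    for layer in backbone.layers[:-10]:  # freeze the initial layers
        layer.trainable = False
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # new head
    ])
    lr = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=3e-4, decay_steps=651, decay_rate=0.9)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would then be: model.fit(train_ds.batch(128),
#                                   validation_data=val_ds.batch(128))
```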
### 4. Robotic Arm Design and Simulation
A **3-Degree-of-Freedom (DOF) robotic arm** was designed using **MATLAB Simulink** and **Simscape toolboxes**. To ensure robust validation:
* A **virtual environment** was developed in Simulink, simulating the interactions between the trained AI models and the robotic arm.
* The simulations served as a testbed to evaluate the system's performance before real-world integration (a minimal kinematics sketch follows).
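
The Simulink/Simscape model itself is not reproduced here; as a stand-in, this planar 3-DOF forward-kinematics sketch (with placeholder link lengths) shows the kind of sanity check the virtual environment enables:

```python
import math

LINKS = [0.10, 0.08, 0.05]  # link lengths in metres (placeholder values)

def forward_kinematics(thetas):
    """End-effector (x, y) of a planar arm, joint angles in radians."""
    x = y = total = 0.0
    for length, theta in zip(LINKS, thetas):
        total += theta  # joint angles accumulate along the chain
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

print(forward_kinematics([0.0, math.pi / 4, -math.pi / 4]))
```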
### 5. Project Progress and Future Directions
_Completed Tasks_:
1. **AI Model Development**: Successfully trained models to classify human activities based on EEG signals.
2. **Robotic Arm Design**: Designed a functional 3-DOF robotic arm with simulated controls.
3. **Virtual Simulation**: Validated AI-robotic arm interactions in a virtual environment.
_Future Directions_:
1. **Hardware Integration**: Implement the developed AI models into physical robotic hardware for real-world testing.
2. **Real-Time EEG Acquisition**: Develop a system for **real-time EEG data acquisition** and activity classification.
3. **Tactile Feedback System**: Integrate tactile sensors with the robotic arm for **real-world sensory feedback**, complemented by Simulink-based simulations.

---

## Protocols
Here is the step-by-step protocol to reproduce our work.
<iframe src="https://www.protocols.io/widgets/doi?uri=dx.doi.org/10.17504/protocols.io.n92ldr869g5b/v1" style="width: 520px; height: 300px; border: 1px solid transparent;"></iframe>
