## Abstract

Advancements in bionic technology are transforming the possibilities for restoring hand function in individuals with amputations or paralysis. This paper introduces a **cost-effective bionic arm** design that leverages **mind-controlled functionality** and integrates a **sense of touch** to replicate natural hand movements. The system utilizes a **non-invasive EEG-based control mechanism**, enabling users to operate the arm with brain signals that are processed into PWM commands for the arm's servo motors. Additionally, the design incorporates a touch sensor in the gripper, providing tactile feedback that enhances user safety and dexterity.

The proposed bionic arm prioritizes three essential features:

1. **Integrated Sensory Feedback**: Providing users with a tactile experience that mimics the sense of touch (signals routed directly to the brain). This capability is crucial for safe object manipulation and injury prevention.
2. **Mind-Control Potential**: Harnessing EEG signals for seamless, thought-driven operation.
3. **Non-Invasive Nature**: Ensuring user comfort by avoiding invasive surgical procedures.

This novel approach aims to deliver an intuitive, natural, and efficient solution for restoring complex hand functions.

---

## Methodology
### 1. Data Collection and Dataset Overview

The model development utilized a publicly available EEG dataset comprising data from **60 volunteers** performing **8 distinct activities** [3]. The dataset includes a total of **8,680 four-second EEG recordings**, collected using **16 dry electrodes** configured according to the **international 10-10 system** [3].

* **Electrode Configuration**: Monopolar configuration, where each electrode's potential was measured relative to neutral electrodes placed on both earlobes (ground references).
* **Signal Sampling**: EEG signals were sampled at **125 Hz** and preprocessed using two filters (sketched in code after this list):
  - A **bandpass filter (5–50 Hz)** to isolate relevant frequencies [3].
  - A **notch filter (60 Hz)** to remove powerline interference [3].

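As an illustration only (the original preprocessing code is not reproduced here), a minimal Python/SciPy sketch of this filtering chain, assuming a 125 Hz sampling rate and a 4th-order Butterworth design:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 125  # sampling rate from the dataset description (Hz)

def preprocess_eeg(x: np.ndarray) -> np.ndarray:
    """Apply the 5-50 Hz bandpass and 60 Hz notch described above."""
    # 4th-order Butterworth bandpass (filter order is an assumption)
    b_bp, a_bp = butter(4, [5, 50], btype="bandpass", fs=FS)
    x = filtfilt(b_bp, a_bp, x)
    # Notch at 60 Hz to suppress powerline interference (Q=30 is an assumption)
    b_n, a_n = iirnotch(w0=60, Q=30, fs=FS)
    return filtfilt(b_n, a_n, x)
```
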
### 2. Data Preprocessing

The dataset, originally provided in **CSV format**, underwent a comprehensive preprocessing workflow:

* The data was split into individual CSV files for each of the 16 channels, resulting in an increase from **74,441** files to **1,191,056** files.
* Each channel's EEG data was converted into an **audio signal** and saved in **.wav format**, allowing the brain signals to be analyzed audibly (a sketch of this conversion appears after this list).
* The entire preprocessing workflow was implemented in **Python** to ensure scalability and accuracy.

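A minimal sketch of the CSV-to-WAV step under stated assumptions: the column name `value` is hypothetical, and the 125 Hz EEG sampling rate is reused as the WAV sample rate:

```python
import numpy as np
import pandas as pd
from scipy.io import wavfile

FS = 125  # EEG sampling rate (Hz), reused here as the .wav sample rate

def channel_csv_to_wav(csv_path: str, wav_path: str) -> None:
    """Convert one channel's EEG samples to a 16-bit PCM .wav file."""
    # 'value' is a hypothetical column name for the per-sample amplitudes
    samples = pd.read_csv(csv_path)["value"].to_numpy(dtype=np.float32)
    # Normalize to [-1, 1] before scaling to the int16 range
    peak = max(float(np.max(np.abs(samples))), 1e-12)
    wavfile.write(wav_path, FS, np.int16(samples / peak * 32767))
```
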
The dataset captured brainwave signals corresponding to the following activities:

1. **BEO** (Baseline with Eyes Open): One-time recording at the beginning of each run [3].
2. **CLH** (Closing Left Hand): Five recordings per run [3].
3. **CRH** (Closing Right Hand): Five recordings per run [3].
4. **DLF** (Dorsal Flexion of Left Foot): Five recordings per run [3].
5. **PLF** (Plantar Flexion of Left Foot): Five recordings per run [3].
6. **DRF** (Dorsal Flexion of Right Foot): Five recordings per run [3].
7. **PRF** (Plantar Flexion of Right Foot): Five recordings per run [3].
8. **Rest**: Recorded between each task to capture the resting state [3][4].

### 3. Feature Extraction and Classification

Feature extraction and activity classification were performed using **transfer learning** with **YamNet** [5], a deep neural network model.

* **Audio Representation**: Audio files were imported into **MATLAB** using an **Audio Datastore** [6]. Mel-spectrograms, a time-frequency representation of the audio signals, were extracted using the `yamnetPreprocess` function [7][8].
* **Dataset Split**: The data was divided into **training (70%)**, **validation (20%)**, and **testing (10%)** sets (an equivalent split is sketched after this list).

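The authors performed these steps in MATLAB; purely as an illustrative Python sketch, an equivalent mel-spectrogram extraction and 70/20/10 split. The 16 kHz rate and 64 mel bands mirror YAMNet's published front end, and the helper names are hypothetical:

```python
import librosa
import numpy as np
from sklearn.model_selection import train_test_split

def mel_spectrogram(wav_path: str) -> np.ndarray:
    """Time-frequency representation comparable to yamnetPreprocess output."""
    # YAMNet's front end expects 16 kHz audio and produces 64 mel bands
    y, sr = librosa.load(wav_path, sr=16_000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    return librosa.power_to_db(mel)

def split_70_20_10(files, labels, seed=0):
    """70% train / 20% validation / 10% test, stratified by activity label."""
    f_rest, f_test, y_rest, y_test = train_test_split(
        files, labels, test_size=0.10, stratify=labels, random_state=seed)
    # 2/9 of the remaining 90% equals 20% of the full dataset
    f_train, f_val, y_train, y_val = train_test_split(
        f_rest, y_rest, test_size=2 / 9, stratify=y_rest, random_state=seed)
    return (f_train, y_train), (f_val, y_val), (f_test, y_test)
```
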
Transfer Learning with YamNet [5][8]:

- The **pre-trained YamNet model** (86 layers) was adapted for an 8-class classification task:
  - The initial layers of YamNet [5] were **frozen** to retain previously learned representations [8].
  - A **new classification layer** was added to the model [8].
- Training details (see the sketch after this list):
  - **Learning Rate**: Initial rate of **3e-4**, with an exponential learning rate decay schedule [8].
  - **Mini-Batch Size**: 128 samples per batch.
  - **Validation**: Performed every **651 iterations**.

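The training itself was done in MATLAB; the sketch below expresses the same transfer-learning recipe in Python with the public TensorFlow Hub YAMNet, keeping only the stated hyperparameters (3e-4 initial rate, batch size 128, 8 classes). The decay rate and decay steps are assumptions:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load the public YAMNet model; it expects 16 kHz mono float32 waveforms.
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

def embed(waveform_16k: np.ndarray) -> np.ndarray:
    """Frozen YAMNet as a feature extractor: average its 1024-d frame embeddings."""
    _, embeddings, _ = yamnet(waveform_16k)
    return embeddings.numpy().mean(axis=0)

# Exponential learning-rate decay; only the 3e-4 initial rate is from the paper.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=3e-4,
    decay_steps=651,  # assumption: tied to the validation interval
    decay_rate=0.9,   # assumption
)

# New classification head replacing YAMNet's original output layer.
head = tf.keras.Sequential([
    tf.keras.Input(shape=(1024,)),
    tf.keras.layers.Dense(8, activation="softmax"),  # 8 activity classes
])
head.compile(optimizer=tf.keras.optimizers.Adam(schedule),
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# head.fit(X_train, y_train, batch_size=128, validation_data=(X_val, y_val))
```
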
### 4. Robotic Arm Design and Simulation

A **3-Degree-of-Freedom (DOF) robotic arm** was designed using the **MATLAB Simulink** and **Simscape** toolboxes. To ensure robust validation:

* A **virtual environment** was developed in Simulink, simulating the interactions between the trained AI models and the robotic arm.
* The simulations served as a testbed to evaluate the system's performance before real-world integration.

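To connect this back to the abstract's PWM-driven servos: a hypothetical sketch (not the authors' Simulink model) of mapping a decoded activity class to a gripper servo command, using the common 50 Hz hobby-servo convention. Every class-to-pulse assignment here is an assumption:

```python
# Hypothetical mapping from decoded EEG activity class to servo pulse width.
# Standard hobby servos expect a 50 Hz PWM signal whose 1.0-2.0 ms pulse
# width sets the target angle; the class assignments below are illustrative.
CLASS_TO_PULSE_US = {
    "CLH": 1000,   # closing left hand  -> close the gripper
    "CRH": 2000,   # closing right hand -> open the gripper
    "Rest": 1500,  # neutral / hold position
}

def pulse_to_duty(pulse_us: float, period_us: float = 20_000) -> float:
    """Duty cycle of a 50 Hz (20 ms period) servo PWM signal."""
    return pulse_us / period_us

print(pulse_to_duty(CLASS_TO_PULSE_US["CLH"]))  # 0.05 -> 5% duty cycle
```
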
### 5. Project Progress and Future Directions

_Completed Tasks_:

1. **AI Model Development**: Successfully trained models to classify human activities based on EEG signals.
2. **Robotic Arm Design**: Designed a functional 3-DOF robotic arm with simulated controls.
3. **Virtual Simulation**: Validated AI-robotic arm interactions in a virtual environment.

_Future Directions_:

1. **Hardware Integration**: Deploy the developed AI models on physical robotic hardware for real-world testing.
2. **Real-Time EEG Acquisition**: Develop a system for **real-time EEG data acquisition** and activity classification.
3. **Tactile Feedback System**: Integrate tactile sensors with the robotic arm for **real-world sensory feedback**, complemented by Simulink-based simulations.

---
## Protocols
Here is the step-by-step protocol to reproduce our work.