`_projects/mcba.md`
This novel approach aims to deliver an intuitive, natural, and efficient solution…
## Methodology
### 1. Data Collection and Dataset Overview
The model development utilized a publicly available EEG dataset comprising data from **60 volunteers** performing **8 distinct activities**. The dataset includes a total of **8,680 four-second EEG recordings**, collected using **16 dry electrodes** configured according to the **international 10-10 system**.
* Electrode Configuration: Monopolar, with each electrode's potential measured relative to neutral electrodes placed on both earlobes (ground references).
* Signal Sampling: EEG signals were sampled at **125 Hz** and preprocessed using the following filters (a short filtering sketch follows this list):
  - **A bandpass filter (5–50 Hz)** to isolate relevant frequencies.
  - **A notch filter (60 Hz)** to remove powerline interference.
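To make the filtering stage concrete, here is a minimal Python sketch of an equivalent preprocessing step using `scipy.signal`; the filter order, Q factor, and array shapes are illustrative assumptions, not the dataset authors' original code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 125  # EEG sampling rate in Hz, as stated above

def filter_channel(raw, fs=FS):
    """Apply the described 5-50 Hz bandpass and 60 Hz notch to one EEG channel."""
    # Butterworth bandpass, 5-50 Hz (filter order of 4 is an assumption)
    b_bp, a_bp = butter(4, [5, 50], btype="bandpass", fs=fs)
    x = filtfilt(b_bp, a_bp, raw)
    # Narrow notch at 60 Hz to suppress powerline interference (Q is an assumption)
    b_n, a_n = iirnotch(w0=60, Q=30, fs=fs)
    return filtfilt(b_n, a_n, x)

# One 4-second recording at 125 Hz is 500 samples
example = filter_channel(np.random.randn(500))
```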
### 2. Data Preprocessing
The dataset, originally provided in **CSV format**, underwent a comprehensive preprocessing workflow:
* The data was split into individual CSV files for each of the 16 channels, resulting in an increase from **74,441** files to **1,191,056** files.
* Each individual channel's EEG data was converted into **audio signals** and saved in **.wav format**, allowing the brain signals to be audibly analyzed.
* The entire preprocessing workflow was implemented in **Python** to ensure scalability and accuracy (a minimal sketch of these steps follows this list).
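Since the preprocessing was done in Python, the channel-splitting and .wav-conversion steps can be sketched as below; the file names and CSV layout (one column per electrode) are assumptions, and the project's actual scripts may differ.

```python
import numpy as np
import pandas as pd
from scipy.io import wavfile

FS = 125  # EEG sampling rate in Hz

def split_and_convert(csv_path, out_prefix):
    """Split a multi-channel EEG recording into per-channel CSV and .wav files."""
    recording = pd.read_csv(csv_path)  # assumed layout: one column per electrode
    for idx, channel in enumerate(recording.columns, start=1):
        samples = recording[channel].to_numpy(dtype=np.float32)
        # One CSV per channel (16 channels -> 16 files per recording)
        pd.DataFrame({channel: samples}).to_csv(f"{out_prefix}_ch{idx:02d}.csv", index=False)
        # Normalise to [-1, 1] so the signal can be written (and listened to) as audio
        peak = float(np.max(np.abs(samples))) or 1.0
        wavfile.write(f"{out_prefix}_ch{idx:02d}.wav", FS, samples / peak)

# Hypothetical file name, for illustration only:
# split_and_convert("subject01_run01.csv", "subject01_run01")
```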
The dataset captured brainwave signals corresponding to the following eight activities (see the label-mapping sketch after the list):
1. **BEO** (Baseline with Eyes Open): One-time recording at the beginning of each run.
2. **CLH** (Closing Left Hand): Five recordings per run.
3. **CRH** (Closing Right Hand): Five recordings per run.
4. **DLF** (Dorsal Flexion of Left Foot): Five recordings per run.
5. **PLF** (Plantar Flexion of Left Foot): Five recordings per run.
6. **DRF** (Dorsal Flexion of Right Foot): Five recordings per run.
7. **PRF** (Plantar Flexion of Right Foot): Five recordings per run.
8. **Rest**: Recorded between each task to capture the resting state.
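For reference, these eight activity codes can be collected into a simple label mapping for the classification task (the variable name is illustrative, not from the project's code):

```python
# The eight activity codes used as class labels for the classifier
ACTIVITY_LABELS = {
    "BEO":  "Baseline with Eyes Open",
    "CLH":  "Closing Left Hand",
    "CRH":  "Closing Right Hand",
    "DLF":  "Dorsal Flexion of Left Foot",
    "PLF":  "Plantar Flexion of Left Foot",
    "DRF":  "Dorsal Flexion of Right Foot",
    "PRF":  "Plantar Flexion of Right Foot",
    "Rest": "Resting state between tasks",
}
```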
### 3. Feature Extraction and Classification
Feature extraction and activity classification were performed using **transfer learning** with **YamNet** <d-cite key="yamnetgithub"></d-cite>, a deep neural network model.
* **Audio Representation**: Audio files were imported into **MATLAB** using an **Audio Datastore**. Mel-spectrograms, a time-frequency representation of the audio signals, were extracted using the yamnetPreprocess function.
* Dataset Split: The data was divided into **training (70%)**, **validation (20%)**, and **testing (10%)** sets.
* Transfer Learning with YamNet (see the sketch after this list):
  - The **pre-trained YamNet model** (86 layers) was adapted for an 8-class classification task:
    + The initial layers of YamNet were **frozen** to retain previously learned representations.
    + A **new classification layer** was added to the model.
  - Training details:
    + **Learning Rate**: Initial rate of **3e-4**, with an exponential learning rate decay schedule.
    + **Mini-Batch Size**: 128 samples per batch.
    + **Validation**: Performed every **651 iterations**.
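The steps above were carried out in MATLAB. Purely as an illustration of the same idea, the sketch below shows an analogous transfer-learning setup in Python, using the TensorFlow Hub release of YamNet as a frozen feature extractor with a new classification head and the training hyperparameters quoted above; the decay rate, epoch count, 16 kHz resampling requirement, and placeholder data are assumptions, and this is not the project's MATLAB implementation.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 8  # BEO, CLH, CRH, DLF, PLF, DRF, PRF, Rest

# Pre-trained YamNet from TensorFlow Hub, kept frozen and used as a feature
# extractor; this mirrors freezing the initial layers of the network.
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

def embed(waveform_16k):
    """Mean 1024-dim YamNet embedding for a mono float32 waveform at 16 kHz."""
    _scores, embeddings, _spectrogram = yamnet(waveform_16k)
    return tf.reduce_mean(embeddings, axis=0).numpy()

# New classification layer replacing YamNet's original output.
classifier = tf.keras.Sequential([
    tf.keras.Input(shape=(1024,)),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Initial learning rate 3e-4 with exponential decay, as stated above
# (decay_steps and decay_rate are assumptions).
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=3e-4, decay_steps=651, decay_rate=0.9)
classifier.compile(optimizer=tf.keras.optimizers.Adam(schedule),
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

# Placeholders standing in for embeddings and labels from the 70/20/10 split.
x_train = np.random.rand(256, 1024).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=256)
x_val = np.random.rand(64, 1024).astype("float32")
y_val = np.random.randint(0, NUM_CLASSES, size=64)

# Mini-batch size of 128, as described in the training details.
classifier.fit(x_train, y_train, batch_size=128,
               validation_data=(x_val, y_val), epochs=5)
```

Training only the new head on frozen embeddings is a simplification of the described approach, in which the initial YamNet layers were frozen and the remaining layers retrained in MATLAB.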