Commit 01ec9a0

Author: Dhruva Shaw
Commit message: some changes
Parent: b6caba1

1 file changed: _projects/mcba.md (17 additions & 17 deletions)

## Methodology
### 1. Data Collection and Dataset Overview
The model development utilized a publicly available EEG dataset comprising data from **60 volunteers** performing **8 distinct activities**. The dataset includes a total of **8,680 four-second EEG recordings**, collected using **16 dry electrodes** configured according to the **international 10-10 system**.

* Electrode Configuration: Monopolar; each electrode's potential was measured relative to neutral electrodes placed on both earlobes (ground references).
* Signal Sampling: EEG signals were sampled at **125 Hz** and preprocessed using the following filters (a code sketch follows this list):
  - **A bandpass filter (5–50 Hz)** to isolate relevant frequencies.
  - **A notch filter (60 Hz)** to remove powerline interference.
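A minimal sketch of this filtering chain, assuming SciPy and zero-phase filtering; the filter order and notch quality factor are assumptions, while the 5–50 Hz passband, the 60 Hz notch, and the 125 Hz sampling rate come from the dataset description:

```python
# Hypothetical preprocessing chain for one EEG channel; not the project's
# actual code. Filter order and notch Q are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 125  # sampling rate (Hz), per the dataset description

def preprocess(eeg: np.ndarray) -> np.ndarray:
    """Apply a 5-50 Hz bandpass and a 60 Hz notch to one channel."""
    b, a = butter(4, [5.0, 50.0], btype="bandpass", fs=FS)
    bandpassed = filtfilt(b, a, eeg)          # zero-phase bandpass
    b_n, a_n = iirnotch(60.0, Q=30.0, fs=FS)  # powerline notch
    return filtfilt(b_n, a_n, bandpassed)
```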

### 2. Data Preprocessing
The dataset, originally provided in **CSV format**, underwent a comprehensive preprocessing workflow:
* The data was split into individual CSV files for each of the 16 channels, increasing the file count from **74,441** to **1,191,056** (74,441 × 16 channels).
* Each individual channel's EEG data was converted into **audio signals** and saved in **.wav format**, allowing the brain signals to be analyzed as audio.
* The entire preprocessing workflow was implemented in **Python** to ensure scalability and accuracy (a sketch of the split-and-convert step follows this list).
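A minimal sketch of that split-and-convert step, assuming pandas, SciPy, and a one-column-per-electrode CSV layout; the file naming and amplitude normalization are hypothetical, not the project's actual code:

```python
# Hypothetical per-channel split and .wav conversion; column layout, paths,
# and normalization are assumptions.
from pathlib import Path

import numpy as np
import pandas as pd
from scipy.io import wavfile

FS = 125  # EEG sampling rate (Hz), per the dataset description

def split_and_convert(csv_path: Path, out_dir: Path) -> None:
    """Split one multi-channel recording into 16 per-channel .wav files."""
    df = pd.read_csv(csv_path)  # assumed: one column per electrode
    out_dir.mkdir(parents=True, exist_ok=True)
    for channel in df.columns:
        signal = df[channel].to_numpy(dtype=np.float32)
        peak = np.max(np.abs(signal))
        if peak > 0:
            signal = signal / peak  # normalize to [-1, 1] for wav output
        wavfile.write(out_dir / f"{csv_path.stem}_{channel}.wav", FS, signal)
```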
The dataset captured brainwave signals corresponding to the following activities:
1. **BEO** (Baseline with Eyes Open): One-time recording at the beginning of each run.
2. **CLH** (Closing Left Hand): Five recordings per run.
3. **CRH** (Closing Right Hand): Five recordings per run.
4. **DLF** (Dorsal Flexion of Left Foot): Five recordings per run.
5. **PLF** (Plantar Flexion of Left Foot): Five recordings per run.
6. **DRF** (Dorsal Flexion of Right Foot): Five recordings per run.
7. **PRF** (Plantar Flexion of Right Foot): Five recordings per run.
8. **Rest**: Recorded between each task to capture the resting state.

### 3. Feature Extraction and Classification
Feature extraction and activity classification were performed using **transfer learning** with **YamNet** <d-cite key="yamnetgithub"></d-cite>, a deep neural network model.
* **Audio Representation**: Audio files were imported into **MATLAB** using an **Audio Datastore**. Mel-spectrograms, a time-frequency representation of the audio signals, were extracted using the yamnetPreprocess function (a Python analogue is sketched after this list).
* **Dataset Split**: The data was divided into **training (70%)**, **validation (20%)**, and **testing (10%)** sets.
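A hedged Python analogue of the yamnetPreprocess step, assuming librosa and YAMNet's published front-end parameters (16 kHz input, 25 ms window, 10 ms hop, 64 mel bands); resampling the EEG-derived audio to 16 kHz is an assumption, not the project's documented pipeline:

```python
# Hypothetical log-mel extraction mirroring YAMNet's front-end parameters.
import librosa
import numpy as np

def log_mel_spectrogram(wav_path: str) -> np.ndarray:
    """Return a log-mel spectrogram with YAMNet-style parameters."""
    y, sr = librosa.load(wav_path, sr=16000)  # resample to 16 kHz (assumed)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr,
        n_fft=400,        # 25 ms window at 16 kHz
        hop_length=160,   # 10 ms hop
        n_mels=64,        # YAMNet's mel-band count
        fmin=125.0, fmax=7500.0)
    return np.log(mel + 1e-6)  # log compression, as YAMNet expects
```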
Transfer Learning with YamNet:

- The **pre-trained YamNet model** (86 layers) was adapted for an 8-class classification task (a hedged Python sketch follows this list):
  + The initial layers of YamNet were **frozen** to retain previously learned representations.
  + A **new classification layer** was added to the model.
- Training details:
  + **Learning Rate**: Initial rate of **3e-4**, with an exponential learning rate decay schedule.
  + **Mini-Batch Size**: 128 samples per batch.
  + **Validation**: Performed every **651 iterations**.
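The training itself ran in MATLAB; the following is a hedged Python/Keras analogue of the same recipe (frozen YamNet backbone, new 8-class head, 3e-4 initial rate with exponential decay, batch size 128), with the TF-Hub handle, head shape, and decay constants as assumptions:

```python
# Hedged Keras analogue of the MATLAB transfer-learning setup; not the
# project's actual code.
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 8

# Pre-trained YAMNet used as a frozen feature extractor: its learned audio
# representations are kept, and only the new head below is trained.
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")

def embed(waveform: tf.Tensor) -> tf.Tensor:
    """Return one 1024-d YAMNet embedding per clip (mean over frames)."""
    _, embeddings, _ = yamnet(waveform)  # (scores, embeddings, spectrogram)
    return tf.reduce_mean(embeddings, axis=0)

# New classification layer for the 8 activity classes.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax",
                          input_shape=(1024,)),
])

# Training details from the text: 3e-4 initial rate with exponential decay
# and mini-batches of 128; decay_steps and decay_rate are assumptions.
lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=3e-4, decay_steps=651, decay_rate=0.9)
head.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
             loss="sparse_categorical_crossentropy",
             metrics=["accuracy"])
# head.fit(train_embeddings, train_labels, batch_size=128,
#          validation_data=(val_embeddings, val_labels))
```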
