# Sign Language Recognition
This repository contains the source code and resources for a Sign Language Recognition System. The goal of this project is to develop a computer vision system that can recognize and interpret sign language gestures in real-time.

Though the project name is `Sign Language Recognition`, it can be used for any hand gesture recognition.
<br><br>
## Introduction
Sign language is a visual means of communication used by individuals with hearing impairments. This project aims to bridge the communication gap by developing an automated system that can understand and interpret sign language gestures. The system utilizes computer vision techniques and machine learning algorithms to recognize and translate these gestures into text or speech.

The Sign Language Recognition System consists of several components:
<br><br>
## Installation
To set up the Sign Language Recognition System on your local machine, follow these steps:
1. Clone the repository to your local machine.

You are now ready to use the Sign Language Recognition System on your local machine.
<br><br>
## Usage
To use the Sign Language Recognition System, follow these steps:
1. Ensure that the required dependencies and resources are properly installed and set up.

Here is a demo of the Sign Language Recognition System in action:

The Sign Language Recognition System is fully customizable and can be trained to recognize any hand gesture. To customize the system, follow these steps:
1. Run the `app.py` file to open the application.

Enjoy your customized Sign Language Recognition System!
<br><br>
## System Overview
To build such a system, hand keypoint data must first be obtained, the features involved in making each sign must then be extracted, and finally the combination of those features must be analysed to describe the performed sign.
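
Conceptually, the pipeline breaks into three stages. The skeleton below only illustrates that flow; the function names and signatures are hypothetical, not the ones used in this repository.

```python
# Illustrative three-stage skeleton; names and signatures are hypothetical.
from typing import List, Tuple

Keypoints = List[Tuple[float, float]]  # 21 (x, y) hand landmarks

def obtain_keypoints(frame) -> Keypoints:
    """Stage 1: detect the hand in a video frame and return its keypoints."""
    ...

def extract_features(keypoints: Keypoints) -> List[float]:
    """Stage 2: extract the features involved in making the sign."""
    ...

def describe_sign(features: List[float]) -> str:
    """Stage 3: analyse the feature combination to name the performed sign."""
    ...
```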
<br><br>
## Data Collection
The first step in building a sign language recognition system is to collect a dataset of sign language gestures. The dataset is used to train the machine learning model to recognize and interpret these gestures.

For this project, we collected our own hand-sign data using the [MediaPipe](https://google.github.io/mediapipe/) library. The library provides a hand-tracking solution that can detect and track 21 hand landmarks in real-time. The hand landmarks are key points, 2D coordinates that can be used to determine the pose of the hand.
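
As a rough illustration, the snippet below shows one way the 21 landmarks can be read with MediaPipe's hand-tracking solution; the repository's own collection script may differ.

```python
# Minimal sketch of landmark capture with MediaPipe; not the exact
# collection script used in this repository.
import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)  # default webcam
with mp.solutions.hands.Hands(max_num_hands=1,
                              min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # 21 landmarks, each with x and y normalized to [0, 1].
            hand = results.multi_hand_landmarks[0]
            keypoints = [(lm.x, lm.y) for lm in hand.landmark]
            print(keypoints)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
cap.release()
```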

The hand landmarks are used to extract features from the hand gestures.
<br><br>
## Preprocessing
The collected data is preprocessed to enhance the quality, remove noise, and extract relevant features. The preprocessing steps include:
1. Getting the hand landmarks from the video stream.
2. Converting the hand landmarks to coordinates relative to the `wrist` landmark at `(0, 0)`. This is done by subtracting the `wrist` landmark's coordinates from all the other landmarks.
3. Flattening the normalized hand landmarks into a one-dimensional list, as sketched below.
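
A minimal sketch of steps 2 and 3, assuming `landmarks` holds the 21 `(x, y)` pairs from step 1 (in MediaPipe's ordering, index 0 is the wrist):

```python
def preprocess(landmarks):
    """Steps 2-3: convert 21 (x, y) landmarks to wrist-relative
    coordinates, then flatten them into a 1-D list."""
    wrist_x, wrist_y = landmarks[0]  # index 0 is the wrist in MediaPipe
    relative = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    # [(x0, y0), (x1, y1), ...] -> [x0, y0, x1, y1, ...]
    return [coord for point in relative for coord in point]
```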
<br><br>
## Model Training
The preprocessed data is used to train a machine learning model to recognize and interpret sign language gestures. The model is trained using a convolutional neural network (CNN) architecture. The CNN is trained on the preprocessed data to learn the mapping between input gestures and their corresponding meanings.
1. Obtain or create a sign language dataset.
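
This excerpt does not show the exact network, so the sketch below is a hypothetical Keras setup: a small 1-D CNN over the flattened 42-value feature vectors (21 landmarks × 2 coordinates), using the 80:20 split, 100 epochs, and batch size of 180 reported under Results. The file names are placeholders.

```python
# Hypothetical training sketch; the project's actual architecture and
# file names may differ. Assumes X holds flattened 42-value vectors
# (21 landmarks x 2 coordinates) and y holds integer gesture labels.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

X = np.load('keypoints.npy')  # placeholder file names
y = np.load('labels.npy')
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

num_classes = int(y.max()) + 1
model = tf.keras.Sequential([
    tf.keras.layers.Reshape((21, 2), input_shape=(42,)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation='relu'),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=100, batch_size=180)
```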
<br><br>
## Results
The model was trained on a dataset of 24,000 hand gestures. The dataset was split into training and validation sets with a ratio of 80:20. The model was trained for 100 epochs with a batch size of 180. The training and validation accuracy and loss were recorded for each epoch.

Our proposed model achieved an accuracy of `71.12%` on the validation set and `90.60%` on the testing set, and it was able to recognize and interpret sign language gestures in real-time.
<br><br>
## Contributing
Contributions to this project are welcome. If you encounter any issues or have suggestions for improvements, please open an issue or submit a pull request. Let's work together to make the Sign Language Recognition System even better!

We appreciate your contributions, whether big or small, and we look forward to working together to enhance the Sign Language Recognition System. Let's make a positive impact on the lives of individuals with hearing impairments and promote inclusivity in communication.