Deep learning project for converting American Sign Language (ASL) to text
- Research papers I followed to complete the project
- Project Stage 1 & Stage 2 reports
- Project Stage 1 & Stage 2 presentations
- The Model folder contains the JSON and H5 files of the trained CNN model
- The Pics folder contains sign language images for every character
- Five Python files to execute the project
- Collect-data.py: I could not find a suitable existing dataset, so I decided to create my own. Running this file creates your own dataset for each character.
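Collect-data.py itself is not reproduced in this README, but the folder layout it builds can be sketched as below. The `blank` label (an empty-screen class used later for word boundaries) and the folder names are assumptions, not confirmed details of the repo:

```python
import os
import string
import tempfile

def make_dataset_dirs(root):
    """Create one sub-folder per character label under `root`.

    The extra "blank" label is an assumption about how the dataset
    is organised, to support the blank-screen detection described below.
    """
    labels = list(string.ascii_uppercase) + ["blank"]
    for label in labels:
        os.makedirs(os.path.join(root, label), exist_ok=True)
    return labels

# Demo on a temporary directory; the real script would then save
# webcam frames into these folders, one folder per character.
root = tempfile.mkdtemp()
labels = make_dataset_dirs(root)
```

Collect-data.py would capture frames from the webcam and write each captured image into the folder for the character currently being recorded.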
- Image_processing.py & preprocessing.py: To improve the project's accuracy, I converted the RGB images to black & white. After conversion I applied a Gaussian blur filter, which emphasizes the boundary of the hand sign.
- train.py: Contains the model-training code.
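train.py's exact architecture is not shown in this README; the sketch below is a plausible small Keras CNN plus the JSON/H5 export mentioned in the Model folder note above. The layer sizes, the 128x128 input, and the 27 classes (A-Z plus a blank class) are assumptions:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 27             # assumed: A-Z plus a "blank" class
INPUT_SHAPE = (128, 128, 1)  # assumed size of the black & white images

def build_model():
    model = models.Sequential([
        layers.Input(shape=INPUT_SHAPE),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# After model.fit(...), train.py would save the architecture and weights
# as the JSON and H5 pair kept in the Model folder, e.g.:
#   open("model.json", "w").write(model.to_json())
#   model.save_weights(...)
```

Splitting the model into a JSON architecture file and an H5 weights file is what lets app.py reload the trained model without rerunning training.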
- app.py: Contains the front-end code, built with Tkinter.
- I have uploaded my dataset. Execute Collect-data.py first to create your own dataset for each character.
- Execute Image_processing.py and preprocessing.py so that the black & white images of your dataset are created.
- Execute train.py so that the JSON & H5 files for your dataset are created.
- Execute app.py to load the front end and display the conversion of each hand sign.
- Once the front end is loaded, show the sign for the character you want to translate.
- For each window of 50 frames, the predicted characters are stored in the back end, and the most frequently predicted character is displayed in the Character field.
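The 50-frame voting step above can be sketched in plain Python; the window size of 50 comes from this README, while the function and variable names are illustrative:

```python
from collections import Counter

WINDOW = 50  # number of frames collected before a character is shown

def most_predicted(frame_predictions):
    """Return the character predicted most often across the window."""
    counts = Counter(frame_predictions)
    char, _ = counts.most_common(1)[0]
    return char

# Example: noisy per-frame predictions while the user holds the sign "A".
frames = ["A"] * 35 + ["S"] * 10 + ["M"] * 5
assert len(frames) == WINDOW
print(most_predicted(frames))  # → A
```

Voting over a window smooths out single-frame misclassifications, so a momentary wrong prediction does not change the displayed character.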
- Once all the characters of a word have been signed, show nothing on screen. When the model sees a blank screen, it treats the word as complete.
- The completed word is then moved to the Word field, and if the blank screen continues, the text is moved to the Sentence field.
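The blank-screen behaviour in the last two bullets can be sketched as a small state machine; the class name, the `blank` label, and the run-length threshold are assumptions made for illustration:

```python
class SentenceBuilder:
    """Accumulate predicted characters into a word, and words into a
    sentence, using a run of "blank" predictions as the separator."""

    BLANK_RUN = 2  # assumed: consecutive blank predictions that end a word

    def __init__(self):
        self.word = ""
        self.sentence = ""
        self.blanks = 0

    def feed(self, char):
        if char == "blank":
            self.blanks += 1
            # Enough blank frames in a row: the word is complete and
            # shifts from the Word field into the Sentence field.
            if self.blanks >= self.BLANK_RUN and self.word:
                self.sentence += self.word + " "
                self.word = ""
        else:
            self.blanks = 0
            self.word += char

b = SentenceBuilder()
for c in ["H", "I", "blank", "blank", "O", "K", "blank", "blank"]:
    b.feed(c)
print(b.sentence.strip())  # → HI OK
```

Requiring a run of blanks, rather than a single blank frame, keeps a brief hand movement between signs from being mistaken for the end of a word.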
Image 1) Sign Language Conversion Portal Home Page