Sign Language to Speech Conversion is a real-time American Sign Language (ASL) recognition system powered by computer vision and deep learning. It translates ASL hand gestures into both text and speech output, enhancing accessibility and communication.
📖 For installation, architecture, usage, and contribution guidelines, visit the Project Wiki.
- 🔮 Real-time hand detection & gesture tracking
- 🧠 CNN-based classification using TensorFlow/Keras
- 🔊 Simultaneous text & speech output
- 📢 Designed for accessibility & inclusivity
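The classification step can be sketched as mapping the CNN's softmax output to an ASL letter, with a confidence threshold to suppress noisy frames. This is a minimal illustration, not the project's actual code: `CLASS_LABELS`, `decode_prediction`, and the `0.8` threshold are all assumptions.

```python
# Hypothetical sketch of turning one CNN softmax vector into a letter.
# CLASS_LABELS and the 0.8 threshold are illustrative assumptions,
# not taken from Application.py.
import string

CLASS_LABELS = list(string.ascii_uppercase)  # 26 ASL letter classes (assumed)

def decode_prediction(probs, threshold=0.8):
    """Return the predicted letter, or None if the model is unsure."""
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None  # low confidence: skip this frame
    return CLASS_LABELS[best]

# Example: a fake softmax vector that is confident about class 2 ("C")
fake_probs = [0.01] * 26
fake_probs[2] = 0.9
print(decode_prediction(fake_probs))  # prints "C"
```

In a real-time loop, frames below the threshold would simply be dropped, so only stable, confident gestures reach the text and speech output.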
| Level 0 | Level 1 | Level 2 |
|---|---|---|
| *(diagram in Wiki)* | *(diagram in Wiki)* | *(diagram in Wiki)* |
For details on Data Flow Diagrams (DFD), Use Case Diagrams, and System Design, check the Architecture Section in the Wiki.
```
Sign-Language-to-Speech/
├── data/
├── Application.py
├── trainedModel.h5
├── requirements.txt
└── white.jpg
```
For a detailed breakdown of modules and system design, refer to the Project Documentation.
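Before `trainedModel.h5` can score a webcam frame, the frame has to match the CNN's expected input. A minimal preprocessing sketch with NumPy, assuming a 128x128x3 input shape and [0, 1] pixel scaling (both are guesses about the model, not documented facts):

```python
# Hypothetical preprocessing sketch; the 128x128x3 input shape and
# the 0-1 pixel scaling are assumptions about trainedModel.h5.
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Scale pixels to [0, 1] and add the batch axis Keras expects."""
    x = frame.astype("float32") / 255.0   # normalize pixel values
    return np.expand_dims(x, axis=0)      # shape (1, H, W, 3)

# Example with a dummy frame standing in for a webcam capture
dummy = np.zeros((128, 128, 3), dtype=np.uint8)
batch = preprocess(dummy)
print(batch.shape)  # (1, 128, 128, 3)
```

The resulting batch would then be passed to the loaded Keras model's `predict` method to obtain class probabilities.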
We welcome contributions! Before submitting a pull request, please check out the Contributing Guide.
This project is licensed under the BSD 3-Clause License. See the full details in the LICENSE file.
📌 For all documentation, including installation, setup, and FAQs, visit the 👉 Project Wiki.