
Real-time ASL interpreter using OpenCV and TensorFlow/Keras for hand gesture recognition. Features custom hand tracking, image preprocessing, and gesture classification to translate American Sign Language into text and speech output. Built with accessibility in mind.


Project demonstration



🎯 Project Overview

Sign Language to Speech Conversion is a real-time American Sign Language (ASL) recognition system powered by computer vision and deep learning. It translates ASL hand gestures into both text and speech output, enhancing accessibility and communication.

📖 For installation, architecture, usage, and contribution guidelines, visit the Project Wiki.


✨ Key Features

  • 🔮 Real-time hand detection & gesture tracking
  • 🧠 CNN-based classification using TensorFlow/Keras
  • 🔊 Simultaneous text & speech output
  • 📢 Designed for accessibility & inclusivity
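The preprocessing behind the real-time pipeline can be sketched roughly as follows. This is a minimal illustration, not the project's exact code: `place_on_canvas` is a hypothetical helper, assuming the detected hand region is cropped and centered on a plain white canvas (as the bundled `white.jpg` suggests) before it is passed to the CNN classifier.

```python
import numpy as np

def place_on_canvas(roi, size=400):
    """Center a cropped hand region (H x W x 3) on a square white canvas."""
    canvas = np.full((size, size, 3), 255, dtype=np.uint8)  # all-white image
    h, w = roi.shape[:2]
    y0, x0 = (size - h) // 2, (size - w) // 2
    canvas[y0:y0 + h, x0:x0 + w] = roi  # paste the crop in the middle
    return canvas

roi = np.zeros((120, 80, 3), dtype=np.uint8)  # stand-in for a cropped hand
print(place_on_canvas(roi).shape)  # (400, 400, 3)
```

Normalizing every crop onto a fixed-size, uniform background gives the classifier a consistent input regardless of where the hand appears in the webcam frame.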

📊 System Architecture

| Level 0 | Level 1 | Level 2 |
| --- | --- | --- |
| DFD Level 0 | DFD Level 1 | DFD Level 2 |

For details on Data Flow Diagrams (DFD), Use Case Diagrams, and System Design, check the Architecture Section in the Wiki.


🛠 Tech Stack

Core Technologies

  • Python
  • OpenCV
  • TensorFlow

Supporting Libraries

  • NumPy
  • cvzone
  • pyttsx3
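The speech side of the pipeline can be sketched as below. The label list and helper names are assumptions for illustration (a static-gesture model typically predicts one of the 26 letters); `pyttsx3` is the offline text-to-speech engine listed above, and the import is guarded so the mapping logic still works where no TTS engine is installed.

```python
import string

try:
    import pyttsx3  # offline text-to-speech; optional at runtime
except ImportError:
    pyttsx3 = None

LABELS = list(string.ascii_uppercase)  # assumed A-Z class labels

def index_to_letter(class_index):
    """Map a classifier output index to its ASL letter."""
    return LABELS[class_index]

def speak(text):
    """Voice the recognized text if pyttsx3 is available."""
    if pyttsx3 is None:
        return
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

print(index_to_letter(0))   # A
print(index_to_letter(25))  # Z
```

Using an offline engine like pyttsx3 keeps the speech output working without a network connection, which matters for an accessibility tool.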


📂 Repository Structure

```
Sign-Language-to-Speech/
├── data/
├── Application.py
├── trainedModel.h5
├── requirements.txt
└── white.jpg
```

For a detailed breakdown of modules and system design, refer to the Project Documentation.
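A model like `trainedModel.h5` emits one prediction per webcam frame, so raw output tends to flicker between letters. One common way to turn frame-by-frame predictions into stable text, sketched here as an assumption rather than the project's actual method, is majority voting over a sliding window of recent predictions:

```python
from collections import Counter, deque

def stable_prediction(window, min_count):
    """Return the majority letter in the window, or None if not yet stable."""
    if not window:
        return None
    letter, count = Counter(window).most_common(1)[0]
    return letter if count >= min_count else None

recent = deque(maxlen=10)  # last 10 per-frame predictions
for ch in "AAABAAAABA":   # simulated noisy classifier output
    recent.append(ch)
print(stable_prediction(recent, min_count=7))  # A
```

Only committing a letter once it dominates the window trades a little latency for far fewer spurious characters in the text and speech output.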


📢 Contributing

We welcome contributions! Before submitting a pull request, please check out the Contributing Guide.


📜 License

This project is licensed under the BSD 3-Clause License. See the full details in the LICENSE file.


📌 For all documentation, including installation, setup, and FAQs, visit the Project Wiki.
