
Project: Sign Language Recognition System

Install

This project requires Python 3 and several supporting Python libraries, including hmmlearn (see the note below).

Notes:

  1. The most recent development version of hmmlearn, 0.2.1, contains a bugfix to the log function used in this project, so you should install that version of hmmlearn.
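To confirm which version is active in your environment, a quick check such as the following can be used (a minimal sketch, assuming hmmlearn exposes a `__version__` attribute; it is not part of the project code):

```python
# Sanity check of the installed hmmlearn version; expect 0.2.1 or newer
# per the note above. The __version__ attribute is an assumption about
# the package, not something documented in this README.
import hmmlearn

print(hmmlearn.__version__)
```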

Code

A template notebook is provided as asl_recognizer.ipynb. The notebook walks you through the project; some of the code it uses lives outside the notebook and can be found elsewhere in the repo.

Run

The notebook can be run locally from the command line with `jupyter notebook asl_recognizer.ipynb`.

Additional Information

Provided Raw Data

The data in the asl_recognizer/data/ directory was derived from the RWTH-BOSTON-104 Database. The hand positions (hand_condensed.csv) are pulled directly from the database file boston104.handpositions.rybach-forster-dreuw-2009-09-25.full.xml. The three markers are:

  • 0: speaker's left hand
  • 1: speaker's right hand
  • 2: speaker's nose

X and Y values of the video frame increase left to right and top to bottom.
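As a rough illustration, here is a minimal pandas sketch of inspecting this file. The column names used below ("marker", "x", "y") are assumptions about the CSV layout, not a documented schema; check the header row of hand_condensed.csv before relying on them.

```python
# Peek at the condensed hand-position data.
import pandas as pd

df = pd.read_csv("asl_recognizer/data/hand_condensed.csv")
print(df.head())  # inspect the actual column names first

# Markers per the list above: 0 = left hand, 1 = right hand, 2 = nose.
# If the file carries a per-row marker column (an assumption), the nose
# positions could be selected like this:
nose = df[df["marker"] == 2]
print(nose[["x", "y"]].describe())
```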

Take a look at the sample ASL recognizer video to see how the hand locations are tracked.

The videos are sentences with translations provided in the database.
For the purposes of this project, the sentences have been pre-segmented into words based on slow-motion examination of the files.
These segments are provided in the train_words.csv and test_words.csv files in the form of start and end frames (inclusive).
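A short sketch of reading a segment file and applying the inclusive-frame convention is shown below; the path and the column names ("startframe", "endframe") are assumptions, so check the actual CSV header.

```python
# Load the pre-segmented training words (path assumes the file sits in
# asl_recognizer/data/ alongside the other provided data).
import pandas as pd

train = pd.read_csv("asl_recognizer/data/train_words.csv")

# Start and end frames are inclusive, so a segment covering frames 10..14
# contains 5 frames: end - start + 1.
train["num_frames"] = train["endframe"] - train["startframe"] + 1
print(train.head())
```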

The videos in the corpus include recordings from three different ASL speakers. The mapping of speakers to videos is provided in the speaker.csv file.