Inspired by speech-to-text, this software records a variety of hand gestures as input and converts them into text!

Inspiration

Our primary inspiration was the current speech-to-text ecosystem: it is widely used and therefore well developed and effective. Unfortunately, many individuals who lack the ability to speak cannot benefit from these advancements. That is why we decided to contribute to the space by building an accessible app capable of translating a variety of hand gestures/sign language into text.

How we built it

We used TensorFlow to make our application detect and track the locations and movements of hand joints. Based on the user's joint positions, the system identifies and names gestures.
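To illustrate the idea (this is our own simplified sketch, not the app's exact code): hand-tracking models such as TensorFlow.js handpose return 21 `[x, y, z]` keypoints per hand, and a gesture can be named by checking which fingertips sit above their middle (PIP) knuckles. The function and gesture names below are hypothetical.

```typescript
type Landmark = [number, number, number]; // [x, y, z] in pixels

// Handpose-style keypoint indices for each finger's tip and PIP joint.
const FINGERS: Array<{ name: string; tip: number; pip: number }> = [
  { name: "index",  tip: 8,  pip: 6 },
  { name: "middle", tip: 12, pip: 10 },
  { name: "ring",   tip: 16, pip: 14 },
  { name: "pinky",  tip: 20, pip: 18 },
];

// A finger counts as extended when its tip is higher on screen
// (smaller y coordinate) than its PIP joint.
export function extendedFingers(landmarks: Landmark[]): string[] {
  return FINGERS
    .filter((f) => landmarks[f.tip][1] < landmarks[f.pip][1])
    .map((f) => f.name);
}

// Map the set of extended fingers to an illustrative gesture label.
export function nameGesture(landmarks: Landmark[]): string {
  const up = extendedFingers(landmarks);
  if (up.length === 0) return "fist";
  if (up.length === 4) return "open hand";
  if (up.length === 1 && up[0] === "index") return "pointing";
  return "unknown";
}
```

A real sign-language vocabulary needs far richer rules (thumb position, motion over time), but the same tip-versus-knuckle comparison is the basic building block.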

Challenges we ran into

The biggest challenge was training the model in Python and then converting it into a TensorFlow.js-compatible model.
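For reference, a Keras model saved from Python can be converted with the `tensorflowjs_converter` CLI from the `tensorflowjs` package; the paths below are placeholders, not our actual file names.

```shell
pip install tensorflowjs

# Convert a saved Keras model (.h5) into the TensorFlow.js layers format,
# producing a model.json plus binary weight shards the browser can load.
tensorflowjs_converter --input_format keras \
    ./model.h5 ./public/model
```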

Accomplishments that we’re proud of

Successfully using live video as input, which can then be processed into useful data in the form of nameable gestures.
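One practical detail of turning video into usable gesture labels (sketched here as our own illustration, not the app's exact code) is that per-frame predictions are noisy, so a label is best emitted only once it wins a majority vote over the last few frames:

```typescript
// Smooths a stream of per-frame gesture labels: push() returns a stable
// label only when one gesture holds a strict majority of the window,
// otherwise null.
export class GestureSmoother {
  private window: string[] = [];

  constructor(private size: number = 10) {}

  push(label: string): string | null {
    this.window.push(label);
    if (this.window.length > this.size) this.window.shift();
    const counts = new Map<string, number>();
    for (const l of this.window) counts.set(l, (counts.get(l) ?? 0) + 1);
    for (const [l, n] of counts) {
      if (n > this.size / 2) return l; // strict majority wins
    }
    return null;
  }
}
```

In a browser loop, each `requestAnimationFrame` tick would feed the model's latest prediction through `push()` and only display the non-null results.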

What we learned

The process of training models and implementing them in a program.

What’s next for Sign to Text

As we improve our data models, we plan to add further functionality and gesture definitions. These could include separate use cases such as text to Braille or text to Morse code, and vice versa.

Built With

  • javascript
  • machine-learning
  • next.js
  • react.js
  • tensorflow
  • typescript

Try it out

https://astonhack2021.vercel.app/

https://astonhack2021-bnct2bqmk-asobirov.vercel.app/

https://github.com/asobirov/astonhack2021

Created By

Akbarshokh Sobirov

Milosz Paszkowski

Michael Monfries