asl-recognizer
Here are 64 public repositories matching this topic...
Real-time fingerspelling video recognition achieving 74.4% letter accuracy on ChicagoFSWild+
- Updated Jan 4, 2025 - Python
A simple sign language detection web app built using Next.js and TensorFlow.js. Winner of the 2020 Congressional App Challenge. Developed by Mahesh Natamai and Arjun Vikram.
- Updated May 9, 2021 - JavaScript
Signapse is an open source software tool for helping everyday people learn sign language for free!
- Updated May 2, 2022 - C++
ASL gesture recognition from the webcam using OpenCV & CNN
- Updated May 22, 2019 - Python
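A webcam-based CNN recognizer like the one above typically loops over frames, crops and normalizes the hand region, and decodes the network's class scores into a letter. The capture loop and model call are sketched only as comments below (they assume OpenCV and a trained model, neither of which comes from this listing); the score-decoding step is shown concretely:

```python
import math

# Class labels a fingerspelling CNN typically predicts (A-Z).
LABELS = [chr(ord("A") + i) for i in range(26)]

def decode_scores(scores):
    """Turn raw per-class scores (logits) into (letter, probability)."""
    # Softmax over the logits, shifted by the max for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

# The surrounding loop would look roughly like this (requires cv2 and a model):
#   cap = cv2.VideoCapture(0)
#   while True:
#       ok, frame = cap.read()
#       roi = preprocess(frame)   # crop hand region, resize, scale to [0, 1]
#       letter, p = decode_scores(model.predict(roi[None])[0])
#       if p > 0.8:               # confidence threshold to reduce flicker
#           print(letter)

# Example: a logit vector where index 0 ("A") dominates.
letter, prob = decode_scores([5.0] + [0.0] * 25)
```

Thresholding on the decoded probability is a common way to suppress spurious letters while the hand is transitioning between signs.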
A simple app that analyses and recognises the alphabet in sign language using machine learning
- Updated Jun 12, 2022 - Swift
Portable sign language (ASL) recognition device that uses real-time, efficient programming to help deaf and mute users by establishing a two-way communication channel with people who have never studied sign language.
- Updated Apr 20, 2022 - C++
A computer-vision project that uses a CNN to translate American Sign Language (ASL) to text and speech.
- Updated Nov 21, 2022 - Jupyter Notebook
EchoSign was built as part of an IBM internship project, which it won. It uses transfer learning on MobileNet with a hand-curated dataset of ASL images. The classification website was developed in Flask and uses TTS technology for ASL text-to-speech conversion.
- Updated Mar 29, 2024 - HTML
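Classification yields one letter per frame; before a site like EchoSign can hand anything to a TTS engine, those letters have to be assembled into words. A minimal sketch of that buffering step, assuming "space" and "del" control gestures (common in ASL alphabet datasets, but not confirmed by this listing) and, in the comment, a hypothetical pyttsx3 hand-off:

```python
def assemble_text(predictions):
    """Fold a sequence of per-frame letter predictions into spoken-ready text.

    "space" and "del" are assumed control gestures: "space" ends a word,
    "del" undoes the last letter.
    """
    letters = []
    for label in predictions:
        if label == "space":
            letters.append(" ")
        elif label == "del":
            if letters:
                letters.pop()  # undo the last recognized letter
        else:
            letters.append(label)
    return "".join(letters)

text = assemble_text(["H", "I", "space", "M", "O", "N", "del", "M"])

# The assembled text would then be spoken, e.g. with pyttsx3 (assumed usage):
#   import pyttsx3
#   engine = pyttsx3.init()
#   engine.say(text)
#   engine.runAndWait()
```

Keeping the assembly step separate from recognition also makes it easy to unit-test without a camera or model.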
American Sign Language alphabet recognition with a deep-learning CNN architecture.
- Updated May 17, 2022 - Jupyter Notebook
Sign Language Alphabet Recognition System that automatically detects American Sign Language and converts gestures from a live webcam into text and speech.
- Updated Feb 11, 2020 - Jupyter Notebook
Repo for storing files for a graduation project. It holds code for the CV and NLP models.
- Updated Apr 6, 2023 - Jupyter Notebook
This repository contains a transformer-based model for real-time American Sign Language (ASL) recognition. The model leverages transformer architecture to interpret ASL gestures and utilizes the Gemini-Pro LLM API for constructing sentences from recognized ASL signs.
- Updated Jun 24, 2024 - Jupyter Notebook
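In a pipeline like the one above, the recognizer emits a stream of gloss tokens and the LLM is asked to turn them into a grammatical sentence. A minimal sketch of the prompt-building step; the prompt wording and the Gemini call in the comment are assumptions for illustration, not taken from the repository:

```python
def build_sentence_prompt(glosses):
    """Build an LLM prompt asking for a grammatical English sentence
    from a sequence of recognized ASL gloss tokens."""
    stream = " ".join(glosses)
    return (
        "The following words were recognized from American Sign Language, "
        "in order: " + stream + ". "
        "Rewrite them as one grammatical English sentence."
    )

prompt = build_sentence_prompt(["ME", "STORE", "GO", "TOMORROW"])

# Assumed usage with the Gemini API (requires google-generativeai and an API key):
#   import google.generativeai as genai
#   model = genai.GenerativeModel("gemini-pro")
#   sentence = model.generate_content(prompt).text
```

Separating gloss recognition from sentence construction lets the vision model stay small while the LLM handles English grammar.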
The purpose of the Sign-Interfaced Machine Operating Network, or SIMON, is to develop a machine learning classifier that translates a discrete set of ASL sign language presentations from images of a hand into a response from another system.
- Updated Mar 18, 2019 - Python
There are many applications where hand gestures can be used to interact with systems such as video games, UAVs, and medical equipment. These hand gestures can also be used by handicapped people to interact with such systems. The main focus of this work is to create a vision-based system to identify sign language gestures from real-…
- Updated Jun 3, 2021 - Python
Sign Language Detection system based on computer vision and deep learning, using the OpenCV and TensorFlow/Keras frameworks.
- Updated Sep 6, 2021 - Jupyter Notebook
A simple web app for enabling people to communicate with deaf and mute people.
- Updated Jan 22, 2020 - Jupyter Notebook
A pi setup to recognize ASL signs using a pre-trained CNN model and speak it out using a suitable TTS engine with adaptive settings.
- Updated Dec 8, 2021 - Python
Bangla Sign Language Interpreter using CNN
- Updated Jul 23, 2018 - C++