Background
Real-time feedback is known to enhance learning by keeping learners engaged, much as a teacher does in a classroom. Applications with automated feedback are designed to mimic the prompt feedback teachers provide. Advances in artificial intelligence (AI) have made this automated feedback granular and detailed, yet users still regard it with distrust and confusion about its effectiveness, especially learners of American Sign Language (ASL).
Invention Description
Researchers at Arizona State University have developed ASLHelp, a self-paced American Sign Language learning application that uses a two-step mechanism to evaluate the effectiveness of automated feedback. ASLHelp provides context-based, explainable feedback to facilitate improved learning outcomes.
In the first step, users learn ASL gestures for everyday words as performed by experts; in the second, the application evaluates their progress by having them perform the gesture for a word they have learned. The system compares the expert's gesture execution with the learner's self-recorded video and checks the performance for correctness using recognition features for location, movement, and handshape.
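The disclosure does not describe the underlying feature extraction or scoring, but the comparison step can be illustrated with a minimal Python sketch. It assumes each gesture video has already been reduced to feature vectors for location, movement, and handshape; every name in it (GestureFeatures, compare_gesture, the 0.9 similarity threshold) is a hypothetical stand-in, not part of the actual system.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical per-gesture feature summary; a real system would extract
# these vectors from video with a pose/hand-tracking model.
@dataclass
class GestureFeatures:
    location: List[float]   # e.g., hand position relative to the body
    movement: List[float]   # e.g., direction/speed descriptors over the gesture
    handshape: List[float]  # e.g., finger-joint configuration vector

def _similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two feature vectors (0..1 for non-negative input)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def compare_gesture(expert: GestureFeatures, learner: GestureFeatures,
                    threshold: float = 0.9) -> Dict[str, str]:
    """Check each recognition feature against the expert reference
    and return per-feature, explainable feedback."""
    feedback = {}
    for name in ("location", "movement", "handshape"):
        score = _similarity(getattr(expert, name), getattr(learner, name))
        feedback[name] = ("correct" if score >= threshold
                          else f"needs work (similarity {score:.2f})")
    return feedback

if __name__ == "__main__":
    # Toy vectors standing in for features extracted from the two videos.
    expert = GestureFeatures([0.5, 0.7], [1.0, 0.0], [0.2, 0.9, 0.4])
    learner = GestureFeatures([0.5, 0.6], [0.9, 0.1], [0.8, 0.1, 0.3])
    for feature, verdict in compare_gesture(expert, learner).items():
        print(f"{feature}: {verdict}")
```

Scoring each feature separately, rather than producing a single pass/fail verdict, is what allows the feedback to be explainable: the learner is told which aspect of the gesture (location, movement, or handshape) deviated from the expert reference.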
Potential Applications
- Combat training
- Medical surgery
- Performance coaching
- Deaf and Hard of Hearing (DHH) education
Benefits & Advantages
- Improved gesture execution: the system can pick up finer details in the learner's videos than human experts can
- Extensibility of the automated feedback approach to gesture-based applications in other fields
Related Publication: Engendering Trust in Automated Feedback: A Two Step Comparison of Feedbacks in Gesture Based Learning