Publications


2023

FingerSpeller: Camera-Free Text Entry Using Smart Rings for American Sign Language Fingerspelling Recognition

David Martin, Zikang Leng, Tan Gemicioglu, Jon Womack, Jocelyn Heath, Bill Neubauer, Hyeokhyen Kwon, Thomas Plötz, Thad Starner

Published in Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 2023

Abstract

Camera-based text entry using American Sign Language (ASL) fingerspelling has become more feasible due to recent advancements in recognition technology. However, there are numerous situations where camera-based text entry may not be ideal or acceptable. To address this, we present FingerSpeller, a solution that enables camera-free text entry using smart rings. FingerSpeller utilizes accelerometers embedded in five smart rings from TapStrap, a commercially available wearable keyboard, to track finger motion and recognize fingerspelling. A Hidden Markov Model (HMM)-based backend with continuous Gaussian modeling facilitates accurate recognition, as evaluated in a real-world deployment. In offline isolated-word recognition experiments conducted on a 1,164-word dictionary, FingerSpeller achieves an average character accuracy of 91% and word accuracy of 87% across three participants. Furthermore, we demonstrate that the system can be downsized to only two rings while maintaining approximately 90% of the accuracy of the original configuration. This reduction in form factor enhances user comfort and significantly improves the overall usability of the system.
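
For readers curious how such a backend can be put together, below is a minimal sketch of isolated-word recognition with one Gaussian-emission HMM per vocabulary word, scored against accelerometer frames. This is an illustration only, not the FingerSpeller implementation: the hmmlearn library, the 15-dimensional feature layout (three accelerometer axes × five rings), and the state count are all assumptions.

```python
# Minimal sketch of an isolated-word recognizer built from per-word
# Gaussian HMMs. Not the authors' code: hmmlearn, the feature layout,
# and all hyperparameters below are assumptions.
import numpy as np
from hmmlearn import hmm


def train_word_models(training_data, n_states=5):
    """Fit one Gaussian HMM per vocabulary word.

    training_data: dict mapping word -> list of (T_i, 15) arrays, i.e.
    per-frame 3-axis accelerometer readings from five rings.
    """
    models = {}
    for word, sequences in training_data.items():
        X = np.concatenate(sequences)           # stack frames of all examples
        lengths = [len(s) for s in sequences]   # per-example frame counts
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[word] = m
    return models


def recognize(models, sequence):
    """Return the word whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda w: models[w].score(sequence))
```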

Poster · Tags: accessibility, sensing, subtle-interaction

Transitional Gestures for Enhancing ITR and Accuracy in Movement-based BCIs

Tan Gemicioglu, Yuhui Zhao, Melody Jackson, Thad Starner

Published in Proceedings of the 10th International Brain-Computer Interface Meeting, 2023

Abstract

BCIs using imagined or executed movement enable subjects to communicate by performing gestures in sequential patterns. Conventional interaction methods have a one-to-one mapping between movements and commands, but new methods such as BrainBraille have instead used a pseudo-binary encoding where multiple body parts are tensed simultaneously. However, non-invasive BCI modalities such as EEG and fNIRS have limited spatial specificity and have difficulty distinguishing simultaneous movements. We propose a new method that uses transitions in gesture sequences to combinatorially increase the number of possible commands without simultaneous movements. We demonstrate the efficacy of transitional gestures in a pilot fNIRS study, where accuracy increased from 81% to 92% when distinguishing transitions of two movements rather than two movements independently. We calculate ITR for a potential transitional version of BrainBraille, where ITR would increase from 143 bpm to 218 bpm.
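
The ITR figures above come from an information transfer rate calculation; the sketch below applies the standard Wolpaw formula for bits per minute. The class count and trial duration in the example call are placeholders, not parameters taken from the paper.

```python
# Standard Wolpaw information transfer rate, in bits per minute.
# Illustration only: the class count and trial duration used in the
# example call are placeholders, not values from the paper.
import math


def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Bits per minute for an n_classes-way selection at the given accuracy."""
    bits = math.log2(n_classes)
    if 0 < accuracy < 1:
        bits += accuracy * math.log2(accuracy)
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1))
    return bits * 60.0 / trial_seconds


# Hypothetical example: a 4-class selection with 1-second trials.
print(wolpaw_itr(4, 0.81, 1.0))  # lower-accuracy condition
print(wolpaw_itr(4, 0.92, 1.0))  # higher-accuracy condition
```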

Poster · Tags: brain-computer-interface, gesture

2022

Tongue Gestures for Hands-Free Interaction in Head Worn Displays

Tan Gemicioglu, Mike Winters, Yu-Te Wang, Ivan Tashev

Published in Proceedings of the 2022 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 2022

Abstract

Head-worn displays are often used in situations where users' hands may be occupied or otherwise unusable due to permanent or situational movement impairments. Hands-free interaction methods like voice recognition and gaze tracking allow accessible interaction with reduced limitations for user ability and environment. Tongue gestures offer an alternative method of private, hands-free and accessible interaction. However, past tongue gesture interfaces come in intrusive or otherwise inconvenient form factors, preventing their implementation in head-worn displays. We present a multimodal tongue gesture interface using existing commercial headsets and sensors located only in the upper face. We consider design factors for choosing robust and usable tongue gestures, introduce eight gestures based on these criteria, and discuss early work towards tongue gesture recognition with the system.

Poster · Tags: gesture, sensing, subtle-interaction