The field of brain-computer interfaces (BCIs) is growing rapidly, but reliable learning resources for students and new researchers are scarce. As part of my role on the Postdoc & Student Committee of the BCI Society, I’m creating a public repository of tutorials for teaching various topics in BCI.
Tongue gestures are an accessible and subtle method for interacting with wearables, but past studies have relied on custom hardware with a single sensing modality. At Microsoft Research, we used multimodal sensors in a commercial VR headset and EEG headband to build a 50,000-gesture dataset and a real-time classifier. We also invented a new interaction method that combines tongue and gaze to enable faster gaze-based selection in hands-free interactions.
With the COVID-19 pandemic, online delivery of public health messages has become a critical part of public healthcare. In a controlled experiment, we examined how the credibility of public health messages about COVID-19 varies across platform (Twitter vs. the original website) and source (CDC, Georgia Department of Health, independent academics).
For elderly people with cognitive impairments, the kitchen can be dangerous. To reduce the risks of burns, falling objects, and memory lapses in the kitchen, we prototyped an intelligent stovetop appliance and a mobile app interface. We conducted a Wizard of Oz study of the prototype and collected usability data from interviews, task performance, and NASA-TLX ratings.
Despite recent attention, Horizon Worlds hasn’t been studied extensively as a social VR platform. Using ethnographic methods, my group studied its online VR community. Based on observational reports and interviews, we found the creative community of world designers to be a prototypical community of practice for designing social VR experiences.
As part of a Neuromatch Academy student collaboration, we investigated working memory activity through N-back tasks with different sequence lengths and stimuli, based on the Human Connectome Project’s task-based fMRI data. We localized and characterized prominent regions of activation using general linear models (GLMs).
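The core of the GLM approach can be sketched in a few lines. This is a toy illustration with synthetic data, not the actual HCP analysis: the block timing, HRF shape, and signal amplitude are all made up. Each voxel’s time series is regressed onto a task regressor (a boxcar of task blocks convolved with a hemodynamic response), and the fitted beta weight measures task-related activation:

```python
import numpy as np

# Toy design: 0/1 boxcar marking N-back task blocks (assumed timing)
n_scans = 200
boxcar = np.zeros(n_scans)
boxcar[20:40] = boxcar[80:100] = boxcar[140:160] = 1.0

# Gamma-shaped kernel as a stand-in for the canonical hemodynamic response
t = np.arange(20)
hrf = (t ** 5) * np.exp(-t)
hrf /= hrf.sum()
regressor = np.convolve(boxcar, hrf)[:n_scans]

# Design matrix: task regressor plus intercept
X = np.column_stack([regressor, np.ones(n_scans)])

# Synthetic voxel time series: true task amplitude 2.0 plus noise
rng = np.random.default_rng(0)
y = 2.0 * regressor + 0.5 + rng.normal(0, 0.3, n_scans)

# Ordinary least squares; beta[0] estimates the task activation
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"task beta: {beta[0]:.2f}")
```

In a real analysis this fit is run at every voxel (with nuisance regressors for motion and drift), and the beta maps are thresholded to localize regions of activation.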
Learning piano is difficult, especially for older learners with busy lives. Passive haptic learning can reduce time spent practicing piano by delivering instructional tactile cues. We designed a custom vibrotactile haptic glove for daily wear, enabling faster learning of piano skills. I led a group of undergraduate and graduate students in manufacturing the glove hardware, designing a web portal, and organizing user studies to evaluate performance.
We developed a new brain-computer interface that uses fNIRS to detect attempted motor movement in different regions of the body. By converting attempted motions to language, the system enables more versatile communication options for people with movement disabilities. For my undergraduate thesis, I explored how transitional gestures may enable higher accuracy and information transfer with brain-computer interfaces.
Traditionally, computer vision models are trained using large datasets gathered online. We investigated a new method for training unsupervised object recognition models using egocentric computer vision from head-worn displays. In particular, we aimed to classify objects for order picking in warehouses.
Estimating hand poses is valuable for gesture interaction and hand tracking but often requires expensive depth cameras. Stereo cameras capture multiple perspectives of the hand, allowing depth to be recovered. We created a pipeline for estimating the locations of hand and finger keypoints from a stereo camera using deep convolutional neural networks.
Silent speech systems provide a means of communication for people with movement disabilities like muscular dystrophy while preserving privacy. SilentSpeller is the first silent speech system usable with a vocabulary of over 1,000 words while the user is in motion. We built a novel text entry system using capacitive tongue sensing from an oral wearable device as a privacy-preserving alternative to speech recognition.
Traditionally, exoplanet discovery relies on expert astronomers or models that integrate astronomy domain knowledge. We applied machine learning and feature extraction techniques to classify stellar light curves in NASA’s Mikulski Archive with 80% accuracy, suggesting that new exoplanets can be discovered without hand-built domain models.
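The feature-extraction idea can be shown with a toy example. The light curves, features, and numbers below are synthetic stand-ins, not the archive data or the actual feature set: a transiting planet produces periodic dips in a star’s flux, so simple statistics of the flux series already begin to separate transit candidates from flat curves:

```python
import numpy as np

def light_curve_features(flux):
    """Toy summary statistics of a normalized flux series."""
    flux = np.asarray(flux, dtype=float)
    return {
        "std": flux.std(),
        "min_depth": 1.0 - flux.min(),  # deepest dip below baseline
        "skew": ((flux - flux.mean()) ** 3).mean() / flux.std() ** 3,
    }

# Synthetic examples: a flat star vs. a star with transit-like dips
rng = np.random.default_rng(1)
flat = 1.0 + rng.normal(0, 0.001, 500)
transit = flat.copy()
transit[100:110] -= 0.01  # dips repeating at the orbital period
transit[300:310] -= 0.01

f_flat = light_curve_features(flat)
f_transit = light_curve_features(transit)
print(f_transit["min_depth"] > f_flat["min_depth"])  # True: dips deepen the minimum
```

Feature vectors like these (in practice far richer, e.g. periodogram statistics) are then fed to a standard classifier, which is what removes the need for a hand-built astrophysical model.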
As part of the software team, I built a complete simulated replica of the competition course so that RoboJackets’ Intelligent Ground Vehicle Competition robots could be tested realistically. I also wrote motor control firmware and path planning algorithms to enable more accurate robot motion. Later, as project manager, I supervised the progress of the software, electrical, and mechanical teams.
I designed and applied a framework for training reinforcement learning models to control rapid-action mobile robots. My team was a finalist among 100+ teams and earned 11th place at ICRA 2018, as the only high-school team ever to compete in the challenge.
As a small high-school team, we competed in the IEEE Robotics and Automation Society’s Humanitarian Robotics and Technologies Challenge, applying machine learning for autonomous mine detection with a metal detector on a low-cost robot platform. We earned 3rd place in the competition and demonstrated the robot at ICRA 2017 as finalists.