Georgia Tech inventors have implemented FingerSound, a system for unistroke thumb gesture recognition that enables character-based input for wearable computing devices. FingerSound uses a thumb ring with a gyroscope and a contact microphone to detect unistroke gestures made against the hand. A user performs gestures by rubbing or scraping the thumb across the fingers. Input can begin at virtually any time and in any position, without requiring the user's visual attention to select each letter. Similarly, command gestures can be issued without visual attention and without the user blindly feeling around the physical environment for an interface device.
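One way to picture recognition from so few examples is template matching: each incoming thumb-motion signal is compared against a handful of stored examples per gesture using dynamic time warping (DTW). The sketch below is illustrative only, assuming simplified 1-D gyroscope traces and made-up gesture names; it is not the published FingerSound recognizer.

```python
# Hypothetical template-matching sketch for unistroke thumb gestures.
# Each gesture keeps a few recorded traces; a new trace is labeled by
# the nearest template under dynamic time warping (DTW) distance.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

def classify(sample, templates):
    """templates: label -> list of traces. Returns the nearest template's label."""
    best_label, best_dist = None, float("inf")
    for label, traces in templates.items():
        for trace in traces:
            d = dtw_distance(sample, trace)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

# Toy data: three noisy training traces per gesture, echoing the reported
# "three training samples per gesture" setup. Values are invented.
templates = {
    "swipe_up":   [[0, 1, 2, 3, 4], [0, 1, 1, 3, 4], [0, 2, 2, 3, 4]],
    "swipe_down": [[4, 3, 2, 1, 0], [4, 3, 3, 1, 0], [4, 2, 2, 1, 0]],
}

print(classify([0, 1, 2, 2, 4], templates))  # → swipe_up
```

Because DTW tolerates variation in gesture speed, a small template set can still generalize, which is consistent with the few-sample training claim above.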
FingerSound may also be useful in certain contexts for wearable computing. Imagine being in a team meeting with a head-worn display integrated into the lens of a pair of eyeglasses. Unlike the use of a mobile phone in a meeting, the head-worn display is designed to be subtle and maintain the rapport of the conversation. However, as soon as the user touches the eyeglasses to control them, it draws attention to the wearer and to the use of the system. With FingerSound, the user can place their hand under the table and give commands to the head-worn display. For situations where a short custom message needs to be composed, FingerSound allows all 36 letters and digits to be written by scraping the thumb across the fingers.
In addition, FingOrbits was developed as a wearable device with simple interaction techniques for its input capabilities. The concept is thumb-based interaction with wearables such as head-up displays or smartwatches. Users wear a 3D-printed thumb ring, containing a contact microphone and an inertial measurement unit, that communicates with the wearable. By rubbing the thumb against the fingers of the same hand, users perform specific input gestures that require little to no training. Using a signal processing and classification framework, FingOrbits can recognize up to 12 different one-handed input gestures, defined by three different movement ("rubbing") patterns that can be executed at each of the four fingers. A user study with 10 participants (7 novices, 3 experts) demonstrated that FingOrbits can distinguish the 12 thumb gestures with an accuracy of 89% to 99%, rendering the approach applicable for practical applications.
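A minimal sketch of the kind of signal-processing-and-classification pipeline described above: windowed sensor frames are reduced to simple statistical features and labeled by a nearest-centroid classifier. The feature set, classifier choice, and gesture names here are assumptions for illustration, not the published implementation.

```python
# Hypothetical feature-extraction + nearest-centroid pipeline for
# thumb-rubbing gestures. Frames stand in for short windows of IMU or
# contact-microphone samples; all values are invented toy data.
import math
from statistics import mean, stdev

def features(frame):
    """Per-frame features: mean, standard deviation, and signal energy."""
    return (mean(frame), stdev(frame), sum(x * x for x in frame))

def train_centroids(labeled_frames):
    """labeled_frames: label -> list of frames. Returns label -> mean feature vector."""
    centroids = {}
    for label, frames in labeled_frames.items():
        feats = [features(f) for f in frames]
        centroids[label] = tuple(mean(col) for col in zip(*feats))
    return centroids

def classify(frame, centroids):
    """Label a frame by the closest centroid in feature space."""
    f = features(frame)
    return min(centroids, key=lambda lbl: math.dist(f, centroids[lbl]))

# Two invented movement patterns: a slow, low-energy rub and a fast,
# oscillating one (the real system distinguishes 3 patterns x 4 fingers).
training = {
    "slow_rub": [[0.1, 0.2, 0.1, 0.2], [0.2, 0.1, 0.2, 0.1]],
    "fast_rub": [[1.0, -1.0, 1.0, -1.0], [0.9, -0.9, 1.0, -1.0]],
}
centroids = train_centroids(training)
print(classify([1.0, -0.9, 0.9, -1.0], centroids))  # → fast_rub
```

In a full pipeline, one feature vector per rubbing pattern per finger would yield the 12 gesture classes; a stronger classifier (e.g., an SVM) could replace the centroid step without changing the overall structure.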
- Simplifies the input interface between users and virtual reality systems and/or smart computing devices
- Demonstrates recognition of three sets of unistroke gestures using only three training samples per gesture
- Unistroke gestures (e.g., directional controls, digits 0-9, and Graffiti characters) can be performed subtly by the wearer without the need to look at the device
- Potential replacement for remote controls paired with ubiquitous computing devices (e.g., TVs, air conditioners, and other home devices)
- Demonstrates recognition of up to 12 different gestures through detecting rates of movement against each of the four fingers
- Wearable devices that require a user-input interface
- Virtual reality devices with rendered virtual keyboards and controls
- Wearable computing devices (e.g., Google Glass)
- Input device for any ubiquitous computing device
Wearable devices – most prominently smartwatches, fitness bands, and mobile virtual reality devices – can now be considered commodity hardware, and large proportions of the population use them in everyday life. With such popularity comes the desire to streamline input to wearable and mobile computing devices, because traditional means of interaction, such as mouse and keyboard, are typically not well suited to miniaturized mobile and wearable hardware.
In the example of mobile virtual reality (VR), users are immersed in a synthetic world where traditional computer interfaces such as the keyboard and mouse may disappear. As such, the demand for novel, effective input modalities is striking. Arguably, text input for short messages may still be necessary in such scenarios for responding to notifications from others, labeling files or objects, or controlling the operating system. This need has led to VR systems that render virtual keyboards and controls over the virtual world, where the user selects each letter with head or hand movement. Another option is to render a representation of a physical keyboard in the virtual world so that the user can locate it. Both options require significant visual and manual attention and can break the sense of immersion in the virtual world. In addition, physical keyboards (and virtual keyboards rendered at a fixed location) require the user to move to the interface, which may be awkward or distracting in a virtual world.