At the turn of the twentieth century, William Hoy transformed Major League Baseball. Arguably the most accomplished deaf player in history, he taught his team American Sign Language (ASL) to communicate on the field while keeping opponents in the dark. His silent signals, a legacy now well over a century old, also inspired umpires to make calls using hand gestures.
ASL is one of some 300 sign languages used today by roughly 70 million deaf people worldwide. But only a sliver of society understands them. Everyday tasks, like ordering at a restaurant or meeting people at social events, can be difficult. To bridge the gap, a South Korean team developed smart rings that translate finger motions into text.
Older devices usually require a jungle of cables to connect their sensors. The new rings are wireless, freeing people to use natural hand motions, and they stretch to fit different finger sizes. These upgrades make them more comfortable and reliable, wrote the team. Each ring is powered by a replaceable battery that lasts roughly 12 hours.
Fluent signers can communicate at speeds of around 100 to 150 signs per minute, similar to spoken conversation. Devices must keep up with that pace to avoid awkward pauses. So the team developed an AI-based “autocomplete” for the system that, like predictive text when typing, guesses the next word based on what’s already been signed to generate phrases and sentences on the fly.
Trained on 100 common words in ASL and International Sign Language (ISL), the wearable was over 88 percent accurate in tests, even for users with no experience.
The rings are a step toward “seamless interaction between signers and non-signers,” wrote the team.
Let’s Chat
There are a number of devices that translate sign language into text or speech, some already on the market.
One design is a bit like virtual reality gaming. It uses cameras and computer vision software to recognize hand gestures. The approach is fairly fast and accurate in the lab, but struggles in simulated real-world scenarios, where changes in lighting or background confuse the system.
Devices worn by users are more reliable. WearSign, for instance, uses sensors to capture the electrical activity of muscles during signing and translates it into text. Often, though, these devices have to be tailored to each user, a hurdle that limits adoption, since not everyone can commit to the training.
Engineers have also tried embedding tracking sensors in a smart glove. The sensors send signals through cables to a shared wireless transmitter. But it’s a bit like using tools while wearing a heavy winter glove: the devices limit natural movement and are uncomfortable for daily use.
The gloves also usually come in just one size with fixed sensor placements, wrote the team. So, depending on hand size, the sensors may sit out of position, reducing accuracy.
Put a Ring on It
To overcome these problems, the team built AI-powered rings to track the seven fingers most active in signing. (The right pinkie, left middle finger, and thumb didn’t make the cut.) The rings are worn just below the second knuckle to allow natural movement.
Each device is made of stretchy material to accommodate different finger sizes and looks more like a translucent Band-Aid than a typical ring. A tiny accelerometer captures movements like bending, curling, and holding still. The sensors are low-cost, low-power, and already used in Apple Watches, Fitbits, and other wearables. There are also onboard chips to manage power use, wafer-thin Bluetooth transmitters, and standard replaceable batteries that last nearly 12 hours.
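For a rough sense of what each ring measures, here is a minimal Python sketch, our illustration rather than the team’s code, that boils a short window of accelerometer readings down to simple features like orientation and whether the finger is moving or holding still. The feature names and the motion threshold are hypothetical.

```python
import numpy as np

def summarize_window(accel):
    """Boil a window of 3-axis accelerometer samples down to simple features.

    accel: array of shape (n_samples, 3) in g's. The features here are
    invented, chosen only to illustrate the idea.
    """
    accel = np.asarray(accel, dtype=float)
    orientation = accel.mean(axis=0)       # average tilt of the finger
    jitter = accel.std(axis=0).mean()      # how much the finger is moving
    return {
        "orientation": orientation,
        "jitter": jitter,
        "moving": bool(jitter > 0.05),     # arbitrary still-vs-motion threshold
    }

# Example: half a second of a finger held nearly still, sampled at 100 Hz
window = np.random.normal(loc=[0.0, 0.0, 1.0], scale=0.01, size=(50, 3))
print(summarize_window(window))
```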
The rings broadcast signals to a host device, which processes the data and maintains a timeline of each movement so incoming signs aren’t scrambled in translation.
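The paper doesn’t spell out the protocol, but that bookkeeping can be pictured as a timestamp-ordered merge of the seven ring streams. Here is a minimal sketch with a hypothetical packet format:

```python
import heapq

def merge_ring_streams(streams):
    """Merge per-ring packets into one timeline, ordered by timestamp.

    streams: dict of ring id -> list of (timestamp_ms, reading) tuples,
    each list already sorted by time. Yields (timestamp_ms, ring_id,
    reading) so signs are reconstructed in the order they were made.
    """
    tagged = ([(t, rid, r) for t, r in packets] for rid, packets in streams.items())
    yield from heapq.merge(*tagged)

# Example: two rings reporting slightly out of sync
streams = {
    "right_index": [(10, "bend"), (35, "hold")],
    "left_thumb": [(12, "curl"), (30, "hold")],
}
for t, ring, reading in merge_ring_streams(streams):
    print(t, ring, reading)
```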
To identify words, the system matches gestures to a database of 100 ASL and ISL signs. For example, closing both open palms into fists means “want.” The rings can pick up signs made in motion, like “dance” or “fly,” and those with fingers held still, like “I” and “you.” For first-time users, the system was 88 percent accurate for both ASL and ISL.
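The team hasn’t published its classifier, but the core idea of matching a gesture to a stored vocabulary can be sketched as a nearest-neighbor lookup over feature vectors. The vectors below are invented purely for illustration:

```python
import numpy as np

# Toy sign "database": one stored feature vector per word (made-up numbers).
SIGN_DB = {
    "want": np.array([0.9, 0.1, 0.0]),   # both palms closing into fists
    "dance": np.array([0.2, 0.8, 0.6]),  # fingers in motion
    "I": np.array([0.0, 0.0, 0.9]),      # fingers held still
}

def classify(features):
    """Return the vocabulary word whose stored vector is closest."""
    return min(SIGN_DB, key=lambda word: np.linalg.norm(SIGN_DB[word] - features))

print(classify(np.array([0.85, 0.15, 0.05])))  # -> want
```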
To make sure conversations flow naturally, the team added an AI to track the conversation and predict what word comes next. In tests, the system autocompleted simple phrases, like “family want beautiful animal.”
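The team’s predictive model is surely more sophisticated, but the gist of autocomplete can be shown with simple next-word counts over a toy corpus of signed phrases. Only “family want beautiful animal” comes from the study; the rest is invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny corpus of signed phrases.
corpus = [
    "family want beautiful animal",  # example phrase from the study
    "family want food",              # invented filler
    "I want food",                   # invented filler
]

# Count which word follows which (a bigram model).
next_words = defaultdict(Counter)
for phrase in corpus:
    words = phrase.split()
    for a, b in zip(words, words[1:]):
        next_words[a][b] += 1

def autocomplete(word):
    """Guess the most likely next word, or None if the word is unseen."""
    return next_words[word].most_common(1)[0][0] if next_words[word] else None

print(autocomplete("want"))  # -> food
```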
While still experimental, the rings could also translate between sign languages. Because the AI learns from gestures alone, with enough training data it could eventually become a sort of Google Translate for signing.
But finger gestures fail to capture the full spectrum of sign language. Facial expressions, mouth movements, shoulder and body posture, speed, and rhythm all carry critical information, including meaning and emotion. Without this context, the system could easily miscommunicate intent. Some efforts are now returning to older video-based systems to better capture the entire signing experience, this time with sleeker hardware and far more processing power.
The team thinks the rings might be useful elsewhere too, such as in virtual or augmented reality, touchless computer interfaces, and tracking hand movements during rehabilitation.

