
Thumbs up for prosthetics: Device recognises hand gestures from arm signals

06 January 2021

Imagine typing on a computer without a keyboard, playing a video game without a controller, or driving a car without a steering wheel. That's the kind of interaction researchers envision for the new system.

Image courtesy of the Rabaey Lab

A new device can recognise hand gestures based on electrical signals it detects in the forearm.

The new device couples wearable biosensors with artificial intelligence (AI) and could one day be used to control prosthetics or to interact with almost any type of electronic device.

“Prosthetics are one important application of this technology, but besides that, it also offers a very intuitive way of communicating with computers,” says Ali Moin, who helped design the device as a doctoral student in the electrical engineering and computer sciences department at the University of California, Berkeley. Moin is co-first author of the study in Nature Electronics.

“Reading hand gestures is one way of improving human-computer interaction. While there are other ways of doing that, by, for instance, using cameras and computer vision, this is a good solution that also maintains an individual’s privacy.”

To create the hand gesture recognition system, the team collaborated with Ana Arias, a professor of electrical engineering, to design a flexible armband that can read the electrical signals at 64 different points on the forearm. The electrical signals then feed into an electrical chip, programmed with an AI algorithm capable of associating these signal patterns in the forearm with specific hand gestures.
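To make that processing step concrete, the sketch below shows, in Python, roughly what such a pipeline could look like: windows of 64-channel cuff data are reduced to simple per-channel features and matched against stored gesture templates. The channel count comes from the article; the window length, the feature choice, and the nearest-template matching are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

N_CHANNELS = 64        # electrodes on the forearm cuff (from the article)
WINDOW_SAMPLES = 200   # samples per analysis window (hypothetical value)

def extract_features(window):
    """Per-channel mean absolute value, a common surface-EMG feature."""
    # window: array of shape (WINDOW_SAMPLES, N_CHANNELS)
    return np.mean(np.abs(window), axis=0)   # -> shape (N_CHANNELS,)

def classify(features, templates):
    """Return the gesture whose stored template is closest to the features."""
    gestures = list(templates)
    dists = [np.linalg.norm(features - templates[g]) for g in gestures]
    return gestures[int(np.argmin(dists))]
```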

The team succeeded in teaching the algorithm to recognise 21 individual hand gestures, including a thumbs-up, a fist, a flat hand, holding up individual fingers and counting numbers.

“When you want your hand muscles to contract, your brain sends electrical signals through neurons in your neck and shoulders to muscle fibres in your arms and hands,” Moin says. “Essentially, what the electrodes in the cuff are sensing is this electrical field. It’s not that precise, in the sense that we can’t pinpoint which exact fibres were triggered, but with the high density of electrodes, it can still learn to recognise certain patterns.”

Like other AI software, the algorithm first has to “learn” how electrical signals in the arm correspond with individual hand gestures. To do this, each user has to wear the cuff while making the hand gestures one by one.
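A per-user calibration pass of that kind could look something like the following sketch, which prompts the wearer to hold each gesture in turn and averages the resulting feature vectors into a template, reusing extract_features from the sketch above. Here record_window is a hypothetical stand-in for reading one window of 64-channel cuff data, and the repetition count is an arbitrary assumption.

```python
def calibrate(record_window, gestures, reps=10):
    """Build one averaged feature template per gesture for this user.

    record_window() is a hypothetical stand-in for acquiring one window of
    64-channel cuff data while the wearer holds the prompted gesture.
    """
    templates = {}
    for gesture in gestures:
        feats = []
        for _ in range(reps):
            print(f"Hold gesture: {gesture}")
            feats.append(extract_features(record_window()))
        templates[gesture] = np.mean(feats, axis=0)
    return templates
```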

However, the new device uses a type of advanced AI called a hyperdimensional computing algorithm, which is capable of updating itself with new information.

For instance, if the electrical signals associated with a specific hand gesture change because a user’s arm gets sweaty, or they raise their arm above their head, the algorithm can incorporate this new information into its model.

“In gesture recognition, your signals are going to change over time, and that can affect the performance of your model,” Moin says. “We were able to greatly improve the classification accuracy by updating the model on the device.”
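The article names hyperdimensional computing but does not describe the chip's exact encoding, so the sketch below is only a generic illustration of the idea: feature vectors are projected into a very high-dimensional bipolar "hypervector", each gesture is represented by an accumulated prototype, and adapting to new conditions amounts to bundling freshly labelled examples into the relevant prototype. The dimensionality, the random-projection encoder, and the cosine-similarity decision rule are all assumptions made for illustration, not the authors' on-chip algorithm.

```python
import numpy as np

D = 10_000   # hypervector dimensionality (assumed; typical for HDC work)
rng = np.random.default_rng(0)

class HDClassifier:
    """Minimal hyperdimensional-computing classifier with online updates."""

    def __init__(self, n_features, labels):
        # Fixed random projection used to encode feature vectors.
        self.projection = rng.standard_normal((D, n_features))
        # One accumulated prototype hypervector per gesture label.
        self.prototypes = {label: np.zeros(D) for label in labels}

    def encode(self, features):
        # Project into D dimensions and bipolarise to {-1, +1}.
        return np.sign(self.projection @ features)

    def train(self, features, label):
        # "Bundle" the encoded example into its class prototype.
        self.prototypes[label] += self.encode(features)

    def predict(self, features):
        hv = self.encode(features)
        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        return max(self.prototypes, key=lambda l: cosine(self.prototypes[l], hv))

    def update(self, features, label):
        # Online adaptation: fold a newly observed example (e.g. recorded
        # with a sweaty or raised arm) into the existing prototype.
        self.train(features, label)
```

In a scheme like this, correcting a misclassified window with update() folds the new condition into the model on the spot, which mirrors the on-device adaptation Moin describes.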

Another advantage of the new device is that all of the computing occurs locally on the chip: no personal data are transmitted to a nearby computer or device. Not only does this speed up the computing time, but it also ensures that personal biological data remain private.

“When Amazon or Apple creates their algorithms, they run a bunch of software in the cloud that creates the model, and then the model gets downloaded onto your device,” says Jan Rabaey, a professor of electrical engineering and senior author of the paper.

“The problem is that then you’re stuck with that particular model. In our approach, we implemented a process where the learning is done on the device itself. And it is extremely quick: You only have to do it one time, and it starts doing the job. But if you do it more times, it can get better. So, it is continuously learning, which is how humans do it.”

While the device is not ready to be a commercial product yet, Rabaey says that it could likely get there with a few tweaks.

“Most of these technologies already exist elsewhere, but what’s unique about this device is that it integrates the biosensing, signal processing and interpretation, and artificial intelligence into one system that is relatively small and flexible and has a low power budget,” Rabaey says.

Video courtesy of UC Berkeley.

