MIT’s virtual ‘guide dog’ could help the visually impaired

With a 3D camera and braille feedback, the system helps people interact with the world.

Selena Larson

A tiny chip could be the powerful, driving force behind new technologies that let the visually impaired navigate the world.

Researchers at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory developed a chip and navigation system, or virtual “guide dog,” that’s worn around the neck. It uses a Texas Instruments 3D-video camera and algorithms to analyze the space in front of the wearer and describe what the camera sees. The computer relays that information to a handheld braille interface, alerting the wearer when something is about to appear in their path.

Dongsuk Jeon, who was a postdoc at MIT’s Microsystems Research Laboratories when this technology was developed, explained that the handheld device can determine a safe walking distance in different directions; the computer sends that data to the device over Bluetooth. The device has five columns of four pins each that react to objects in front of the wearer.


“For instance, if there is nothing in front then all the pins are down,” Jeon said in an email to the Daily Dot. “If some obstacle appears in one direction from far away, only the top pin in that column will go up. As that obstacle gets closer, more pins will go up in that column towards the bottom.” 
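In rough pseudocode, the behavior Jeon describes maps each of the five directions to a count of raised pins based on how close the nearest obstacle is. The sketch below is purely illustrative; the names, thresholds, and sensing range are assumptions, not MIT’s actual firmware.

```python
NUM_COLUMNS = 5          # directions across the wearer's field of view
PINS_PER_COLUMN = 4      # top pin = far obstacle, all four pins = very close
MAX_RANGE_M = 4.0        # assumed maximum sensing distance, in meters

def pins_for_distance(distance_m):
    """Return how many pins to raise for the nearest obstacle in one column."""
    if distance_m is None or distance_m >= MAX_RANGE_M:
        return 0                              # nothing ahead: all pins stay down
    # Closer obstacles raise more pins, filling the column from the top down.
    band = MAX_RANGE_M / PINS_PER_COLUMN
    raised = PINS_PER_COLUMN - int(distance_m // band)
    return min(max(raised, 1), PINS_PER_COLUMN)

def braille_frame(nearest_by_column):
    """Map per-direction obstacle distances to a five-column pin pattern."""
    return [pins_for_distance(d) for d in nearest_by_column]

# Example: clear on the left, an obstacle ~3.5 m ahead, a wall ~0.5 m to the right.
print(braille_frame([None, None, 3.5, 2.0, 0.5]))   # -> [0, 0, 1, 2, 4]
```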

The navigation system, which is about the size of a pair of binoculars, is still a prototype, but once it moves beyond the research facility, it has the potential to change the way visually impaired individuals interact with the spaces around them. Jeon said the team is working on something even smaller, about the size of a flashlight, that the wearer would simply point in any direction to recognize what’s in front of them.

The chip’s processing power matches that of common processors but requires roughly one-thousandth the power to run the same workload. Anything the 3D-video camera captures is converted into what’s called a point cloud, the university explains: a representation of the camera’s footage based on the surfaces of objects. The navigation system runs an algorithm over the point cloud to determine how close things are to one another and to the person.
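To give a sense of what that kind of pass looks like, here is a minimal sketch that takes the camera’s 3D points (wearer at the origin) and finds the nearest obstacle in each of several horizontal sectors. It is an assumption-laden illustration, not MIT’s algorithm, and the field of view and sector count are made up.

```python
import math

def nearest_by_sector(points, num_sectors=5, fov_degrees=60.0):
    """points: iterable of (x, y, z) in meters, with z = forward, x = right."""
    half_fov = math.radians(fov_degrees) / 2.0
    nearest = [None] * num_sectors
    for x, y, z in points:
        if z <= 0:
            continue                      # ignore anything behind the camera
        angle = math.atan2(x, z)          # bearing relative to straight ahead
        if abs(angle) > half_fov:
            continue                      # outside the camera's field of view
        sector = int((angle + half_fov) / (2 * half_fov) * num_sectors)
        sector = min(sector, num_sectors - 1)
        dist = math.sqrt(x * x + y * y + z * z)
        if nearest[sector] is None or dist < nearest[sector]:
            nearest[sector] = dist
    return nearest
```

A result like this could feed directly into the pin-mapping sketch above, one sector per braille column.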


Additionally, researchers lowered power consumption by modifying the point-cloud algorithm to analyze objects in a set pattern instead of sporadically, and by having the system recognize when the wearer isn’t moving so it can scale back processing.
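A hedged sketch of that second idea: compare successive depth frames at a fixed grid of sample points (a “set pattern”), and skip the expensive full pass when the scene has barely changed. The sampling step and threshold here are assumptions for illustration, not the chip’s real logic.

```python
def scene_changed(prev_depth, curr_depth, step=16, threshold=0.05):
    """Compare depth frames (flat lists of meters) at a fixed grid of indices."""
    if prev_depth is None:
        return True                        # first frame: always process
    indices = range(0, len(curr_depth), step)
    moved = sum(abs(curr_depth[i] - prev_depth[i]) > threshold for i in indices)
    return moved > 0.1 * len(indices)      # little change: skip the full point-cloud pass
```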

MIT’s system isn’t the first to use 3D aids to help visually impaired individuals, but it is the smallest.

Intel, for instance, is pairing computer vision with haptic feedback. The company created customized clothing using its RealSense 3D technology that turns the wearer into a walking environmental scanner. Like MIT’s navigation system, the Intel project scans and analyzes the environment and vibrates to alert the wearer to approaching obstacles.

The virtual “guide dog” system developed by MIT is a practical, working solution, but there’s still work to be done before it’s available to consumers.

“If there is enough financial support, this may be available in the market within few years,” Jeon said.

Photo via lissalou66/Flickr (CC BY-ND 2.0)
