
MIT created a system that predicts what robots are thinking

MIT researchers hope it will become a testing system for robotics companies. 

 

Elizabeth Robinson

Tech

Posted on Nov 18, 2014   Updated on May 30, 2021, 4:38 am CDT

A group of researchers at MIT has built a virtual reality system that makes it possible to observe a robot's train of thought in real time.

At MIT's Aerospace Controls Lab, a system of ceiling-mounted projectors displays maps and the potential trajectories of a robot. In one demonstration, a red dot represents a person, and a series of green lines depicts the potential paths an autonomous robot could take to reach a destination without running into that person.
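To make the idea concrete, here is a minimal Python sketch, not the lab's actual software, of how candidate paths might be generated and screened against a person's position before being displayed as "safe" routes. The distance threshold and path-sampling scheme are assumptions for illustration only.

```python
import math
import random

# Hypothetical illustration: generate jittered candidate paths to a goal and keep
# only the ones that stay clear of a person's position, similar in spirit to the
# green "safe path" lines the projectors display around the red dot.

SAFE_DISTANCE = 0.5  # metres the robot must keep from the person (assumed value)

def sample_path(start, goal, waypoints=20, jitter=0.3):
    """Return a list of (x, y) points from start to goal with random lateral jitter."""
    path = []
    for i in range(waypoints + 1):
        t = i / waypoints
        x = start[0] + t * (goal[0] - start[0]) + random.uniform(-jitter, jitter)
        y = start[1] + t * (goal[1] - start[1]) + random.uniform(-jitter, jitter)
        path.append((x, y))
    return path

def path_is_safe(path, person):
    """A path is safe if every waypoint stays at least SAFE_DISTANCE from the person."""
    return all(math.dist(point, person) >= SAFE_DISTANCE for point in path)

start, goal, person = (0.0, 0.0), (5.0, 5.0), (2.5, 2.4)
candidates = [sample_path(start, goal) for _ in range(50)]
safe_paths = [p for p in candidates if path_is_safe(p, person)]
print(f"{len(safe_paths)} of {len(candidates)} candidate paths avoid the person")
```

In the lab's setup, the surviving candidates are what an observer would see rendered around the robot, which is what makes the planner's reasoning visible at a glance.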

The project grew out of the researchers' own confusion. After robotics presentations, they found that audiences were sometimes left wondering why a robot had made a particular decision. By laying the decision-making process out visually, researchers can more easily pinpoint where a part of an algorithm (the set of instructions controlling the robot's behavior) might lead to an error. Previously, they would have to dig through the lowest levels of code to find where a mistake had been made, which took far more time.

“What we’d really like to be able to do is to be able to read the minds of our autonomous agents and get an idea of how their decision-making processes work,” Shayegan Omidshafiei, a graduate student, said in a video explaining the lab.

The researchers are currently focused on self-navigating machines, commonly called drones, that can complete a set of tasks without human intervention. Since the federal government bans many uses of drones, the Aerospace Controls Lab lets researchers test them in a simulated environment before deploying them in real-world situations. Still, the researchers acknowledge that this type of testing cannot substitute for testing in an actual environment. What the lab offers is the ability to stage an array of situations, so careful research can be done without putting the robots into the real world before they are ready, giving more clarity both to the people developing the machines and to the people observing them in action.

One real-world application is the emerging use of drones to fight wildfires. Since this kind of drone is still in development, researchers can use the lab to project an environment from above, as if the machines were flying over a forest fire. Researchers would then project fires onto various parts of the map and direct the robots to gather images of them. The drone would survey the fire to determine which types of vegetation are burning and then attempt to put out the flames, starting with the spots most likely to spread.
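As a rough sketch of that scenario (the grid, vegetation types, and flammability scores below are assumptions of my own, not the lab's simulation), a simulated drone might rank projected fire cells by how likely they are to spread and survey the riskiest ones first:

```python
from dataclasses import dataclass

# Hypothetical sketch: a simulated drone surveys projected "fire" cells on a grid map
# and prioritizes the ones most prone to spreading, based on assumed vegetation scores.

@dataclass
class FireCell:
    x: int
    y: int
    vegetation: str  # e.g. "dry_brush", "grass", "wet_forest"

# Assumed flammability scores; a real system would use vegetation and weather data.
FLAMMABILITY = {"dry_brush": 0.9, "grass": 0.6, "wet_forest": 0.2}

def survey_order(cells):
    """Order fire cells so the drone images the most spread-prone areas first."""
    return sorted(cells, key=lambda c: FLAMMABILITY.get(c.vegetation, 0.0), reverse=True)

fires = [FireCell(2, 3, "grass"), FireCell(7, 1, "dry_brush"), FireCell(4, 8, "wet_forest")]
for cell in survey_order(fires):
    print(f"Survey cell ({cell.x}, {cell.y}) - vegetation: {cell.vegetation}")
```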

Beyond firefighting drones, the lab can be used to research the kinds of technologies that could go into package-delivery drones as well as self-driving vehicles.

“We’re hoping that this system can become a future indoor environment in which private institutions can test then research their vehicles before deploying them into the real world,” Omidshafiei said.

H/T IEEE Spectrum | Photo via OiMax/Flickr (CC BY 2.0)
