Robots that can think for themselves are speeding up undersea research

Why should scientists spend their time reprogramming robots to perform menial tasks when they can just write software that lets the robots think for themselves?

Researchers at the Massachusetts Institute of Technology did exactly that, developing a new autonomous mission-planning system called Enterprise that lets autonomous underwater vehicles (AUVs)—essentially, robotic submarines—make their own decisions.

These underwater robots are deployed to collect data on oceanic habitats, including the health of the species that live there. Usually, an engineer is responsible for writing code that tells the robot what to do: where to maneuver, which way to move, and how long to spend exploring the ocean.

With MIT’s autonomous mission-planning system, submarine robots can perform time-consuming tasks without an engineer specifically telling them to do so. The researchers say the system will let a vehicle plan and execute a research mission, and even recover on its own from a hardware malfunction.

Scientists tested a fully autonomous decision-making underwater glider among its human-controlled peers in March. They found that it made mission-planning and directional decisions entirely on its own, even avoiding collisions with the other robots in the water. Scientists still issue the higher-level commands, but the robot’s own programming decides where to go and what data to collect along the way.

“We wanted to show that these vehicles could plan their own missions, and execute, adapt, and re-plan them alone, without human support,” MIT aeronautics and astronautics professor Brian Williams, the lead developer of Enterprise, said in a statement. “With this system, we were showing we could safely zigzag all the way around the reef, like an obstacle course.”

The Enterprise system, named after the famous starship in Star Trek, features a chain of command among its software components modeled loosely on that fictional ship’s bridge crew. One piece acts as “captain,” deciding what to do and where to go, while a “navigator” component decides how the robot will get there. The “doctor,” or “engineer,” component identifies and fixes malfunctions.
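The division of labor described above can be sketched as a toy hierarchical planner. This is a hypothetical illustration only, not MIT's actual Enterprise code: the class names, the grid-based route planner, and the fault list are all invented for the example.

```python
class Captain:
    """Decides what to do next: works through the mission's goal list in order."""
    def __init__(self, goals):
        self._goals = list(goals)

    def next_goal(self):
        return self._goals.pop(0) if self._goals else None


class Navigator:
    """Decides how to get there: steps across a grid toward the goal,
    sidestepping any obstacle cell (assumes obstacles are sparse)."""
    def plan(self, start, goal, obstacles):
        path, (x, y) = [start], start
        while (x, y) != goal:
            # Move one cell toward the goal on each axis.
            nx = x + (goal[0] > x) - (goal[0] < x)
            ny = y + (goal[1] > y) - (goal[1] < y)
            if (nx, ny) in obstacles:
                nx, ny = x, y + 1  # simple detour: sidestep vertically
            x, y = nx, ny
            path.append((x, y))
        return path


class Engineer:
    """Diagnoses faults: repairable ones are fixed, anything else aborts."""
    REPAIRABLE = {"sensor-drift", "battery-low"}

    def repair(self, faults):
        return all(f in self.REPAIRABLE for f in faults)


def run_mission(goals, obstacles, faults=()):
    """Run the chain of command: engineer checks health, captain picks
    goals, navigator routes around obstacles to each one."""
    captain, navigator, engineer = Captain(goals), Navigator(), Engineer()
    position, log = (0, 0), []
    if not engineer.repair(faults):
        return [("abort", position)]
    while (goal := captain.next_goal()) is not None:
        path = navigator.plan(position, goal, obstacles)
        position = path[-1]
        log.append(("reached", goal))
    return log
```

For example, `run_mission([(2, 2)], obstacles={(1, 1)})` routes around the blocked cell and reports reaching the goal, while an unrepairable fault such as `"hull-breach"` aborts the mission before any leg is flown.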

Scientists hope that the robots’ ability to repair themselves will solve a persistent problem in undersea research: vehicles lost on the ocean floor, much as NASA loses spacecraft when the machines lose contact with Earth.

Now that robots can make decisions on their own, scientists should be able to spend more time focusing on the larger goals of undersea research missions. And because robots that can navigate on their own don’t need to stay connected to humans, they’ll be able to go deeper into the ocean than was previously possible, exploring dark corners of the murky deep that used to be unreachable by any man-made object.

H/T Discover | Photo via MIT (CC BY-NC-ND 3.0)

Selena Larson

Selena Larson is a technology reporter based in San Francisco who writes about the intersection of technology and culture. Her work explores new technologies and the way they impact industries, human behavior, and security and privacy. Since leaving the Daily Dot, she's reported for CNN Money and done technical writing for cybersecurity firm Dragos.