
24/06/2019 – Science & Technology / Autonomous Control / MIT / Driverless Cars

Autonomous for the people


A new autonomous control system developed by MIT researchers brings human-like reasoning to driverless car navigation.

 

Human drivers are exceptionally good at navigating roads they have never driven on before. We simply match what we see around us against what our GPS devices show to work out where we are and where we need to go. Driverless cars, on the other hand, struggle with this basic reasoning. In every new area, the cars must first map and analyse all new roads – a time-consuming and computationally intensive process.

 

Now, however, MIT researchers have developed an autonomous control system that ‘learns’ the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple GPS-like map. The trained system can then control a driverless car along a planned route in a brand-new area, essentially by imitating the human driver.

 

To train the system initially, a human operator controlled an automated Toyota Prius – equipped with several cameras and a basic GPS navigation system – to collect data from local suburban streets, including various road structures and obstacles. When deployed autonomously, the system successfully navigated the car along a pre-planned path in a different forested area designated for autonomous vehicle tests.

 

Downloading the road

 

“With our system, you don’t need to train on every road beforehand,” says first author Alexander Amini, an MIT graduate student. “You can download a new map for the car to navigate through roads it has never seen before.”

 

“Our objective is to achieve autonomous navigation that is robust for driving in new environments,” adds co-author Daniela Rus, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL). “For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that’s an environment it has never seen before.”

 

Point-to-point navigation

 

For years, Rus’s group has been developing ‘end-to-end’ navigation systems, which process raw sensory data and output steering commands without the need for specialised modules. Until now, however, those models were strictly designed to follow the road safely, without any real destination in mind.

 

In the new paper, the researchers advanced their end-to-end system to drive toward a goal destination in a previously unseen environment. They did this by training the system to predict a full probability distribution over all possible steering commands at any given instant while driving.
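As a rough illustration of what predicting a distribution, rather than a single command, can look like, the sketch below shows a network head that outputs a mixture of Gaussians over steering angles and is trained by imitation on the human driver’s commands. This is a minimal sketch in PyTorch, not the authors’ published code; the mixture form, layer sizes, and loss are assumptions.

# Minimal sketch (assumptions noted above): a head that turns a feature
# vector into a mixture-of-Gaussians distribution over steering angles.
import torch
import torch.nn as nn

class SteeringDistributionHead(nn.Module):
    def __init__(self, feature_dim=512, n_modes=3):
        super().__init__()
        self.n_modes = n_modes
        # For each mode: a mixture weight, a mean steering angle, a spread.
        self.fc = nn.Linear(feature_dim, n_modes * 3)

    def forward(self, features):
        params = self.fc(features).view(-1, self.n_modes, 3)
        weights = torch.softmax(params[..., 0], dim=-1)       # weights sum to 1
        means = params[..., 1]                                 # angles in radians
        stds = nn.functional.softplus(params[..., 2]) + 1e-3   # positive spread
        return weights, means, stds

def imitation_nll(weights, means, stds, human_angle):
    # Negative log-likelihood of the human driver's steering angle under the
    # predicted mixture; this is the training signal for imitating the driver.
    log_probs = torch.distributions.Normal(means, stds).log_prob(
        human_angle.unsqueeze(-1))
    return -torch.logsumexp(torch.log(weights) + log_probs, dim=-1).mean()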

 

Steering committee

 

The system uses a machine-learning model called a convolutional neural network (CNN), commonly used for image recognition. During training, the system watches and learns how to steer from a human driver. The CNN correlates steering-wheel rotations with the road curvature it observes through its cameras and the accompanying map. Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries.
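A minimal sketch of that idea is shown below: a small convolutional backbone that fuses a camera frame with a rendered map patch into the feature vector a steering head like the one above could consume. The architecture and layer sizes are illustrative assumptions, not the published model.

# Illustrative sketch (assumed architecture): camera frame plus a coarse
# rendered map patch in, a single feature vector out.
import torch
import torch.nn as nn

class PerceptionBackbone(nn.Module):
    def __init__(self, feature_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3 + 1, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(48, feature_dim)

    def forward(self, camera_rgb, map_patch):
        # camera_rgb: (B, 3, H, W) image; map_patch: (B, 1, H, W) road sketch
        x = torch.cat([camera_rgb, map_patch], dim=1)
        x = self.conv(x).flatten(1)
        return self.fc(x)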

 

“Initially, at a T-shaped intersection, there are many different directions the car could turn,” Rus says. “The model starts by thinking about all those directions, but as it sees more and more data about what people do, it will see that some people turn left and some turn right, but nobody goes straight. Straight ahead is ruled out as a possible direction, and the model learns that, at T-shaped intersections, it can only move left or right.”
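The same intuition can be shown with a toy tally (purely illustrative, not from the paper): count which manoeuvres human drivers actually take at T-shaped intersections in the training data and normalise the counts into a distribution; any manoeuvre that is never observed ends up with probability zero.

from collections import Counter

# Manoeuvres observed at T-shaped intersections in an imagined training set.
observed = ["left", "right", "left", "right", "right", "left", "right"]
counts = Counter(observed)
total = sum(counts.values())
distribution = {m: counts.get(m, 0) / total for m in ("left", "right", "straight")}
print(distribution)   # "straight" is never seen, so its probability is 0.0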

 

What does the map say?

 

In testing, the researchers fed the system a map outlining a randomly chosen route. When driving, the system extracts visual features from the camera, which enables it to predict road structures.

 

Importantly, the system uses maps that are easy to store and process. Autonomous control systems typically rely on LIDAR scans to create massive, complex maps: storing the city of San Francisco alone takes roughly 4,000 gigabytes (4 terabytes) of data. Maps used by the new system, by contrast, capture the entire world in just 40 gigabytes of data.

 

During autonomous driving, the MIT system also continuously matches its visual data to the map data and notes any mismatches. Doing so ensures the car stays on the safest path if it’s being fed contradictory sensor information. “In the real world, sensors do fail,” Amini says. “We want to make sure the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localise itself correctly on the road.”
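One way to picture that mismatch check (an illustrative sketch under assumed details, not the authors’ method): compare the steering estimate derived from the cameras alone with the heading the map implies at the car’s believed position, and flag the inputs as contradictory when they diverge too far.

import numpy as np

def mixture_mean(weights, means):
    # Expected steering angle of a mixture-of-Gaussians prediction.
    return float(np.sum(np.asarray(weights) * np.asarray(means)))

def check_map_agreement(vision_weights, vision_means, map_heading_rad,
                        threshold_rad=0.3):
    # Returns (agrees, error): if the camera-based estimate and the
    # map-implied heading diverge beyond the threshold, treat the inputs
    # as contradictory and fall back to the more trusted source.
    vision_heading = mixture_mean(vision_weights, vision_means)
    error = abs(vision_heading - map_heading_rad)
    return error < threshold_rad, error

# Example: the cameras suggest a gentle right turn, but the map (perhaps a
# noisy GPS fix) implies going straight, so the mismatch is flagged.
agrees, err = check_map_agreement([0.7, 0.3], [0.3, 0.4], 0.0)
print(agrees, round(err, 3))   # False 0.33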
