Talking Shop with Professor John Leonard
A Mechanical Engineer’s Obsession with Self-Driving Cars
John J. Leonard is the Samuel C. Collins Professor of Mechanical and Ocean Engineering and Associate Department Head for Research in the MIT Department of Mechanical Engineering. His research focuses on navigation and mapping for autonomous mobile robots, and he was one of the first researchers to work on the problem of simultaneous localization and mapping (SLAM): how a robot can handle the uncertainties in its navigation well enough to locate itself within a map that it is still in the process of building. He received a B.S.E.E. in electrical engineering and science from the University of Pennsylvania (1987) and a D.Phil. in engineering science from the University of Oxford (1994). http://marinerobotics.mit.edu/
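To make the SLAM idea concrete, here is a minimal sketch (our illustration, not code from Professor Leonard's group): a robot drives down a one-dimensional hallway past two landmarks whose positions it does not know, and a Kalman filter jointly estimates the robot's position and the landmark positions, so the map is being built and used for localization at the same time.

    # Minimal 1-D SLAM sketch (illustrative only).
    # State vector stacks robot position and two landmark positions;
    # a Kalman filter estimates all three jointly from noisy motion
    # and noisy range measurements.
    import numpy as np

    rng = np.random.default_rng(0)

    true_landmarks = np.array([5.0, 12.0])   # unknown to the robot
    true_x = 0.0                             # true robot position

    # State: [robot_x, landmark_1, landmark_2]; landmarks start very uncertain.
    mu = np.array([0.0, 0.0, 0.0])
    P = np.diag([0.1, 100.0, 100.0])
    Q_move, R_meas = 0.2, 0.1                # motion / sensor noise variances

    for step in range(20):
        # Motion: command 1.0 m forward, executed with noise.
        u = 1.0
        true_x += u + rng.normal(0.0, np.sqrt(Q_move))
        mu[0] += u                           # predict robot position
        P[0, 0] += Q_move                    # robot uncertainty grows

        # Measurement: noisy range to each landmark.
        for i, lm in enumerate(true_landmarks):
            z = (lm - true_x) + rng.normal(0.0, np.sqrt(R_meas))
            H = np.zeros(3)
            H[0], H[1 + i] = -1.0, 1.0       # model: z = landmark - robot
            S = H @ P @ H + R_meas           # innovation variance
            K = (P @ H) / S                  # Kalman gain
            mu = mu + K * (z - H @ mu)       # correct robot AND map together
            P = P - np.outer(K, H @ P)

    print("estimated robot:", round(mu[0], 2), "true:", round(true_x, 2))
    print("estimated landmarks:", np.round(mu[1:], 2))

Even in this toy, the key SLAM property is visible: each measurement tightens the estimates of the robot and the landmarks at the same time, because the filter tracks the correlations between them.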
Professor John Leonard trying the autonomous Google prototype out for size.
What’s your interest in autonomous cars?
I’ve worked on mobile robots my whole life, and the self-driving car has been the dream of robot navigation researchers, including myself, for more than 50 years. I’ve been obsessed with the developing story of self-driving cars, and I try to talk to everyone I can. There is an intense amount of activity in this space, and I watch it very closely.
Why has it suddenly heated up after all this time – is it that the technology has gotten so much better recently and we’re really close now?
One thing is that Uber has shown there might be a new way to think about mobility on-demand, where you use your cell phone to call for a vehicle that just shows up and then takes you where you need to go. Uber currently uses human drivers, but if there were fully autonomous vehicles that could do this sort of job without a driver, then this would be a real transformation of the economy.
I have mixed emotions. I don’t think we’re as close to this fully driverless future as some others believe. I worry about the impact on employment and the legal consequences, as well as many other big policy questions. But things have happened so rapidly in the past few years, and I have to give credit to Google. Last year I got to ride in the Google Lexus prototype, in Mountain View, Calif., and it was truly amazing. I felt like I was on the beach at Kitty Hawk.
Is it just logistical problems, or do you have technological reservations too?
With self-driving vehicles, there’s a big distinction between different levels of autonomy. A Level 2 vehicle has multiple active safety systems (like adaptive cruise control, anti-lock brakes, or lane-keeping assistance), but the driver has to pay attention at all times. That’s what you can buy now. A Level 3 autonomous vehicle would be one in which you could surrender control to the car for sustained periods of time, such as 10 or 15 minutes, and the car, hopefully with some warning, would ask you to take back control when it needed you. Tesla just released their Autopilot software, which is a highly advanced Level 2 system. It’s a very advanced adaptive cruise control – so good that some drivers are treating it like a Level 3 system, judging by videos that have been posted on the Internet.
Level 3 vehicles could provide benefits, like reducing your stress while driving in traffic and making you feel more refreshed when you arrive at work in the morning, and they could prevent accidents. But studies have shown that humans are bad at monitoring an autonomous system, so there is an issue of how you hand control back to the human. Suppose someone is working on their laptop and the car suddenly needs to give them control – are they going to be ready? Or what if the driver falls asleep? This hand-off issue is a big challenge. Google was working on a Level 3 product a few years back, but they gave up on it for this reason and decided to go up to Level 4, in which there’s no steering wheel and no brakes. You would just press a start button, and the car does all of the work. This is a lot more difficult technically than a Level 3 system. It’s one thing to make sure a system works 95% of the time, but it’s another to make sure it works nearly 100% of the time.
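For readers keeping score, the levels discussed so far can be summarized in a toy data structure. The wording below is ours, and the full SAE J3016 scale actually runs from Level 0 to Level 5; only the levels touched on in this interview are sketched.

    # Toy encoding of the autonomy levels as described in this interview
    # (our simplification, not an official SAE J3016 definition).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AutonomyLevel:
        level: int
        description: str
        driver_must_monitor: bool

    LEVELS = [
        AutonomyLevel(2, "Active safety systems (adaptive cruise control, "
                         "lane keeping); driver pays attention at all times", True),
        AutonomyLevel(3, "Car drives for sustained periods but may hand "
                         "control back to the human, hopefully with warning", False),
        AutonomyLevel(4, "No steering wheel, no brakes: press start and "
                         "the car does all of the work", False),
    ]

    for lv in LEVELS:
        duty = "driver monitors" if lv.driver_must_monitor else "driver may disengage"
        print(f"Level {lv.level}: {lv.description} ({duty})")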
At MIT, we’re really lucky because CSAIL just partnered with Toyota on a major new research project on highly automated driving. But we’re not going to choose Level 3 or Level 4. We’re going to go back to the Level 2 system, where the human pays attention at all times, but we will also take advantage of recent advances in robotic perception and planning to try to prevent a broader class of accidents. I refer to this as “Level 2.99.” The idea is to go as far as you can with active safety systems: the car would be constantly scanning the environment for obstacles, pedestrians, cyclists, and other cars, ready to jump in to prevent an accident if it determines that the human driver might be making a mistake. The ultimate goal of such a project would be to create a car that would never crash. That could make an enormous impact on safety: currently, more than 30,000 people are killed every year in traffic accidents in the US, and a million every year worldwide, so it’s a tremendous societal problem.
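As an illustration of what a “Level 2.99” guardian might look like in code, here is a heavily simplified sketch. It is our own, with an invented time-to-collision check, made-up thresholds, and hypothetical function names; it is not the Toyota/CSAIL project's actual design. The human's commands pass through untouched unless a predicted collision triggers an override.

    # Illustrative "guardian" control loop: pass the human's commands
    # through unless a simple forward-collision check predicts trouble,
    # in which case cut throttle and brake hard. All names and thresholds
    # here are hypothetical simplifications for illustration.

    def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
        """Seconds until impact if nothing changes; infinity if the gap is opening."""
        if closing_speed_mps <= 0.0:
            return float("inf")
        return gap_m / closing_speed_mps

    def guardian(human_throttle: float, human_brake: float,
                 gap_m: float, closing_speed_mps: float,
                 ttc_threshold_s: float = 2.0):
        """Return (throttle, brake, overridden)."""
        ttc = time_to_collision(gap_m, closing_speed_mps)
        if ttc < ttc_threshold_s:
            # Predicted collision is imminent: override the driver.
            return 0.0, 1.0, True
        # Otherwise the human stays in full control.
        return human_throttle, human_brake, False

    # Example: driver still on the throttle while closing fast on an obstacle.
    print(guardian(human_throttle=0.4, human_brake=0.0,
                   gap_m=12.0, closing_speed_mps=8.0))   # -> (0.0, 1.0, True)

The design choice this sketch captures is the one described above: the human drives, and the autonomy is held in reserve purely as a safety net, which sidesteps the Level 3 hand-off problem entirely.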
I’m cheering from the sidelines for Google’s Level 4 project, but I do want to give a balanced view to the public about challenges that will be difficult to solve, like driving in the snow and interacting with people such as traffic cops or crossing guards. It’s important to me to do what I can to inform the policy debate and ground things in how real sensors and algorithms work – how cars navigate and how they detect things. I think that’s part of our job at MIT. But I also want to educate students and try to develop advances in these algorithms. The most important “product” of our work at MIT is our students. The impact that our students have in the world after they graduate is the greatest gift of being an MIT faculty member.