Driverless Cars Are So Good At Following The Law It's Making Them Dangerous

As it turns out, humans are kind of terrible at that. Which is a real problem for robot-cars.

Driverless cars are the future of super-chill, accident-free roadways -- maybe.

One of the biggest obstacles currently facing researchers is the fact that driverless cars are engineered to always follow the law. So human drivers, who obviously don't do the same, keep crashing into them when they're "moving too slow" -- AKA actually doing the speed limit.

As a result, according to a recent report from Bloomberg, driverless cars are now seeing a crash rate twice as high as cars with humans at the wheel. The report notes that they're all "minor scrape-ups for now", that they're always the human driver's fault (usually human drivers hit the slower-moving computer-driven cars from behind), and that none of these accidents have caused any injuries.

But now researchers have to decide whether driverless cars should be taught how to break the law in small ways -- like humans so often do -- in order to make sure that they can safely do things like merge with high-speed highway traffic. Which gets into some murky ethical territory.

"Most if not all people drive at least slightly above the speed limit," professor Raj Rajkumar, the co-director of the General Motors-Carnegie Mellon Autonomous Driving Collaborative Research Lab in Pittsburgh, told MTV News. "There's an unspoken rule that we can drive ten to fifteen miles above the speed limit and not get speeding ticket. So when driverless cars are driving at the speed limit on the highway, other cars are zipping by."

Rajkumar said his research group is constantly debating the question of whether the cars should be programmed to drive any faster than the speed limit -- and if so, when. "If a cop's around, most of us probably slow down," he said. "I do the same thing. But if it's an autonomous car, it's scrutinized much more closely. The debate becomes, do we follow the letter of the law, or the spirit of the law?"

Rajkumar also pointed out that it ultimately becomes a question of liability, which is why his team has decided to err on the side of caution and stick to the speed limit.

"If a driverless vehicles is driving faster than the speed limit, and a cop stops it, who gets the ticket -- the non-driver in the driver's seat, or the engineer?" he asked. "The engineer obviously doesn't want to become liable for that. So we opt to follow the letter of the law and leave it to society to decide whether we need to adapt our speed limits to account autonomous cars following the letter of the law."

That level of caution led Dmitri Dolgov, the principal engineer of Google's driverless car program, to tell Bloomberg that the cars are "a little bit like a cautious student driver or a grandma."

This level of caution recently led a police officer in Mountain View, California to become the first officer to stop a car with no driver, according to Bloomberg, when he "noticed traffic stacking up behind a Google car going 24 miles an hour in a busy 35 mph zone," and, instead of issuing the computer a ticket, "warned the two engineers on board about creating a hazard." As a result, Google is now working to make driverless cars slightly more "aggressive" -- while still erring on the side of caution.

What does it say about humans that we need to consider teaching our robots how to break the rules in order for them to be able to go with the flow? (And why don't we just set speed limits that actually reflect the speeds at which most people drive -- and then actually hold people accountable for speeding?)

"From a techonlogy standpoint," Rajkumar told MTV News, "it's easy for us to set the speed of the cars at above or below the speed limit. As humans, we can continue with fluffy, soft rules, but computers only operate under very strict rules. That's the difference between computers and humans. So we ultimately, society has some big decisions to make about this as our transportation technology changes."

As the Bloomberg report notes, these aren't the only ethical questions raised by driverless cars. "Creators are [also] wrestling with how to program them to make life-or-death decisions in an accident," the report states. "For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?"

Can these kinds of decisions really be determined by algorithms?

"When you ask people questions like that," Rajkumar said, "their answer regarding the right choice often depends on who is inside the car. If their child is in the car, they'd often do whatever it takes to protect their child. So that's a difficult choice for humans to make, and the answer can be very different depending on lots of factors."

"At the end of the day," he continued, "the objective of the technology is just to make it as safe as possible for the passenger -- much safer than human drivers -- and to try to create technology that looks ahead and behind, considers speed limits, and does not crash into anybody, or anything."
