For many, it’s an alluring proposition — imagine a world where humans no longer need to own a car and instead commute via robotaxi.
Autonomous vehicle evangelists say the potential benefits are vast. With fewer human drivers on the road, there could be a reduction in greenhouse gas emissions, a decrease in vehicular accidents, and less traffic congestion.
Self-driving companies such as Waymo, Cruise and Amazon’s Zoox have been developing that technology for more than a decade, deploying and testing their robotaxi services in select U.S. cities, including Phoenix and San Francisco.
The technology has advanced considerably over the past decade, and these companies continue to expand their operations into more cities. Waymo, a subsidiary of Alphabet, last month began offering its robotaxi service in parts of Los Angeles with no safety driver behind the wheel, for example.
But as good as the technology has become, rollouts have not been without controversy. Some cars have been documented "glitching out": stopping in the middle of roads, making illegal turns and causing accidents. Self-driving vehicles also continue to struggle in snow, rain and other challenging weather conditions that cloud their sensors.
Tesla, which is developing its own self-driving technology, is reportedly set to throw its hat in the ring, with plans to reveal its own robotaxi on Aug. 8.
When can we expect these robotaxi services to hit mass adoption?
Michael Everett, a Northeastern University assistant professor with joint appointments in the College of Engineering and Khoury College of Computer Sciences, says there is still a long way to go before the technology is good enough to hit the mainstream market.
“The technology doesn’t seem there yet to me,” says Everett, who leads Northeastern University’s Autonomy and Intelligence Laboratory. “The reality is that these autonomous vehicles are still pretty specialized pieces of equipment.”
The autonomy kits fitted to these self-driving vehicles combine lidar sensors, a GPS navigation system and an array of cameras, Everett says. On the road, the vehicles must make many decisions every second to understand their environment.
And they are far from perfect, he notes.
“There’s a few pieces that happen in this autonomous decision-making process that the car has to make,” he says. “One is figuring out all these different things in the world, and even that isn’t super obvious.”
Lidar sensors work by emitting pulsed waves of light to detect objects in an environment, allowing self-driving vehicles to build an approximate map of their surroundings, Everett says.
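The basic geometry behind that mapping can be sketched in a few lines of code: each pulse's round-trip time yields a distance, and the beam's angles place a point in 3D space. This is a toy illustration of the time-of-flight principle, not any company's actual pipeline; the function and parameter names here are hypothetical.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def pulse_to_point(round_trip_time_s, azimuth_deg, elevation_deg):
    """Convert one returned lidar pulse into an (x, y, z) point.

    The pulse travels to the object and back, so the distance is
    half the round trip. The beam's azimuth and elevation angles
    give its direction relative to the sensor.
    """
    distance = SPEED_OF_LIGHT * round_trip_time_s / 2.0
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)  # forward
    y = distance * math.cos(el) * math.sin(az)  # left
    z = distance * math.sin(el)                 # up
    return (x, y, z)

# A pulse returning after about 200 nanoseconds implies an object
# roughly 30 meters straight ahead of the sensor.
point = pulse_to_point(200e-9, azimuth_deg=0.0, elevation_deg=0.0)
```

Repeating this calculation for millions of pulses per second is what produces the dense "point cloud" the car's software must then interpret.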
The challenge comes when these self-driving vehicles’ onboard artificial intelligence has to determine what’s safe to drive on and what’s not, he says.
“There’s road surface and you’d figure that would be safe to drive, but I think in one of these cases, there was a person who was on the ground who had been hit by another car,” he says. “Then [the self-driving car] might think, ‘Oh is that a pothole? Or is that something that is really important I should avoid?’
“Having to think through these rare events that may look like they may be almost harmless but are just on the border of being a really safety critical situation versus something that you can just continue on and ignore is one of those really hard questions,” he adds.
Everett says it has been challenging to assess how far the broader industry has come along because the companies “have had an incentive to make the technology seem more advanced.”
Tesla is currently in the middle of a class-action lawsuit over claims that it misled users about the capabilities of its self-driving technology.
The National Highway Traffic Safety Administration has also opened investigations over the past several weeks into Waymo, Zoox, Tesla, Cruise and Ford, all of which are either testing autonomous vehicles or advanced driver-assist systems.
Everett highlighted the importance of these companies being transparent when deploying these technologies.
“Transparency is really important because these aren’t just like autonomous cars on some private test track where nothing can go wrong and everybody has agreed to accept the risk,” he says.
“These are really experiments that have been happening over about a decade across our public streets, and people who are walking along the sidewalk may or may not have explicitly agreed to be part of these companies’ experiments,” he adds.
So what needs to happen for these technologies to significantly improve?
The cars' artificial intelligence has to get a lot more advanced, Everett says.
“The hardware is pretty awesome at this point,” he says. “Lidar sensors can give you way beyond what humans are capable of in terms of accuracy, range and building up these representations of what is around you.
“Really, I think a lot of it lies in the software of the algorithms. That’s where a lot of the innovation still needs to happen,” he says. “Given the hardware that exists today, on the Waymo-type vehicles, those provide you with more than enough data to solve the problem.”
So, how do you figure out those algorithms?
“That’s the billion-dollar question,” Everett says. “There are quite large engineering teams that are focusing on different subsets of the problem — thinking about planning, thinking about perception, thinking about predictions of other agents in the world. … I think that’s the path towards eventually having anything that surpasses human capabilities and then keeps going beyond that.”