Autos

Autonomous vehicles were supposed to push gas-powered cars off the road. Will we ever get there?


Tesla’s claims of “full self-driving” cars are once again under fire from lawmakers and activists, throwing the hope — and hype — around autonomous cars into sharp relief.

California state legislators approved a bill last week that would prevent the car company, headed by billionaire Elon Musk, from continuing to market its Autopilot software as “full self-driving” because it is not fully autonomous. At the same time, the Dawn Project advocacy group — headed by Dan O’Dowd, a billionaire whose firm, Green Hills Software, makes operating systems for planes and cars — recently released a commercial that featured Tesla vehicles striking child-sized mannequins, claiming that the cars’ self-driving systems failed to react in time.

Musk, for his part, has been promising in various ways since 2014 that fully autonomous driving is about a year away. Last week, he did it again, saying he hoped to have Tesla’s self-driving technology released by the end of the year.

But fulfilling the promise of autonomous driving — a perfectly safe ride without a human behind the wheel at all — has been elusive at scale, despite continued optimism from automakers and tech companies. While the technology has held up remarkably well in certain environments, such as in good weather or on a controlled track, it has failed in challenging conditions like snow and heavy rain that can disrupt its sensors. The revolution was always just about to arrive. It still hasn’t.

Paris Marx, author of “The Road to Nowhere: What Silicon Valley Gets Wrong About the Future of Transportation,” said the difficulty of bringing fully autonomous cars to fruition fits into a larger pattern of overly optimistic promises about technology’s ability to fix problems with the transportation system.

“Whether it’s Uber or autonomous vehicles, [when it comes to] reducing traffic or reducing road deaths and things like that, all of these technologies were supposed to address these problems that people do recognize exist but then were unable to really deliver on what they were promising,” said Marx.

Tesla has not yet responded to a request for comment.

The slowing of development

Marx said that up until 2018, when a self-driving Uber car struck and killed a pedestrian, there had been a common narrative around autonomous cars — that these vehicles were only a few years away and once the tech matured, autonomous vehicles would quickly take over the transportation system. Proponents argued that this would have innumerable benefits for the movement of people and goods, making driving safer and more efficient.

The reality has been bumpier, though.

Following that 2018 accident, Uber would go on to sell its autonomous vehicle unit to a company called Aurora. Lyft sold its self-driving unit to Toyota in 2021 for around $550 million. Equity funding for autonomous vehicle companies topped $12 billion in 2021, and according to one 2020 estimate, companies have spent over $16 billion in pursuit of autonomous vehicles.

And while there are autonomous cars and taxis being tested in the wild, companies have little to show for all that money spent. In Austin, Texas, and Miami, Ford-backed Argo AI is removing human drivers from its autonomous cars, though they’re currently only transporting Argo AI employees rather than paying customers. In San Francisco, Waymo — owned by Google’s parent, Alphabet — removed human “safety drivers” from its autonomous vehicles earlier this year.

But the sweeping disruption to transportation systems hasn’t happened. That transformation is still years away, if it happens at all.

In July, Ford filed a patent application for an augmented reality app. It described a communications system between a “vulnerable road user” and an autonomous vehicle. The accompanying illustration showed a person holding their phone up to view a crosswalk, with the app letting them know if the approaching autonomous car would actually stop at said crosswalk — or not.

Ford did not respond to a request for comment.

The Tesla problem

Tesla and its enigmatic CEO, Musk, have long been lightning rods when it comes to autonomous capabilities.

Musk has said consistently since 2014 that autonomous capabilities in Tesla vehicles are just around the corner. What has emerged is “full self-driving” (FSD), a $12,000 driver-assist system that can drive to a preset destination, though a human is expected to remain attentive and ready to take over if needed. That makes it more akin to a level 2 autonomous system than a level 5 fully autonomous one. These classifications come from a six-level scale created by the Society of Automotive Engineers and adopted by federal regulators, in which 0 is fully manual and 5 is fully autonomous.

In July, the California Department of Motor Vehicles accused Tesla of deceptive marketing when it came to its “Autopilot” and “Full Self-Driving” labels, claiming that these terms suggest the cars are fully autonomous when they are not. Federal regulators are also investigating the full self-driving system.

The system has sparked acclaim and criticism.

O’Dowd, the driving force behind the Dawn Project, is one vocal critic. “This is not just not a great program,” he said. “This thing is the most poorly engineered program I have ever seen.”

O’Dowd said that the Dawn Project conducted numerous tests of Tesla’s FSD system and experienced multiple failures. In one case, the car attempted a left turn into the oncoming lane of traffic as a vehicle was approaching, and in another, the car ignored “Do not enter” signs on a road and entered a construction zone.

The Dawn Project recently released a commercial that featured Tesla vehicles striking mannequins of children, claiming that the FSD system failed to recognize them in multiple cases. Tesla sent the organization a cease-and-desist letter demanding that the ad be taken down, calling it defamatory. The publication Electrek recently published a report that seemingly poked some holes in the Dawn Project’s tests.

“The assertion of the cease-and-desist letter is based on zero credible evidence that the tests are fake,” said O’Dowd.

The cease-and-desist letter makes a number of claims, including that “FSD Beta incorporates safety by design and does recognize pedestrians, including children, and when utilized properly, the system reacts to prevent or mitigate a collision.”

A former Tesla employee, who spoke on the condition of anonymity to discuss the company, said that when it comes to FSD, the company’s timeline projections have always included a healthy dose of posturing.

“They have been saying ‘soon’ and ‘next year’ for years,” said the former employee. “Most employees are for it but also hesitant to trust it. The ones who have drunk the Kool-Aid are excited, [but] the ones who see the quality dipping are much more cautious and feel like it is too much money.”

Supporters of the FSD feature are quick to tout its safety potential and to document their own tests. One, Twitter user @WholeMarsBlog, recruited a child to walk in front of a Tesla equipped with the software to demonstrate that it detected children, avoided them and was safer than a human driver.

Other supporters said that some performance problems are to be expected, as the FSD software is still technically in beta.

“This shows the disconnect when you unleash a software mentality into the physical world,” said Marx. “It’s quite different when you have a Gmail service or something like that in beta for a few years — it really doesn’t make a big difference to people; it’s not putting anyone really at risk.”

Musk, for his part, told people not to complain, even as the company faces a class-action lawsuit over “phantom braking” and other tests have documented the program taking turns too fast.

The Tesla flashpoint is the one that captures headlines, often driven by Musk himself, but quieter work continues toward a finish line that remains just over the horizon.

The human element

Michael Felsberg, a professor of computer vision at Linköping University, said that for a long time the industry’s messaging was that it was 99 percent of the way to selling fully autonomous cars to the public, but this is obviously not the case.

He thinks there are fundamental issues at the heart of self-driving systems.

“The main issue is not that autonomous vehicles might be involved in a lethal accident,” said Felsberg. “I mean, even if we have the perfect autonomous vehicle, it is inevitable that lethal accidents happen. Traffic is a system which is in itself inconsistent, which leads to situations which are unsolvable.”

Whether a car is steered by a human or a fully autonomous computer system, accidents will occur, he said — adding that “the real problem is that there are many cases where autonomous driving is tested in the field, although it is not really ready for that.” Felsberg sees that as an ethical problem.

There are also unanswered questions that go well beyond just what the cars are capable of, such as how people will interact with them if they become more common.

That’s a core question that Richard Corey, director of the Virtual Environment and Multimodal Interaction (VEMI) Lab at the University of Maine, is trying to answer. He and other researchers were selected as semifinalists in the Transportation Department’s Inclusive Design Challenge for their Autonomous Vehicle Assistant software, which is meant to address accessibility challenges for autonomous vehicles.

“There has been a realization over the last few years that without human input, it’s going to be very difficult for these cars or other apparatuses to work,” said Corey. “Without human input, there’s going to be some really interesting problems.”

He said there is going to come a time soon when a car will ask its driver whether there is a puddle up ahead or whether the road is completely washed out.

But these sorts of autonomy questions also create other challenges for people who are blind or visually impaired. His team has been asking questions like: What happens if someone has a heart attack while in their car? Will the car know something is wrong? If the car needs maintenance, will it let us know? And what about something as simple as wanting to take a longer, scenic route along the coast? If the car is programmed for efficiency, how can you tell it to do something inefficient?

That’s why he’s working on the concept of human-vehicle collaboration. Corey wants to get to a point where a car can tell us, for example, that it has gotten too foggy for the autonomous driving system to see the road, so the human has to take over.

“That’s the type of collaboration that I would love to see with semi-autonomous vehicles moving forward,” he said. “Because now I’m like, ‘Oh, I get why there’s a problem. I get how I need to help you.’ I know this sounds silly. But we’ve been changing the language around here to say we need to start thinking of these as these anthropomorphized entities that are going to be in our lives.”

It’s not a matter of if but when these cars will be present in our lives, he added.

“It’s a moving target, and it’s getting harder and harder to nail down,” said Corey. “I will say that I do not think we are 50 years out. I think you and I will absolutely be seeing them in our lifetime.”

Thanks to Lillian Barkley for copy editing this article.




