We have seen an explosion in the past few years of developments toward truly autonomous vehicles. Some predict fully autonomous vehicles will be ready for consumers by 2020. But then along came the first fatal crash involving a semi-autonomous vehicle, a Tesla at that, and perhaps that timetable has just been pushed back a tad. If you look at the issues involved in rolling out truly autonomous vehicles, it seems unlikely that we will see them outside of beta testing for quite some time. Let's start with the recent fatal crash and look at some of the issues it highlights.
According to Bloomberg, Tesla has rolled out approximately 70,000 semi-autonomous cars since October 2014 in a massive beta test of sorts.
http://www.bloomberg.com/news/articles/2016-07-01/fatal-tesla-crash-spurs-criticism-of-on-the-road-beta-testing
From the description at Bloomberg, these Tesla cars appear to be Level 2 cars, i.e., those that are largely automated (at least for certain driving) but where the driver must remain fully attentive. It seems, however, that the driver who died, and who supposedly was watching a movie at the time, was treating the vehicle as Level 3, where the functions are sufficiently automated that the driver can safely engage in other activities but can still take control if need be or desired. Even within the levels you can have a host of variations, such as cars that fall into Level 2 in urban settings but rise to Level 3 on a well-marked interstate. Be it Level 2 or 3, this car failed to avoid a fatal crash it should have been able to avoid.
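For those keeping score, the level numbers being tossed around come from NHTSA's 2013 automation taxonomy. Here is a rough paraphrase, sketched as a Python enum for compactness (my summary, not the agency's official wording):

```python
# Rough paraphrase of NHTSA's 2013 vehicle automation levels
# (my summary, not the agency's official language).

from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0         # driver controls everything, all the time
    FUNCTION_SPECIFIC = 1     # one function automated, e.g. cruise control
    COMBINED_FUNCTION = 2     # two or more functions automated (e.g. adaptive
                              # cruise + lane centering); driver must stay attentive
    LIMITED_SELF_DRIVING = 3  # car drives itself in some conditions; driver may
                              # disengage but must be available to take over
    FULL_SELF_DRIVING = 4     # car handles the entire trip; no driver needed
```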
In the fatal accident, a Tesla on a Florida highway drove under the trailer of an 18-wheeler that had crossed the highway in front of it. It is believed that the car's electronic sensors mistook the white trailer for bright sky and failed to brake.
In Tesla's defense, it claims its semi-autonomous cars had logged over 130 million miles before this fatality, while regular cars average a fatality every 94 million miles. Well, as we lawyers like to say - tell it to the jury! And I am sure Mr. Musk or his company will have to do just that.
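Tesla's comparison is easy enough to check, though with a single fatality in the sample it is statistically thin. A quick back-of-the-envelope normalization of the cited figures:

```python
# Back-of-the-envelope check of Tesla's safety comparison, using the figures
# cited above. One fatality is a tiny sample, so treat this as illustration,
# not statistical proof.

TESLA_MILES = 130e6            # Autopilot miles logged before the fatality
TESLA_FATALITIES = 1
US_MILES_PER_FATALITY = 94e6   # cited average for conventional cars

tesla_rate = TESLA_FATALITIES / TESLA_MILES * 100e6  # per 100 million miles
us_rate = 1 / US_MILES_PER_FATALITY * 100e6

print(f"Autopilot:  {tesla_rate:.2f} fatalities per 100M miles")  # ~0.77
print(f"US average: {us_rate:.2f} fatalities per 100M miles")     # ~1.06
```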
While the deceased was from Ohio, which has fairly conservative Midwest juries and verdicts, the fatal accident was in Florida, which has anything but conservative juries and verdicts. Eight- and even nine-figure verdicts from single fatalities are not unheard of there. In 2014 one of those wonderful tobacco companies took a $23 billion punitive damage verdict in an individual wrongful death suit, which did get drastically reduced for constitutional reasons, but you get the picture - not a good state to be a defendant.
Well, geez, you say, they have to have insurance for this. And geez, I say, I am sure they do (though not for punitive damages). But nothing like a fatality and perhaps a nice eight-figure verdict to raise your premiums, and with 70,000 such cars out there and more on the way, it's gonna take a whole lotta premium to keep this baby insured. Because despite accidents and fatalities in these cars perhaps being rarer than in regular old jalopies, when they do occur it is almost certain that the manufacturer will be held liable. Yes, manufacturers have always had some liability exposure, but most accidents are due primarily if not exclusively to driver error, not product defect, so the liabilities were fewer. Indeed, almost 40% of vehicle fatalities are traditionally due to alcohol or drugs, but with autonomous cars virtually every accident will involve some claim of product defect. In this accident, for example, while the "driver" himself may well have been partially at fault for watching a movie, there's no way Tesla is going to prove to a jury the car did nothing wrong. Ain't happening, my friends.
And I suspect Musk might get whacked pretty good with punitive damages. You see, beta testing your vehicles on 70,000 end users is probably not wise, especially when many other manufacturers are refusing to do anything of the sort. Those manufacturers are likely holding back for safety reasons, and they undoubtedly do not want the PR hit of a fatal accident like the one that just occurred. When such an experiment leads to the death of a 40-year-old former Navy SEAL, sparks are going to fly and sales are going to drop.
This seems to be one of the big issues with autonomous cars: manufacturer liability. It is probably not the thorniest, but it is one that may well sideline the whole shebang absent a legislative solution, which will likely come eventually but will take a long time. Ultimately, especially when all cars are autonomous, everyone predicts they will be much safer than cars today. But if manufacturers are facing massive liabilities for every wreck, even if there are a lot fewer wrecks, they will not survive. They must appeal for legislative relief along the lines of no-fault protection, and they probably will need it on a federal level to be effective, but that poses a host of political issues to overcome.
You see, torts and auto liability are issues traditionally handled on a state level through state regulation and common law. Driver's licenses are issued at a state level, driving laws are set at a state level, required insurance is mandated at a state level, and liabilities for accidents are determined under individual state standards. Giving a lot of this control over to the federal government is not going to be an easy sell. Can you imagine Texas giving this up? But it is something that has to be uniform to work, and it will not be uniform on a state level. Thus, Volvo, for one, has been pushing the federal government to regulate this area and not leave it to the states.
http://www.digitaltrends.com/cars/volvo-urges-u-s-government-to-regulate-autonomous-cars/
There is enough money and societal benefit at stake that it will likely eventually happen, but it will be a long and painful journey.
But liability is just one issue. Assuming they get uniform legislation, manufacturers still need to deal with public attitudes. You probably have around 70,000 owners of semi-autonomous Teslas now very hesitant to use, if not outright refusing to use, the semi-autonomous features they can turn on or off. Who wants to endanger their own lives or the lives of their families over a system that cannot tell a freakin' semi trailer from bright sky?
At least in Teslas some of the semi-autonomous features can be disengaged so the driver can take over. That will not be the case for all autonomous cars. Level 4 cars, like those being designed by Google, do not allow this. There are apparently some thorny issues in switching from autopilot to driver control while cruising down the road, and doing so in an emergency is even more problematic. To avoid this increased danger, Google is not planning on giving the occupant a choice. There will be no steering wheel. It is either the computer or nothing.
Still, when the bugs are worked out, the expectation is that these autonomous cars will be a lot safer, probably saving tens of thousands of lives a year in the U.S. alone IF they go into across-the-board use. Now it seems odd that we feel relatively safe driving ourselves or letting others drive us (though I cannot relax with my wife behind the wheel), yet we do not trust autonomous or semi-autonomous cars that are safer. Even so, it won't take too many serious accidents to dampen the willingness of the public to trust a computer. Computer problems, after all, are nearly a daily happening. Half my draft of this article, for example, got lost yesterday when my computer crashed and went into the blue screen of death. That happening while typing a blog post is a nuisance; it happening while going 70 mph down the highway gives a whole new meaning to the blue screen of death.
Certainly manufacturers are building backup systems and a default for the car to safely pull over and park if all goes wrong, but technology is not perfect, and every accident will be blamed on the technology. The more this happens, the less folks will trust these cars to drive for them - despite their still being a lot safer than people driving. Overcoming this psychology will be difficult and is not happening in the next few years as some predict.
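For the technically curious, that pull-over-and-park default is essentially a watchdog pattern: an independent supervisor demotes the car to a minimal-risk state if the main driving computer goes quiet. A minimal sketch of the idea, entirely hypothetical and not any manufacturer's actual design:

```python
# Hypothetical sketch of a fail-safe watchdog: an independent supervisor
# that hands control to a simple, well-tested "pull over and park"
# controller when the main autonomy stack stops responding.

import time

HEARTBEAT_TIMEOUT_S = 0.5  # assumed budget; real systems tune this carefully

class FallbackWatchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.state = "AUTONOMOUS"

    def heartbeat(self):
        """Called by the autonomy stack each control cycle while healthy."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Called by an independent supervisor process; triggers the fallback."""
        stale = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S
        if self.state == "AUTONOMOUS" and stale:
            # Demote to a minimal controller that slows, signals, and parks;
            # the point is that this fallback is simple enough to verify.
            self.state = "PULL_OVER"
        return self.state

wd = FallbackWatchdog()
wd.heartbeat()      # autonomy stack reports in
print(wd.check())   # "AUTONOMOUS" while heartbeats stay fresh
```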
Another issue manufacturers and others seem to downplay is the prospect of vehicles operated by computers being hacked. While manufacturers are undoubtedly jumping through hoops to ensure security, there is no computer that cannot be hacked. An individual or group with sufficient knowledge, time, and resources will achieve this in time. Perhaps it will be a bored teenager getting the cars to tell each other about traffic patterns that do not exist, just for fun, or perhaps it will be terrorists driving cars off cliffs or into each other. The stuff of fiction movies will eventually become reality. And if you think a few accidents will hamper people's desire to use an autonomous vehicle, wait until the first successful hack gets publicized. This issue is also a prime concern on the insurance side of things, where a recent survey found it tops the list of concerns for risk managers:
http://www.bloomberg.com/news/articles/2016-07-19/cybersecurity-is-biggest-risk-of-autonomous-cars-survey-finds
You can chalk my pessimism up to me being an old fart who still enjoys driving a stick shift. There may be some truth to that, but there are certainly strong pros and cons and I believe it will take a lot longer than most assume for us to go full throttle into fully autonomous vehicles.
The RAND Corporation did an extensive study on autonomous vehicles, first released in 2014, which you can find here:
http://www.rand.org/pubs/research_reports/RR443-2.html
In it, RAND identifies a host of pros and cons of autonomous vehicles. Despite running around 200 pages, it is worth the read, and it covers a number of considerations I have not addressed above.