Having read through all this material, my general feeling is: the near-term future (1 decade) for autonomous cars is not that great. What has been accomplished, legally speaking, is impressive but more limited than most people appreciate, and there are many serious problems with penetrating the elaborate ingrown rent-seeking tangle of law & politics & insurance. I expect the mid-future (+2 decades) to look more like autonomous cars completely taking over many odd niches and applications where the user can afford to ignore those issues (eg. on private land or in warehouses or factories), with highways and regular roads continuing to see many human drivers with some level of automated assistance. However, none of these problems seem fatal, and all of them seem amenable to gradual accommodation and pressure, so I *am* now more confident that in the long run we will see autonomous cars become the norm and human driving become ever more niche (and possibly lower-class). On none of these am I sure how to formulate a precise prediction, though, since I expect lots of boundary-crossing and tertium quids. We'll see.


## Self-driving cars

The 2005 DARPA Grand Challenge, in which multiple vehicles completed the course, can be considered the first success inaugurating the modern era. The first legislation of any kind addressing autonomous cars was Nevada's 2011 approval; to date, 5 states have passed legislation dealing with autonomous cars.

However, these laws are highly preliminary: all the analyses I can find agree that they punt on the real legal issues of liability, and they permit relatively little.

### Lobbying, Liability, and Insurance

(Warning: legal analysis is quoted at length in some excerpts.)

“Toward Robotic Cars”, Thrun 2010 (pre-Google):


> Junior’s behavior is governed by a finite state machine, which provides for the possibility that common traffic rules may leave a robot without a legal option as to how to proceed. When that happens, the robot will eventually invoke its general-purpose path planner to find a solution, regardless of traffic rules. [Raising serious issues of liability related to potentially making people worse-off.]
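
This fallback structure is easy to make concrete. Below is a minimal sketch (in Python, with hypothetical names; this is not Junior's actual code) of a traffic-rule state machine that hands control to an unconstrained planner when the rules leave no legal action:

```python
from enum import Enum, auto

class State(Enum):
    FOLLOW_LANE = auto()
    STOP_AND_YIELD = auto()
    BLOCKED = auto()  # the traffic rules leave no legal move

def legal_actions(state, world):
    """Actions permitted by the traffic code in this state (all hypothetical)."""
    if state is State.FOLLOW_LANE:
        return world.clear_lane_moves()
    if state is State.STOP_AND_YIELD:
        return world.moves_after_yielding()
    return []  # BLOCKED

def next_action(state, world, planner):
    actions = legal_actions(state, world)
    if actions:
        return actions[0]
    # The fallback Thrun describes: abandon the rule set and let a
    # general-purpose path planner find *some* trajectory, regardless
    # of traffic rules -- the step that raises the liability questions.
    return planner.plan_ignoring_traffic_rules(world)
```

The liability problem lives entirely in the final branch: once the planner is allowed to override the rules, responsibility for the outcome can no longer be traced to the traffic code.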

“Google Cars Drive Themselves, in Traffic” (PDF), *NYT* 2010:


> But the advent of autonomous vehicles poses thorny legal issues, the Google researchers acknowledged. Under current law, a human must be in control of a car at all times, but what does that mean if the human is not really paying attention as the car crosses through, say, a school zone, figuring that the robot is driving more safely than he would? And in the event of an accident, who would be liable - the person behind the wheel or the maker of the software?
>
> “The technology is ahead of the law in many areas,” said Bernard Lu, senior staff counsel for the California Department of Motor Vehicles. “If you look at the vehicle code, there are dozens of laws pertaining to the driver of a vehicle, and they all presume to have a human being operating the vehicle.” The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal. Mr. Lu agreed.

“Calif. Greenlights Self-Driving Cars, But Legal Kinks Linger”:


> For instance, if a self-driving car runs a red light and gets caught, who gets the ticket? “I don’t know - whoever owns the car, I would think. But we will work that out,” Gov. Brown said at the signing event for California’s bill to legalize and regulate the robotic cars. “That will be the easiest thing to work out.” Google co-founder Sergey Brin, who was also at the ceremony, jokingly said “self-driving cars don’t run red lights.” That may be true, but Bryant Walker Smith, who teaches a class at Stanford Law School this fall on the law supporting self-driving cars, says eventually one of these vehicles will get into an accident. When it does, he says, it’s not clear who will pay.
>
> …Or is it the company that wrote the software? Or the automaker that built the car? When it came to assigning responsibility, California decided that a self-driving car would always have a human operator; even if that operator wasn’t actually in the car, that person would be legally responsible. It sounds straightforward, but it’s not. Let’s say the operator of a self-driving car is inebriated: he or she is still legally the operator, but the car is driving itself. “That was a decision that department made - that the operator would be subject to the laws, including laws against driving while intoxicated, even if the operator wasn’t there,” Walker Smith says…Still, issues surrounding liability and who is ultimately responsible when robots take the wheel are likely to remain contentious. Already trial lawyers, insurers, automakers and software engineers are queuing up to lobby rule-makers in California’s capital.

“Google’s Driverless Car Draws Political Power: Internet Giant Hones Its Lobbying Skills in State Capitols; Giving Test Drives to Lawmakers”, *WSJ*, 12 October 2012:


> Overall, Google spent nearly $9 million in the first half of 2012 lobbying in Washington on a wide variety of issues, including speaking to U.S. Department of Transportation officials and lawmakers about autonomous vehicle technology, according to federal records, nearing the $9.68 million it spent on lobbying in all of 2011. It is unclear how much Google has spent in total on lobbying state officials; the company doesn’t disclose such data.
>
> …In most states, autonomous vehicles are neither prohibited nor permitted - a key reason why Google’s fleet of autonomous cars secretly drove more than 100,000 miles on the road before the company announced the initiative in fall 2010. Last month, Mr. Brin said he expects self-driving cars to be publicly available within five years.
>
> In January 2011, Mr. Goldwater approached Ms. Dondero Loop and the Nevada assembly transportation committee about proposing a bill to direct the state’s department of motor vehicles to draft regulations around the self-driving vehicles. “We’re not saying, ‘Put this on the road,’” he said he told the lawmakers. “We’re saying, ‘This is legitimate technology,’ and we’re letting the DMV test it and certify it.” Following the Nevada bill’s passage, legislators from other states began showing interest in similar legislation. So Google repeated its original recipe and added an extra ingredient: giving lawmakers the chance to ride in one of its about a dozen self-driving cars…In California, an autonomous-vehicle bill became law last month despite opposition from the Alliance of Automobile Manufacturers, which includes 12 top auto makers such as GM, BMW and Toyota. The group had approved of the Florida bill. Dan Gage, a spokesman for the group, said the California legislation would allow companies and individuals to modify existing vehicles with self-driving technology that could be faulty, and that auto makers wouldn’t be legally protected from resulting lawsuits. “They’re not all Google, and they could convert our vehicles in a manner not intended,” Mr. Gage said. But Google helped push the bill through after spending about $140,000 over the past year to lobby legislators and California agencies, according to public records.

“Driverless cars are on the way. Here’s how not to regulate them.”:

> As with California’s recently enacted law, Cheh’s [Washington D.C.] bill requires that a licensed driver be present in the driver’s seat of these vehicles. While seemingly inconsequential, this effectively outlaws one of the more promising functions of autonomous vehicle technology: allowing disabled people to enjoy the personal mobility that most people take for granted. Google highlighted this benefit when one of its driverless cars drove a legally blind man to a Taco Bell. Bizarrely, Cheh’s bill also requires that autonomous vehicles operate only on alternative fuels. While the Google Self-Driving Car may manifest itself as an eco-conscious Prius, self-driving vehicle technology has nothing to do with hybrids, plug-in electrics or vehicles fueled with natural gas. The technology does not depend on vehicle make or model, but Cheh is seeking to mandate as much. That could delay the technology’s widespread adoption for no good reason…Another flaw in Cheh’s bill is that it would impose a special tax on drivers of autonomous vehicles. Instead of paying fuel taxes, “Owners of autonomous vehicles shall pay a vehicle-miles travelled (VMT) fee of 1.875 cents per mile.” Administrative details aside, a VMT tax would require drivers to install a recording device to be periodically audited by the government. There may be good reasons to replace fuel taxes with VMT fees, but greatly restricting the use of a potentially revolutionary new technology by singling it out for a new tax system would be a mistake.
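
For a sense of scale, here is a back-of-the-envelope comparison of the proposed VMT fee against the fuel taxes it would replace (a sketch: only the 1.875¢/mile rate comes from Cheh's bill; the mileage, fuel-economy, and fuel-tax figures are assumed round numbers for a typical driver):

```python
# Illustrative comparison of Cheh's proposed VMT fee with fuel taxes.
VMT_RATE = 0.01875          # $/mile, the one figure taken from the bill
miles_per_year = 12_000     # assumed typical annual mileage
mpg = 30                    # assumed fuel economy
fuel_tax_per_gallon = 0.42  # assumed combined federal + local tax, $/gallon

vmt_fee = VMT_RATE * miles_per_year                      # $225.00/year
fuel_tax = (miles_per_year / mpg) * fuel_tax_per_gallon  # $168.00/year

print(f"VMT fee:  ${vmt_fee:.2f}/year")
print(f"Fuel tax: ${fuel_tax:.2f}/year")
# The more fuel-efficient the car, the larger the gap -- one reason a
# per-mile fee singles out the (typically efficient) autonomous fleet.
```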


“How autonomous vehicle policy in California and Nevada addresses technological and non-technological liabilities”, Pinto 2012:


> The State of Nevada has adopted one policy approach to dealing with these technical and policy issues. At the urging of Google, a new Nevada law directs the Nevada Department of Motor Vehicles (NDMV) to issue regulations for the testing and possible licensing of autonomous vehicles and for licensing the owners/drivers of these vehicles. A similar law has been proposed in California, with details not covered by Nevada AB 511. This paper evaluates the strengths and weaknesses of the Nevada and California approaches.
>
> Another problem posed by the non-computer world is that human drivers frequently bend the rules by rolling through stop signs and driving above speed limits. How does a polite and law-abiding robot vehicle act in these situations? To solve this problem, the Google Car can be programmed with different driving personalities to mirror current conditions. At one end of the spectrum it would be cautious: more likely to yield to another car and strictly following the laws of the road. At the other end, the robocar would be aggressive: more likely to go first at the stop sign. When going through a four-way intersection, for example, it yields to other vehicles based on road rules; but if other cars don’t reciprocate, it advances a bit to show the other drivers its intention. [A sketch of this yield-then-advance logic follows this excerpt.]
>
> However, there is a time period between a problem being diagnosed and the car being fixed. In theory, one would disable the vehicle remotely and only start it back up when the problem is fixed. In reality, however, this would be extremely disruptive to a person’s life, as they would have to tow their vehicle to the nearest mechanic (or autonomous-vehicle equivalent) to solve the issue. Google has not developed technology to address this problem, instead relying on the human driver to take control of the vehicle if there is ever a problem in its test vehicles.
>
> [previous Lu quote about human-centric laws] …this can create particularly tricky situations, such as deciding whether the police should have the right to pull over autonomous vehicles - a question yet to be answered. Even the chief counsel of the National Highway Traffic Safety Administration admits that the federal government does not have enough information to determine how to regulate driverless technologies. This will become a particularly thorny issue at the first accident between an autonomous vehicle and a human-driven one, when liability must be assigned.
>
> This question of liability arose during an [unpublished 11 February 2012] interview on the future of autonomous vehicles with Roger Noll. Although Professor Noll hasn’t read the current literature on this issue, he voiced concern over what the verdict of the first trial over an accident between an autonomous vehicle and a normal car will be. He believes that the jury will almost certainly side with the human driver regardless of the details of the case; as he eloquently put it in his husky Utah accent, with subsequent laughter: “how are we going to defend the autonomous vehicle; can we ask it to testify for itself?” To answer Noll’s question, Brad Templeton’s blog elaborates why he believes liability is a largely unimportant question, for two reasons. First, with a new technology, there is no question that any lawsuit over any incident involving the cars will include the vendor as a defendant, so potential vendors must plan for liability. Second, Templeton makes an economic argument: the cost of accidents is borne by car buyers through higher insurance premiums, and if the accidents are instead deemed the fault of the vehicle maker, this cost goes into the price of the car and is paid for by the vehicle maker’s insurance or self-insurance. Instead, Templeton believes the big question is whether the liability assigned in any lawsuit will be significantly greater than it is in ordinary collisions, because of punitive damages. In theory, robocars should drive costs down because of the reduction in collisions, which means savings for the car buyer and for society, and thus cheaper auto insurance. However, if the cost per collision is much higher even though the number of collisions drops, it is uncertain whether autonomous vehicles will save money for both parties.
>
> California’s Proposition 103 dictates that any insurance policy’s price must be based on weighted factors, and the top 3 weighted factors must be: (1) driving record, (2) number of miles driven, and (3) number of years of driving experience. Other factors, like the type of car someone has (i.e. an autonomous vehicle), will be weighted lower. Consequently, this law makes it very hard to get cheap insurance for a robocar.
>
> Nevada Policy: AB 511, Section 8. This short piece of legislation accomplishes the goal of setting good standards for the DMV to follow. By setting general standards (part a), insurance requirements (part b), and safety standards (part c), it sets a precedent for these areas without getting bogged down in details, leaving those to be decided by the DMV rather than the politicians. …Part b only discusses insurance briefly, saying the state must “Set forth requirements for the insurance that is required to test or operate an autonomous vehicle on a highway within this State.” The definitions set in the second part of Section 8 are not specific enough: following the open-ended standards set earlier in Section 8 is good for continuity, but does not technically address the problem. According to Ryan Calo, Director of Privacy and Robotics for Stanford Law School’s Center for Internet and Society (CIS), the bill’s definition of “autonomous vehicles” is unclear and circular. The legislation treats autonomous driving as binary, when in reality it falls along a spectrum.
>
> Overall, AB 511 did not address the technological liabilities, and barely mentioned the non-technological liabilities, that must be overcome for the future success of autonomous vehicles. Since it was the first legislation ever to approach the issue of autonomous vehicles, it is understandable that the policymakers did not want to go into specifics, instead relying on future regulation to determine the details.
>
> California Policy: SB 1298…would require the adoption of safety standards and performance requirements to ensure the safe operation and testing of “autonomous vehicles” on California public roads. The bill would allow autonomous vehicles to be operated or tested on public roads on the condition that they meet the safety standards and performance requirements of the bill. SB 1298’s 66 lines of text are also considerably longer than AB 511’s 12 lines of relevant text (the entirety of AB 511 is much longer, but consists of information irrelevant for the purposes of autonomous cars).
>
> SB 1298 clearly intends to accommodate company-developed vehicles, saying in Section 2, Part B that “autonomous vehicles have been operated safely on public roads in the state in recent years by companies developing and testing this technology”, and that these companies have set the standard for what safety standards will be necessary for future testing by others. This part of the legislation implicitly supports Google’s autonomous vehicle program, since Google has the most extensively tested fleet of vehicles of any company, and nearly all of that testing has been done in California. The bill improves over AB 511 by putting more control in Google’s hands to focus on developing the technology - a signal by the policymakers that they want a climate favorable to Google’s innovation within the constraints of keeping society safe.
>
> To avoid setting a dangerous precedent for liability in accidents, policymakers can consider protecting the car companies from frivolous and malicious lawsuits. Without such legislation, future plaintiffs will be free to sue Google and put full liability on it. There are also potential free-rider effects from the moral hazard of putting the blame on the company that makes the technology rather than the company that manufactures the vehicle: if we assume that autonomous-vehicle technology will all come from one source (Google), then any accident that occurs will pin the blame primarily on Google, the common denominator, rather than on the car manufacturer…Policy that keeps the cost per accident close to today’s cost will save money for both the insurer and the customer. This could mean capping awards to plaintiffs, or punishments to the company, to limit shocks to the industry. Overall, a policymaker could phase in limits on the amount of liability placed on the vendor as certain technology or scaling milestones are met without accidents.
>
> SB 1298 covers some of the shortcomings of AB 511, improving the definition of an autonomous vehicle and looking further toward the future by giving Google more responsibility and alleviating some of the non-technical liability by considering its product “under development”. However, both pieces of legislation fail to address specific technical liabilities, such as bugs in the code base or computer attacks, and non-technical liabilities, such as insurance or accident liability.
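
As flagged in the excerpt above, here is a toy sketch of Pinto's yield-then-advance behavior at a four-way stop, parameterized by an adjustable "personality"; all names and thresholds are illustrative assumptions, not Google's implementation:

```python
def four_way_stop(aggressiveness, seconds_waited, others_waiting, others_yielding):
    """Decide what to do at a 4-way stop.

    aggressiveness: 0.0 (cautious, yields readily) to 1.0 (goes first).
    seconds_waited: time spent stopped at the sign.
    others_waiting: number of other cars also stopped.
    others_yielding: True if cross traffic is visibly holding back.
    All thresholds are made-up illustrative values.
    """
    if others_waiting == 0 or others_yielding:
        return "go"  # either it's our turn or the others reciprocated
    # A cautious car waits a long time before asserting itself; an
    # aggressive one creeps forward early to signal its intention.
    if seconds_waited > 4.0 * (1.0 - aggressiveness):
        return "creep_forward"
    return "yield"

# The same situation under two personalities:
print(four_way_stop(0.2, 1.0, others_waiting=2, others_yielding=False))  # 'yield'
print(four_way_stop(0.9, 1.0, others_waiting=2, others_yielding=False))  # 'creep_forward'
```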

“Can I See Your License, Registration and C.P.U.?”, Tyler Cowen; see also his “What do the laws against driverless cars look like?”:


> The driverless car is illegal in all 50 states. Google, which has been at the forefront of this particular technology, is asking the Nevada legislature to relax restrictions on the cars so it can test some of them on roads there. Unfortunately, the very necessity for this lobbying is a sign of our ambivalence toward change. Ideally, politicians should be calling for accelerated safety trials and promising to pass liability caps if the cars meet acceptable standards, whether that be sooner or later. Yet no major public figure has taken up this cause.
>
> Enabling the development of driverless cars will require squadrons of lawyers, because a variety of state, local and federal laws presume that a human being is operating the automobiles on our roads. No state has anything close to a functioning system to inspect whether the computers in driverless cars are in good working order, much as we routinely test emissions and brake lights. Ordinary laws change only if legislators make those revisions a priority. Yet the mundane political issues of the day often appear quite pressing, not to mention politically safer than enabling a new product that is likely to engender controversy.
>
> Politics, of course, is often geared toward preserving the status quo, which is highly visible, familiar in its risks, and lucrative for companies already making a profit from it. Some parts of government do foster innovation, such as DARPA, the Defense Advanced Research Projects Agency, which is part of the Defense Department. DARPA helped create the Internet and is supporting the development of the driverless car. It operates largely outside the public eye; the real problems come when its innovations start to enter everyday life and meet political resistance and disturbing press reports.
>
> …In the meantime, transportation is one area where progress has been slow for decades. We’re still flying 747s, a plane designed in the 1960s. Many rail and bus networks have contracted. And traffic congestion is worse than ever. As I argued in a previous column, this is probably part of a broader slowdown of technological advances.
>
> But it’s clear that in the early part of the 20th century, the original advent of the motor car was not impeded by anything like the current mélange of regulations, laws and lawsuits. Potentially major innovations need a path forward through the current thicket of restrictions. That the debate on this issue is so quiet shows the urgency of doing something now.

Ryan Calo of the *CIS* argues essentially that no specific law *bans* autonomous cars and the threat of the human-centric laws & regulations is overblown. (See the later Russian incident.)

“SCU conference on legal issues of robocars”, Brad Templeton:


> Liability: After a technology introduction in which Sven Bieker of Stanford outlined the challenges that, in his view, put fully autonomous robocars 2 decades away, the first session was on civil liability. The short message was that, based on a number of related cases from the past, it will be hard for manufacturers to avoid liability for any safety problems with their robocars, even when the systems were built to provide the best statistical safety outcome by trading off one type of safety against another. In general, when robocars come up as a subject of discussion in web threads, I frequently see “Who will be liable in a crash?” as the first question. I think it’s a largely unimportant question, for two reasons. First of all, when the technology is new, there is no question that any lawsuit over any incident involving the cars will include the vendor as a defendant - in many cases with justifiable reasons, but even if there is no easily seen reason why. So potential vendors can’t expect not to plan for liability. But most of all, the reality is that in the end the cost of accidents is borne by car buyers. Normally, they do it by buying insurance. If the accidents are instead deemed the fault of the vehicle maker, this cost goes into the price of the car, and is paid for by the vehicle maker’s insurance or self-insurance. It’s just a question of figuring out how the vehicle buyer will pay, and the market should be capable of that (though see below). No, the big question in my mind is whether the liability assigned in any lawsuit will be significantly greater than it is in ordinary collisions where human error is at fault, because of punitive damages…Unfortunately, some liability history points to the latter scenario, though it is possible for statutes to modify this.
>
> Insurance: …Because Prop 103 [specifying insurance pricing by weighted factors; see above] is a ballot proposition, it can’t easily be superseded by the legislature: it takes a 2/3 vote and a court agreeing that the change matches the intent of the original ballot proposition. One would hope the courts would agree that cheaper insurance to encourage safer cars matches the voters’ intent, but this is a challenge.
>
> Local and criminal laws: The session on criminal laws centered more on the traffic code (which isn’t really criminal law) and the fact that it varies a lot from state to state. Indeed, any robocar that wants to operate in multiple states will have to deal with this, though fortunately there is a federal standard on traffic controls (signs and lights) to rely on. Some global standards are a concern: the Geneva Convention on traffic laws requires that every car have a driver who is in control of the vehicle. However, I think that governments will be able to quickly see - if they want to - that these are laws in need of updating. Some precedent in drunk driving can create problems: people have been convicted of DUI for being in their car, drunk, with the keys in their pocket, because they had clear intent to drive drunk. However, one would hope the possession of a robocar (of the sort that does not need human manual driving) would express an entirely different intent to the law.
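
Templeton's argument reduces to expected-value arithmetic: total liability cost per car-year is the collision rate times the cost per collision, however the payment is routed. A sketch with invented figures shows how punitive damages could swamp even a 10x safety gain:

```python
# All figures invented purely for illustration of Templeton's argument.
cost_per_crash = 20_000       # $ average liability per collision (assumed)

human_crash_rate = 0.05       # collisions per car-year (assumed)
human_cost = human_crash_rate * cost_per_crash
# = $1,000/car-year, paid via the driver's insurance premium

robocar_crash_rate = 0.005    # 10x fewer collisions (assumed)
punitive_multiplier = 20      # 20x damages per collision (assumed)
robocar_cost = robocar_crash_rate * cost_per_crash * punitive_multiplier
# = $2,000/car-year, folded into the car's price via vendor (self-)insurance

# Despite the 10x safety gain, per-car liability cost doubles:
print(f"human: ${human_cost:,.0f}/yr, robocar: ${robocar_cost:,.0f}/yr")
```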

“Definition of necessary vehicle and infrastructure systems for Automated Driving”, European Commission report, 29 June 2011:


> Yet another paramount aspect, tightly related to automated driving at present and in the near future, and certainly related to autonomous driving in the long run, is the interpretation of the Vienna Convention. The report will show how this European legislation is commonly interpreted, how it creates the framework necessary to deploy automated and cooperative driving systems on a large scale, and what legal limitations are foreseen in making the new step toward autonomous driving. The report analyses other conventions and legislative acts in the same context, searches for gaps in the current legislation, and draws an interesting link with the aviation industry, from which several lessons can be learnt.
>
> It seems appropriate to end this summary with a few remarks not directly related to the subject of this report, but worthwhile when thinking about automated driving, cooperative driving, and autonomous driving. Progress in human history has systematically taken the path of least resistance and has often bypassed governmental rules, business models, and the obvious thinking. At the end of the 1990s nobody anticipated the prominent role the smartphone would have in 10 years, but scientists were busy planning journeys to Mars within the same timeframe. The latter has not happened and will probably not happen soon… One lesson humanity has learnt during its existence is that historical changes which followed the path of minimum resistance triggered fundamental changes in society at a later stage. “A car is a car”, as David Strickland, administrator of the National Highway Traffic Safety Administration (NHTSA) in the U.S., said in his speech at the Telematics Update conference in Detroit, June 2011 - but it may soon drive its progress along a historical path of minimum resistance.
>
> An automated driving system needs to comply with the Vienna Convention (see Section 3, aspect 2). The private sector, especially those who are in the end responsible for the performance of the vehicle, should be involved in the discussion.
>
> The Vienna Convention on Road Traffic is an international treaty designed to facilitate international road traffic and to increase road safety by standardizing uniform traffic rules among the contracting parties. The convention was agreed upon at the United Nations Economic and Social Council’s Conference on Road Traffic (October 7 - November 8, 1968) and came into force on May 21, 1977. Not all EU countries have ratified the treaty (e.g. Ireland, Spain and the UK did not; see Figure 13 of the report). It should be noted that in 1968, animals were still used for traction of vehicles and the concept of autonomous driving was considered science fiction. This matters when interpreting the text of the treaty: does one interpret it strictly, to the letter, or by what was meant at the time?
>
> The common opinion of the expert panel is that the Vienna Convention will have only a limited effect on the successful deployment of automated driving systems, for several reasons: