The Future of Autonomous Vehicles Is Here, but Are We Ready?

Self-driving vehicles, also known as autonomous vehicles, are the wave of the future and are touted as the safest path toward significantly reducing motor vehicle injuries and deaths by eliminating the leading cause of crashes: human error. As we move closer to an age in which autonomous vehicles populate our roadways, the question of who is responsible for crashes, and who is liable for the damages stemming from accidents involving autonomous vehicles, looms large.

In one of the first lawsuits of its kind in the United States, a motorcyclist in California recently filed suit against General Motors (GM), claiming “negligent driving” by GM’s autonomous vehicle, the Cruise AV. The Cruise AV was allegedly operating in self-driving mode at the time of the collision; a human driver was present but allegedly did not have his hands on the steering wheel when the crash occurred. GM claims the motorcyclist was at fault because he drove between two lanes of traffic (a legal maneuver in California known as “lane splitting”) and tried to overtake and pass a vehicle on the right before it was safe to do so. The motorcyclist claims the automated vehicle aborted a lane change and swerved back into its original lane, striking him and knocking him to the ground, causing injuries that led to lost wages and medical expenses.

Proponents of autonomous vehicles argue that they are much safer than vehicles operated by human drivers, that fewer deaths and injuries from motor vehicle crashes will occur because autonomous vehicles will be programmed to obey the speed limit and follow the rules of the road, and that drunk and distracted driving will end, among other arguments in favor of semi- and fully autonomous vehicles.

There are six levels of driving automation (Levels 0 through 5 on the SAE scale), ranging from no driving automation, where the human driver is totally in control, to full automation, where there is no human driver whatsoever, no steering wheel, and no accelerator or brake pedals. As we transition from one extreme to the other, and human drivers in older, non-autonomous vehicles mix with partially and fully automated vehicles, how do we decide who or what is at fault when an accident occurs?
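For readers keeping score, the scale most often cited is SAE J3016, which defines Levels 0 through 5. A minimal sketch in Python, with illustrative class and function names of my own (nothing here comes from the standard itself), might represent those levels like this:

```python
from enum import IntEnum

class SAEAutomationLevel(IntEnum):
    """The six SAE J3016 levels of driving automation."""
    NO_AUTOMATION = 0           # human driver does all of the driving
    DRIVER_ASSISTANCE = 1       # system assists with steering OR speed
    PARTIAL_AUTOMATION = 2      # system handles steering AND speed; driver must monitor
    CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions; driver must take over on request
    HIGH_AUTOMATION = 4         # system drives in limited conditions; no takeover expected
    FULL_AUTOMATION = 5         # system drives everywhere; no steering wheel or pedals needed

def human_must_be_ready_to_drive(level: SAEAutomationLevel) -> bool:
    # Simplified rule of thumb: at Levels 0-3 a human must be present and
    # ready to drive; at Levels 4-5 the system handles the driving task
    # within its design domain.
    return level <= SAEAutomationLevel.CONDITIONAL_AUTOMATION
```

Where exactly a given vehicle sits on that scale is precisely the kind of fact a court would need to establish before assigning fault.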

Experts believe that deciding responsibility for a crash involving an autonomous vehicle will turn on two questions: How would a human driver have reacted in a similar situation, and did the autonomous vehicle react as anticipated?

Fault in accidents involving autonomous vehicles will be determined by data: data from the automated vehicle, from the personal devices of witnesses and parties to the accident, from cameras at intersections, and from other data sources and roadway technology not yet developed but which will become necessary as more automated vehicles take to U.S. roadways. “Sometimes these data will provide certainty (allowing investigators to ‘replay’ a crash) and sometimes they will actually introduce new uncertainty,” according to Bryant Walker Smith, an expert on the law of self-driving vehicles and a law professor at the University of South Carolina.
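To make the idea of “replaying” a crash concrete, the sketch below merges time-stamped records from several sources into a single chronological timeline. Every name and event here is invented for illustration, loosely echoing the disputed maneuver described above; it is not drawn from any actual investigation or forensic toolkit.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TelemetryEvent:
    timestamp: float   # seconds since the start of the recording window
    source: str        # e.g. "vehicle_log", "intersection_camera", "witness_phone"
    description: str   # what the sensor or device recorded

def build_crash_timeline(events: List[TelemetryEvent]) -> List[TelemetryEvent]:
    """Merge events from every available source into one chronologically
    ordered timeline that investigators could step through (a 'replay')."""
    return sorted(events, key=lambda e: e.timestamp)

# Hypothetical example: three sources describing the same few seconds
timeline = build_crash_timeline([
    TelemetryEvent(2.4, "vehicle_log", "lane change initiated"),
    TelemetryEvent(3.1, "vehicle_log", "lane change aborted, returning to lane"),
    TelemetryEvent(3.2, "intersection_camera", "motorcycle visible in adjacent lane"),
    TelemetryEvent(3.4, "witness_phone", "contact between vehicle and motorcycle"),
])
for event in timeline:
    print(f"t={event.timestamp:>4}s  [{event.source}] {event.description}")
```

Even in this toy version, the hard part is apparent: the sources keep their own clocks, and reconciling them is one way the data can introduce the new uncertainty Smith describes.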

Before autonomous vehicles take to our roadways in large numbers, however, many other important questions must be answered, including, but not limited to:

How do cars with human drivers and those with different autonomous capabilities co-exist on our roadways?

Who’s liable if an accident involves more than one autonomous vehicle?

Who’s responsible when autonomous vehicles have computer programming or sensor malfunctions and cause accidents?

Who’s responsible if an autonomous vehicle is hacked and causes an accident?

Can autonomous vehicles be programmed to anticipate the actions of a bad driver in order to avoid an accident?

Are autonomous vehicle movements reasonable and predictable to human drivers?

In real-world driving scenarios, can computers, sensors, and robots make the same necessary judgments and split-second decisions that human drivers make?

When autonomous vehicles manufactured and programmed by different automakers share the road, is there some standard algorithm they all follow, or does each manufacturer have a different standard that will create new problems that don’t exist now?

What about violations of privacy rights arising from the constant monitoring and transmission of data by autonomous vehicles and by those with which they share the road?

Although the future of autonomous vehicles is not a question of “if” but of “when,” our fervent hope is that state and federal legislators, autonomous vehicle manufacturers, computer programmers, IT developers, and everyone else involved will not rush to get these vehicles on the road, but will approach their responsibility thoughtfully, cautiously, and deliberately before allowing this revolutionary technology to impact U.S. roadways and those who travel on them.