Who’s liable? The AV or the human driver?


AVs remove people from the hands-on task of driving and thus pose a complex challenge to today's accident tort law, which primarily punishes humans. Legal experts anticipate that self-driving car manufacturers (car designers, sensor vendors, software developers, car producers, and other parties who contribute to design, manufacturing, and testing) will directly influence traffic through the driving algorithms they program. While these algorithms make manufacturers indispensable actors whose product liability could play a critical role, policy makers have not yet devised a quantitative method for apportioning losses between the self-driving car and the human driver.

To tackle this problem, researchers at Columbia Engineering and Columbia Law School have developed a joint fault-based liability rule that can be used to regulate both self-driving car manufacturers and human drivers. They propose a game-theoretic model that describes the strategic interactions among the law maker, the self-driving car manufacturer, the self-driving car, and human drivers, and examine how, as the market penetration of AVs increases, the liability rule should evolve.

Their findings are outlined in a new study to be presented on January 14 by Sharon Di, assistant professor of civil engineering and engineering mechanics, and Eric Talley, Isidor and Seville Sulzbacher Professor of Law, at the Transportation Research Board's 99th Annual Meeting in Washington, D.C.

While most current studies have focused on designing AVs' driving algorithms for various scenarios to ensure traffic efficiency and safety, they have not explored human drivers' behavioral adaptation to AVs. Di and Talley wondered about a possible "moral hazard" effect on humans: as people encounter AVs in traffic more and more often, might they become less inclined to exercise "due care" around AVs and drive in a riskier fashion?

"Human drivers perceive AVs as intelligent agents with the ability to adapt to more aggressive and potentially dangerous human driving behavior," says Di, who is a member of Columbia's Data Science Institute. "We found that human drivers may take advantage of this technology by driving carelessly and taking more risks, because they know that self-driving cars would be designed to drive more conservatively."

The researchers used game theory to model a world of interacting players, each selecting actions to optimize his or her own goals. The players, namely law makers, AV manufacturers, AVs, and human drivers, have different goals in the transportation ecosystem. Law makers want to regulate traffic for improved efficiency and safety, self-driving car manufacturers are profit-driven, and both self-driving cars and human drivers interact on public roads and seek the best driving strategies. To capture the complex interaction among all the players, the researchers applied game-theoretic methods to identify the strategies the players settle on, such that no player can benefit by unilaterally changing his or her decision.
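The equilibrium idea described above can be illustrated with a toy two-player game. This is a minimal sketch, not the authors' model: the payoff numbers below are hypothetical, chosen only so that the "moral hazard" pattern appears, where a human driver's risky strategy pays off precisely because the AV drives conservatively.

```python
# Toy 2x2 game between an AV and a human driver (hypothetical payoffs,
# not taken from the study). payoffs[(av, human)] = (av_payoff, human_payoff).
payoffs = {
    ("conservative", "careful"): (3, 3),  # both cautious: safe, smooth traffic
    ("conservative", "risky"):   (1, 4),  # human free-rides on the cautious AV
    ("aggressive",   "careful"): (4, 1),  # AV gains while the human yields
    ("aggressive",   "risky"):   (0, 0),  # both aggressive: high crash risk
}

AV_STRATEGIES = ["conservative", "aggressive"]
HUMAN_STRATEGIES = ["careful", "risky"]

def pure_nash_equilibria(payoffs):
    """Return strategy pairs where neither player gains by deviating alone."""
    equilibria = []
    for av in AV_STRATEGIES:
        for human in HUMAN_STRATEGIES:
            av_pay, human_pay = payoffs[(av, human)]
            # Best-response checks: no unilateral deviation improves a payoff.
            av_ok = all(payoffs[(a, human)][0] <= av_pay for a in AV_STRATEGIES)
            human_ok = all(payoffs[(av, h)][1] <= human_pay for h in HUMAN_STRATEGIES)
            if av_ok and human_ok:
                equilibria.append((av, human))
    return equilibria

print(pure_nash_equilibria(payoffs))
# -> [('conservative', 'risky'), ('aggressive', 'careful')]
```

With these illustrative numbers, one of the two equilibria is exactly the moral-hazard outcome: the AV drives conservatively and the human driver rationally responds by driving in a risky fashion. A liability rule of the kind the researchers propose would, in effect, reshape such payoffs so that careful driving remains each player's best response.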


Story Source: Materials provided by Columbia University School of Engineering and Applied Science. Original written by Holly Evarts. Note: Content may be edited for style and length.

