It may sound like a ‘through the looking glass’ paradox, but the US National Highway Traffic Safety Administration (NHTSA) has decided, in the face of relentless innovation in driverless vehicles, that cars can be their own drivers. This decision has enormous implications, and was motivated by the design questions raised by future AI-driven cars.
Chris Urmson, the director of Google’s driverless car initiative, raised the issue with NHTSA, asking how the agency interprets the Federal Motor Vehicle Safety Standards (FMVSS) vis-à-vis smart cars:
Wayne Cunningham, Feds declare that Google’s self-driving car is its own driver
NHTSA posted a detailed response on its website. The response shows that Google was concerned about how the FMVSS would apply to a computer-controlled car lacking a steering wheel or any other traditional driver controls. Urmson suggested three possible interpretations: that NHTSA could read the FMVSS as not applying to Google’s cars at all, that it could require a traditional interpretation with a driver assumed in the left front seat, or that the system controlling the car could be considered the driver.
In its letter, NHTSA chose the third option, determining that the self-driving system is the driver for purposes of the FMVSS.
In principle, this means that Google (and others) can design cars without human-oriented driver controls: no steering wheel, accelerator, brake pedal, or rear-view mirrors, for example.
This might also open the door to something perhaps just as important: if an AI-driven car is its own driver and no person riding in the car is playing that role, then in the case of an accident no human is responsible, since the car is the driver. NHTSA may have adroitly resolved the question of driver accountability for the coming smart car future.
Originally published at stoweboyd.com on 10 February 2016.