Today at the Algorithms in Culture conference at the Berkeley Institute of Data Science, Helen Nissenbaum of NYU gave a fabulous keynote, “Values in Algorithms: Then and Now,” in which she examined bias and discrimination in, or resulting from, algorithms in credit scoring, IoT, predictive analytics, and targeted advertising. A central theme of her talk was accountability, bias, and governability of emerging technologies – along with other “newfangled societal quandaries.” She touched on insurance, aircraft autopilot systems, and driverless cars, all topics dear to the risk analyst.
Citing massive data breaches that neither broke laws nor resulted in civil litigation, Helen suggested that a reduction in accountability has accompanied automation. She fears that future delegation of function will not be accompanied by a corresponding delegation of accountability.
This made me wonder whether in the case of driverless cars, the reverse might actually be true. When human-controlled cars crash into each other, we do have a strong sense of accountability in most US states. But this hinges on the decidability of fault and blame, which is not a trivial detail. Accountability isn’t simply a moral matter; it is a legal one involving highly nuanced legal code, judges, juries, and evidence collection. The process of deciding accountability in car crashes is both flawed and somewhat arbitrary.
I asked Helen if she had considered the possibility that driverless cars could in fact increase real accountability, in the sense that when driverless cars crash, the blame will fall directly on a car maker. If driverless-car technology is feasible outside of the Bay Area sandbox, and if it is as reliable as auto-flight systems, determining fault, even in the case of two-car collisions, will be much less arbitrary. In that sense, the algorithms might effectively reduce bias. She seemed to think the answer would depend on the reliability of the cars and how their accident rates compare to those of human drivers. I’m less convinced that reliability need enter the equation, but I haven’t really explored the topic in any detail.
Regardless of the bias issue, I’d think driverless cars will force big changes on how insurance premiums are determined, and on the insurance world in general. Perhaps more interesting, driverless-car algorithms will have to embody risk analysis and risk-reward calculus in a major way. The trolley problem, in all its incarnations – a favorite of philosophers, who force us to decide which life to save – may have to be encoded; and software developers might need to minor in Mill, Bentham, and rule-utilitarianism.
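To make the idea of “encoding” such a calculus concrete, here is a minimal, entirely hypothetical sketch: a planner chooses the maneuver with the lowest expected harm, but in a rule-utilitarian spirit it first excludes maneuvers that break a hard rule. All names, weights, and probabilities here are invented for illustration; nothing of the sort appears in any real driverless-car system I know of.

```python
# Toy sketch (hypothetical) of a risk-reward calculus for maneuver selection:
# minimize expected harm, subject to rule-based constraints.
from dataclasses import dataclass, field

@dataclass
class Maneuver:
    name: str
    p_harm: dict = field(default_factory=dict)  # party -> assumed probability of serious harm
    violates_rule: bool = False                 # e.g., crossing into oncoming traffic

# Assumed harm weights; a real system would need a far richer model.
WEIGHTS = {"pedestrian": 1.0, "passenger": 1.0, "other_driver": 1.0}

def expected_harm(m: Maneuver) -> float:
    """Act-utilitarian score: probability-weighted sum of harms."""
    return sum(WEIGHTS[party] * p for party, p in m.p_harm.items())

def choose(maneuvers):
    # Rule-utilitarian flavor: never consider a maneuver that breaks a hard
    # rule, even if its act-utilitarian sum looks better; fall back only if
    # no rule-abiding option exists at all.
    legal = [m for m in maneuvers if not m.violates_rule]
    candidates = legal or maneuvers
    return min(candidates, key=expected_harm)

options = [
    Maneuver("brake_straight", {"pedestrian": 0.3, "passenger": 0.05}),
    Maneuver("swerve_left", {"other_driver": 0.1, "passenger": 0.1}, violates_rule=True),
]
print(choose(options).name)  # -> brake_straight (the only rule-abiding option)
```

Even this toy version surfaces the philosophical commitments: the weights encode whose harm counts for how much, and the hard-rule filter is exactly the Mill-versus-Bentham choice the paragraph above alludes to.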