Schrödinger’s Robot: Privacy in Uncertain States

Ian Kerr


Can robots or AIs operating independently of human intervention or oversight diminish our privacy? There are two equal and opposite reactions to this issue. On the robot side, machines are starting to outperform human experts in an increasing array of narrow tasks, including driving, surgery, and medical diagnostics. This is fueling a growing optimism that robots and AIs will exceed humans more generally and spectacularly; some think to the point where we will have to consider their moral and legal status. On the privacy side, one sees the very opposite: robots and AIs are, in a legal sense, nothing. The received view is that since robots and AIs are neither sentient nor capable of human-level cognition, they are of no consequence to privacy law. This article argues that robots and AIs operating independently of human intervention can and, in some cases, already do diminish our privacy. Epistemic privacy offers a useful analytic framework for understanding the kind of cognizance that gives rise to diminished privacy. Because machines can actuate on the basis of the beliefs they form in ways that affect people’s life chances and opportunities, I argue that they demonstrate the kind of cognizance that definitively implicates privacy. Consequently, I conclude that legal theory and doctrine will have to expand their understanding of privacy relationships to include robots and AIs that meet these epistemic conditions. An increasing number of machines already possess such epistemic qualities, and their proliferation forces this rethinking now.