Privacy, Emerging Technologies and the Law
By Marc Blitz, Series Editor of Palgrave Studies in Law, Neuroscience and Human Behavior.
Computers raise daunting challenges for our privacy – and for the law of privacy. They store massive amounts of data – with even a small phone able to hold the contents of what, in physical form, would require an entire library building. And with intensive analysis of that data, a computer can learn still more about us. Even if no one piece of that data clearly reveals that a person is pregnant, seeking a new job, or contemplating a move to a new city, there may be patterns in that data that do. Artificial intelligence raises the stakes – because it allows computers to “attend to,” and perhaps act upon, data about us even when government is far short of the manpower to do so. And it raises new challenges for courts and lawyers: while they might expect government officials and other citizens to read and obey the principles encoded in privacy law, AI entities may have to be programmed to obey these laws or to respect core principles of liberty and privacy.
This challenge is already somewhat familiar to judges, lawyers, and those who think about law and law enforcement: each year, the U.S. Supreme Court and other courts face new cases on computer searches and electronic surveillance. What is less familiar, but may one day be just as important, is a variant of this computer- and AI-generated challenge to privacy – one directed not only at our external behavior (whether physical activities or Internet searches) but at the internal world of our thoughts and feelings. Equipped with fMRI machines, EEG recordings, or other “neuroimaging” technologies, scientists have made modest progress in what some commentators call “brain-based mind reading.” They have used computers to match particular activity patterns in brain images with specific tasks required of, or stimuli presented to, an experimental subject – and thus to provide a possible foundation for inferring that when we see the same brain activity pattern elsewhere, it must be because the person from whom it comes is experiencing the matching thought or feeling.
Another type of technology-based “mind reading” may become possible in the future if and when brain activity is not only measured and analyzed by computers, but partially generated by them. Instead of being merely a bystander observing as the brain generates the mental world, in other words, a computer might be a participant in that process. This might occur, for example, where individuals use “brain implants” to replace lost mental function, or use some other, less invasive brain-computer interface to generate certain types of brain activity – or perhaps to control certain aspects of the external world (like computer inputs) directly with their minds, rather than via muscle movements.
Assuming these technologies do come to play a more significant part in everyday life, courts, lawyers, and ethicists will confront an important question: Do the privacy safeguards they are creating to protect us from surveillance from the outside – surveillance performed with cameras that record us as we walk or drive down a street, or from far-away computers that track us as we move through the World Wide Web – also apply, in a straightforward way, to surveillance conducted from the inside – that is, through the recording of, and inference from, our brain activity? This is one of the key topics that Palgrave Studies in Law, Neuroscience and Human Behavior will explore as it asks what implications emerging neuroscience technologies may have for the way we understand, elaborate, and give force to constitutional liberties.
About the editor
Marc Blitz is the Allan Joseph Bennett Professor of Law at Oklahoma City University. He is a constitutional law scholar whose writing has focused on freedom of speech, privacy law, and law enforcement investigations – and how these areas of law are affected by neuroscience and emerging technologies.