The so-called tech-lash continues: an industry that was a public darling only a few years ago keeps coming under fire. Witness the accelerating U.S. presidential election cycle, in which candidates have found it politically useful to attack tech giants for their transgressions.
This reflex won't be enough to sideline the best products of our ongoing tech revolution—such as that complex of systems and solutions that cluster under the label of the Internet of Things (IoT). But it does mean that the decision-makers responsible for these solutions will have to do better in justifying them. More to the point, they'll need to make clear to the world how they'll grapple with the unforeseen consequences that any major tech initiative will bring.
When it comes to the IoT, the sticking point is privacy: how to approach it, preserve it, think about it, claw it back where it's been lost, and even define it in the first place. The stakes are high. Just ask politicians in Canada. That's where a public outcry has meant the end of Quayside, a project in which Alphabet's Sidewalk Labs subsidiary was to build a model high-tech, IoT-enabled smart city on a stretch of Toronto's Lake Ontario waterfront. Five years ago, this initiative, which Prime Minister Justin Trudeau supported, would likely have sailed right through. But in the current climate of anxiety about overweening big tech, the "end of privacy," the misuse of data, and "surveillance capitalism," the rules of the game have changed.
How can IoT vendors cope with these new rules? First of all, by taking them seriously. That means more than ensuring, or claiming to ensure, that you're going to handle user data responsibly. It means devising and hewing to a comprehensive framework for digital ethics writ large—a framework within which privacy forms only a part.
What might such an approach look like in practice?
Look for the following things. First, transparency has to be the watchword when it comes to the collection and use of data. When they go to market, vendors should "lead with" their explanations of what they plan to do with any given project's data, as opposed to providing that information on a need-to-ask basis.
Second, data risks, and response protocols in the event of data breaches, need to be painstakingly—and, again, transparently—mapped out. Breakdowns both large and small need to be anticipated, not improvised around.
And finally, there has to be clear ownership of privacy and data issues at every step along the data-use process. (Hiring a chief privacy officer wouldn't be a bad first step toward making that happen.) The sketch below shows one way these three pointers might fit together in practice.
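None of this has to stay abstract. Here is one possible shape for a "lead with it" disclosure, sketched as a minimal machine-readable record. Every field name, value, and contact below is invented for illustration, not drawn from any real vendor's practice:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataUseDisclosure:
    """A hypothetical up-front disclosure for one IoT deployment."""
    data_collected: List[str]   # what the sensors actually record
    purposes: List[str]         # every planned use, stated before launch
    retention_days: int         # how long raw data is kept
    privacy_owner: str          # the named person accountable at this step
    breach_contact: str         # whom users reach if something goes wrong
    shared_with: List[str] = field(default_factory=list)  # third parties, if any

# Published when the vendor goes to market, not on a need-to-ask basis.
disclosure = DataUseDisclosure(
    data_collected=["conference-room occupancy counts"],
    purposes=["suggesting free rooms to employees"],
    retention_days=30,
    privacy_owner="Chief Privacy Officer",
    breach_contact="privacy@example.com",
)
```

Note how the record bakes in all three pointers: the purposes are stated up front, a breach contact is part of the schema rather than an afterthought, and a named owner travels with the data.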
Even better than the chance to opt out of a given system is being able to choose whether to participate in the first place, and how much and in what way. As digital ethicist Luca van der Heide writes, it should be the user who makes the choice whether or not to interact with a digital system, not the system itself.
To put it another way, a system shouldn't invade our space. It should offer us the chance to participate in it.
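In code, van der Heide's principle reduces to a default. Here is a minimal sketch, assuming a three-level participation model that is my own invention rather than anything he prescribes:

```python
from enum import Enum
from typing import Optional

class Participation(Enum):
    NONE = "none"            # the default: the system collects nothing
    ANONYMOUS = "anonymous"  # aggregate signals only, no identifiers
    FULL = "full"            # identified data, for personalized features

def record_event(choice: Participation, event: dict) -> Optional[dict]:
    """Collect only what the user has explicitly chosen to share."""
    if choice is Participation.NONE:
        return None  # no opt-in, no data: the system stays out of the user's space
    if choice is Participation.ANONYMOUS:
        return {k: v for k, v in event.items() if k != "user_id"}
    return event

# Absent an explicit choice, nothing is recorded.
print(record_event(Participation.NONE, {"user_id": "u42", "room": "4B"}))  # None
```

The important design choice is that NONE is the starting state: the user interacts with the system only by deciding to, never the other way around.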
A couple of obvious objections pop up here regarding the issue of opting in or out of a comprehensive IoT system. After all, lots of IoT systems preclude choice. With help from IT colleagues graced with superhuman patience, an employee who works in a smart office might be able to refuse to participate in his company's IoT-enabled conference-room booking system. But avoiding his city's IoT-enabled street lighting system promises to be a heavier lift.
Which leads us to our next pointer . . .
Many IoT systems aren't only pervasive. They're also unobtrusive to the point of invisibility.
Which is the point. One of the beauties of an IoT system is the way it frictionlessly augments reality, helping us save energy, manage our offices or homes, or find a parking space without our even noticing that we're getting any help at all.
That said, such unobtrusiveness can raise ethical flags. To find a way around this problem, van der Heide insists that every IoT system at some point "show itself," in his words—that is, make clear to the human beings who populate its space that it exists, and that those human beings exist within its terms. "Showing itself" might be as simple a matter as signage that announces to city residents that they're entering an area where sensors are tracking how and where they move.
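The same idea has a digital analogue. Here is a rough sketch of a sensor network that shows itself over the local network; the endpoint path, the notice text, and the URL are all invented for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PRESENCE_NOTICE = (
    "This area is served by an IoT sensor network.\n"
    "What it senses: pedestrian movement, as anonymous counts.\n"
    "Operated by: City of Example, Dept. of Transportation.\n"
    "Full data practices: https://example.com/sensor-privacy\n"
)

class ShowItselfHandler(BaseHTTPRequestHandler):
    """Answers 'are you watching me, and how?' for anyone who asks."""
    def do_GET(self):
        if self.path == "/presence":
            body = PRESENCE_NOTICE.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("", 8080), ShowItselfHandler).serve_forever()
```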
Still, a system that shows itself isn't necessarily a system that you can easily opt out of. But there is a major ethical difference between a pervasive system that transparently announces its presence and one that doesn't.
On the other hand, context matters—in digital ethics, as in all things. IoT decision-makers should approach ethical matters with an eye towards what a reasonable human being would consider reasonable. An IoT system that manages office lighting, and that uses the information it gathers for no grander purpose than to route employees between well-lit spaces, is in a different ethical category than other, more comprehensive systems with more ambitious (and intrusive) plans for how they're going to use their data.
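One way to hold a system inside that narrower ethical category is to make purpose limitation a hard check rather than a policy document. A sketch, with purpose names invented for the office-lighting example above:

```python
# Purposes declared up front, per the disclosure this system shipped with.
DECLARED_PURPOSES = {"lighting-control", "route-to-lit-spaces"}

def use_data(purpose: str, payload: dict) -> dict:
    """Refuse any use of the data that wasn't declared before launch."""
    if purpose not in DECLARED_PURPOSES:
        raise PermissionError(
            f"Undeclared purpose {purpose!r}: this system gathers occupancy "
            "data for lighting, and for nothing grander."
        )
    return payload

use_data("lighting-control", {"zone": "3F", "occupied": True})  # fine
try:
    use_data("marketing-analytics", {"zone": "3F", "occupied": True})
except PermissionError as err:
    print(err)  # the more ambitious use fails loudly
```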