Internet of Things (IoT) 23 April 2015

Can A Machine Care About Privacy?

By Robin Wilton, Director, Internet Trust

I recently attended the Digital Enlightenment Forum 2015 in Kilkenny; not your average tech conference, and not the average discussion topics, either – but topics of growing relevance.

For me, the value was in having a conference that offers the time – even if only for a day – to step back and look at the bigger picture within which all our urgent-seeming daily task-work takes place.

One theme in particular stood out for me, and it’s also a major strand of the Trust and Identity team’s work plan over the coming couple of years. Several sessions, including two breakout groups, addressed the theme of digital ethics. The discussion was wide-ranging, sometimes detailed and often abstract, but fascinating and, ultimately, entirely practical. There will be a full paper on the theme in due course, but here’s a hint of some of the ground we covered.

[Warning: may contain philosophy…]

I hope that warning hasn’t immediately discouraged you. “Don’t panic!”, as Douglas Adams would have said. There’s a really simple model for discussing complex topics like this when you have a very diverse group of people round the table; almost all the discussion tends to fall into one of four categories:

  • Philosophy/principles
  • Strategy/society
  • Implementation/practicalities
  • Technology

Once you know that, it’s much easier to avoid getting mired in the intricacies of any one of the four categories, and that keeps the discussion relevant to everyone.

So, philosophy:

Taking our cue from one of the morning’s presentations, we went right back to fundamentals: what have thinkers said about ethics in the pre-digital past? There’s the “continental” philosophical approach of people like Foucault and Barthes, who cast ethics as a series of narratives and structural relationships; then there’s the more “traditional” analytic approach, looking at systems of ethics based on consequences, rules and justice. What they have in common is a recognition that ethics is contextual, and a function of the society in which it evolves.

In our case, we’re in the midst of a post-industrial, technically-oriented society. It’s sometimes hard to imagine that things could be any other way… but what happens if you subtract technology from the ethical equation? You’re left with information (rather than data), decisions, relationships, and semantics. Technology may change a lot of things, but it doesn’t remove those fundamentals, and it doesn’t alter the contextual nature of ethics, so we can be reassured that we have some solid principles to build on.

What’s happening at the social level?

Here, the main point I picked up was about “agency”. In our technologically-oriented society, almost every action we are able to take (our “agency”) is mediated – either through technology, such as computers, phones, etc., or through third parties, such as banks, the retail supply chain, telcos, internet service providers, identity providers and so on. Ethically, the fact that what we do is mediated often moves us further from the consequences of our decisions and actions. This can leave us feeling that we’re not really responsible for what might happen. As one participant put it:

“Technically mediated phenomena are outstripping ‘human-centric’ ideas of privacy and ethical outcomes.”

In the context of our discussion at the time, that was a perfectly normal and rational conclusion to draw. When you stop and think about it, it could be quite a scary one, too.

But so what… why should I care?

Practicalities:

Well, we should care because all those third parties through whom we act are making decisions, every day, which directly affect us. Sometimes they do so with our knowledge and consent, but on the Internet, that is far from the norm, as I suspect we all acknowledge. Here are some examples of the kinds of decision which are made on your behalf all the time:

  • “This privacy policy and these cookies are fine for you; there’s no need to ask you explicitly if you’re OK with them.”
  • “We’ll opt you in to our data-sharing policy by default. If you don’t like it, you can always tell us later.”
  • “Your personal data is safe with us, because we anonymise it. You don’t need to worry.”
  • “Collecting this data does compromise your privacy here and now, yes… but we expect there to be a collective benefit in the long run, so that’s OK.”
  • “We’re ‘personalising’ our prices for you, based on the really expensive laptop you’re using. But don’t worry – we have your best interests at heart.”

These are all real, practical consequences of our technically-mediated society, and they affect your privacy every day.

Technology:

So what’s the technical dimension? Again, what struck me was “agency”. The number and diversity of ethical agents we encounter is growing fast, and… not all of them are human. A lot of decisions these days are made by algorithms (remember those stock market volatilities caused by too many automated trading systems all reacting to each other?), and any algorithm that makes decisions is not ethically neutral. “Ah,” I hear you say, “but they only do what they’re programmed to do. Someone, somewhere is responsible… not the algorithm”.

OK – let’s look at that for a moment. First, some algorithms are adaptive; there are already network security products, for instance, that learn, over time, what constitutes “normal” behaviour, and adjust their own behaviour accordingly. Then there’s machine learning in its broader sense. Researchers into artificial intelligence already report that the algorithms they create often go on to evolve in unexpected ways, and to exceed human capabilities.
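To make that a little more concrete, here is a minimal sketch, in Python, of the kind of self-adjusting baseline such a product might use. Everything here is hypothetical and illustrative: the class, parameters and thresholds are my own invention, not any vendor’s actual code.

```python
# Illustrative sketch only: a monitor that learns what "normal" looks like
# and adjusts its own threshold over time. All names and parameters here
# are hypothetical; this is not any real product's code or API.

class AdaptiveMonitor:
    """Tracks a metric (say, requests per minute) and flags values that
    stray too far from the baseline it has learned so far."""

    def __init__(self, alpha=0.1, tolerance=3.0):
        self.alpha = alpha          # how quickly "normal" adapts to new data
        self.tolerance = tolerance  # how many standard deviations = "abnormal"
        self.mean = None            # learned baseline
        self.var = 0.0              # learned spread around the baseline

    def observe(self, value):
        """Decide whether this value is anomalous, then keep learning."""
        if self.mean is None:       # the first observation defines "normal"
            self.mean = value
            return False
        deviation = value - self.mean
        spread = max(self.var ** 0.5, 1.0)   # floor avoids a zero threshold
        is_anomaly = abs(deviation) > self.tolerance * spread
        # The monitor updates its baseline either way: the rule that decides
        # what is "abnormal" is itself rewritten by the data it sees.
        self.mean += self.alpha * deviation
        self.var = (1 - self.alpha) * (self.var + self.alpha * deviation ** 2)
        return is_anomaly


monitor = AdaptiveMonitor()
for rate in [100, 102, 98, 101, 500, 103]:   # one obvious spike
    if monitor.observe(rate):
        print(f"Flagged unusual activity: {rate}")
```

The ethically interesting part is that the threshold which ends up flagging you was never set by a person: it emerges from whatever traffic the system happened to see.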

And last: machines are increasingly capable of autonomy – self-driving cars are a good example. They will react to changing conditions, and deal with circumstances they have never encountered before, without human intervention. The first time a driverless vehicle runs someone over, we’ll see where the ethical buck stops.

Conclusions:

This has been a lightning gallop through several hours of discussion. What did we conclude?

  • First, that modern life raises just as many ethical issues as it ever did.
  • Second, that if we’re not careful, all the ethical calculations get made on our behalf – and not always in our best interest.
  • Third, that if we’re to retain our agency, we need to understand that that’s what we’re trying to do, and why.
  • Fourth, that there are indeed some constants here, despite the pace of change around us. Ethics is a social, contextual thing, and it has to do with meaning, relationships and decisions. Those are very human things.

And last, that we have entered the age where a growing number of ethical agents are non-human, and we have to understand how that affects us and our societies. Is there a fundamental ethical principle based on ‘global’ human values? Might that principle be to do with consciousness, or autonomy, for example? And if so, what’s the ethical status of machines that are increasingly autonomous and might even, at some point, be described as conscious?

We aren’t necessarily at the point where it makes sense to ask whether a machine can care about privacy… but we’re not far from it.

Disclaimer: Viewpoints expressed in this post are those of the author and may or may not reflect official Internet Society positions.
