A new Internet is emerging, and the way we decide to shape its foundations will be decisive for the years to come.
That’s why we have to try and make the best design choices from the start. But how?
We can look at the issues of the recent past and of the present – connecting the dots between Web1, Web2, and Web3, integrating lessons we’ve already learnt to help make the next iteration of the internet better than the last.
In WIRED’s 2018 interview with tech activist Tristan Harris and historian Yuval Noah Harari, the host, Editor in Chief Nicholas Thompson, introduces the conversation by reflecting on the fact that his magazine is already 25 years old, and on how much has changed since its founding:
“When the magazine was founded, the whole idea was that it was a magazine about optimism and change, and technology was good and change is good. 25 years later you look at the world today, you can't really hold the entirety of that philosophy.”
His remark reflects growing concerns within the world of tech, and more particularly information technology, that have only recently been broadcast to a wide audience of non-experts, most notably with the release of the documentary The Social Dilemma. The film focuses on the “flip side of [the] coin” of platforms like social media, while touching upon many different problems within the context of the attention economy, mass surveillance and tech addiction. These already important topics are gaining even more momentum as the digital world turns towards extended reality.
Of course, worrying about the flourishing of new technology is nothing new. Some are keen to point out this pattern: people were concerned about the arrival of writing, the printing press, the radio, the television, and so on, yet we survived it all just fine; therefore, they conclude, new technology can’t possibly be the cause of any real problem. The almighty and everlasting domination of man over nature is left unscratched.
Despite sounding reassuring, this approach shows a complete lack of concern for the details of how technology actually affects human life. It’s a quick-fix, head-in-the-sand approach that lets you lazily wave away the life’s work of the neurologists, philosophers, sociologists, data scientists and other experts who have been trying to understand what is actually happening. It turns a blind eye to the work of people like Shoshana Zuboff, Michel Desmurget or Tristan Harris. There’s no reason to panic, so they must be getting something wrong, right?
Recent concerns about the effects of information technology on politics, the economy, health and cognition, and more broadly about whether we are really always using technology to our advantage, have their place within the philosophy of technology. Defining and redefining humanity’s relation to technology rarely seems to be a question of being categorically for or against a technology, but rather of asking how we should use it, at an individual and a societal level. How should we live?
Aristotle, in his time, already tried to delineate what the good life might be; the question is as old as time. Here, we are likely to find more questions than answers, and different people will have different opinions, different interpretations.
Of course, it is conceivable that in some cases, completely rejecting certain technologies as a society may be the best solution, perhaps because it is the simplest one. For instance, would it be too radical to imagine a future where nuclear weapons are banned? When we look at information technology and extended reality, we may also encounter this kind of categorical rejection.
In The Social Dilemma, Harvard Professor Shoshana Zuboff describes the practice of observing, analysing, predicting, influencing and monetising user behaviour online as “markets that trade in human futures at scale”; she later asserts that “they should be outlawed”. A firm, categorical refusal of a specific technology. Or rather, of a specific use of technology; surely there must be other things we can do with highly complex algorithms and A.I., besides manipulating people for profit?
Though she doesn’t have 500 pages to explain how she reached this conclusion, as she does in her latest book, The Age of Surveillance Capitalism, Zuboff does make a point in the film. If the ability to use A.I. to exploit cognitive loopholes and influence people’s thoughts and behaviour at scale creates such an imbalance of power that it threatens the usual pillars of society, like high social trust and the right to self-determination, it’s hard to imagine how letting it run unchecked will not keep raising unforeseen problems.
So, we must face the dilemma: let the infernal machine continue, or stop it in its tracks? That some would argue the simplest solution – getting rid of it forever – is the best one perhaps has more to do with politics than with technology. As we plunge into the details of the global digital infrastructure, however, we are most often confronted with nuance, a strong need for sophistication, and perhaps patience. Solutions to our (philosophical) problems won’t come quickly or easily.
To express this nuance, our interviewees in The Social Dilemma often clarify their positions: “I don’t hate them. I don’t want to do any harm to Google or Facebook. I just want to reform them so they don’t destroy the world” says Jaron Lanier during the end credits.
In a part in which several of the interviewees recognise the importance of the financial incentives that drive these companies’ business model, Sandy Parakilas adds: “I think we need to accept that it’s okay for companies to be focused on making money. What’s not okay is when there’s no regulations, no rules, and no competition, and the companies are acting as sort of de facto governments.”
Tristan Harris, arguably the principal interviewee of the documentary, makes a similar kind of concession at the end of the first episode of Your Undivided Attention: “We’re not against tech or social media, we’re against the way it’s designed and being used today.”
These nuances reflect the unclear place that (information) technology is taking in our modern lives. Technology is traditionally seen as a means for humans to change the world around them and reach their objectives, as something that works like an extension of the body: a tool. We want to cut vegetables; we use a knife. We want to make sure we don’t forget something; we use pen and paper. In this view, our relation to technology is primarily defined at the individual level. If I have an issue with technology, it concerns me, and perhaps, in some cases, the person who crafted or invented my tool.
But what happens when technology becomes a network? What happens when this network affects the lives of different people, in different but interdependent ways? What happens when technology is no longer used to affect the physical world, but something less tangible, more abstract, like information? Are we losing sight of the tool?
As we move even more aspects of our lives into the Metaverse, these questions only become more pressing. If technology allows us to deeply immerse ourselves in separate universes which we can freely create, is it still a tool, or has it already become something we don’t yet have a word for, something that comes to define the human condition in ways we never anticipated?
Something like Google’s search engine can perhaps be thought of as a ‘double tool’, or a ‘two-way tool’. In fact, over the course of its roughly 25 years of existence, Google Search was not always like this: it used to be more of a ‘one-way tool’, until the turn of the millennium, when shareholder pressure forced Google’s founders to abandon their strong opposition to advertising, as documented in Steven Levy’s book In the Plex.
Until then, the search engine was primarily a tool for its users: they used it to find websites on the internet, to find information. The search engine was freely available to them, a service provided by Google.
Today, this same service is still provided, but Google has changed radically – more than appearances suggest. Now, Google Search is primarily a tool for advertisers, and for the company itself, although this isn’t directly visible to the public: the search engine still looks basically the same, apart from the occasional few innocent-looking search results with ‘ad’ written next to them.
Advertisers use Google Ads as a tool to influence the consumption choices of users by showing them ads. The company itself uses the platform as a data collection machine that gathers as much information as possible about users, in order to observe and analyse their behaviour and craft prediction products, which it sells to advertisers. For the sake of simplicity, we can set aside the distinction between advertisers and Google itself, because it is secondary, and focus on the main one: on the one hand, a tool for finding websites; on the other, a tool for observing, analysing and influencing behaviour.
And yet these two tools, though very different, are intrinsically welded together: one cannot be used unless the other is too. They exist within the same platform, which is why I call it a ‘two-way tool’, as opposed to the ‘one-way tool’ that Google Search was before it began incorporating ads.
Now, suppose you want to use the tool that finds websites for you, but you don’t want a tool to be used on you to change – albeit by a fraction of a nothing at a time – the course of your life. Well, you can’t have one without the other. You can only get that free service if you offer up, not money, but the vulnerabilities of your psyche to the friendly-coloured, multinational, A.I.-powered, trillion-dollar corporation. Is this still a tool?
It seems that technology may have become a bargain. Given the addictive nature of this kind of technology, the inescapability of a market monopoly and the fact that users are barely informed of the ways in which their thoughts and behaviour are influenced, I’m not sure if we can call it a fair bargain.
Do we want to stay stuck with this kind of use of technology? That’s for humanity to decide, and now is the time. Ethical considerations and human sophistication must be baked into the development of the Metaverse early on. A good starting point is reflecting on the existing problems that today’s tech and media giants pose for society.
Sources: WIRED’s video interview with Harris and Harari, The Social Dilemma documentary, Humane Tech, In the Plex by Steven Levy
This post was kindly contributed by Emile Johnston, who runs the Humane Tech Blog, which is dedicated to reimagining the world with humane technology. Definitely worth checking out.