How the internet is killing us


By Frank McCourt

Before the Enlightenment spread across Europe, inspiring the likes of Thomas Paine and Thomas Jefferson in colonial America, most people were subjects. Their claim on life was quite literally subject to the discretion of a king or queen, and their livelihoods were determined by an accident of birth that left them obligated to serve the lord or baron on whose land they toiled.

Under American democracy, people became citizens, a concept that not only recognized their rights to liberty and the pursuit of happiness but also ensured that their votes—not some specious notion of divine right—would determine who governed them. It was a direct manifestation of the philosopher Jean-Jacques Rousseau’s notion of popular sovereignty, which proposed that governments have a legitimate claim on power only when they are derived from the “general will of the people,” a situation that creates what he called a “social contract” between the government and the people.

In this way, the rights and responsibilities conferred on the new citizens of the United States of America would collectively empower them to determine a shared destiny for their nation. Citizens would have agency—a capacity to make choices and effect changes—whereas subjects did not.

In contrast to these core principles of liberty, our current reality is, to our minds, best described as digital feudalism. Like poor, powerless subjects of monarchs and aristocrats, we are serfs, subjugated by a small group of companies that have exploited a feudal internet architecture. In this system, human beings are treated as afterthoughts—or not thought of at all—in service of building massive data extraction platforms.

In the United States, the ruling clique comprises our era’s biggest software companies: Google’s owner, Alphabet; Facebook and Instagram’s owner, Meta; Amazon; Apple; and Microsoft. The latter, with its giant investment in the research organization OpenAI, has joined the tech arms race to control our data, and the power and profits that come with it. The founders, senior executives, and large investors in the Big Tech companies command great sway over the operations of the internet.

Thanks to them and their management teams, whose compensation models incentivize them to maintain or double down on the status quo, we live at the discretion of their proprietary algorithms. The software programs based on these algorithms—which, much like computers, servers, and devices, should be understood as machines—treat us as quarries from which to mine our data, now the most valuable commodity of the digital economy. They then aggregate and organize this data and use it to create tools with which their corporate leaders can influence us.

Aided by sensors and data-capture points positioned at every turn in our daily life—in the words and emojis we put into texts and social media posts, in our purchases, in the movements of our computer mice, in the cameras pointed at us, in the GPS devices that track us, in the listening devices in our homes, and in the music to which we listen, the videos we watch, and the photos we share—these machines store far more information about us and our social connections than our own brains could possibly store. As one MIT professor put it to me, “We are living in a minimum-security prison. We just don’t know it.”

According to a 2016 ProPublica report, Facebook at that time collected an average of fifty-two thousand data points on each of its users. That number, now likely very much higher, gives a sense of how we are viewed by the platforms. Their “black box” systems, built with code into which no outsiders have visibility, extract a valuable commodity (our data). They then use that commodity to assign us each a profile and to fabricate a powerful machine (a proprietary algorithm) with which they categorize, target, and manipulate us. In doing so, they systematically dehumanize us—much as the feudal system dehumanized peasants.

In all of this, there is no social contract or even a moral obligation for these platforms not to treat us as their pawns. Instead, they’ve buried us in legal contracts, most of them in fine print that no one reads, imposing terms and conditions surrounding our use of these applications that compel us to forgo any claims over our data and the content we create and post. We’ve literally signed away our rights and surrendered our personhood to these Silicon Valley giants.

This dynamic has been building for two decades, but only very recently have more and more people started to recognize the enormity of what we’ve given up.

A seminal work in this field, Harvard professor Shoshana Zuboff’s 2018 book, The Age of Surveillance Capitalism, recounts how the data extraction model originated at Google in the early 2000s. Facebook, after discovering it could profit from a powerful self-reinforcing feedback loop, adopted and updated the model.

Essentially, this is how it worked: Facebook’s surveillance of its users’ activity generated insights into how people responded to different textual, visual, or aural stimuli. Facebook’s data scientists and engineers then tweaked the platform’s content curation algorithm in a bid to steer users into engaging with other users for longer periods of time. Internally, Facebook called this engagement meaningful social interactions (MSI), and the MSI metrics served Facebook’s and its clients’ revenue goals. The cycle then repeated over and over, as new behaviors generated new data, allowing for iterative, perpetual “improvement” (i.e., more precise targeting) in the algorithm’s ability to modify user behavior.
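The cycle described above can be sketched as a toy simulation. Everything here is hypothetical—the content types, weights, and update rule are invented for illustration and bear no relation to Facebook's actual systems—but it shows how a curate-surveil-tweak loop converges on whatever content keeps a user engaged longest:

```python
import random

random.seed(42)

# Toy model of an engagement-optimization feedback loop.
# All names and numbers are hypothetical, chosen only to illustrate
# the cycle: curate content, observe engagement, reinforce, repeat.

def simulate(rounds=50):
    # The platform's evolving guess at what the user responds to.
    weights = {"outrage": 1.0, "cute": 1.0, "news": 1.0}
    # The user's (hidden) true responsiveness, unknown to the platform.
    true_prefs = {"outrage": 0.7, "cute": 0.5, "news": 0.2}
    total_engagement = 0.0
    for _ in range(rounds):
        # 1. Curate: show the content type the model currently scores highest.
        shown = max(weights, key=weights.get)
        # 2. Surveil: observe how long the user engages with it.
        engagement = true_prefs[shown] + random.uniform(-0.1, 0.1)
        total_engagement += engagement
        # 3. Tweak: reinforce whatever kept the user engaged
        #    (0.4 is an arbitrary baseline for this toy).
        weights[shown] += engagement - 0.4
    return weights, total_engagement

weights, total = simulate()
# The loop locks onto the most engaging content type,
# whether or not that content is good for the user.
print(max(weights, key=weights.get))
```

The point of the toy is the incentive structure: nothing in the loop asks whether the content is true or healthy, only whether it holds attention.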

One of the most notorious applications of this arose in the Facebook–Cambridge Analytica scandal, news of which broke in 2018. For years, the British consulting firm Cambridge Analytica collected Facebook users’ data without their consent and used it to feed them microtargeted disinformation. One of the goals was to influence the 2016 U.S. presidential election; another was to sway the United Kingdom’s Brexit vote.

But, in many ways, that high-profile scandal was an outlier: A far bigger, if more subtle, problem is the data extraction model’s impact on our day-to-day lives. In multitudinous ways, these platforms’ algorithms color our view of the world, shape how we react to issues that matter, and drive us into the hands of advertisers. Zuboff says this exploitative business model, which has migrated from Facebook to become the modus operandi of virtually every internet platform or application, has stripped us of what makes us human: our free will, without which neither democracy nor markets can function.

Perhaps you’re sitting there thinking, “Nah, this isn’t me. I’m in control. I can’t be swayed by some computer code. I’m open to all ideas and suggestions, and I deliberate on them, carefully weighing the pros and cons of each before deciding what to do.”

We hear you. There are various areas of our day-to-day lives over which we retain control. But they are dwindling because powerful interests profit from depriving us of that control. The owners of these tracking and advertising-maximization systems have not spent the past two decades figuring out what makes us tick for nothing. They’ve watched to see what content suggestions provoke the dopamine releases that lead us to click, to “like,” to follow, or to share. They’ve figured out our political leanings, our artistic tastes, our sleep habits, our moods, and, most important, the social groups and online tribes with whom we form connections and allegiances. Facebook, it is said, knows you are going to break up with your partner before you do. If you even briefly let go of the “I’m in charge; no one is telling me what to do” mindset, you can see how the platforms can and will use their gigantic data hauls to shape our individual thoughts as well as our collective behavior—because they are incentivized to do so.

Here’s another way to think about how you pay for all this data extraction: a two-decade divergence in prices for different goods and services in the U.S. economy. A chart from the online publisher Visual Capitalist shows how prices for goods and services that you need to live a healthy and productive life—such as medical care, college tuition, housing, and food and beverages—all rose very sharply between 2000 and 2022. By contrast, prices for products that integrate with the internet and extract our data—such as software, cell phone services, TVs, and other entertainment devices—all fell significantly during the same period.

It’s worth asking why this is the case. Your quality of life in the nondigital world is deteriorating, but your digital existence keeps getting oddly less expensive. The reason is that the latter is subsidized by the ever-larger amounts of data you hand over to tech companies.

We need to think harder about the real price we are paying for data extraction devices and related software. Remember, your data is you. Here, the timeless words of the information security guru Bruce Schneier are helpful. Nearly a decade ago, he wrote: “If something is free, you’re not the customer; you’re the product.”

Excerpted from Our Biggest Fight, by Frank H. McCourt, Jr. with Michael J. Casey. Copyright © 2024 by Frank H. McCourt, Jr. Excerpted by permission of Crown. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Fast Company – technology