Algorithms Are Made of People!
Our digital technologies are not neutral tools. We tend to think of algorithms, in particular, as somehow devoid of human influence, but they are, in fact, rules written by people. That makes the politics of the platforms that employ them of paramount importance. Mike Pepi explores how we can assess the situation and survive the platform age.
Have you ever felt trapped by a series of rules?
Maybe you have, but you weren’t aware. Have you ever felt the desperation of being left out by optimized sorting software, online targeting, or automated decisioning? With so much of our economic and social lives datafied by platforms, chances are you have encountered this post-internet ennui. Millions of data points may have been processed to lead you to your fate. You may feel helpless among those algorithms buried in so much indecipherable, proprietary code—their logic opaque to your senses even as they instantly impact your hopes and dreams. But take heart, digital subject! No matter how complex, instantaneous, or nefarious the process by which you have been interpolated into our algorithmic regime, somewhere, somehow, it is possible to track down the source of this action. For every line of code has an author, and every author has a manager, and every manager has a bottom line, a goal, and an idea about how you should exist in their database. Algorithms may appear to be magic. They may appear to work automatically. But behind every algorithm is a human, and behind every decision a rationale that is, despite Silicon Valley’s best efforts to occlude this, political.
It’s difficult to pinpoint where we went wrong and began thinking that algorithms operate through some form of nonhuman magic. As is so often the case, the marketing narratives of technology companies take on an unintended life of their own, trickling down to would-be critics who end up internalizing those myths in ways that contradict their basis in engineering. “The Facebook whistleblower says its algorithms are dangerous,” reports MIT Technology Review. “My algorithm is funny today,” a friend remarked to me one day. In journalism and in everyday life this kind of shorthand serves a linguistic utility. But it’s a dangerous abstraction. As a critical literature began to form around our new platform overlords, a rallying cry emerged: Algorithms are made of people!
The first backlash I recall came from writer and curator Natalie D. Kane, reflecting on the 2014 Transmediale conference (a yearly gathering of artists and new media theorists in Berlin), where, she wrote, the term “the algorithm … hung in the air like dust … pulled into almost every conversation like a catch-all explainer for why computational systems were messing with us.”
Gillian “Gus” Andrews expanded on the tech left’s mantra in her 2020 book Keep Calm and Log On. “Algorithms are just rules written by people, like any other system,” she says.
You can tack “is made of people” on the end of any fancy new tech idea to remind yourself it’s not magic. Search engines are made of people. Smart assistants and voice recognition are made of people. The blockchain? Looks like it’s made of math, ultimately made of people.
Sometimes, the algorithm is literally people! In 2020, Amazon began installing its Just Walk Out technology in Amazon Fresh physical locations. These “checkoutless” grocery stores let customers walk out without stopping to pay. Amazon planned to deploy sophisticated cameras and machine vision to detect the products shoppers picked up and automatically bill their Amazon accounts. Or so it seemed.
In 2024, after nearly four years in operation, The Information reported that despite the impression that AI was recognizing the shoppers’ items, Just Walk Out used thousands of people in India who monitored checkout videos and manually labeled items in real time.
This trick is not uncommon in Silicon Valley, where being first is often more important than being accurate, or even legitimate. It’s a dirty secret of most digital technology, and of AI especially, that much of the automation is built off the backs of human labor, either through a straightforward deception in which a “bot” or “algorithm” is just a human, or in subtler ways, as when humans are asked to classify the images and data that machine learning requires for its models. Humans decide when and where to use these sleights of hand. And without transparency into how each layer of a digital product is made, it is difficult for us to truly grasp the impact.
After the first phase of the backlash against our algorithm-riddled world, a movement began to advocate for audits of big tech platforms. Facebook’s news feed, Amazon’s recommendations, and even Uber’s car-routing and pricing algorithms all came under scrutiny. We quickly learned that the effort to “algorithmically audit” the platforms that govern our lives is too complex to be realistic. Not to mention that the owners of these technologies claim proprietary secrecy over company property, limiting any third party’s ability to investigate. Moreover, the architects of such algorithms purposely design them to mutate so as to escape the point-in-time gaze of any investigation that might observe them as an objective, third-party outsider. Instead, Andrews implores us to ask a somewhat more straightforward question: “Why should we trust the system of people that produces these algorithms?” The question of platform accountability shifts from one of forensic digital accounting to one of institutional analysis. Algorithms will largely produce whatever effects the organization behind them intends to put into the world. They aren’t politically biased in themselves, nor are they even something we should fear, ban, or be skeptical of. Our best bet for ensuring algorithmic accountability is to focus less on the nuts and bolts of algorithms and instead to scrutinize the organizational forms that build and deploy them. Again, we’ve left the magical realm of digital computation and come back to plain old political analysis. Without internal and external checks and balances, and with a growth-at-all-costs mandate, organizations will likely leverage algorithms to optimize for business outcomes above all else.
Still, there is something more at play. Even the most nefarious and coldhearted sorting algorithm is powerless if it operates in a legal framework that protects both sides of a marketplace. Uber and Lyft provide examples: the ride-sharing revolution did not gain billions of dollars in valuation because of the military precision of its data-crunching, geolocated driver-selection algorithms. Its real innovation was a set of crafty legal loopholes. Drivers on ride-share apps are classified as 1099 contractors, letting Uber and Lyft off the hook for health insurance, minimum-wage requirements, and other hard-won labor-rights protections. Ride sharing’s initial land grab—which by 2016 made hailing a cab seem old hat—was propelled by low prices that were the result of a brazen labor-rights violation, not some genius technical innovation.
Meta, which operates Facebook and Instagram (addictive apps whose algorithmic news feeds target children and teens to induce endless scrolling), spent $7.6 million on lobbying the US government in Q1 of 2024. The effort aims to persuade lawmakers not to intervene in the platforms’ growth, even as the company comes under fire for the deteriorating mental health of its users. Is this not part of the “algorithm” too?
Algorithms never operate in a vacuum. The real problem lies with the political formations that situate them within platforms. Lying underneath this, still, is the ideology that the asynchronous organization of daily life is preferable to the institutional narratives brokered before it. Culture has indeed gotten “flatter” and more predictable, but the reasons why slavish devotion to the algorithm has ended up serving the logic of platforms rather than the desires of audiences are less straightforward.
The deeper you get into the making of technology, the more you realize the degree to which design choices originate in nontechnical processes. It could be the HIPPO rule—the highest paid person’s opinion. It could be the result of a key performance indicator mandate from the top brass—a subtle change in the weights or inputs of an algorithm can make a meaningful difference in quarterly revenue. But of course, there are trade-offs. Sometimes companies go under or are acquired, at which point a whole new management team is put in place, killing the darling algorithms we once treasured.
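To make that point concrete, here is a deliberately toy sketch, not drawn from any real platform’s codebase: the signal names, weights, and functions below are hypothetical placeholders, but they show how a ranking algorithm’s priorities live in a few numbers that someone, somewhere, chose.

```python
# Hypothetical illustration only: a toy feed-ranking function.
# The weights are business decisions made by people, not properties of the math.

WEIGHTS = {
    "predicted_watch_time": 0.7,    # nudged upward after an engagement KPI mandate
    "relevance_to_interests": 0.2,
    "source_credibility": 0.1,      # quietly lowered; it doesn't move revenue
}

def rank_score(item: dict) -> float:
    """Combine an item's signals into a single ranking score."""
    return sum(weight * item.get(signal, 0.0) for signal, weight in WEIGHTS.items())

def rank_feed(items: list[dict]) -> list[dict]:
    """Order candidate items so the highest-scoring ones are shown first."""
    return sorted(items, key=rank_score, reverse=True)

# Change 0.7 to 0.8 in a quarterly planning meeting and the "algorithm"
# behaves differently for millions of people, no new math required.
```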
The algorithm has the least agency of anything in the complex machine apparatus that, in the first decades of the twenty-first century, has been one part of a larger deinstitutionalization of society, a project with libertarian roots going back to the infant days of computation. The platform—as an organizing principle—simply uses computer algorithms to strip institutions of their meaning, with a swift cadre of digital utopians waiting in the wings to declare institutions obsolete. It is this ideological interplay that is far more powerful than the ability of a computer to execute lines of code. In short, our enemy is not algorithms. It’s not even digital technology or very powerful computers. Our enemy (or, to put it more mildly, the focus of our reform) is the class position of the people whose ideology impels them to deploy algorithms this way.
Unfortunately, too few critiques of the “big bad algorithm” counsel us to look to the everyday architects of our media diets. Instead, the putative subject is the feed: lamentations over the subject-object relationship between the helpless cultural consumer and the all-powerful social media algorithm.
Culture, broadly defined, is negatively impacted. But culture and technology are inseparable. Clearly, technologies privilege a certain form of consumption in the same way they privilege a certain form of art; both reinforce ideologies of their own making. The delivery mechanism is crucial to any product or service. But so is the intent, and intent is much easier to isolate from the technological substrate. For example, one can organize an institution using digital methods and still not have it be a platform: it might charge a proper fee for content, make that content accessible only synchronously, and be governed by a transparent group of leaders. In the end, it’s not just algorithms that flatten culture. The largest role is played by political ideology.
The problem with this mystification of algorithms—the disembodiment of their operation—is the same for almost all digital technology. It makes any kind of corrective seem more difficult than it needs to be. There is an infamous slide from a 1970s IBM presentation that recently resurfaced online:
a computer can never be held accountable
therefore a computer must never make a management decision
You won’t get far blaming an algorithm, because an algorithm has no agency; it just executes. Look instead to the designer of the algorithm, or better yet, zoom out to the entire enterprise they are operating within. Once you have done that, interrogate the value system in place. Ideological analysis is slower and less popular, but it’s a more direct path to surviving the platform age.
The same problem appears in the debates around the machine learning models that have now burst into the cultural consciousness as recently overhyped artificial intelligence. Our solutions to the supposed threats of AI don’t need to borrow from the tropes of science fiction. Aligning AI to our values so that we arrive at an artificial general intelligence that creates abundance—to use the pet word of OpenAI founder Sam Altman—is not some religious mission to ensure the safety of humanity’s future. It’s a trap set by the private enterprises that stand to gain financially from the unfettered explosion of AI products. They want people to think that they are helpless, that AIs are superhuman, and that doom awaits unless the heroic AI company can save the day with alignment. That way, people will abandon the more straightforward path, which is deeply unsexy: cogent regulation and the education of the ordinary people who interface with AI products.
The algorithms that run our platforms and our newfound AI bots are systems—artifacts of our morals and values. They do not learn or reason on their own. Moreover, they are predominantly and functionally beholden to their training data, which is, again, the cumulative result of many human decisions, a kind of institution unto itself. We might simply ask: What’s in the training data? How are we making the decisions about what is edited out and what is left in?
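As a purely hypothetical sketch (none of these filters are drawn from any actual lab’s data pipeline), training-data curation might look like nothing more exotic than a handful of human-authored rules, each of which is an editorial judgment about what the model will and will not learn from.

```python
# Hypothetical illustration only: curation of a training corpus as a
# series of human editorial decisions written down as code.

BLOCKED_SITES = {"example-spam-site.com"}  # a blocklist assembled and maintained by people

def keep_document(doc: dict) -> bool:
    """Return True if a document stays in the training set.
    Every condition below encodes a value judgment someone made."""
    if doc.get("language") != "en":                # whose languages count?
        return False
    if doc.get("source_domain") in BLOCKED_SITES:  # whose speech is excluded?
        return False
    if doc.get("quality_score", 0.0) < 0.5:        # who defined "quality"?
        return False
    return True

def curate(corpus: list[dict]) -> list[dict]:
    """Filter the raw corpus down to what the model is allowed to learn from."""
    return [doc for doc in corpus if keep_document(doc)]
```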
At the peak of the tech backlash in 2018, journalists finally came around to looking back at the actors involved in the unfolding disaster. Contrary to popular belief, many in the tech industry agree with the contention that runaway platforms have caused more harm than good. They won’t exactly raise it at a company all-hands meeting, but they are sensible enough to see that it was a series of human decisions that led us to our current predicament. Noah Kulwin interviewed several of these figures for a feature in New York Magazine, “An Apology for the Internet from the People Who Built It” (2018). Tristan Harris was an employee at Google during the internet giant’s expansive growth stage. He made a presentation sounding the alarm: “A Call to Minimize Distraction and Respect Users’ Attention.” It became a big topic internally, apparently reaching all the way to the CEO. Harris was, in his own words, claiming that Google and its peers were “creating the largest political actor in the world, influencing a billion people’s attention and thoughts every day.” Many at Google understood the impact of their work, but not many took Harris’s stand: “We have a moral responsibility to steer people’s thoughts ethically.”
Harris continues,
To Google’s credit, I didn’t get fired. I was supported to do research on the topic for three years. But at the end of the day, what are you going to do? Knock on YouTube’s door and say, “Hey, guys, reduce the amount of time people spend on YouTube.” … You can’t do that, because that’s their business model. So nobody at Google specifically said, “We can’t do this—it would eat into our business model.” It’s just that the incentive at a place like YouTube is specifically to keep people hooked.
Today, Harris runs the Center for Humane Technology, a nonprofit advocacy group that seeks to influence governments and private sector groups through training, publishing, and online courses. The center is a welcome, if much smaller, counter to the immense lobbying and marketing efforts mounted on behalf of private platforms. Still, the core idea behind the Center for Humane Technology is right: we will never progress out of this doom loop without addressing the people behind the algorithms. Luckily for us, they are a political body and can be appealed to directly, while there is still time, as long as we still have institutions that function without succumbing to the siren call of platform capitalists.
From Against Platforms. Used with permission of the publisher, Melville House Publishing. Copyright © 2025 by Mike Pepi.
About the Author
Mike Pepi is a technologist and author who has written widely about the intersection between culture and the Internet. An art critic and theorist, he self-identifies as part of the “tech left”—digital natives who want to reshape technology as a force for progressive good. His writing has been published in Spike, Frieze, e-flux, and other venues.