David Kaye's new book addresses how the internet has changed from a decentralized platform for free expression into a stream of propaganda, hate speech, and disinformation, and how to turn back the tide.
Speech Police: The Global Struggle to Govern the Internet by David Kaye, Columbia Global Reports, 144 pages, Trade Paperback, June 2019, ISBN 9780999745489
David Kaye was appointed the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression in 2014. The five years since have surely been interesting, to say the least.
Kaye opens his new book, Speech Police, with the story of imprisoned Iranian blogger Hossein Derakhshan, and a reminder of just how much the internet has changed over the past decade. Derakhshan was a prolific and prominent blogger, influential and controversial enough within Iran that the government in Tehran charged him with “spreading propaganda against the Islamic system” in 2008. By the time he was released in 2014, the decentralized internet on which he had made his name had largely morphed into the world of social media, on which everything is dominated by “The Stream.”
“The Stream means you don’t need to open so many websites anymore. You don’t need numerous tabs. You don’t even need a web browser. You open Twitter or Facebook on your smartphone and dive deep in. The mountain has come to you. Algorithms have picked everything for you.”
The industrialization of the internet was, perhaps, inevitable. But the consolidation and monopolization of American industry and economics, specifically, has left very few big companies to make a lot of very big decisions about our lives.
Uploading your thoughts and opinions to a personal blog or website on a more decentralized internet was a much different animal than arguing on Facebook—in part because you were more responsible for the narrative and persona you constructed on the former, and because what you post on Facebook, YouTube, and Twitter is more ethereal and quickly diluted in The Stream. That doesn’t mean those posts don’t have influence (as has been apparent in the many successful influence campaigns mounted online, from the 2016 presidential election to the Brexit campaign), but that influence is more opaque, harder to track, and more susceptible to manipulation by those trying to sow division in society.
Online platforms have become wide-open spaces for public and private debate; hatred is spreading through them with the help of manufactured amplification; incitement of violence and discrimination seem to flow through their veins; and they have become highly successful and profitable zones for disinformation, election interference, and propaganda.
Big tech companies have long proclaimed that they are not media companies, and therefore not responsible for the content people post there. But they have become the “gatekeepers to the news,” as Maria Ressa, founder of the news outlet Rappler in the Philippines, puts it. The newspapers and other news media outlets on which social media relies for so much of its source material used to act as that gatekeeper. There was no small amount of smug satisfaction and schadenfreude in comment sections online as they lost that lofty perch to Web 2.0 and the rise of citizen-based, amateur journalism online on one hand, and the loss of classified revenue to Craigslist on the other. But rather than ultimately removing a layer of gatekeeping, the new version of the web adds a social media overlay to the traditional news media, and mixes it all together with disinformation, propaganda, conspiracy theories, memes, comments from real individuals and impersonators alike, and so much more. And, because of big tech’s scale and power, their role has become even bigger than that of gatekeeper:
Today’s platform behemoths take it many steps further: They have become institutions of governance, complete with generalized rules and bureaucratic features of enforcement. They have struggled to figure out how to police content at the scale to which they’ve grown.
Kaye reminds us of an era in which it seemed reasonable for Bill Clinton to announce of China’s attempt to control the internet, “Good luck! That’s sort of like trying to nail Jell-O to the wall.” Well, not only has China largely done so; the authorities there are now using the internet in an attempt to control their people, as Amy Webb documents so well in The Big Nine.
In Syria, rights groups trying to capture the citizen-filmed footage documenting war crimes perpetrated by the government and groups like ISIS have found that such video is being removed as objectionable before they can get to it—deleting the very evidence they may need to eventually bring the perpetrators to justice. This began as early as 2011, when footage uploaded to YouTube documenting thirteen-year-old Hamza al-Khatib’s bruised, burned, and mutilated corpse sparked international outrage against the Syrian government he was protesting before it was taken down (and eventually restored). But the practice of removing such citizen footage has accelerated since 2017, with little explanation or ability for human rights groups to recover it.
Meanwhile, in Myanmar, Facebook was largely seen as missing in action as a moderator when its platform was used to spread hate and incite violence against the Rohingya community. And in Kashmir, voices of dissent and those “discussing threats they faced from the government” on Twitter routinely found their tweets deleted and their accounts suspended without explanation or recourse. The mere mention of the political situation in Kashmir seemed enough to flag a post and get the user suspended.
Three different dominant social media platforms, three different countries, and three very different policies, all resulting in a stifling of human rights rather than their protection. The absence of a local voice in any of these instances, or of real, on-the-ground engagement with and understanding of civil society in these places, creates a danger of “digital colonialism,” and raises a serious question: whether the positive effect of providing a platform for freer speech in such places is outweighed by its abuse by those in power.
Kaye contrasts the situations in Syria, Myanmar, and Kashmir to that in the European Union, where the strength of legal institutions allows individuals to challenge big tech, and “enables individual agency in the face of corporate power.”
Europeans have tools to constrain not only the way the platforms collect and process personal data, but they have tools to constrain how the platforms govern public space.
He also contrasts the realities Facebook, YouTube, and Twitter faced as young, idiosyncratic upstarts trying to attract users with the responsibilities they now bear, having acquired billions of users “in the process of maturing into major global institutions” and “stewards of public space.” The ignorance and arrogance they demonstrated as young companies was unfortunate, but perhaps understandable. Now:
Their decisions don’t just have branding implications in the marketplace. They influence public space, public conversation, democratic choice, access to information, and perception of the freedom of expression. … They have to acknowledge their unusual, perhaps unprecedented, roles as stewards of public space.
Kaye was, at least in part, impressed by the seriousness and breadth he witnessed while sitting in on a content curation meeting at one company, believing the young professionals involved generally asked the right questions and were trying hard to get it right. Yet, he also noted that it “cannot obscure the reality of the legislative role they play for billions of users, who themselves lack input into rule development.” He questions the platforms’ increased reliance on AI moderation, largely hidden from and unaccountable to the public and not very well understood, which is nevertheless making its way into government as well. Comments like Mark Zuckerberg’s that Facebook does “not want to be arbiters of truth” come in for even closer scrutiny. The simple fact, Kaye insists, is that nobody ever asked or wants them to be. They just want to be able to evaluate their data in order to determine answers to big questions concerning the public good, questions like:
What is the impact of false information on platform users, the broader public, and public institutions? If the impact is appreciable and problematic, what should platform owners do to police this kind of content? What should governments do?
Rather than a private company becoming the “arbiter of truth,” an idea abhorrent to most, researchers simply want access to the treasure trove of information and data Facebook has in order to understand its impact on public life. Kaye discusses how we can curtail legitimately fake news without curtailing dissent, criticism, protest, and investigative journalism. He considers places where Facebook can be the only alternative to state media, where “delete Facebook” isn’t really an option, but neither is doing nothing. He explains how, currently, across the globe, the “attempts at regulation are paradoxically increasing corporate power—American corporate power—to be in charge of vast swaths of global public forums.” There are other, and better, ways.
He counsels the companies to move from a focus on legal liability to one of public obligation, sketching out ideas that can help both companies and governments police content.
The companies should make human rights law the explicit standard underlying their content moderation and write that into their rules. They are global companies dominating public forums worldwide. International human rights law provides everyone with the right to seek, receive, and impart information of all kinds, regardless of frontiers. It protects everyone’s right to hold opinions without interference. Just as important, human rights law gives companies a language to articulate their positions worldwide in ways that respect democratic norms and counter authoritarian demands.
There is a sense that chaos and control are the two opposing options. Yet we know how authoritarian attempts to control speech and individual expression in society generally devolve into chaos, and we’ve seen the current president of the United States sow chaos in order to control the news cycle and the daily attention of nearly everyone.
In physics, the second law of thermodynamics tells us the real choice is not between chaos and control, but between chaos and order. It states that, “as one goes forward in time, the net entropy (degree of disorder) of any isolated or closed system will always increase (or at least stay the same).” I can’t think of a more compelling reason to keep the system as open as possible, to disentangle the concern for free speech and human rights from the economic interests of big tech companies, to remove the private power over and ostensible ownership of what is surely public speech online. Kaye argues for obeying the rules and standards of the protection enshrined in Article 19 of the Universal Declaration of Human Rights—that “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive, and impart information and ideas through any media and regardless of frontiers”—rather than “the discretionary rules embedded in their terms of service.”
We don’t need control, but order, and for that, we can turn to standards already in place, enshrined in the Universal Declaration of Human Rights. It is time once again, as Eleanor Roosevelt said in a speech before the UN adopted the declaration in 1948, to "rededicate ourselves to the unfinished task which lies before us." To combat the entropy online in a system that currently seems destined toward chaos, to govern speech on a global scale, and to determine the powerful role platform businesses are bound to have in the process, I can think of no better place to turn for that order than the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. At the end of Kaye's book, he points us to the several reports he has submitted to the Office of the UN High Commissioner for Human Rights, “on issues such as encryption and anonymity, content moderation, AI’s impact on human rights, and other topics,” but the 144 pages of Speech Police are a great place to start.