Editor's Choice

A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are

Dylan Schleicher

October 04, 2019


Flynn Coleman’s new book contends that, if we can focus on purpose, and engage with each other, we have a chance at building AI that is beneficial and equitable to all.


A Human Algorithm: How Artificial Intelligence Is Redefining Who We Are by Flynn Coleman, Counterpoint

“Fearing intelligent technology,” writes Flynn Coleman, “is not productive.” Yet she opens the book with a quote from Stephen Hawking that “AI is likely to be the best or worst thing to happen to humanity.” So while it may be true that fear is not likely to lead to the best result, there is clearly something more to fear than fear itself.

Just 20 pages into Flynn Coleman’s first book, A Human Algorithm, we have been given a succinct yet sweeping history of human technology, from written language and agriculture, through the Renaissance and the Industrial Revolution, to the atomic bomb and the personal computer. She explains how each step not only increased productivity and economic growth, but fundamentally rewrote the social contract, moral codes, and societal norms of the day. Scholars, advocates, idealists, revolutionaries, and broad public engagement in the politics of such social change have always had just as much to do with rewriting the rules and remaking the world around new technology as technologists have, and it must be the same as we build what is likely to be the most momentous and impactful technology in our history: artificial intelligence. As Coleman writes:

If we make humanitarian ideals central to how we develop and deploy intelligent technologies now, we have a chance at preserving and advancing humankind and, just maybe, making a leap forward instead of a dystopian step backward.

Not only have we not come up with any ways to regulate, govern, or guide AI development—no “AI rules and treaties”—we don’t even have a clear definition of what it is, or how it works. Even those working on it don’t fully know how the algorithms they feed into a machine (or increasingly, that the machines themselves are able to come up with) arrive at the answers they ultimately reach. The way that synthetic intelligence reaches its conclusions is unknown and may be unknowable. Coleman likens it to understanding how Einstein’s brain came up with the theory of relativity. Neuroscience has come a long way, but how our brains function is still a mystery in many—if not most—ways.

So, while the book contains an explanation and examination of machine learning, deep learning, reinforcement learning, natural language processing, and the next leap needed in our quest for AI, quantum computing, it is more about the search for what it means to be human, and what intelligence is. If we are attempting to model artificial intelligence on human intelligence, we have to grapple with the fact that we’re modeling it on something we don’t fully understand—something Coleman suggests may be a strength:

I contend our current incapacity to definitively model our intelligent technology on our own brains may ultimately prove not to be a failure but rather a unique opportunity to embrace our limitations and expand our viewpoint. If we see intelligence as the ability to solve new problems, this opens the door to honoring the vast variety of intelligence in the world, not just human.

Throughout, she asks readers to consider what they think, what their definition would be, how they see the world. That is especially important to bring to the fore, and into the discussion, because the field today is dominated by white men from two elite universities, working for a handful of tech giants that are building the technology on a proprietary model.

If the homogeneity in the field of AI development continues, we risk infusing and programming a predictable set of prejudices into our intelligent digital doppelgängers.

If we don’t believe that can happen, or believe the results couldn’t be all that bad, look at one of the most idealistic experiments in social engineering in human history: American democracy. Founded upon the idea that we’re all created equal, our country’s wealthy, white male founders still excluded women and non-property owners from the franchise and denied most black folks their very freedom. The number of enslaved human beings reached four million before liberation. It was a system built on an economic imperative that devalued human life and freedom, specifically black bodies, families, and lives. While we’ve made some progress, that economic imperative hasn’t changed all that much, and we live in a system that still puts economic growth and profits over the health of the planet we rely on for our very existence. So, suggests Coleman:

Given the economic trends toward tech monopolies and against government intervention in corporate power consolidation, we have to counter not only by investing in creative AI start-ups, but also by educating the public on how important it is to infuse transparency, teamwork, and inclusive thinking into the development of AI.

She stresses the importance of cross-disciplinary, combinational creativity. In order to teach machines to connect the dots in holistic ways that support us all, we must be able to connect the dots ourselves, together rather than alone. The humanities, a consideration of human rights, and a greater dialogue with and understanding of our fellow human beings are critical pieces of the development.

Even when we do not agree on the specifics, if we can agree that the dialogue is still worth having, that we want to be better tomorrow than we are today, that our actions and decisions have an impact on ourselves and others, then at least we are moving toward an ideal, even if we never fully reach it. This could be as close as we get, and it might just be enough. 

The power of mass media and entertainment has largely made us “observational players in our own reality.” The promise of social media and Web 2.0 was supposed to change that, but that seems to have gone sideways as big tech found the easiest way to profitability was consolidation, data mining, and a business model predicated on surveillance capitalism. We became the product big tech sold to others. But we must not be passive observers as AI takes shape. We need as many voices and as much empathy as possible built into the algorithms and intelligent machines that are already beginning to impact our own behavior in profound ways as we interact with the world through our devices. We must give AI an ethical and moral underpinning that takes into consideration the varied interests of living beings on this planet.

It is incumbent upon us to attempt to design and implement an ethical framework compatible with synthetic intelligence, ever mindful that we may hold competing views as to what constitutes morals, principles, and values. The schematic for an algorithmic charter that can guide both human and machine cannot be conceived—nor will it approach universality—without acknowledging human biases and without respecting all forms of life with whom we share our home.

But even if we can agree on an ethical framework as a society, it is not likely the entire world will abide by it. Will China end its “social ranking” program? Will Russia end its hacking of democracy around the world? What happens as AI is increasingly embedded into weapons systems? Cyberwarfare is already a reality, and an AI arms race is upon us. And even if governments agree in the open, there will undoubtedly be intelligence operations and unlawful individuals that don’t abide by the rules.

As computers and intelligent algorithms are increasingly able to hack into the deepest reaches of our identities, both online and off, we are seeing that our information can be bought, sold, and stolen on an enormous scale. […] Information is the new currency of power, and there is a bustling black market for it. 

As these technologies monitor us more closely, the very notion of a right to privacy is shifting, and Coleman believes “we are also going to need to secure our mental privacy and cognitive liberty, which is the right to have sovereignty over one’s own consciousness and freedom of thought.” What if we mandated that those working in AI only use synthetic data rather than real-world data collected from us? 

Do robots deserve rights of their own? Coleman points out that: 

Companies have been granted legal personhood; in this sense, corporations could be considered some of the first AIs. Precedent has been established. 

She quotes Weapons of Math Destruction author Cathy O’Neil, who stated that “this is not a math test. This is a political fight.”

I am not scared of intelligent technology. I am scared that it is arriving at a time when the prevailing system of shareholder capitalism will enlist it to benefit the bottom line of an elite few, rather than the diverse interests of all of us. I am scared that “Even such elite universities as Oxford and Cambridge are complaining that tech giants are stealing all of their talent.” I am scared that it is being developed by silos of a homogenous group of individuals and that it will automate the existing and rising inequality within our society. 

But Flynn Coleman offers hope. Her book is focused far more on the upside of getting it right than worrying about what could possibly go wrong (though she doesn’t shy away from it).  

The quest for AI is one that contemplates the oldest questions of consciousness and the soul, explores the nature of existence and the existence of nature. A Human Algorithm is ultimately a book about what it is to be human, about how we need not be passive observers of reality but can interject and help shape it. It is about the importance of empathy and the power of storytelling. She cites Kate Raworth’s Doughnut Economics as a model that can free us from a narrow focus on growth and turn us toward the goal of human flourishing on a living planet. She suggests that feeding the best of our literary fiction into AI could make it more empathetic, as such reading has been shown to do for us. She has a lot of great suggestions, but the book is, above all, a call for inclusion and curiosity:

Today, a human algorithm can only be described in metaphysical terms, not scientific ones. It isn’t a map for how to get out of the forest. It’s a map for going in. 

I am not a scientist and don’t know the odds of us actually developing a general AI, but I know that there isn’t much we can do to stop the attempt. And I agree wholeheartedly with Coleman’s call to incorporate the humanities and other disciplines into the effort. I agree that our economics and politics are going to change as it evolves, and that we should all engage in that conversation and construction. I don’t know if “Smart tech can help unburden our mental lives, giving our imaginations room to roam and giving us more time to connect with people in our material reality.” But I do agree that: 

Beyond profit we find purpose. Beyond our individual selves, we find each other.

And if we can focus on purpose, and engage with each other, I agree we have a chance at building something beneficial and equitable to all. But we don’t have a moment to spare. 

About Dylan Schleicher

Dylan Schleicher has been a part of Porchlight since 2003. After beginning in shipping and receiving, he moved through customer service (with some accounting on the side) before entering into his current, highly elliptical orbit of duties overseeing the marketing and editorial aspects of the company. Outside of work, you’ll find him volunteering or playing basketball at his kids’ school, catching the weekly summer concert at the Washington Park Bandshell, or strolling through one of the many other parks or green spaces around his home in Milwaukee (most likely his own gardens). He lives with his wife and two children in the Washington Heights neighborhood on Milwaukee's West Side.

