ChangeThis

The Tech That Comes Next: Technology Development for the Social Impact Sector

Amy Sample Ward, Afua Bruce

March 23, 2022


Social impact sector organizations work hard every day to push back against the inequities and injustices in society. … We often assume that the oppressive systems will always continue to exist, and will even strengthen. But what if this weren’t the case?


Technology. Just the word itself evokes a range of emotions and images.

For some, technology represents hopes and promises for innovations to simplify our lives and connect us to the people and issues we want to be connected to, almost as though technology is a collection of magical inventions that will serve the whims of humans. To others, technology represents expertise and impartial arbitration. In this view, people perceive that to create a solid technological solution one must be exceptionally smart. Technology, with this mindset, is also neutral, and therefore inherently good because it can focus on calculated efficiencies rather than human messiness. Others have heard that technologists “move fast and break things,” or that progress is made “at the speed of technology”—and accordingly associate the word “technology” with speed and innovation, constantly improving the world and forcing humans to keep up.

In contrast, the mention of “technology” fills some people with caution and trepidation. The word can conjure fears—fueled by movies and imaginations—of robots taking over the world and “evil” people turning technology against “good” people. Others are skeptical of how often technology is promised to solve all problems but ends up falling short—and in the many ways it can exclude or even inflict physical, emotional, or mental harm. There are many examples of technology making it more difficult for people to complete tasks, contributing to feelings of anxiety or depression, and causing physical strain on bodies.

The potential for these and other harms is what causes some to be concerned or fearful about technology. And, for some, the mention of technology stokes fears of isolation: for those less comfortable with modern technology, the fear of being left out of conversations or of not being able to engage in the world pairs with the very practical isolation that lack of access can create.

Many people hold a number of these sometimes contradictory emotions and perspectives at the same time. In fact, individuals often define “technology” differently. Although some may think of technology as being exclusively digital programs or internet tools or personal computing devices, in this book we define “technology” in the broadest sense: digital systems as well as everything from smart fridges to phones to light systems in a building to robots and more.

WE LIVE IN A WORLD OF TECHNOLOGY

Regardless of how complicated feelings about tech may be, we all must embrace it: we live in the age of technology. Whether you consider how food travels from farms to tables, how clothes are manufactured, or even how we communicate, tech has changed and continues to change how these processes happen. Certainly, we complete a number of everyday tasks through technology systems—shopping for clothes, ordering weeknight meals, scheduling babysitters, and applying for tax refunds. We expect the technology tools and applications we use to provide smooth and seamless experiences for us every time we use them. In many cases, with the exception of the occasional glitch or unavailable webpage, technology works how we expect it to; it helps us get things done.

Unfortunately, not everyone has the same experiences with technology. The late 1980s brought us the first commercially available automatic faucets, which promised relief for arthritic hands and a more sanitary process for all. Some people reported sporadic functioning, however; the faucets worked for some but not others. When the manufacturers researched the problem, an unexpected commonality appeared: the faucets didn’t work for people with dark skin. In an engineering environment dominated by white developers, testers, and salesmen—and we deliberately choose the suffix “men”—people with dark skin had not been included among the test users. In a more recent example, in 2016 Microsoft launched @TayAndYou, a Twitter bot designed to learn from Twitter users and develop the ability to carry on Twitter conversations with users. Within one day, Microsoft canceled the program, because, as The New York Times stated, the bot “quickly became a racist jerk.”1

In the name of efficiency and integrity, various technology systems are developed and implemented to monitor the distribution of social benefit programs. Organizer and academic Virginia Eubanks, who studies digital surveillance systems and the welfare system, has remarked that, for recipients of welfare programs, “technology is ubiquitous in their lives. But their interactions with it are pretty awful. It’s exploitative and makes them feel more vulnerable.”2 Technology is used to automatically remove people who are legally entitled to services from the systems that furnish government and NGO providers with data about the populations that need those services. In her book Automating Inequality, Eubanks describes a state-run health care benefits system that began automatically unenrolling members, and the associated volume of work individuals had to do to understand why they were, often wrongly, unenrolled and how to reenroll. Technology is also used to prevent someone from receiving services in one part of their life because of a disputed interaction in a different part of their life. For example, notes on unsubstantiated reports of child abuse may remain in a parent’s “file” and then be used to cast suspicion on the adult if they seek additional support services. This is all tracked in the same government system.

“The technology has unintended consequences” is something many people in technology companies say when referring to products that don’t work for a segment of the population, or to systems that leave people feeling exploited. However, these “unintended consequences” are often the same: they result in excluding or harming populations that have been historically ignored, historically marginalized, and historically underinvested in. The biases and systems that routinely exclude and oppress have spread from the physical world into the technological world.

How can we have these uneven, unequal experiences with technology when one of the supposed attributes of technology is impartiality? Isn’t tech based on math and science and data—pure, immutable things that can therefore be trusted? There are so many examples of how technology, regardless of how quickly it moved or innovated, repeatedly did not deliver on the hopes and promises for all people. Why?

We’re not the first to ponder these questions. Many people, including ourselves, have concluded that technology is put into use by humans and, accordingly, is good or bad depending on the use case and context. Technology is also built by humans and, as a result, reflects the biases of its human creators. Melvin Kranzberg, a historian and former professor of the history of technology at Georgia Tech, wrote in 1986 about his Six Laws of Technology, which acknowledge the partiality of technology within the context of society:3

  1. Technology is neither good nor bad; nor is it neutral.
  2. Invention is the mother of necessity.
  3. Technology comes in packages, big and small.
  4. Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.
  5. All history is relevant, but the history of technology is the most relevant.
  6. Technology is a very human activity—and so is the history of technology.

These laws are still applicable today. Technology, it turns out, is fairly useless on its own. High-speed trains would be irrelevant in a world without people or products to move. A beautifully designed shopping website is a waste if no one knows about or uses it. Technology exists within systems, within societies. The application of math and science, and the structure and collection of data, are all human inventions; they are therefore constructed to conform to the many rules, assumptions, and hierarchies that systems and societies have created. These supposedly impartial things, then, are actually the codification of the feelings, opinions, and thoughts of the people who created them. And, historically, the people who create the most ubiquitous technology are a small subset of the population who happen to hold a lot of power—whether or not they reflect the interests and feelings, opinions, and thoughts of the majority, let alone of the vulnerable.

Ida B. Wells Just Data Lab founder and author of the book Race After Technology, Princeton University professor Ruha Benjamin takes it a step further. Because technology and systems are often built on these biased assumptions, she notes, “Sometimes, the more intelligent machine learning becomes, the more discriminatory it can be.”4
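
To make the mechanism in Benjamin’s observation concrete, here is a minimal, fabricated Python sketch of our own (it does not come from her work or from this book): a “model” that learns from discriminatory historical decisions ends up reproducing the discrimination, with no discriminatory intent anywhere in the code.

```python
# A fabricated illustration: a "model" that learns from biased
# historical decisions faithfully reproduces the bias.
import random
from collections import defaultdict

random.seed(0)

def historical_decision(qualified: bool, group: str) -> bool:
    """A biased past process: qualified applicants from group B
    were approved far less often than those from group A."""
    if not qualified:
        return False
    return random.random() < (0.95 if group == "A" else 0.40)

# Simulate 10,000 past decisions to use as training data.
training_data = []
for _ in range(10_000):
    qualified = random.random() < 0.5
    group = random.choice("AB")
    training_data.append((qualified, group, historical_decision(qualified, group)))

# "Train" by learning the historical approval rate for each
# (qualified, group) pair -- the pattern any statistical learner
# would extract from this data.
totals, approvals = defaultdict(int), defaultdict(int)
for qualified, group, approved in training_data:
    totals[(qualified, group)] += 1
    approvals[(qualified, group)] += approved

for group in "AB":
    rate = approvals[(True, group)] / totals[(True, group)]
    print(f"Learned approval rate for qualified group {group}: {rate:.0%}")
# Prints roughly 95% for group A and 40% for group B: the model has
# "learned" the discrimination embedded in its training data.
```

Nothing in this code “intends” to discriminate; the disparity is inherited entirely from the historical data the system learns from.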

What constitutes “technology” has evolved over time. Roughly shaped knives and stones used as hammers are widely considered the first technological inventions.5 Fast-forward several millennia to the creation of a primitive internet. What started as a way for government researchers to share information across locations and across computers grew into the Advanced Research Projects Agency Network in the 1960s. From there, additional large and well-funded institutions, such as universities, created their own networks for researchers to share information. Meanwhile, mainframes—large computers used by companies for centralized data processing—became popular. With the creation of a standard communication protocol for computers on any network to use, the internet was born in 1983.

Since then, the pace of technology development has only accelerated. The spread of personal computers and distributed computing meant that more individuals outside of institutional environments had access to technology and to information. People quickly created businesses, shared ideas, and communicated with others through the “dotcom” boom of the 1990s. We have more recently seen the rise of cloud computing, on-demand availability of computing power, and big data—the large amount of complex data that organizations collect. Techniques to process this data, learn from it, and make predictions based on it are known as data science, machine learning, and artificial intelligence. As a result, we now have a world where many people have access to a tremendous amount of computing power in the palm of their hands; companies can understand exactly what people want and create new content that meets those desires; and people can envision technology touching, and improving, every aspect of their lives.

In less than a century, we have gone from sending people to the moon with mainframe technology to creating the internet to building smartphones with more computing power than what was used to send people to the moon. And as technology has evolved, so have those who develop it—the “technologists.” Unfortunately, even as technological developments have increased the share of the population that can engage with technology, the diversity of technologists has decreased. The large tech companies are overwhelmingly filled with people who identify as white and male, despite the reality that this group doesn’t make up the majority of humans on earth. But the technology field hasn’t always been this way. The movie Hidden Figures, based on the book by Margot Lee Shetterly, told the story of the African American women of West Area Computers—a division of NACA, the precursor of NASA—who helped propel the space race as “human computers,” manually analyzing data and creating data visualizations. US Navy Rear Admiral Grace Hopper invented the first computer compiler, a program that transforms written human instructions into the format that computers can read directly; this led to her cocreating COBOL, one of the earliest computing languages. Astonishingly, the percentage of women studying computer science peaked in the mid-1980s.

We know, intuitively, that talent is evenly distributed around the world, and yet an enduring perception in tech is that the Silicon Valley model is the epitome of success. The Silicon Valley archetype, in addition to still being predominantly white and male, privileges individuals who can devote the majority of their waking hours to their tech jobs—and who care more about moving fast than about breaking things. The archetype emphasizes making the world conform to technologists’ expectations rather than letting the world’s realities shape and mold their products. And with a purported state of the world defined by so small a proportion of the population, the technology being constructed creates an ideal world for only a limited, privileged few.

TECHNOLOGY TO SUPPORT SOCIAL CHANGE

It’s against a backdrop of all of these factors—the complicated and sometimes inaccurate perceptions of technology, the significant benefit that technology can provide, the reality that technology isn’t neutral—that conversations about tech created for and in the social impact sector begin.

We define the “social impact sector” as the not-for-profit ecosystem—including NGOs (nongovernmental organizations), mutual aid organizations, and community organizers—that promotes social or political change, often by delivering services to target populations in order to both improve communities and strengthen connections within societies. As the name implies, organizations in the social impact sector don’t make a profit, but rather apply all earned and donated funds to the pursuit of their mission. Social impact sector organizations can vary in size and scope, from a few people in one location to thousands of people around the world. A common aspect of these mission-driven organizations is that they focus on the mission first—feeding hungry children, promoting sustainable farming, delivering health care equitably, and more.

Often, practitioners start and lead these organizations because of their knowledge of the social or political issue and their ability to deploy resources to make an impact. This focus on serving the defined clients, combined with the pressure to show that the funds received directly affect those who need the support rather than covering administrative overhead (the category that technology services often fall into), means that technology is frequently deprioritized. The technical and interconnected world in which we live, however, requires that, to remain relevant and effective, the social impact sector embrace technology to deliver its services—a necessity that has existed for quite some time. But given the global phenomenon of COVID-19 and the havoc it has wreaked, the challenges of operating, organizing, and delivering services during a pandemic have revealed that, in terms of what needs to happen now in the social impact sector, and certainly what comes next, technology must be deeply integrated into how these organizations conduct business.

One of the many ways the pandemic has stressed our society is in significantly changing people’s economic status. Although some have profited as the virus and its variants have spread and claimed lives across the globe, many, many more have lost not just accumulated wealth but also vital income. Service providers have struggled to keep up with the vast increase in those needing help. And we will not quickly recover; it is predicted that a number of nonprofits will no longer exist five years after the worst of the pandemic has passed. Nonprofits have no choice but to be more efficient.

But the onus isn’t solely on the social impact organizations themselves; many technologists have not considered the social impact sector an applicable setting for their talents.

Fewer are inspired to take the time and care to advance complicated social issues for the benefit of their fellow humans, and even fewer actively work to minimize any harm to individuals that the technology could cause. And, even when technologists do want to support the social impact sector, they often don’t know how to support it in helpful ways. As Meredith Broussard wrote in her book Artificial Unintelligence, “There has never been and never will be a technological innovation that moves us away from human nature.”6 The social impact sector reminds us that human nature is to live in community.

When we unpack what it means to be a technologist in the social impact sector, we have to start with the basics. We must understand that technology in social impact organizations is expansive. It includes IT systems, management systems, and products to help the organization deliver services to its clients and supporters. IT systems include tech such as broadband internet, computers and mobile devices, printers, and computing power. Management systems include donor databases, impact tracking systems, performance dashboards, and customer relationship management systems. Products that support service delivery could include a custom-built website that allows people to schedule visits with a caseworker, a route-optimization tool that plans the most efficient delivery routes, algorithms to ensure data integrity in training software, or a tool that processes and presents data to inform policymakers as they legislate. As you can see, this breadth of technology requires a variety of skills to execute. Add to this that, because the social impact sector exists to improve lives, the security and privacy that organizations build into their program designs need to be considered in every aspect of the technology design.
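
To make the service-delivery category concrete, here is a minimal sketch of what the core of a route-optimization tool might look like: a greedy nearest-neighbor heuristic of our own construction (the function names and data are illustrative assumptions, not any organization’s actual system).

```python
# A hypothetical sketch, not any real organization's tool: plan a
# delivery route by always driving to the closest unvisited stop.
import math

Point = tuple[float, float]  # (x, y) coordinates on a simple map

def distance(a: Point, b: Point) -> float:
    """Straight-line distance between two points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_route(depot: Point, stops: list[Point]) -> list[Point]:
    """Greedy nearest-neighbor heuristic: from the depot, repeatedly
    visit the closest stop that has not been visited yet."""
    route: list[Point] = []
    remaining = list(stops)
    current = depot
    while remaining:
        nearest = min(remaining, key=lambda stop: distance(current, stop))
        remaining.remove(nearest)
        route.append(nearest)
        current = nearest
    return route

# Example: a depot and four client locations for a meal-delivery run.
print(plan_route((0.0, 0.0), [(5.0, 5.0), (1.0, 0.5), (2.0, 2.0), (6.0, 4.0)]))
# -> [(1.0, 0.5), (2.0, 2.0), (5.0, 5.0), (6.0, 4.0)]
```

Even a toy like this hints at where the sector’s constraints bite: a real tool would need actual road distances, client availability windows, and, per the privacy point above, careful stewardship of client addresses.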

The significant issues the social impact sector tackles, combined with the logistical challenges of reaching people in locations far and wide, require deep technical expertise and sophisticated design. As this expertise has not been readily available, social impact sector organizations have deprioritized and deemphasized technology for decades. But the current climate is such that those organizations must have technology appropriate to their context, even if it isn’t the fanciest technology. This can be a significant challenge—good and bad—for “expert” technologists who are used to entering new environments as tech saviors with an understanding that their expertise will immediately translate into a new space. When speed and immediate contributions are prioritized, the work needed to prevent harmful unintended consequences is often neglected. There is no space for the “tech savior” mindset in the social impact sector, nor for technologists inclined to quickly jump into developing tech because they’ve developed tech elsewhere. The social impact sector has its own expertise—and, while technical skills are transferable, understanding of social problems and community contexts is not. Even within the social impact sector, “design with, not for” has been a mantra of the civic tech world for years, but this idea alone is insufficient. Designing with, not for, does not transfer ownership of information and solutions; long-term ownership, with the ability to modify, expand, or turn off the solutions, is necessary for communities to maintain their own power.

The recognition that expertise does not magically transfer between sectors is only one of the design constraints for developing technology within the social impact sector. Though the sector benefits from government funding, it relies primarily on philanthropic funding. As a result, technology budgets in the social impact sector are perennially tight, forcing tough decisions about whether to develop a more costly custom solution that meets and respects client needs or buy a ready-made, imperfect solution that reaches more clients. When assessing off-the-shelf technology, social impact sector leaders recognize that deploying technology with a track record of marginalizing and disenfranchising people—such as video conferencing software without closed captioning, which is difficult for the Deaf community to use—will not work for organizations that serve historically marginalized and disenfranchised populations. In addition, because these organizations often serve populations with immediate needs, they don’t have the luxury of adopting an “if you build it they will come” mindset, or of deploying a solution that benefits only a portion of their clients simply because it was too difficult to develop something for everyone.

Even once all these factors are addressed, organizations then need to figure out what should happen next. How do they plan for and carry out system maintenance and upgrades? Is what was done relevant only to the particular organization, or is it something that others in the social impact sector can also benefit from? Given their mission-driven nature, many organizations turn their focus back to their direct clients before answering these questions. The “technology versus client support” consideration is a false dichotomy, but it is one that many social impact sector organizations feel nonetheless.

Social impact sector organizations work hard every day to push back against the inequities and injustices in society. With limited budgets they manage to effect real, positive change on a number of social issues and improve the quality of life for many humans; however, this is done in a world where resources are difficult to access and coordinate. We often assume that the oppressive systems will always continue to exist, and will even strengthen. But what if this weren’t the case?

What if we could restructure how we think about developing systems and services to move beyond this picture and truly exist in a world where humans are centered and justice is pursued? We must consider how the different levers in society can work together; we must consider how we build the tech that comes next.

 

Adapted from The Tech That Comes Next.
Copyright © 2022 by Amy Sample Ward and Afua Bruce.
All Rights Reserved.

 

ABOUT THE AUTHORS

Amy Sample Ward is driven by a belief that the nonprofit technology community can be a movement-based force for positive change. They are the CEO of NTEN, a nonprofit creating a world where missions and movements are more successful through the skillful and equitable use of technology. Amy’s second book, Social Change Anytime Everywhere, was a Terry McAdam Book Award finalist.

Afua Bruce is a leading public interest technologist who has spent her career working at the intersection of technology, policy, and society. Her career has spanned the government, nonprofit, private, and academic sectors, as she has held senior science and technology positions at a data science nonprofit, the White House, the FBI, and IBM. Afua has a bachelor’s degree in computer engineering, as well as an MBA.

 

Endnotes
1. Daniel Victor, “Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk,” The New York Times (March 24, 2016).
2. Jenn Stroud Rossman, “Public Thinker: Virginia Eubanks on Digital Surveillance and Power,” Public Books (July 9, 2020).
3. Melvin Kranzberg, “Technology and History: ‘Kranzberg’s Laws,’” Technology and Culture 27, no. 3 (July 1986): 544–60.
4. Sanjana Varghese, “Ruha Benjamin: We Definitely Can’t Wait for Silicon Valley to Become More Diverse,” The Guardian (June 29, 2019).
5. Erik Gregerson, “History of Technology Timeline,” Encyclopedia Britannica, accessed September 1, 2021.
6. Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (Cambridge, MA: MIT Press, 2019), 8.
