An Excerpt from What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast
An excerpt from What Matters Next by Kate O'Neill, published by Wiley and longlisted for the 2025 Porchlight Business Book Awards in the Leadership & Management category.
In What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast, renowned author and consultant Kate O'Neill delivers a roadmap to achieve business growth, transformation, and innovation through the use of emerging technologies—but crucially, in a human-centric manner that benefits both business and humanity. Drawing on her experience working with organizations like Google, Yale, and the United Nations, O'Neill offers a unique blend of strategic guidance, ethical considerations, and practical application to help organizations not just survive, but thrive through bold and empathetic leadership.
In this book, readers will learn about:
- Making better strategic decisions by moving from questions, to insights, to “Bankable Foresights”
- Creating a model that aligns focus, purpose, values, and resources across an organization
- Understanding the intersection and potential harmony of human knowledge and machine intelligence
- Ensuring digital transformation and innovation efforts frame the future in human terms
What Matters Next is an essential read for all business leaders and individuals interested in the impact of emerging technology on business and humanity and seeking to effect positive change for the benefit of all.
The excerpt below is adapted from the book's Introduction.
◊◊◊◊◊
A Brighter Future Requires Better Decisions
If you sense that things are moving at a staggering speed, you’re not alone, and it’s not your imagination. That acceleration is measurable in at least a few ways, such as computing power. You may be familiar with Moore’s law, the observation that the number of transistors on a microchip doubles roughly every two years—in other words, that computing power doubles. From 1965 to 2015, computing power grew by roughly 12 orders of magnitude, which works out to a doubling approximately every 1.3 years.
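For readers inclined to check that figure, the arithmetic can be sketched in a few lines of Python. This is a back-of-the-envelope verification of the numbers quoted above, not something from the book itself:

```python
import math

# A trillionfold (12 orders of magnitude) of growth between 1965 and 2015:
# how many doublings does that imply, and how often did one occur?
growth_factor = 1e12                    # 12 orders of magnitude
years = 2015 - 1965                     # 50-year span

doublings = math.log2(growth_factor)    # doublings needed to reach 1e12
years_per_doubling = years / doublings  # average interval between doublings

print(f"{doublings:.1f} doublings, one every {years_per_doubling:.2f} years")
# → 39.9 doublings, one every 1.25 years
```

About 1.25 years per doubling, consistent with the "approximately every 1.3 years" cited in the text, and comfortably faster than the two-year cadence Moore originally described.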
That’s exponential growth. And the thing about exponential growth is that while it’s not unprecedented in nature, it’s not the model of change we’re adapted to. This presents us with challenges when it comes to making decisions that will make any sense just a few iterations down the road.
It’s not just speed, either; it’s scale, too. For years now, data-driven decision-making and algorithmic optimization, set against a globally interconnected mega-network, have been hurtling us forward at dizzying speed and on an incomprehensible scale. With this speed and scale come significant risks: missteps can lead to unintended consequences, missed opportunities, or even lasting harm.
And this was before the advent of the generative AI era changed the game—or rather, upped the stakes. Since then, skills that once looked uniquely and safely human have been encroached upon. Capabilities that companies defended as their competitive advantage became tauntingly reproducible by the masses.
All of this is why I’ve heard often from leaders like you: amid all this chaos, decisions are getting harder to make. Why? Increasingly, we’re juggling an intricate balance between the immediate needs of our current realities (what matters now) and our long-term hopes and future visions (what is likely to matter).
In the spirit of the old chestnut, “price, quality, and schedule—pick two,” three factors in the tech-accelerated business environment often misalign, making wise choices feel all but impossible: future, tech, and human. You can choose the path that propels you into the future the farthest and fastest; while future predictions aren’t always accurate, we can typically deduce which of a set of options seems the most open to change. You’ll also need to pick a winner among the technologies you invest in. And how do any of these choices affect human beings over time? Are we inadvertently setting precedents that undermine privacy, or that are ripe for misuse by bad actors or overzealous law enforcement?
We also have to be honest in confronting our own shortcomings: we don’t always make the most rational decisions. The field of neuroscience has shed plenty of light on how we decide. We humans are constantly collecting information and using it to stack the deck of our perceptions and judgments. When we move to choose, we evaluate among options to select what matches best. Classical economics, too, would have us believe we are rational beings, only ever making decisions that make sense in an objective model of value and trade-offs.
Except, as the meme goes, that’s not how any of this works.
Why not? What gets in the way? Simply put: our biases.
Evolutionarily, biases have served us well. They’ve helped us discern patterns for survival. That animal is big; don’t go near it. That unfamiliar berry is red and reminds me of the one that made me sick last time; don’t eat it.
But our evolutionary heritage also favors the tendency to play it safe, even when rational assessment suggests opportunities that others might not see.
But what if we could make better, more informed decisions? What if there were a way to minimize these risks? A method to ensure that each step we take is purposeful, meaningful, and aligned with our organization’s vision?
The model at the heart of this book offers that clarity. It starts with insights and foresights, which help us leverage data, market trends, and consumer behavior, and tap into empathy, understanding, and wisdom; in so doing, they illuminate the path ahead, enabling leaders to make more informed decisions. These aren’t just reactive responses to immediate problems but proactive steps toward the desired future. It’s about moving forward with purpose, keeping both the present and the future in sight as we transform and innovate.
Yet here is where we take a step back and rethink what we understand by transformation and innovation.
Often, transformation is a means of playing catch-up, of adjusting to the realities of our present situation. Transformation, in other words, is usually about catching up to “what matters now.”
Innovation, on the other hand, is the kid in us who looked at a cardboard box and saw a spaceship. It’s our leap into the future. Innovation is often about venturing into “what is likely to matter.”
This is why it is so exciting to draw a line between these two ideas and shine a light on the next steps we need to move forward.
What matters next isn’t disregarding the present for the future or being stuck in the now without a vision for tomorrow. It’s understanding that the present and the future are a continuum. It’s planting a seed today and knowing, with patient nurturing, it will grow into a tree tomorrow.
What Matters Next invites you as a reader to pause and reflect, to take a step back from your busy life and consider what truly matters to you and to those you lead and serve.
But more than just a title, it is also a question, a challenge even. What will matter most in your future? As I pondered this question while writing this book, I realized it was a question I had been asking myself throughout my career. And it’s a question I’ve been asked, as well, by leaders like you.
I recall an interaction with a senior operations executive from a tech company who approached me in the labyrinthine halls of a tech conference after I had delivered that day’s opening keynote. She asked a question about one of the main points I’d made, and in answering I invited her feedback, at which point she looked down and got quiet. We were surrounded by exhibitors whose solutions did plenty of shouting on their own: their kiosks bore all the latest buzzwords—AI, blockchain, quantum computing. Suddenly she waved her hand wildly at it all and asked, “I guess what I’m asking is, what does any of this mean?” At first, I thought she was playing at not understanding the terminology. But as she continued, I saw where she was going: “What does all this mean for any of us? Or for our teams, for the people we serve? How can we possibly know?”
Coming up as a leader in the technology sector, I’ve always been fascinated with the future, always thinking about what’s next. But the future is a tricky thing. It’s a shifting, nebulous entity, always just out of reach. Yet we are asked to make decisions, to take actions that will have lasting impacts on this uncertain future. So how do we navigate this balance between current realities and future visions? And how do we do it so that it has any meaning for us, for others, for society as a whole?
I believe the answer lies in taking the next most meaningful step.
Moving toward what matters next, then, challenges us to break free from the perpetual cycle of problem-solving for the present. It urges us to align with what we believe the future may hold and to take a meaningful step, however small, in its direction.
Excerpted from What Matters Next: A Leader's Guide to Making Human-Friendly Tech Decisions in a World That's Moving Too Fast by Kate O'Neill, published by Wiley. Copyright © 2025 by Kate O'Neill. All rights reserved.
About the Author
KATE O’NEILL is the CEO of the technology and strategy consulting firm KO Insights, where she provides leaders with strategies for navigating the uncertainties of a rapidly changing landscape. Her clients include Google, IBM, Coca-Cola, McDonald’s, Yale, and the United Nations.