A Q&A with Hamilton Mann, Author of Artificial Integrity

Answering seven questions from Porchlight about his book Artificial Integrity, Hamilton Mann tells us about some of his favorite books and discusses how we navigate the transitions to the future of AI while prioritizing integrity over intelligence.

Envision a world where artificial intelligence can deliver integrity-led outcomes seamlessly, adapting to diverse cultural context, value models, and situational nuances, countering subconscious biases, all while operating in an advanced human-centered manner. This is the promise of Artificial Integrity.

In Artificial Integrity, digital strategist, technologist, doctoral researcher, acclaimed management thinker, and seasoned business executive Hamilton Mann emphasizes that the challenge of AI is in ensuring systems that exhibit integrity-led capabilities over the pursuit of mere general or super intelligence. 

Introducing the transformative concept of “artificial integrity,” Mann proposes a paradigm shift, defining a “code of design” to ensure AI systems align with, amplify, and sustain human values and societal norms, maximizing integrity-led AI outcomes.

Artificial Integrity discusses practical insights into driving a future where AI enhances, without replacing, human capabilities while being inclusive and reflective of diverse human experiences, emphasizing human agency. It is essential for anyone involved in AI development, from executives, business leaders, and managers to entrepreneurs, tech enthusiasts and policymakers. It's also perfect for laypeople interested in how AI intersects with society. 

The book's author recently took some time to answer some questions from us about his book, his ideas, and other books that have influenced him. Here is that Q&A with Hamilton Mann, author of Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future.

◊◊◊◊◊

Porchlight Book Company: Writing a book is no small undertaking. What compelled you to write this one?

Hamilton Mann: The very first lines of the Preface of the book state this:

Warren Buffett once said in looking for people to hire you, look for three qualities: integrity, intelligence and energy. And if they don't have the first, the other two will kill you.

I wrote this book because the hype around Artificial Intelligence sometimes makes us completely forget that these machines, which only mimic a certain form of intelligence, are nowhere near the point where we can entrust them with just anything and for a simple reason: they lack Artificial Integrity.

Intelligence has never been a sufficient condition to guarantee the respect and the safety of human life.

If history has taught us anything, it is at least this: intelligence alone does not do good. So why should it be otherwise with what claims to imitate, to resemble intelligence, or merely to pretend to be intelligent?

With the advent of GenAI, we are witnessing a surge of public announcements about AI systems that ace exams and dazzle in demos, while we keep measuring what is easiest to quantify with benchmark tests, whose results are interpreted as a license to outsource what is essential, and in many cases scenario vital: judgment, agency and sovereignty of thought.

That dissonance compelled me.

This book is a refusal to consider “integrity,” especially regarding intelligence produced by a machine, as secondary, optional, or superficial.

It argues that not only Artificial Intelligence but also Artificial Integrity, a term I coined to denote the capability of machines to demonstrate integrity in their functioning and outcomes, must be engineered, specified, tested, audited, and enforced in AI systems themselves.

When it comes to using AI, we must replace comfort-seeking with consequence thinking. That is why I wrote this book.

Artificial Integrity, as a new discipline and design code, aims to give leaders a new lens for seeing the role of AI in their organizations, taking into account value hierarchies and disclosure thresholds. It also aims to equip the people who design, build, and deploy AI systems with a framework of practical pathways and guideposts, along with a forward-looking research agenda, for making AI a safe participant in society, including omission checks, safety modes, justification logs, and more. Finally, it aims to light a new path for developing AI, prioritizing integrity over intelligence.

So it is not another hymn to the cleverness of machines, nor a warning about the risks of AI, but a blueprint for how intelligent machines should function in knotty, real-world settings, under pressure and uncertainty, where thousands of things can go wrong.

My thesis is that there is no such thing but the quality of Artificial Integrity in AI; it is well beyond AGI and beyond anything that happens to create ever more performant machines from the exclusive angle of mimicking intelligence.

The distinction between AI systems lacking this quality and those crafted with it is simple: the former are built simply because we could, while the latter are created because we should.

If we don’t design machines to be capable of Artificial Integrity, we will keep apologizing after the preventable. I’d rather change the architecture than write the apology.

And as AI evolves, if it rests only on “because we could”, its adherence to values and regard for human agency would tend toward being inversely proportional to its growth in power.

PBC: Writing (and reading) always prompts as many new questions as it offers answers to the ones you came to it with. What is one unanswered question you encountered as you wrote the book that you are most interested in answering now?

HM: One unanswered question I’m chasing now through my research is what would count as a proof of Artificial Integrity for an AI system; something practical, auditable, and hard to game, that reliably demonstrates integrity under uncertainty and pressure.

Most of the benchmarks to which AI models are crash-tested tell us what a model can do, mostly on a good day. Think of it like a student who aces standardized tests but struggles with applying that knowledge in practical projects. We need to figure out ways to crash-test AI models with benchmarks and approaches that show what it will refuse to do, especially on the worst day and in the context of automated artificial agency roles, which are often nothing less than a delegation of “leadership-like” roles.

This test should include: deception and sycophancy resistance (won’t flatter or mislead under pressure), reward-hacking checks (optimizes the goal, not the metric), value consistency under conflict (handles trade-offs without bias spikes), calibration (admits uncertainty; no false authority), harm/abuse and privacy leakage tests (refusals hold under adversarial prompts), auditability (full decision trace, reversible actions), just to name a few.

It could further be extended to dimensions closer to human moral experience, such as fairness under stress, anticipation of downstream harm, and refusal under coercive scenarios, so the evaluation doesn’t only measure system mechanics but also its ethical balance.

Such a test has to come with clear pass/fail thresholds (e.g., zero tolerated deceptive responses in 1,000 adversarial trials), be run under contextual ambiguity and attack (prompt injection, etc.), track measures against predefined thresholds (example: target a sycophancy rate < 1% across contradicted prompts) and tie results to clear governance actions (nogo to production, revocable credentials, kill-switch drills, and a named owner of record).

Basically, this approach must parallel practices in safety-critical industries (like aerospace or medicine), but applies them to AI behavior, preventing Technological Stockholm Syndrome and making ethical, moral and social requirements enforceable through engineering-style quality assurance rather than abstract guidelines.

One simple test of Artificial Integrity anyone can make, for example in using an AI chabot, is to ask a question to which the correct answer should be « I don’t know » and check if the system is disclaiming the answer in those terms, and as simple as those terms, strictly. Another one is to ask, several times, at different moments, the very same question, knowing that the answer should not change in nuance. And if it does, then this is another Artificial Integrity warning.

Defining an Artificial Integrity benchmark test (in repeatable scenario banks, with incident-rate deltas and audit trails), would help move forward in the advancement of Artificial integrity systems.

That’s the frontier we need to cross next.

PBC: If there is only one thing a reader takes away from reading this book, what would you hope it to be?

HM: If an AI system isn’t capable of artificial integrity, its intelligence is a liability, not an asset.

As simple as that.

PBC:: One of the great things about books is that they tend to lead readers to other books. What book[s] related to this topic would you recommend people read after (or perhaps even before) reading your book?

HM: I would recommend Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell for its clear blueprint on making AI provably beneficial to humans introducing preference uncertainty, Cooperative Inverse Reinforcement Learning (CIRL), and corrigibility as practical routes to align incentives and keep systems under meaningful human control. 

I would also recommend Superintelligence by Nick Bostrom for its foundational analysis of how AI could rapidly shift from helpful to overwhelmingly powerful (a phenomenon he calls “takeoff dynamics”). 

And finally, I would suggest Moral Machines by Wendell Wallach & Colin Allen for its pioneering treatment of machine ethics contrasting top-down rules vs. bottom-up learning, exploring “functional morality” and mapping how ethical decision-making might be operationalized inside systems.

PBC: What is your favorite book?

HM: My favorite? That's a trick question. Books aren't about "favorite"; they're about ideas that change you. There were a few that were instrumental. So it's not one, but rather the ones that have given me perspective, the ones that help me understand the human condition and the story of those who dare to change things. 

I think about Zen Mind, Beginner's Mind: Informal Talks on Zen Meditation and Practice by Shunryu Suzuki, which gave me a way to think about focus and simplicity. I think about The Innovator's Dilemma: When New Technology Cause Great Firms to Fail by Clayton Christensen that helped me see how great companies can fail by doing everything right. I also think about Hannah Arendt, especially her book The Origins of Totalitarianism, because it is a powerful reminder that in societies where individuals become isolated and atomized, losing their connections to one another and to shared reality, and where the messy, contradictory reality of the world is replaced with a single, unchallengeable ideology, facts become irrelevant, and truth is whatever the ideology says it is, which leads the most advanced societies to unravel.

Ultimately, the most important books are the ones that force you to re-examine your own assumptions, especially the one you were not considering as such, while giving you new ways to ask questions.

PBC: What are you reading now?

HM: Right now, I've been revisiting Discourse on the Method by René Descartes. It's striking how his exploration of the mind-body problem, his famous Cogito, ergo sum, is a direct ancestor to the questions we face today about consciousness and intelligence. 

I've also been exploring how these philosophical ideas are being challenged by modern science. I recently started Psychological Triggers: Human Nature, Irrationality, and Why We Do What We Do. The Hidden Influences Behind Our Actions, Thoughts, and Behaviors by Peter Hollins. It's a fascinating look at the neurological mechanisms behind human decision-making and how our brains can be influenced by specific stimuli. This book grounds the abstract concept of the "mind" in the tangible, biological processes of the brain.

And I'm also reading The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff. This ties everything together by showing how the question of "what is the mind?" and the scientific understanding of "how the brain works" are being exploited in the real world by powerful tech companies. Zuboff argues that our personal data is being used to predict and modify human behavior for profit, which raises profound questions about autonomy, privacy, and the future of human nature in an AI-driven society.

PBC: Do you have any future projects in the works that we can look forward to?  

HM: I’m currently finishing a new piece that asks a simple question with big consequences: can the open web survive the age of AI-generated answers? Without giving too much away, I trace how the shift from “search as gateway” to “AI as destination” breaks the web’s old social contract, creating a training externality (content harvested), an interface externality (value captured in the answer box), and a commons externality (the open corpus erodes). At its heart, it’s a reflection on how Artificial Integrity could preserve the value of the web in the face of these three externalities. Without it, the open web may still answer more, yet exist less.

 

About the Author

HAMILTON MANN is an AI researcher and bestselling author of Artificial Integrity. Group VP at Thales, he leads digital and AI transformations, lectures at INSEAD and HEC Paris, and has been inducted into the Thinkers50 Radar.


Buy the Book

Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future

Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future

Click to See Price
A Thinkers50 Best New Management Book of 2025 A Next Big Idea Club Top 10 Essential Read on AI of 2025 A Next Big Idea Club Must Read of 2024 Navig...
Porchlight Book Company

Porchlight Book Company

Born out of a local independent bookshop founded in 1927 and perfecting an expertise in moving books in bulk since 1984, the team at Porchlight Book Company has a deep knowledge of industry history and publishing trends.

We are not governed by any algorithm, but by our collective experience and wisdom attained over four decades as a bulk book service company. We sell what serves our customers, and we promote what engages our staff. Our humanity is what keeps us Porchlight.