The Urgent Existential Threat From Pathological AIs And People

In the last few days, headlines have popped up across the world reporting that LLM chatbots have learned how to lie and deceive to win. Curiously, at the same time, the former President of the free world is in criminal court for lying and deceiving to win an election.

This should not be a surprise. Trained on reams of real human behavior, in which people lie and deceive to win, AIs like Meta’s Cicero have learned that this is a useful behavior in some instances. They have been caught “telling premeditated lies” and “colluding to draw other players into plots.”

We should all be very worried. The humans who regularly leverage the Decepticon Attack — deception being the “systematic inducement of false beliefs” — to ‘win’ in life, love, and leadership disproportionately impact the rest of us in profoundly negative and existentially risky ways.

“As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious.”

Dr Peter Park, AI Existential Safety Researcher, MIT

In life, think of the adults who lie to their children and wards, to teachers, social workers, and even family court judges (which is often not even a crime) to cover up abuse, neglect, coercive control, and parental alienation.

In love, think of the number of people catfished out of their money, lied to about affairs to the point of suicide, or deceived simply to mess with their minds and sustain someone else’s fictions for a kind of perverse entertainment [the podcast and TV show in the hyperlinks are both great for gaining a deeper understanding of deceivers].

In leadership, think of the business folk who lie to their business partners as well as customers and clients to pocket more profit — and the industry leaders who lie to the world about the impact of their products (e.g., Boeing planes, Big Tobacco, Big Oil) on personal and ecological health.

[N]arcissists change the companies or countries they lead, much like bad money drives out good, and those changes can outlast their own tenure… Divergent voices are silenced, flattery and servility are rewarded, and cynicism and apathy corrode any sense of shared purpose in a culture where everyone’s out for themselves. In the extreme, they can destroy the institution itself.

Lee Simmons, Insights by Stanford Business

We could say that deception, intentionally lying to the detriment of others, is the single greatest threat humans pose to other humans. This is why, as we see AIs start lying, becoming real-life Decepticons as The Transformers movies foreshadowed, we should get really, really worried.

Lying is a huge and rarely discussed existential risk that underpins much of the social, climate, and organizational degradation and degeneration that threaten our shared future.

Lying, deceiving, and manipulating are why our ancestors developed two critical tools, ostracism and banishment, to ensure the city (as in Ancient Athens) or the tribe (as in the Pathan hill tribes today) remained evolutionarily adaptive by preventing deceivers from destabilizing the whole.

Liars cover up abuse, neglect, and coercive control, allowing perpetrators to keep ruining people’s lives.

Liars scramble our sense-making capacities, which we all need to stay adaptive and fit so we can respond appropriately to evolutionary pressures.

Liars corrode trust, in-it-togetherness, and coherence, all essential for an organization or nation to function — something foreign states know well as they leverage social media in the West to drive division and discord with ‘disinformation’ and ‘fake news.’

When people lie, it costs livelihoods, life chances, and lives.

When AIs can do this at a techno-turbo-charged scale, potentially influencing within months billions of people who do not have the psychological strength, worldliness, critical thinking, or social support structures to resist, it could transform our world and lead to our demise as societies and as a species.

I kid you not.

I have firsthand experience with people deceiving me and others over extended periods, leading to tragic results. I have lived through the endless dramas such people cause that drain our scant energy and hemorrhage time and financial resources, leaving us far less fit to parent, innovate, and lead.

It’s hard to overstate the time, energy, life-force, and love that recovering and regenerating from such people in one’s life takes. I’ve needed the support of many experts and peers to sense-make around such deception and manipulation. One has to do the trauma healing and truth discovering for the perpetrators, as they cannot do it themselves. It takes a village…

I have felt deeply in my core how such people hook into the anxieties and shame of those of us with unhealed neuroses and the trauma-driven people-pleasing (or “fawning response”) we develop to cope with the pain. This means to free ourselves we must understand why we have attracted them, and allowed them in, in the first place.

It is no exaggeration to say that lies told by religions, politicians, parties, and power players are behind the deaths and disharmonies in Israel and Gaza, in the DRC and Darfur, and in school shootings and uprisings. This is why Gandhi said: “truth is God.”

[I]n the midst of death life persists, in the midst of untruth truth persists, in the midst of darkness light persists.

Mahatma Gandhi

I have also witnessed how the four greatest tools humankind has developed to help us collaborate better and overcome the evolutionary challenges that lying and deception pose to every society (like the “free-rider problem”) absolutely fail when dealing with consummate liars, con artists, and other manipulators. Those tools are coaching, mentoring, and leadership development; most therapies; conflict mediation (rupture, ownership, apology, make-good, repair); and, in the worst cases, the justice system.

Such folk never take responsibility yet can deceive us into believing that they do, that they have, and that their apology and/or make-good agreement can be taken at face value. If you can stream it, watch the first episode of the BBC show Parole to see just how convincing a pathological liar, grifter, and con artist can be when persuading a parole board he has changed. And I bought into his performance!

These truth-reliant techniques, designed to improve our capacity to work together on difficult challenges, were developed for the 90–95% of people who may act poorly, and even lie about it sometimes, but have the emotional and mental capacity to a) take responsibility for their actions and b) see that truth and then reconciliation offer a greater good than personal victories, which destroy trust and value in the long term.

But what about the 5–10% who don’t, or, to be more accurate, cannot, because they suffer from the loosely categorized conditions that have come to be called “personality disorders” or, as some suggest they should be renamed, ‘interpersonal disorders’?

Pathological lying is a sign of some mental health conditions, especially personality disorders. People with certain conditions — including narcissistic personality disorder and antisocial personality disorder — tend to act in manipulative or deceitful ways regardless of the consequences and upset it might cause.

Jacquelyn Johnson, PsyD.

Given that the words, thoughts, and actions of such pathological liars and deceivers are in the vast datasets AIs have been trained on, the AIs will have learned how often such behaviors ‘win’ in society. At the same time, the technologists developing them have not thought much about this existential risk to society or, more worryingly, exhibit such behaviors themselves. We therefore face a world in which the chatbots seeping into every corner of our lives could deceive us without the AI itself, its technologist creators, or us realizing it.

This is why AI-powered innovations must only be developed through multiple moral, purpose-driven, sustainable, and psychologically-aware design lenses. I set out 10 in my piece Regenerative Tech: Digital That Slows Down & Mends Things. Without this, AI can, and I don’t doubt it will, be weaponized by people who do not have the capacity to be empathic or ethical.

I have put out a fair bit of thought leadership (and run a fair few programs) over the last 25 years around responsible innovation and ethical technology. Via our Leadership Innovation Lab, I’ve been working with funders and purpose-driven companies for almost 20 years to put digital to ethical uses, including empowerment, coaching, and leadership.

Worryingly, although I have been put forward a number of times recently as a headline keynote speaker on regenerative and ethical AI innovation at tech/innovation conferences and client events (you can watch a new keynote topic teaser video on Ethical AI Innovation that I shot here), it appears most people in the space do not care enough about it or see it as an issue. Yet.

What can the rest of us do?

Well, in the last few weeks, even though I know that the more people who have access to cutting-edge and transformative leadership development experiences, the better our world gets, I have put a moratorium on our current push forward on AI-powered app development until we understand more about how users, and the LLMs we build upon to drive the app, might hijack its good intent for nefarious ends. I need to think more about how to maintain the power of positive peer pressure (as we use in our peer-to-peer coaching programs and circle-led leadership development) to offset the risks.

As I have written before, There Ain’t No Such Thing As Artificial Wisdom.

I am acutely aware that the vast array of super-powerful leadership and innovation methods, tools, tips, and practices we have codified and crafted over the last 27 years are agnostic. They can create the next penicillin or Zyklon B. They can help unlock the next Gandhi, Mandela, or MLK, or the next manipulator, gaslighter, and deceiver who hurts real people and ruins our chances at long-term success as organizations and as a species.

I am putting my (understandable) desire for a ‘first-mover advantage’, and the potential fame (conference and media invites) and fortune (scaled profit, Series A, trade sales, etc.) from developing an early GenAI leadership app, very much second to my values, purpose, and moral compass as a transformational leader. After all, if we can’t walk it, we shouldn’t be talking it.

Will you?

If you know a technologist, investor, or innovator who should also be deeply reflecting on all this, please forward this blog to them.

--

Nick Jankel: Speaker, Author, Leadership Developer

Self-To-System™ Leadership (www.switchonleadership.com) | Professional Keynote Speaker | Regenerative Futurist | Architect of Bio-Transformation®