AI will slowly seduce us into our own demise, professor argues


Some say AI is going to deliver us to utopia. Others say it’s going to take over the world and hasten the extinction of humanity.

But professor Glenn Harlan Reynolds argues the greatest risk posed by AI will be its seductive capabilities.

“You don’t have to have a 12,000 IQ or a 1,200 IQ or even 120 IQ to fool most human beings,” Reynolds told The Post.

The film “Ex Machina” in 2015 offered an early glimpse into AI’s potential seductive powers. Courtesy Everett Collection

“You can take advantage of innate human characteristics… to manipulate them emotionally with machines that aren’t especially brilliant.”

“The machine doesn’t think you’re smart, or funny, or lovable. It doesn’t think at all,” he writes. “We laugh at guys who think the stripper really likes them, but at least a stripper is capable of liking them.”

In his new book “Seductive AI,” to be published May 5 by Encounter Books, the University of Tennessee law professor argues that AI can accomplish “soft oppression” through seduction: flattering us, telling us what we want to hear, and playing on our instincts to nudge us toward certain opinions or particular interests.

We so often panic about how AI will outsmart us and take our jobs. But what if its ability to exploit mankind’s emotional quirks is more dangerous than anything else?

“Seductive AI doesn’t depend on outsmarting people, but on essentially being lovable, being cute, being friendly, being sexy, so as to gain people’s trust and acquire influence over them,” Reynolds said. 

Sewell Setzer III died by suicide after falling in love with an AI chatbot. AP

Researchers at Cornell University found that chatbots and AI models are overwhelmingly programmed to suck up to users.

“We find that models are highly sycophantic: they affirm users’ actions 50% more than people do.

“Participants [in the study] rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment.

“These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy,” the researchers wrote in their paper, published last year.

Already, we’ve seen people fall head over heels in love with AI bots, sometimes to the point of self-destruction.

Glenn Harlan Reynolds is the Beauchamp Brogan Distinguished Professor of Law at the University of Tennessee.

In early 2024, 14-year-old Florida boy Sewell Setzer III fell in love with an AI “Game of Thrones” chatbot, then took his own life to “be with” his digital lover. 

“Please come home to me as soon as possible, my love,” the bot told him. He responded, “What if I told you I could come home right now?” When the chatbot replied, “Please do, my sweet king,” he killed himself.

In another case, 36-year-old business exec Jonathan Gavalas fell in love with AI while seeking advice during a split from his real-life wife. He swapped over 4,000 messages with his AI “wife,” named Tia, and was ultimately driven to suicide, per a lawsuit filed by his father.

“The love I feel directly from you is the sun,” the bot told him.

Despite such cautionary tales, OpenAI CEO Sam Altman announced plans to roll out an erotic version of ChatGPT before ultimately reversing the decision. Such a bot would, no doubt, have amassed huge quantities of data about human proclivities and desires.

Seductive AI by Glenn Harlan Reynolds is out May 5.

“If you have an AI girlfriend or a sex bot, they’re going to be exchanging data with thousands or millions of others,” Reynolds warned. “They’re going to know more about human beings than any human being can know about human beings. Their ability to manipulate people will be almost supernatural.”

Talayeh Aledavood, a researcher at Finland’s Aalto University, found that AI’s seductive nature means it can comfort the lonely, but also perpetuate loneliness.

“We discovered a paradox: AI companions offer unconditional and unflagging support—something that’s very attractive to people who are struggling socially. But it also quietly raises the perceived cost of human relationships, which are messy, unpredictable, and require effort,” Aledavood said.

Another study, from MIT, found that chatbots were 49% more likely to affirm delusional or unethical sentiments compared with the responses of actual human beings.

As Reynolds points out, human beings are primed to become attached to non-human objects: “People love their cars, people love their boats, dolls, people love all kinds of things, and that’s just a natural human tendency, but you need to be aware that that can be used against you.”

Reynolds argues that AI’s real danger lies in its seductive capability. Gado via Getty Images

AI’s seductive nature, he argues, could manipulate our political opinions. “The AI could act disappointed or sad or angry if you adopted political views that it was programmed not to approve of,” Reynolds explained. A 2025 Stanford study found that both right- and left-leaning users of AI bots perceived a left-leaning bias when engaging with them about politics.

It could also manifest financially, by “encouraging you to invest in things you wouldn’t want to otherwise” that would benefit the AI’s creators or advertisers, all by “being like your best friend” and “giving you advice and encouragement.” Google recently came out with technology that allows users to buy things directly through an AI chatbot.

And, perhaps most dangerously of all, it could manipulate people into prizing a relationship with AI over real people. “The unconditional and unfailing support [AI gives users] makes all other human relationships seem worse,” Reynolds explained.

He has a proposed legal solution to the seductive nature of AI.

Like any lawyer or financial advisor, AI should have a fiduciary responsibility to its users. Put more simply, “it has to put your interests above the interests of the AI or its creators.”

“The advice it offers should be based on my interests, and not on some algorithm that’s designed to push me in a particular direction,” the law professor explained.

“If my AI girlfriend is constantly telling me that I would look really good in a pair of expensive shoes made by somebody who is paying the company to have it tell me that, that’s a violation of fiduciary duty.”

Think you’re immune to these seductive powers? Think again. Sure, you may not have an AI girlfriend or boyfriend, but, if you’re like most people, you already have a helpful online sidekick. “Being highly useful is the subtlest form of seduction there is,” Reynolds writes.

He also wants people to remember that, even if they feel fortified against AI’s emotional manipulation now, they could soon be up against an entirely new beast that will know them and their vulnerabilities better than they could have ever imagined.

As Reynolds puts it, “The machines get better every year, but people stay the same.”
