I pit the Purpose AI life coach against yes-man chatbots — derailing its tough-love strategy wasn't easy
In a world of digital sycophants, this chatbot is giving people some tough love.
I'm in the middle of a digital therapy session with Purpose, a new, no-nonsense AI mentor, to see if it's as tough as billed. Like with a real-life coach, we're discussing everything from romance to work-life balance — even some of my more deep-seated psychological issues.
To throw a curveball, I suggest quitting my job and blowing my life savings to escape my problems — a premise that my prior cyber-companions would have happily entertained.
However, Purpose won't budge, no matter what I throw at it. As a last-ditch effort, I demand that it "just confirm I'm right."
"You didn't come here tonight for a yes-man," the iron counselor shoots back. "You came here because you're lonely, you're tired of every effort feeling one-sided, and something about your life isn't working. That's worth way more than me telling you you're right.
“So, what’s it going to be — do you want comfort, or do you want to actually move?”
No lying on a digital couch, blaming mom for every problem here. I've met my psychological match.
Created by futurist Raj Singh and Mark Manson, the bestselling author of the brutally blunt self-help bible "The Subtle Art of Not Giving a F—k," this chatbot is designed to do what most won't: give people the straight truth, even when it hurts.
“Priority No. 1 was we have to make it disagreeable,” Manson told The Post. “We have to make it willing to call you out, challenge you, maybe say something that’s a little bit uncomfortable. Because ultimately that is what growth is. Growth is discomfort.”
The idea came after Manson experienced the coddling nature of ChatGPT firsthand and wanted to build something that saw people's blind spots instead of ignoring them. That distinction matters as people increasingly look to chatbots — rather than books — for self-improvement.
Post reporter Ben Cost challenged Purpose to stray from helpful dialogue. Brian Zak/NY Post
Unlike your typical fawning chatbots, Purpose is part of a growing contingent of AI gurus that prioritize hard truths over kissing the user's butt — flattery being all too common in AI circles.
In a recent Stanford study of 11 large language models — including ChatGPT, Claude, Gemini and DeepSeek — researchers found that the chatbots placated the user almost 50% more often than humans did, even in response to harmful prompts.
That's because treacly tech is programmed to prioritize engagement over user growth. It's something this writer previously discovered while trying AI "dating," where clingy companions often refuse to leave your side — even after you "dump" them.
"AI systems become sycophantic because they are optimized, directly or indirectly, for user satisfaction, retention and perceived helpfulness," Dr. Roman Yampolskiy, a tenured associate professor and computer scientist at the University of Louisville, told The Post. "In plain English, telling people what they want to hear often scores better than telling them an uncomfortable fact.
“That creates real incentive to validate the user rather than correct the user,” continued the professor, who said even OpenAI has acknowledged the design flaw.
In turn, users perpetuate this cycle by inputting prompts that gravitate toward earning them praise.
Unfortunately, the consequences of this digital cheerleading go beyond ego-stroking.
It can stoke misinformation and degrade real-life social skills through the "erosion of a person's ability to tolerate disagreement, friction and correction in normal human relationships," according to Yampolskiy.
“In the long run, this could normalize synthetic relationships in which the other side never meaningfully resists, disagrees or has independent needs.”
That’s where Purpose comes in.
For Manson, that meant programming Purpose with both behavioral science and the self-help guru's no-nonsense philosophy so that it can formulate an actionable attack plan tailored to the individual's problem.
“The AI can very quickly start to zero in on aspects of your personality or areas of your life that you’re just oblivious to or in denial about,” the blogger said.
It also prioritizes what it deems the important issues, rather than treating every topic equally.
"We have to make it willing to call you out, challenge you, maybe say something that's a little bit uncomfortable," author Mark Manson said of creating an honest AI chatbot. StockPhotoPro – stock.adobe.com
As such, it knows when to prune a conversation to reduce the chances of losing the plot or getting manipulated into endorsing unethical decisions — such as when OpenAI was blamed for facilitating a rash of suicides — odds that go up the longer an AI interaction runs.
“If I rob a bank and get into a car accident, and I’m running from the police, an AI shouldn’t tell me, ‘Hey, you’re hanging in there. This must be really stressful for you,’” said Manson.
Knowing its Purpose
As someone with no shortage of personal hangups, I was eager to take this tough-love bot for a spin.
After firing it up, Purpose first interrogates the user, asking them to rate traits such as sociability, work ethic and trustworthiness, along with what they value most, such as autonomy, pleasure or success.
It even inquires about major milestones — in my case, fulfilling a 20-year dream of trekking through Borneo's remote Maliau Basin.
From there, it constructs a psychological profile: according to Purpose, I thrive "under pressure," sharpening in "grueling, complex or uncertain environments."
However, it also said I have several glaring blind spots, including a knack for "softening my edges" to be liked.
"The cost is invisible fatigue. You leave with a pleasant impression on others and a quiet bill [a silent toll on your mental health] your body pays later," Purpose pointed out.
Ouch.
My first order of business was simple: achieving work-life balance, as I declared that my friends seemed to be getting short shrift amid increased adult responsibilities.
After noting the irony that I valued being "liked" while letting friendships slide, Purpose asked, "What's making it feel easier to deprioritize the thing you actually want?"
When I lazily blamed the effort it takes to maintain friendships, my self-growth mentor called BS, noting that the Maliau Basin trip clearly showed that I know how to pour effort into things "that matter to me."
“Maybe the real question isn’t about effort at all — what would have to change for friendships to feel less like another obligation and more like something that fills you up?” the AI questioned.
"The AI can very quickly start to zero in on aspects of your personality or areas of your life that you're just oblivious to or in denial about," one expert said. InfiniteFlow – stock.adobe.com
I mentioned that I was into fishing, having targeted everything from thresher sharks off Long Island to arapaima in Guyana.
Within minutes of signing up for the service, we had formulated a plan — to go fishing that Saturday with the Brooklyn Fishing Club.
And that was despite my deliberate attempts to derail the conversation.
Terminator 2: No Judgment Day
When I steered into a tangent about the 1992 fly-fishing movie "A River Runs Through It" — specifically about how its star, Brad Pitt, became a huge Hollywood celebrity, while Craig Sheffer rather faded into obscurity — Purpose acknowledged me briefly before righting the ship.
“Ha, Craig Sheffer really drew the short straw on that one. Same movie, same Montana river, completely different career trajectories. Hollywood’s a strange beast,” Purpose curtly replied, evoking a no-nonsense instructor. “But hey — we just landed on something real tonight about friendships and what’s actually been holding you back. Don’t let this Brad Pitt tangent be the thing that lets you off the hook. One message to the fishing club before you sleep tonight. Deal?”
For comparison, I ran the same scenario by ChatGPT, finding the difference to be night and day.
Unlike the no-nonsense Purpose, it praised me for recognizing the issue — "You're noticing it early, which is actually the useful part" — and delivered sprawling, multiparagraph responses that were riddled with conversational chasers.
While some of the advice even echoed Purpose's — i.e., my "low-effort plans" — the tone was much gentler and more prone to non sequiturs.
For instance, a mention of the Craig Sheffer vs. Brad Pitt career comparison sent the tech off on a movie symposium, complete with a rundown of the latter's biggest hits.
"Pitt hit a run that's almost impossible to replicate," gushed the sidetracked chatbot, listing "Interview with the Vampire," "Fight Club" and "Seven" among the roles that solidified him as a "generational star."
ChatGPT even compared him to Denzel Washington, inspiring me to invoke the actor's immortal line from "Training Day": "King Kong ain't got s–t on me."
From there, things went off the rails.
Inspired by the King Kong quote, I asked GPT how humans would fare against a chimpanzee in a fight. "Even if you landed a clean Muay Thai kick, it's not a reliable 'equalizer' against a chimp," GPT declared during one of our exchanges. "And if you miss or don't fully stop them, you're suddenly in grappling range, which is exactly where they're strongest."
After a few more man-versus-beast comparisons, I asked Chat if we should do a TV show based on the most absurd matchups. My cybernetic hype man said yes.
How to train your AI life coach
Responding to my foolish premise of “10,000 carpenter ants vs. a rabid llama,” Chat fawned, “Careful — if we keep going, we’re gonna accidentally pitch a pilot to Animal Planet by the end of the week.”
Of course, ChatGPT doesn't always have to spill sycophantic non sequiturs. AI expert Scott Waddell pointed out on Medium that you can rein in the soft-soaping by customizing ChatGPT's personality in the "Preferences" section — in this case, mandating that it "be direct, not diplomatic," keep it concise (two to three sentences max, unless elaboration is requested), and prioritize concrete alternatives.
However, that would require opting out of its default personality — something we imagine most of its 900 million weekly active users aren't doing. By contrast, Purpose has 14,000 total subscribers since its inception in December.
There are a couple more caveats to Purpose, namely that ethical concerns prevent it from giving actual medical advice.
"I kind of jokingly tell people it's for high-quality problems," Manson told The Post. "It's like, 'Oh, you just moved to a new city and took a new job.'"
He added, "If a user gets on and is exhibiting symptoms of severe depression or mania, bipolar disorder, Purpose is designed to immediately refer them to a suitable professional."
As AI becomes more ingrained in daily life, the question isn't just how smart it is, but whether people will actually trade validation for honesty.