Report reveals AI chatbots are doxxing users' real phone numbers
AI chatbots are turning into unintentional snitches — and in some instances, they’re handing out real people’s phone numbers to complete strangers.
Privacy specialists are sounding the alarm over a disturbing pattern dubbed “AI doxxing,” where bots like Google Gemini and OpenAI’s ChatGPT surface personal contact data without consent.
One Reddit user said their nightmare started when Google’s AI allegedly began giving out their personal number as a placeholder for businesses and services.
“Strangers are calling me constantly looking for a lawyer, a product designer, a locksmith – you name it,” the user wrote, adding that callers kept saying: “I got your number from Google’s AI.”
The Redditor called it a “massive privacy violation and data leak,” saying their phone had become a nonstop hotline for confused strangers and that “my daily life is being completely disrupted.”
“Gemini’s problem is not a defect. It’s the result of years of unchecked data brokerage practices meeting generative AI,” a spokesperson for privacy firm ClearNym told The Independent.
They noted that years of harvested personal data are now colliding with AI systems trained on large web datasets.
“It now returns as accurate copies or even fabrications and, most recently, as ‘placeholder’ phone numbers for any number of strangers,” they warned.
And it’s not just random glitches causing chaos.
Virgin Media O2 also recently reported that scammers are planting fake customer-service numbers online for AI chatbots to regurgitate back to users.
“Criminals know when people search for help, they’re often looking for a quick answer,” said Murray Mackenzie, the company’s fraud prevention director.
“AI tools are creating new opportunities for fraudsters to create realistic-looking fake numbers that appear through search results or chatbots, putting people at risk of calling a criminal rather than their trusted provider.”
AI chatbots are apparently doxxing users by coughing up real phone numbers to strangers — sparking fresh privacy fears over tools like Google Gemini and ChatGPT. fizkes – stock.adobe.com
Researchers at AI security company Aurascape told The Independent that scammers accomplish this by “seeding poisoned content” across the web.
“Attackers are quietly rewriting the web that AI systems read,” said lead security researcher Qi Deng.
“When you ask an assistant how to call your airline, it does exactly what it was designed to do, but with a customer support and reservations number that leads straight to a scammer instead of the real company.”
Other instances seem even more invasive.
MIT Technology Review reported that Gemini mistakenly listed Israeli software engineer Daniel Abraham’s personal number as customer support for a payment app.
Meanwhile, researchers at the University of Washington found Gemini could expose personal contact data with alarming ease.
“One day, I was just playing around on Gemini, and I searched for Yael Eiger, my friend and collaborator,” said PhD student Meira Gilbert.
Gemini also surfaced her personal cell number. “It was shocking,” Gilbert said.
Researchers warn scammers are poisoning the web with fake contact data that AI chatbots regurgitate back to users — while other tests found Gemini surfacing real people’s personal phone numbers with alarming ease. sitthiphong – stock.adobe.com
Her colleague, Yael Eiger, said the information technically existed online before, but it was buried deep enough that virtually no one would find it.
“Having your information be … accessible to one audience, and then Gemini making it accessible to anyone” feels utterly different, Eiger said.
DeleteMe CEO Rob Shavell told the outlet that complaints about AI exposing personal data have surged recently, with customers reporting chatbots revealing “accurate home addresses, phone numbers, family members’ names, or employer details.”
A spokesperson for Google told MIT Technology Review that the company has safeguards in place to prevent personal data from appearing in AI features and that it reviews requests for removal.
Still, some users say help has been hard to come by.
“Standard support forms are a complete dead end,” the aforementioned Redditor wrote. “I haven’t received a single response, and the harassment continues daily.”
The AI privacy mess comes as scammers are increasingly weaponizing the technology in other alarming ways, too.
As previously reported by The Post, Long Island officials recently warned that fraudsters are using AI voice-cloning tools to impersonate victims’ grandchildren in desperate phone calls targeting seniors.
The scammers allegedly scour TikTok and other social media platforms for videos of young people talking, then use the audio to generate realistic fake voices demanding bail money or emergency cash.
The AI privacy nightmare comes as scammers are also using voice-cloning technology to impersonate victims’ grandchildren in panicked calls targeting seniors. Eduardo Accorinti – stock.adobe.com
“They’re always trying to stay a step ahead,” Suffolk County Police Commissioner Kevin Catalina previously told The Post.
Catalina warned that the schemes are becoming “more and more sophisticated” as AI advances, with elderly victims losing thousands of dollars to convincing synthetic voices and spoofed phone numbers.