WhatsApp AI assistant committed a ‘terrifying’ blunder

By Olivier Acuña Barba
Published: 18 Jun 2025 • 21:50
• 3 minutes read

Meta AI is violating the privacy of users who aren’t even on its social media platforms. Credit: Shutterstock

Mark Zuckerberg, CEO of Meta, has called Meta AI the “most intelligent AI assistant you can use for free.” Barry Smethurst, a 41-year-old record shop employee from Saddleworth, would beg to differ after an incident he described as “terrifying”.

Stranded on a platform while waiting for a morning train to Manchester Piccadilly, Barry turned to Meta’s brand-new WhatsApp AI assistant for help. In the end, he was given a phone number he had no right to see.

Barry was expecting helpful staff when the chatbot confidently provided a mobile number for TransPennine Express customer services. Instead, when he called, a bewildered woman in Oxfordshire, 170 miles away, answered and said her number had never been public nor associated with any transit operator. Both parties were embarrassed.

Barry realized too late that the person he called had no connection with his trip, and the woman from Oxfordshire was unwittingly targeted by frustrated travellers looking for train updates.

Barry then asked the chatbot for another number, but again it failed. It served up the private number of James Gray, an Oxfordshire property executive who is not even a WhatsApp user, as The Guardian reported in an article about WhatsApp’s AI feature.

Violating the privacy of people who are not on WhatsApp

Meta’s WhatsApp AI assistant had disclosed a private number for the second time, showing that even a supremely confident chatbot can go horribly wrong.

This gross error has reignited the conversation about artificial intelligence’s reliability, privacy protections and corporate responsibility. The AI is presented as a public benefit: a smart assistant available for free to all of WhatsApp’s millions of users.

Gray questioned whether Zuckerberg’s claim that the assistant is “the most intelligent” held up in this case. Barry’s experience was anything but. After filing a complaint with Meta, he called the episode “terrifying”: “If they made the number up, that would be more acceptable. But the overreach of taking an incorrect number from a database Meta has access to is particularly concerning.”

Stronger controls needed

Meta presents its assistant as a practical and secure innovation, but internal documents reveal some of its limitations. AI systems often draw on information scraped from the internet or from internal databases, and the assistant is supposed to filter out personal information. In this instance, the filtering failed, and a private individual’s information was disclosed to strangers.

Meta could have avoided this incident by tightening controls on personal data or by verifying answers before releasing them.

Meta’s AI is no stranger to mistakes. The company is still grappling with trust, whether it’s chatbots that invent quotes or suggest disallowed actions. Training models to say “I don’t know” rather than invent details, or to properly anonymise personal information, remains as vital as building the models in the first place.

Meta’s AI helper operates in a high-stakes context. WhatsApp is a vital part of billions of people’s lives, helping them stay in touch with friends and family, organise into communities, and, for businesses, serve their customers.

An AI assistant sharing sensitive data such as personal phone numbers is the last thing users want. The company now faces pressure to reinforce filters, audit data access points, and treat user safety as non-negotiable.

Increased regulatory pressure

Meta and other providers of AI language models, including the makers of ChatGPT, are coming under increased regulatory pressure. In Europe and the UK, lawmakers are discussing frameworks such as the Digital Services Act and the AI Act.

In the United States, too, privacy advocates are pushing for explicit rules on data collection and liability. Meanwhile, in the blockchain world, web3 natives advocate for data decentralisation: giving users back control of their data, with the freedom to store, share and monetise it as they wish.

When private numbers find their way into public conversations—even via misfired AI assistants—it nudges governments and regulators into action, and consumer activists into further pressuring officials to adopt rules that protect the end user.

Barry, meanwhile, just wanted the latest train information. His morning commute turned into a cautionary tale of AI hubris, privacy failure and a lack of respect for human rights.



