ChatGPT’s Sam Altman warns of upcoming global scam epidemic – Deepfake You

Real AI fraud alert: your voice, your face and your money could be next.

In this stock photo, OpenAI’s ChatGPT is displayed on a phone with CEO Sam Altman in the background.

Credit: El editorial, Shutterstock

‘Just because we’re not releasing the technology doesn’t mean it doesn’t exist… Some bad actor is going to release it. It’s coming very, very soon.’

You might want to think twice before trusting that FaceTime call from your mum – or that urgent voicemail from your boss. OpenAI CEO Sam Altman says the age of deepfake scams is not coming; it’s already here – and it sounds exactly like you. And, he warns, we are only at the beginning.

During a recent event in Washington, DC, Altman issued an ominous warning: generative AI will soon allow bad actors to perfectly imitate people’s voices, faces and even personalities – and use them to scam you out of your money, your data, or both. Anyone will be able to do it.

‘Right now, it’s a voice call; soon it’s going to be a video or FaceTime that’s indistinguishable from reality,’ Altman told US Federal Reserve vice chair Michelle Bowman.

So, what’s going on here – and should you be worried?

Fraudulent voiceprints and fake videos: the new weapons

Altman’s concern centres on the fact that some banks and companies still use voiceprint authentication – that is, they let you move money or access accounts just by recognising your voice. Today’s AI tools can clone anyone’s voice from just a few seconds of audio, and there are now dozens of apps – some of them free – that can do it.

Scammers already call people and record their voice when they answer the telephone. Just one sample is enough for them to create a realistic recording of you saying whatever they want.

Combining this with AI-generated videos that are becoming increasingly realistic, scammers can create fake FaceTime and video calls that sound and look like your spouse, boss or child. You’re not just getting a suspicious email anymore – you’re getting a fake person.

Real-world scams: when your ‘son’ isn’t your son

These warnings don’t just exist in theory. AI fraud is happening now.

As reported by CBC Canada, scammers cloned a woman’s son’s voice and called her, claiming he needed to talk. ‘It was his voice,’ she said. A Manitoba mum heard her son’s voice on the phone – but it wasn’t him.

Leann, a mother-of-three from Miami, Manitoba, received an odd call a few months ago from a private phone number. What she heard on the other end stopped her in her tracks – it was her son’s voice, sounding distressed.

“He said, ‘Hi mom,’ and I said hi,” Friesen recalled. “He said, ‘Mom, can I tell you something?’ I said, ‘Yes, sure.’ He said, ‘Without judgment?’”

Alarm bells started to ring.

“I’m getting a little bit confused at that point – like, why are you asking me this?” she said.

Something about the conversation felt wrong. Friesen decided to cut it short, telling the caller she’d ring back on her son’s mobile – and hung up.

She dialled his number immediately.

She woke him up – he had been asleep the whole time, because he worked shifts. “He said, ‘Mom, I didn’t call you.’”

Whoever it was, it wasn’t her son on the other end of the line.

The FBI case and the Hong Kong deepfake video

A Hong Kong finance worker was duped into transferring more than $25 million after a deepfake video call convinced them they were in a conference with their company’s CFO.

According to the FBI, impersonators in the US have used AI-generated phone calls claiming to be from government officials to access sensitive information – in one case, even pretending to be Senator Marco Rubio in calls to foreign diplomats.

What about OpenAI?

Altman says OpenAI doesn’t build impersonation tools. Technically, that’s true – but some of its products could still be put to the wrong use.

OpenAI’s Sora video generator creates realistic videos from text prompts. That is a major leap for AI creativity – and it could be an equally large leap for fraud. Imagine giving it a script and asking for ‘a recording of Joe Bloggs contacting his bank to ask for a password reset’.

Eyeball scanner controversy

Altman also backs Worldcoin’s Orb, a controversial biometric device that scans your eyeball to verify your identity. It is being promoted as a new kind of ‘proof of personhood’; critics say it’s a dystopian answer to a digital problem.

Altman acknowledges that, while OpenAI does not condone abuse, others might not play so nicely.

‘Just because we’re not releasing the technology doesn’t mean it doesn’t exist… Some bad actor is going to release it. It’s coming very, very soon.’

The technology is advancing faster than the law

Governments are still trying to catch up. Although Europol and the FBI have warned of AI impersonation, laws remain patchy worldwide. The UK’s Online Safety Act does not yet cover all synthetic media, and regulators are still debating how to define AI-generated fraud.

Scammers are actively exploiting the lag.

What can you do to protect yourself?

Altman is right to be concerned, but you can protect yourself and your accounts. Here’s what to do today:

  1. Stop using voice authentication: if your bank uses it, ask for a different method. It is no longer safe.
  2. Use unique, strong passwords with two-factor authentication (2FA): choose app-based 2FA over SMS wherever possible. It is still your best protection.
  3. Verify via another channel: if you get a suspicious call or video message – even if it looks real – contact the person separately on a different platform or phone number.
  4. Educate your family: some relatives, especially the elderly, are at higher risk. Make sure they know AI fraud exists.
  5. Be careful what you share online: a few seconds of audio can be enough to create a convincing fake, so avoid posting long videos or voice messages.
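Why prefer app-based 2FA over SMS? Authenticator apps compute each six-digit code locally from a shared secret and the current clock time – the standard TOTP scheme (RFC 6238) – so there is no code travelling over the phone network for a scammer to intercept or socially engineer out of your carrier. A minimal sketch in Python (the Base32 secret below is a made-up example, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, period=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is just the number of 30-second windows since the epoch.
    counter = (int(time.time()) if at is None else at) // period
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret (hypothetical): app and server share it once at enrolment,
# then both derive matching codes from the clock alone.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the server and the app each derive the same code independently, nothing secret is transmitted at login time – which is exactly the property SMS codes lack.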

Final thoughts: we’re no longer in Kansas

AI tools that can accurately imitate your voice and face are not science fiction. These tools are already in use. Sam Altman might be self-serving in his warning, but it’s true: things will only get worse.

And while the fraudsters are moving fast, our institutions – from banks to regulators – are moving painfully slowly.

Until the system catches up, you are your own best security – and a healthy dose of scepticism is your best tool.

So next time your ‘boss’ sends a video message asking for a wire transfer at 4 AM? Sleep on it.



About David Sackler

David Sackler, a seasoned news editor with over 20 years of experience, currently based in Spain, is known for his editorial expertise, commitment to journalistic integrity, and advocacy for press freedom.

