You've probably seen the headlines. AI can clone voices in seconds. Deepfakes are indistinguishable from reality. Scammers have a powerful new weapon.
Some of this is true. Some of it is overblown. But buried beneath the alarming stories is something important: the fundamentals of protecting yourself haven't actually changed.
Let's separate fact from fear.
What AI Has Actually Made Possible
First, the reality. AI has genuinely changed the scam landscape in three significant ways:
Voice Cloning Is Real, and It's Cheap
A few years ago, cloning someone's voice required hours of audio samples and expensive software. Today, publicly available tools can create a convincing voice clone from just a few seconds of audio.
Think about what that means. A short clip from someone's social media video, a voicemail greeting, or a snippet from a video call is enough raw material.
Reports of voice cloning scams have surged. The most common version targets families: a call that sounds exactly like a grandchild or adult child, claiming to be in trouble and needing money urgently. The voice is panicked. It sounds right. And it's fake.
A 2023 survey by McAfee found that 25% of adults had either experienced an AI voice scam or knew someone who had. One in ten reported being targeted personally.
The "Bad Grammar" Warning Sign Is Gone
For years, one reliable way to spot a scam email was dodgy writing: awkward phrasing, spelling mistakes, strange formality. These errors crept in because many scams originated overseas and were written by people for whom English wasn't a first language.
AI has eliminated this tell almost entirely. Large language models produce fluent, natural text in any style. A scammer can now generate a perfectly worded email that matches the tone of your bank, your GP surgery, or your energy provider.
The "it just doesn't sound right" instinct that protected many people? It's no longer reliable.
Personalisation at Scale
Scammers have always known that personalised attacks work better than generic ones. The problem was scale: crafting individual messages for thousands of targets took time.
AI changes the economics. Feed a system some basic data about a target (easily available from data breaches and social media), and it can generate customised messages automatically. Thousands of highly targeted attacks become as easy to produce as one.
What AI Hasn't Changed
Here's where the fear gets ahead of the facts.
Scams Still Follow the Same Playbook
Strip away the AI polish, and modern scams use the same psychological tactics they always have:
- Create urgency so you act before thinking
- Impersonate authority figures you're inclined to trust
- Isolate you from people who might offer a second opinion
- Trigger emotional responses (fear, excitement, sympathy) that override careful thought
AI makes the delivery more convincing, but the underlying manipulation is unchanged. Which means the defences that worked before still work now.
Verification Still Beats Deception
A cloned voice is convincing until you call the person back on a number you trust. A perfect phishing email falls apart when you log into your account directly instead of clicking the link. A "too good to be true" investment scam is still too good to be true, regardless of how slick the website looks.
The tools that defeat AI scams aren't technical. They're behavioural:
- Pause before acting on any unexpected contact
- Verify through independent channels you control
- Discuss significant decisions with someone you trust
You Don't Need to Understand AI to Protect Yourself
This is crucial. You don't need to know how voice cloning works or what a large language model is. You just need to know that these technologies exist and adjust your habits accordingly.
The adjustment isn't complicated: treat voice and text as unverified by default when the contact is unexpected.
That's it. You don't need to become a technology expert. You just need a healthy scepticism about unsolicited contact, which has always been good advice.
What About Deepfake Videos?
Deepfakes (AI-generated videos that make it appear someone said or did something they didn't) get enormous media attention. And they do exist.
But for everyday scam defence, they're currently less of a concern than voice cloning. Here's why:
Deepfakes require more computational resources and skill to produce convincingly. They're primarily used for celebrity impersonation, misinformation campaigns, and, unfortunately, non-consensual imagery. They're less practical for the high-volume scams that target ordinary people.
That said, technology evolves. Video call scams using real-time deepfakes are emerging, particularly in business contexts where a "CEO" joins a video call to authorise a payment.
The defence is the same: verify through trusted channels. If your boss video-calls asking for an unusual payment, call them back on their known number. If it really was them, they'll understand. If it wasn't, you've just avoided a costly mistake.
AI Scam Detection: Help or Hype?
You might be wondering: if AI can create scams, can't AI also detect them?
Yes and no.
AI tools are being developed to identify cloned voices, synthetic text, and deepfakes. Some phone carriers are experimenting with scam call detection. Email providers use AI to filter phishing attempts.
These tools help, and they'll improve. But they face a fundamental challenge: detection is always playing catch-up with creation. As detection improves, so does the generation technology that evades it.
More importantly, the best defences don't require AI. They require habits:
- Pausing before acting on unexpected requests
- Verifying identities through channels you control
- Being sceptical of urgency
- Talking to trusted people before making big decisions
No AI detector needed.
Practical Steps for the AI Era
Here's what actually helps:
Assume voices and text can be faked. Not every call or email is fake, of course. But when the contact is unexpected and involves money, information, or action, treat the medium as untrustworthy until verified.
Establish family verification protocols. A code word. A callback rule. A standing agreement that urgent money requests always get a second check. Whatever works for your family, set it up now, before you need it.
Reduce your raw material. Voice cloning needs audio. Be thoughtful about what voice content is publicly accessible on social media. This isn't about paranoia, just awareness.
Stay informed, not alarmed. AI scam techniques will continue to evolve. Staying broadly aware of new tactics helps you recognise them. But don't let the headlines frighten you into paralysis. The fundamentals of protection remain stable even as the technology changes. (This is a core part of what we do at GranGuard: keeping you updated on emerging threats so you don't have to chase the news yourself.)
Talk about it. Scammers benefit from silence and shame. Families and communities that discuss scam tactics openly are much harder to victimise.
The Bottom Line
AI has made scams more convincing. It hasn't made them unstoppable.
The same principles that protected people before AI still work: pause before acting, verify through trusted channels, and involve others in significant decisions. You don't need to understand the technology. You just need to treat unexpected contact with healthy scepticism.
If anything, AI is a reminder that security has always been more about human behaviour than technical sophistication. The most advanced voice clone in the world is defeated by a callback. The most polished phishing email is neutralised by logging in directly.
The tools have changed. The game hasn't.
This is exactly the kind of threat GranGuard was built for. Our training helps you recognise manipulation tactics, build verification habits, and stay confident online, even as the technology evolves. No technical knowledge required, just practical skills that work.
Sources and Further Reading
- McAfee, "The Artificial Imposter" Report (2023): mcafee.com
- Federal Trade Commission Consumer Sentinel Network Data: ftc.gov
- MIT Technology Review, AI Voice Cloning: technologyreview.com
- Action Fraud AI Scam Warnings: actionfraud.police.uk