What Are AI Scams?
Artificial intelligence (AI) is a type of computer technology that can learn patterns and create things that look, sound, or read like they were made by a real person. While AI has many helpful uses, scammers have quickly adopted it to make their schemes more convincing than ever before.
Here is what you need to know in plain language:
- AI can clone someone's voice using just a few seconds of audio. A scammer can make a phone call that sounds exactly like your grandchild, your child, or your spouse.
- AI can create fake videos (called "deepfakes") that show a real person appearing to say things they never actually said. These can look very convincing.
- AI can write emails and messages that are polished, personal, and free of the spelling errors that used to be telltale signs of a scam.
- AI can create fake photos of products, people, and documents that do not exist.
The key takeaway: you can no longer trust that something is real just because it looks or sounds real. But do not worry. There are practical steps you can take to protect yourself, and this guide covers all of them.
Did you know? AI voice cloning technology has advanced so rapidly that a convincing clone can now be created from as little as 3 seconds of audio from a voicemail, social media video, or phone call.
AI Voice Cloning Scams
Voice cloning scams are the fastest-growing type of AI scam, and they are particularly devastating because they exploit the trust you have in your loved ones' voices.
How Voice Cloning Scams Work
- The scammer gets a voice sample. They find audio of your family member from social media posts, YouTube videos, voicemail greetings, or even a brief phone call where they pretend to have the wrong number.
- AI creates a voice clone. Software analyzes the voice sample and creates a digital copy that can say anything the scammer types.
- The scammer calls you. You hear what sounds exactly like your grandchild, child, or spouse. They say they are in trouble, there has been an accident, they have been arrested, or they are in the hospital.
- They ask for money urgently. They need bail money, a hospital payment, or money to get home, and they beg you not to tell anyone else in the family. They ask you to send money by wire transfer, gift cards, or a payment app.
Real Example
A grandmother in Arizona received a call from someone who sounded exactly like her granddaughter, crying and saying she had been in a car accident and was being held by police. A man then got on the phone claiming to be a lawyer and demanded $9,000 for bail. The voice was so convincing that the grandmother went to the bank before a family member intervened and called the real granddaughter, who was safe at home.
How to Protect Yourself
- Hang up and call back. If you receive a distressing call from a loved one, hang up and call them directly at their known phone number. If they do not answer, call another family member to verify.
- Ask a personal question. Ask something only the real person would know, such as the name of their first pet or what you had for dinner last Sunday. A voice clone can only say what the scammer types, so the caller cannot answer details the scammer has no way of knowing. If they dodge the question or get it wrong, hang up.
- Establish a family code word. Choose a secret word or phrase that your family members can use to prove they are really who they say they are (more on this in a later section).
- Be suspicious of urgency. Scammers create panic so you act before thinking. A real emergency will still be an emergency in five minutes, after you have verified the situation.
Warning: Scammers often say "Do not tell Mom and Dad" or "Do not tell anyone." This is a manipulation tactic designed to prevent you from verifying the story. Always verify, no matter what the caller says.
Deepfake Video Scams
Deepfake videos use AI to create realistic-looking videos of people saying or doing things they never actually did. While deepfake technology was once limited to Hollywood studios, it is now available to anyone with a computer.
Where You Might Encounter Deepfakes
- Celebrity endorsement scams. Videos of famous people (news anchors, actors, business leaders) apparently endorsing a product, investment opportunity, or cryptocurrency. These are frequently shared on Facebook and YouTube.
- Video call impersonation. A scammer may use deepfake technology during a video call to impersonate a company executive, bank representative, or government official.
- Fake news reports. Fabricated news clips that show trusted anchors reporting on fake stories, often used to lend credibility to investment scams or political misinformation.
- Romance scam video calls. Scammers in romance schemes use AI-generated video to "prove" they are the attractive person in their profile photos.
How to Spot Deepfake Videos
While deepfakes are getting better, they still have telltale signs if you look carefully:
- Watch the eyes. Deepfakes often have unusual blinking patterns (too little or too much) and the eyes may not track naturally.
- Check the edges of the face. Look where the face meets the hair and ears. Deepfakes sometimes show blurring, flickering, or unnatural edges in these areas.
- Listen to the audio. The voice may sound slightly robotic or have inconsistent volume. The lip movements may not perfectly match the words.
- Look for inconsistencies. Earrings, glasses, teeth, and hair may look slightly off. Background items may warp or shift unnaturally.
- Check the source. Is this video on the official website of the person or organization? If it is only on social media or an unfamiliar website, it may be fake.
Tip: If you see a video of a celebrity endorsing a product or investment, go to that person's official website or verified social media account. If the endorsement is real, it will be there. If it is not there, the video is a deepfake.
AI-Written Phishing Emails
For years, one of the easiest ways to spot a scam email was poor grammar, spelling mistakes, and awkward phrasing. AI has eliminated that red flag. Modern AI can write emails that are polished, personal, and nearly impossible to distinguish from legitimate messages.
What Has Changed
- Perfect grammar and spelling. AI-written scam emails no longer contain the obvious errors that used to give them away.
- Personalized content. AI can scrape your social media profiles and create emails that reference your real interests, recent purchases, or family events.
- Matched writing style. AI can mimic the writing style of your bank, employer, or a company you do business with, making fake emails look identical to real ones.
- High volume. AI allows scammers to create thousands of unique, personalized phishing emails quickly, rather than sending the same generic message to everyone.
How to Protect Yourself
Since you can no longer rely on poor writing quality to identify scams, focus on these other indicators:
- Check the sender's email address carefully. The display name might say "Bank of America" but the actual email address might be something like "notice@boa-secure-update.com" instead of a legitimate bankofamerica.com address.
- Never click links in unexpected emails. If your bank emails about a problem, open a new browser window and go to your bank's website directly by typing the address yourself.
- Be suspicious of urgency. "Your account will be closed in 24 hours" or "Immediate action required" are pressure tactics. Real companies give you reasonable time to respond.
- Verify by phone. If an email asks you to take any action involving money or personal information, call the company at a number you know is legitimate (from their official website or your account statement, not from the email).
- Look for emotional manipulation. AI-written scam emails are designed to trigger fear, excitement, or curiosity. If an email makes you feel a strong emotion, slow down and think critically.
Warning: AI can also create convincing fake websites that look identical to your bank or favorite store. Always check the web address carefully before entering any login or payment information.
How to Verify What Is Real
In a world where AI can fake voices, videos, emails, and photos, verification is your most important defense. Here is a simple framework you can use any time something feels off:
The STOP-VERIFY Method
- Stop. Pause before taking any action. Do not respond, click, or send money while you are feeling panicked or pressured.
- Think. Does this make sense? Would your grandson really call from a strange number asking for money without telling his parents? Would your bank really email you demanding your password?
- Obtain a second source. Look for the same information from a different, trusted source. If you received a phone call, hang up and call the person directly. If you received an email, go directly to the company's website.
- Proceed only when verified. Only take action after you have confirmed the request is real through a channel you trust.
Tools That Can Help
- Reverse image search: If you receive a photo that seems suspicious, go to images.google.com, click the camera icon, and upload the photo. Google will show you where else that image appears on the internet. Scammers often use stolen photos.
- Caller ID apps: Apps like Truecaller (free) can identify unknown callers and flag numbers that have been reported as scams by other users.
- Email header analysis: If you suspect an email is fake, you can view the full email header (usually under "View Source" or "Show Original" in your email settings) to see where it really came from. Ask a tech-savvy family member for help with this.
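For the tech-savvy family member mentioned above, the display-name trick is easy to demonstrate. Below is a minimal sketch using only Python's standard library; the sender address is an invented example of a spoofed "From" line, not a real sender:

```python
# A rough sketch of what "email header analysis" catches.
# The From line below is a made-up example of a spoofed sender;
# bankofamerica.com is used purely for contrast.
from email.utils import parseaddr

# Split the header into the friendly display name a mail program
# shows and the real address hiding behind it.
display_name, address = parseaddr('"Bank of America" <notice@boa-secure-update.com>')
domain = address.split("@")[-1]

print("Display name:", display_name)   # Bank of America
print("Actual address:", address)      # notice@boa-secure-update.com
print("Actual domain:", domain)        # boa-secure-update.com

# The friendly display name does not match the real sending domain:
# that mismatch is exactly the red flag described above.
is_suspicious = domain != "bankofamerica.com"
print("Suspicious?", is_suspicious)    # True
```

The point is not to run code on every email, but to understand that the name shown in your inbox and the address that actually sent the message are two different things, and only the second one matters.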
Tip: When in doubt, wait. Scammers need you to act quickly. Legitimate requests can always wait an hour or a day while you verify. If someone pressures you to act immediately, that itself is a warning sign.
The Family Code Word System
One of the most effective defenses against AI voice cloning is establishing a secret code word or phrase with your family. This is a word that only your family knows and that you can ask for during any unusual phone call to verify identity.
How to Set It Up
- Choose a code word or phrase. Pick something memorable but not guessable. Avoid names, birthdays, or anything that could be found online. Good examples: "purple lighthouse," "Tuesday banana," or "grandma's rocking chair."
- Share it in person. Tell the code word to family members face-to-face or during a regular phone call, never by text or email where it could be intercepted.
- Agree on the rules. Everyone in the family should know: if someone calls with an emergency and cannot provide the code word, you hang up and call them directly at their known number.
- Change it periodically. Update the code word every six months to a year, or immediately if you think it may have been compromised.
How to Use It
If you receive an unexpected call from a family member who says they are in trouble, simply say: "What is our family code word?" An AI clone will not know the answer. If the caller cannot provide it, hang up immediately and call the family member at their known phone number.
Talk to Your Family This Week
Do not put this off. AI voice cloning scams are happening right now and the technology is only getting more convincing. Bring this up at your next family dinner, phone call, or group text. It takes five minutes to set up and could save you thousands of dollars.
Did you know? Many law enforcement agencies and cybersecurity experts now recommend the family code word system as one of the simplest and most effective defenses against AI voice cloning scams.
AI Scam Protection Checklist
Print this checklist and review it with your family:
- I have established a family code word with my immediate family members
- I know to hang up and call back if I receive a distressing call from a "loved one"
- I do not trust videos of celebrities endorsing products without checking their official website
- I check the actual email address of senders, not just the display name
- I never click links in unexpected emails; I go directly to the company's website instead
- I know that good grammar and spelling no longer mean an email is safe
- I verify any unusual request for money through a second channel (phone call, in person)
- I do not share voice recordings, videos, or personal details publicly on social media
- I have reviewed my social media privacy settings to limit what strangers can see
- I take a pause before acting on any urgent request, whether by phone, email, or text
- I know that no legitimate organization will demand immediate payment by gift card, wire transfer, or cryptocurrency
- I have told my family and friends about AI scams so they can protect themselves too
Tip: Share this guide with family members and friends. AI scams are new enough that many people have not yet heard of voice cloning or deepfakes. A few minutes of awareness can prevent thousands of dollars in losses.