How AI Is Making Tax Scams Harder to Spot
Tax time is high season for scammers, who target taxpayers with fraud campaigns ranging from IRS impersonation to "ghost" preparers who take your money and run.
In fact, the IRS publishes its annual "Dirty Dozen," a list of the tax scams consumers should be especially aware of.
And this year's list reveals that, as with romance scams and scams targeting travelers, AI is making tax scams appear more sophisticated, and harder to spot.
One of the IRS' top scams for 2026 is AI-powered impersonation by phone: fraudsters are using AI tools for voice cloning and caller ID spoofing so recipients believe they are talking to a legitimate IRS representative.
These forms of impersonation are so good that it's difficult to distinguish between what's fake and what's real.
Scammers may call you about your tax bill and demand payment, or tell you that your information is being used in a crime and ask you to verify sensitive details.
Of course, AI facilitates other forms of impersonation, spoofing, and phishing.
It's easy to set up an AI-generated fake website (such as for the IRS or other organizations that provide tax prep or support services) that looks nearly identical to the real thing, and which scammers can use to harvest personally identifiable information and login credentials.
The same goes for other communication, such as text messages and notices sent via email or snail mail.
AI isn't just an external threat.
Researchers at McAfee found that 30% of taxpayers plan to use an AI tool like ChatGPT to help them prepare their taxes.
Not only can a chatbot provide incorrect (if relatively harmless) tax information, it could also put personal data at risk in the event of a data breach.
Taxpayers may be particularly vulnerable to these scams because the possibility of getting into trouble with the IRS is especially scary, and fraudsters capitalize on this fear.
People may be more likely to act on an urgent message that comes with the threat of financial penalties, wage garnishment, or a lien being placed on their home.
And since AI can make a scam message sound more human, believable, and trustworthy than one written by an actual human, the typical red flags like poor grammar and odd language are no longer reliable indicators.
