INDIANA — The digital landscape has shifted, and with it, the tools of deception. Experts are issuing a stark warning to Indiana residents: the rapid evolution of artificial intelligence has made criminal scams cheaper, faster, and nearly impossible to detect.

As of May 2026, the traditional “red flags” of fraud—poor grammar, robotic voices, and generic templates—have largely vanished. In their place is a new generation of AI-driven schemes that are personal, polished, and hitting close to home.
The End of the “Human Lie Detector”
Eva Velasquez, CEO of the Identity Theft Resource Center (ITRC), cautions that the sheer sophistication of modern, “personalized” scams means the average person can no longer rely on intuition alone.

“No one possesses the skills to detect modern scams reliably,” Velasquez emphasized. “The idea that you can act as your own ‘human lie detector’ is a dangerous misconception in the age of AI.”
A High-Tech Threat in Central Indiana
Criminals are leveraging AI to automate outreach while maintaining an eerie level of personalization. In Central Indiana, several specific schemes have gained traction over the last year:
- QR Code Fraud: Scammers are slapping fraudulent QR codes onto parking meters and traffic tickets. Unsuspecting drivers scan the code to pay a fine, only to be redirected to a spoofed site that harvests their credit card information.
- “Pig-Butchering” Investment Scams: Named for the way victims are “fattened up” with trust before being “slaughtered,” these scams use AI to manage long-term fake relationships. Local victims have reported losses exceeding $10,000 to fraudulent cryptocurrency platforms.
- Voice and Video Impersonation: Using “deepfake” technology, criminals can mimic the voice of a bank official or even a family member. One local massage therapist recently lost nearly $19,000 to a scammer using a convincingly cloned voice.
- Recovery Scams: In a cruel twist, those who have already lost money are being targeted by AI bots posing as “recovery services” that promise—and fail—to recover stolen funds for an upfront fee.
Why AI Has Changed the Game
The barrier to entry for cybercriminals has plummeted. AI allows bad actors to generate high-quality phishing lures and fake identities in seconds. By using automation, a single criminal can target thousands of people simultaneously with messages tailored to their specific interests or locations.
Furthermore, voice cloning has turned a simple phone call into a weapon. With just a few seconds of audio from a social media post, AI can create a perfect vocal replica of a loved one in distress.
Low-Tech Habits, High-Value Defense
While the technology is daunting, experts say a few “low-tech” habits are the best defense:
- Pause and Verify: If you receive an urgent request for money or data, stop. Contact the person or institution directly using a verified number from an official website—never use the contact info provided in the suspicious message.
- Establish Family Codes: Create a secret “code word” with family members to verify their identity during urgent or unusual phone calls.
- Report Immediately: If you’ve been targeted, contact the Federal Trade Commission (FTC), the Indiana Attorney General’s Office, or the ITRC.
“Once the money is sent, no one can guarantee it can be recovered,” Velasquez warned. In 2026, the best defense isn’t a smarter algorithm—it’s a healthy dose of skepticism.