The Bots are Calling

Something has shifted in the world of financial fraud. The FBI reported $16.6 billion in internet crime losses for 2024, a 33% increase from 2023. But the more troubling statistic is this: the number of fraud reports stayed flat. Scams are not becoming more common; they’re becoming more effective. According to FTC data, 38% of people who reported fraud in 2024 lost money, up from 27% the year before. The difference? Artificial intelligence.

Experian’s 2026 Future of Fraud Forecast, released in January, paints a stark picture of what lies ahead. The report identifies five emerging threats, and nearly all of them involve AI operating with increasing autonomy. Among the most alarming are bots that run familiar scams with no human behind the keyboard. “Emotionally intelligent bots” powered by generative AI can now carry out romance scams and “relative-in-need” schemes. These bots respond convincingly, build trust over time, and manipulate victims with precision and emotional nuance.

Voice cloning technology has crossed what researchers call the “indistinguishable threshold.” A few seconds of audio, often pulled from social media or voicemail greetings, is now enough to generate a clone complete with natural intonation, pauses, and even breathing. Some major retailers report receiving over 1,000 AI-generated scam calls per day. Voice cloning fraud jumped more than 400% in 2025, and deepfake incidents have exploded from roughly 500,000 in 2023 to an estimated 8 million in 2025.

Experian also warns of a new category of threat: “machine-to-machine mayhem.” As businesses adopt “agentic AI” systems that can take actions autonomously (booking travel, processing transactions, managing workflows), fraudsters are finding ways to exploit these digital agents. When AI initiates transactions without clear human oversight, questions of liability become murky. Experian predicts 2026 will be a “tipping point” that forces serious conversations about regulation and responsibility in this new landscape.

Perhaps most unusual is the rise of deepfake job candidates. Employment fraud is escalating as AI generates polished resumes and creates synthetic candidates capable of passing remote interviews in real time, giving bad actors access to sensitive company systems. The classic “tells” that once exposed fraud (awkward grammar, unnatural pauses, and inconsistent stories) are rapidly disappearing.

In this environment, our defenses must also evolve. Establish family “safe words” for emergency calls. If you receive an unexpected request for money or sensitive information, even if the voice sounds familiar, hang up and call back using a number you know to be legitimate. Be skeptical of any unsolicited contact. The technology may be new, but the psychology remains the same: urgency, secrecy, and pressure are always red flags.

Be safe out there!

Whitney Butler