The FBI's Internet Crime Complaint Center released its 2025 report in April. The headline figures, more than one million complaints and $20.9 billion in reported losses, are up 26 percent year over year. Inside that headline, two trends are doing most of the work. AI-enabled fraud explicitly accounts for $893 million in reported losses across roughly 22,400 complaints. Government-impersonation fraud nearly doubled, from around 17,300 complaints in 2024 to over 32,400 in 2025, with losses jumping from about $405 million to roughly $797 million. (FBI press release on the 2025 IC3 report, Nextgov on government impersonation doubling)
For small business owners, the operational reading of those numbers is straightforward. The familiar forms of fraud (phishing emails, urgent invoices, fake supplier requests) are no longer the primary risk. They have been displaced by something harder to detect: AI-generated content tailored to your specific situation, often delivered through voice or text channels that used to be relatively safe.
The reflexes most owners trained ten or fifteen years ago do not work against this. The new ones are not complicated, but they have to be in place before the attack arrives. This is the operational checklist I would build into any small business this quarter.
Why "watch for spelling errors" stopped being useful advice
The traditional fraud playbook had two main tells. First, the email had typos, weird grammar, or an obviously machine-translated quality. Second, the request was suspiciously generic - "dear customer," "your account," "this important matter."
Both tells came from the same constraint: human attackers were a bottleneck on volume. Writing a thousand bespoke emails a day was not economical for a scammer, so the economics pushed them toward generic templates that anyone could see through with a moment's attention.
That bottleneck is no longer real. An AI agent can produce a thousand bespoke emails per hour, each tailored to its target's name, employer, recent business activity, and communication style. The text quality is indistinguishable from that of a careful human writer. The personalization is closer to "we know what you bought last week" than to "dear customer." The old tells have stopped being tells.
Reinforcement Learning from Human Feedback, the same training technique that makes ChatGPT and Claude fluent, works just as well on the kind of scam content that used to be obvious. The output is, quite literally, optimized against aggregated human judgments of what reads as natural and persuasive. When a scammer writes their own pitch, they are limited by their personal creativity. When they use a model, they are drawing on the distilled preferences of thousands of human raters about what holds attention. (SecureWorld coverage of FBI 2025 AI-fraud findings)
That is the structural change you have to absorb before any specific tactic.
The voice-cloning piece, in plain neuroscience
The single highest-value attack vector is voice cloning. The FBI's 2025 report calls out distress scams specifically, with $5 million in losses identified, and notes the tactic has spread well beyond the classic grandparent scam.
The reason voice cloning is so effective is not that scammers are getting cleverer. It is that human threat detection is built around voice recognition that runs faster than conscious thought. A familiar voice triggers a sense of safety in well under 200 milliseconds. That shortcut evolved when recognizing a known voice in the dark was the difference between safety and danger. Your brain trusts a familiar voice before your prefrontal cortex even joins the conversation.
A scam call that uses a cloned voice of someone you know exploits exactly that pathway. By the time you are consciously evaluating whether the situation makes sense, the trust signal has already fired. You are not gullible if you fall for a well-executed voice clone scam. You are running thirty-thousand-year-old hardware against a twenty-first-century attack tool.
The implication is that "be more careful" is not a strategy. The brain does not let you choose to be more careful at sub-200ms timescales. The strategy is to insert a deliberate mechanical step that runs after the trust signal fires - one that does not require you to second-guess the voice in real time, only to follow a rule.
The OpenClaw context
The reason the attack surface scaled this hard, this fast, is not that AI got smarter. It is that AI got installable.
OpenClaw, the open-source autonomous agent framework that hit 369,000 GitHub stars within five months of its November 2025 launch, is the reference example. It runs on a laptop, accesses your browser, your email, and your messaging apps, and works on your behalf around the clock for free. It is the productivity tool of the year. It is also, with no modification, the scam-volume tool of the year. The same mechanism that lets a small business owner have an agent reply to customer emails overnight lets a scammer have an agent run a hundred personalized phishing campaigns simultaneously. (NVIDIA blog on what OpenClaw means for organizations, byteiota on the GitHub-stars record)
The change is not that AI is now possible. The change is that deploying AI now takes under an hour for anyone with a laptop. The barrier to running a sophisticated, multi-channel, voice-and-text-blended scam campaign has dropped to roughly the cost of a small consumer GPU.
That economic shift is the substrate for the FBI's 26 percent year-over-year loss increase. It will probably not slow down in the next 12 months.
Six reflexes worth building this quarter
These are not exotic. They are mechanical rules that, if practiced once or twice, become the kind of reflex that runs after the trust signal fires but before the money moves.
1. The callback rule, for any voice contact about money or urgency.
Whenever a phone call asks you to take a financial action, send funds, or release information, hang up and call back using a number from your existing records. Not the number that just called. Not the number the caller gave you. The number you saved before the call.
This rule is the single most effective defense against voice cloning. The cloned voice can sound exactly like your daughter, your CFO, your accountant, your bank's fraud department. It cannot survive your independent callback. The few seconds of friction is the entire defense.
The reason this works is that the callback runs through your own records rather than through anything the attacker controls. Even if the caller ID was spoofed to display a legitimate number, dialing out from your saved contact creates a fresh connection to the real party, bypassing the attacker's pipeline entirely. If the request was legitimate, the person you reach via your saved number can confirm it.
This is also the rule the FBI's 2025 IC3 advisories recommend explicitly. (IC3 PSA on AI-facilitated impersonation)
2. The dual-channel rule for wire transfers and any change to payment instructions.
If a vendor, supplier, customer, or internal stakeholder requests a wire transfer or asks to change payment instructions, confirm the request through a second channel that is not the same one the request arrived on. If the email asks you to wire to a new account, do not reply to the email. Pick up the phone (using a saved number, see rule 1) and confirm.
Business email compromise with confirmed AI components generated more than $30 million in 2025 reported losses against small and mid-sized businesses. The loss in any individual case is often in the tens or hundreds of thousands of dollars. A five-minute confirmation call is cheap by comparison.
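The dual-channel rule is mechanical enough to write down. A minimal sketch in Python, with hypothetical channel names and an illustrative (not standard) set of trusted channels:

```python
# A minimal sketch of the dual-channel rule. The channel names and the
# set of trusted channels are illustrative placeholders; the point is
# only that confirmation must arrive on a different, trusted channel
# than the one the request itself arrived on.

TRUSTED_CHANNELS = {"phone_saved_number", "in_person", "portal_typed_url"}

def may_change_payment_details(request_channel: str, confirm_channel: str) -> bool:
    """Allow a payment-instruction change only when it was confirmed on a
    trusted channel that is not the channel the request arrived on."""
    if confirm_channel == request_channel:
        return False  # replying on the same channel proves nothing
    return confirm_channel in TRUSTED_CHANNELS

# An emailed request "confirmed" by replying to the email: rejected.
print(may_change_payment_details("email", "email"))               # False
# The same request confirmed by calling a saved number: allowed.
print(may_change_payment_details("email", "phone_saved_number"))  # True
```

The design choice worth noticing: the attacker controls the inbound channel, so any check that stays on that channel is worthless. The rule only admits channels you established before the request existed.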
3. The "did I initiate this" reflex.
For any inbound communication that triggers urgency - the IRS calling, a court summons, a vendor demanding immediate payment, a sweepstakes notification, a missing-person social media post - the first question is "did I initiate this?" Did I call the IRS, or did the IRS call me? Did I file suit against this party, or did they file against me? Did I enter a sweepstakes, or did they email me?
Inbound contact about urgent matters is the dominant pattern in impersonation fraud. Government agencies do not call to demand wire transfers. Courts do not text summonses. Banks do not email links to "verify" your account. If the contact was inbound and the matter is urgent, treat that combination as the alarm.
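The first three reflexes compose into a single mechanical check. A toy sketch, with invented field names, of how the "did I initiate this" question gates everything else:

```python
# A toy encoding of the "did I initiate this" reflex. The field names
# are invented for illustration; the rule itself comes from the text:
# inbound contact plus urgency is the alarm combination.

def triage(initiated_by_me: bool, urgent: bool, involves_money: bool) -> str:
    if not initiated_by_me and urgent:
        # The dominant impersonation-fraud pattern: stop before acting.
        return "stop: verify through a saved channel before doing anything"
    if involves_money:
        # Money always gets a second-channel confirmation (rules 1 and 2).
        return "confirm via second channel"
    return "handle normally"

# An inbound, urgent demand for payment triggers the full stop.
print(triage(initiated_by_me=False, urgent=True, involves_money=True))
# A payment you initiated yourself still gets the dual-channel check.
print(triage(initiated_by_me=True, urgent=False, involves_money=True))
```

The branch order is the point: urgency on inbound contact outranks every other consideration, because time pressure is the lever every impersonation scam depends on.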
4. A shared verification phrase with anyone who has authority over your money.
Establish a per-relationship verification phrase with the small set of people who have authority over your finances. A business partner. A spouse. A CFO. An accountant. A specific phrase that the legitimate person knows and an attacker would not.
If you get a call from any of them asking for an unusual financial action, ask for the phrase. If they cannot provide it, treat the call as fraudulent and fall back to the callback rule.
This sounds paranoid. It is not. It is the same operational pattern that intelligence agencies have used for decades for high-trust verbal communication, and it is now appropriate for any small business with funds material enough to be a target.
5. Deferred response on anything that arrived perfectly tailored to your specific situation.
If a message arrives that seems unusually well-suited to your business, your role, your recent activity, or a specific person you know, that is no longer a reassuring signal. Sophisticated attacks are now precisely targeted. The "this person really gets my situation" feeling is the new attack vector.
The reflex is to wait. Twenty-four hours of delay on a non-urgent request costs nothing. It also breaks the time-pressure dimension of most scams. If the request remains the same in 24 hours, evaluate it normally. If the request has evaporated or escalated, you have your answer.
6. The clean-channel reflex for email and document review.
Use a known-good email client (Gmail web, Outlook web, your email provider's official mobile app) when reviewing anything financially material. Do not preview attachments in third-party apps that may render dangerous content. Do not click links in messages that ask you to log in to a service. Always navigate to the service yourself, by typing the URL or using a saved bookmark.
This is the reflex that defends against phishing-driven account takeover, which is still the entry point for many of the larger fraud cases the FBI tracks.
Operational hygiene in the background
The reflexes above are the personal layer. There is also a business-side hygiene layer that compounds with them.
- Two-factor authentication on every account that touches money. Hardware keys are stronger than SMS, and SMS is stronger than nothing.
- A documented payment authorization workflow, even if "documented" means a one-page Google Doc that everyone with bank access has read.
- Vendor changes go through a documented review, not a single email confirmation.
- Quarterly review of who has access to what. Account access drift is one of the underrated fraud entry points.
- A no-blame internal culture for reporting near-misses. The most dangerous outcome of a near-miss is silence, because silence prevents the rest of the team from updating its reflexes. (CISA on small business cybersecurity practices)
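The "documented payment authorization workflow" in the second bullet does not have to be elaborate. As a sketch, with a placeholder $5,000 threshold and illustrative checks, the entire policy can fit in a few checkable rules:

```python
# A sketch of a minimal payment-authorization policy. The threshold and
# the specific checks are placeholders, not a recommendation; what
# matters is that the rules are written down and checkable, not held in
# anyone's head.

def required_checks(new_payee: bool, amount: float, urgent: bool) -> list[str]:
    """Return the verifications a payment must clear before it goes out."""
    checks = []
    if new_payee:
        # New or changed bank details are the classic BEC target.
        checks.append("callback on saved number plus second approver")
    if amount > 5_000:
        checks.append("second approver")
    if urgent:
        # Urgency is the scam lever, so it earns extra friction, not less.
        checks.append("24-hour hold unless independently verified")
    return checks

# A routine $12,000 wire to a known payee still needs a second approver.
print(required_checks(new_payee=False, amount=12_000, urgent=False))
```

Note the deliberate inversion in the last rule: a request marked urgent gets more friction rather than less, which is exactly the opposite of what the attacker is counting on.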
If the personal reflexes are the seatbelt, the operational hygiene is the roadworthy vehicle. Both are needed.
What this is not, in honest terms
This post is not arguing that small business owners should be paranoid in their daily life. The point is that fraud has industrialized, and the response has to be operationalized at the same level. Six mechanical rules consistently applied are far more effective than constant vigilance, because vigilance fades and rules do not.
This post is also not arguing that AI is bad, or that OpenClaw should not exist, or that the agent infrastructure shipping in 2026 is a net negative. The opposite case is also strong. Autonomous agents are a productivity step-change, the same way electricity was. The economic value to legitimate users is enormous. The window to learn how to use these tools well is short, as covered in the broader infrastructure shift on this site.
Both things are true at once. The technology is genuinely useful. The technology has also lowered the cost of running fraud. The right response is not to abandon the tools. The right response is to build the reflexes that operate alongside them.
Related reading
- The Conversation Has Moved Past The Model for the agent infrastructure shift that scaled the attack surface
- The Best MCP Servers By Industry for the productivity side of the same agent infrastructure
- Your Site Might Be Invisible To ChatGPT on the related infrastructure shift in AI-driven content access
- What Actually Fixed My Claude Code Sessions on the discipline of operating with AI tools day-to-day
Fact-check notes and sources
- FBI Internet Crime Report 2025: FBI press release, SecureWorld coverage of AI fraud topping $893M, Nextgov on government impersonation doubling, SpyCloud on key takeaways for security teams, Biometric Update coverage, AARP report summary
- IC3 advisories on AI fraud reflexes: Senior US Officials Impersonated PSA, Generative AI financial fraud PSA
- OpenClaw context: NVIDIA on what OpenClaw means for organizations, byteiota on the 369K GitHub-stars record, OpenClaw on GitHub, OpenClaw blog introduction
- Operational hygiene reference: CISA cybersecurity best practices, FTC small business guidance on phishing and BEC
This post is informational, not security or legal advice. The reflexes above are widely recommended industry practices but cannot eliminate fraud risk on their own. For incident response, contact local law enforcement and file a report at ic3.gov. For business cybersecurity, consult a qualified specialist for guidance specific to your operations.