The Fraud Vendor Who Cried Wolf: Inside the Business of Fake Safety
Ninety percent of firms feel protected, yet almost all are getting hit, while pandemic scammers buy Porsches with lunch money.


GM, Welcome Back to the Dead Drop
Let me tell you about a scam more dangerous than any deepfake: the one your security vendor is running on you right now.
In 2024, businesses lost an average of nearly $500,000 per deepfake-related incident, and for large enterprises, losses reached up to $680,000. Those numbers doubled from just two years ago. But here's what keeps me up at night: while 56% of businesses claim they're confident in their ability to detect deepfakes, only 6% actually avoided financial losses.
Read that again. More than nine out of ten "protected" companies got hit anyway.
This is the dirty little secret your fraud prevention vendor doesn't want you to know: the tools you're paying for are theater, not security. And while you're sitting in vendor demos watching AI catch test cases, criminals are draining accounts with techniques your expensive software never sees coming.
The Confidence Con
As you know, I have been in this racket for over twenty years. I've seen every flavor of criminal enterprise from street-level identity theft to nation-state financial warfare. But I've never seen a gap this dangerous between what people believe and what's actually happening.
Despite the growth of cyber fraud methods, 90% of executives report confidence in their ability to spot deepfakes and business email compromise scams. Meanwhile, 90% of U.S. companies were successfully targeted by cyber fraud in 2024.
That's not a coincidence. That's a feature.
Security vendors have built an entire industry on selling confidence instead of competence. They show you pretty dashboards. They demonstrate AI models catching obvious fraud. They present case studies where their tools stopped attacks. What they don't show you: deepfake fraud attempts surged by 3,000% in 2023, and your tools missed most of them.
Here's the mechanism behind this con:
The Testing Gap: Your fraud tools excel at catching fraud they've been trained to recognize. Vendors demonstrate detection rates on historical attack patterns. But criminals aren't rerunning 2023's playbook. Humans correctly identify high-quality deepfake videos only 24.5% of the time, while AI detection accuracy can drop by up to 50% when confronted with new, real-world deepfakes.
The Confidence Cascade: Company leadership buys tools, feels protected, and broadcasts that confidence to the board and throughout the organization. 76% of business owners believe their company can detect threats, but only 47% of managers agree. That gap isn't about perspective; it's about proximity to reality. The people actually working fraud cases know the tools are failing.
The Vendor Incentive Structure: Your security vendor's business model depends on selling you tools, not solving fraud. They optimize for impressive demos and renewal rates, not for stopping next-generation attacks. When losses mount, they sell you upgrades and additional modules rather than admitting their core product has fundamental limitations.
The Post-Pandemic Gold Rush
Coming out of the pandemic, venture capital money flooded into fraud prevention like water rushing through a broken dam. Every startup promised the same thing: we can stop fraud 100%. We've solved the problem. Our AI is different.
Here's what none of them told you: nothing stops fraud 100%. Not their tool. Not anyone's tool. If you actually stopped fraud completely, you'd lock out legitimate customers from earned benefits they desperately need.
Think about what fraud detection tools actually look for. VPNs. Proxies. Bad geolocation. Social Security numbers with typos. Multiple failed login attempts. IP addresses that don't match historical patterns. Devices that look suspicious. Every one of these "warning signs" can also describe a legitimate user having a bad day.
The homeless veteran trying to access VA benefits from a library computer? That's a VPN the library uses for security. The single mother who fat-fingered her SSN while her toddler screamed in the background? That's a typo, not fraud. The traveling businessman trying to file an expense report from a hotel in Singapore? That's an unfamiliar IP address and device.
Your fraud tool flags all of them. And when you lock these people out, you're not stopping fraud. You're denying service to the people you're supposed to serve.
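To make the failure mode concrete, here's a minimal sketch of the rule stack most of these tools boil down to. The signal names, weights, and threshold are illustrative, not any vendor's actual model, but the shape is the point: the same rules that catch fraudsters flag the veteran, the single mother, and the traveler.

```python
# A minimal sketch of a rule-based fraud scorer. Signal names, weights,
# and the threshold are made up for illustration, not any vendor's model.

RULES = {
    "vpn_or_proxy": 30,        # the library's security VPN trips this
    "ssn_typo": 25,            # the stressed parent's fat-fingered SSN
    "geo_mismatch": 25,        # the businessman filing from Singapore
    "unfamiliar_device": 20,   # any new or shared computer
}

THRESHOLD = 50  # at or above this score, the user is flagged

def score(signals: dict[str, bool]) -> tuple[int, bool]:
    """Sum the weights of every tripped rule and compare to the threshold."""
    total = sum(w for name, w in RULES.items() if signals.get(name))
    return total, total >= THRESHOLD

# The homeless veteran at the library: VPN + shared device = flagged.
print(score({"vpn_or_proxy": True, "unfamiliar_device": True}))  # (50, True)
```

Fifty points locks out the veteran, while a patient fraudster working from a clean residential proxy with a well-aged synthetic identity scores zero.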
The Financial Devastation You're Not Seeing
Let's talk numbers that should terrify every CFO reading this.
Nearly 60% of companies said the financial impact of payment fraud in 2024 exceeded $5 million, compared to just a quarter who reported this the previous year, a 136% increase. Consumer losses to fraud jumped to $12.5 billion in 2024, representing a 25% increase over 2023. And those are just reported losses.
The real cost is higher. Much higher.
For every dollar of fraud loss, financial services firms incur $3.41 in additional costs related to labor, investigations, legal fees, and recovery expenses. But even that accounting misses the knock-on effects: reputational impact with customers, investors, vendors, and suppliers is what keeps executives up at night.
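Do the arithmetic on that multiplier: a $5 million direct loss carries roughly another $17 million in labor, investigation, legal, and recovery costs, north of $22 million all-in, before reputational damage even enters the ledger.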
Here's what's actually happening while your fraud detection tools are "protecting" you:
Deepfake Operations at Scale: In the first half of 2025 alone, deepfake fraud losses reached $410 million, compared to $359 million for all of 2024. This isn't linear growth. This is exponential acceleration. The technology that cost thousands of dollars two years ago now costs pennies and produces results indistinguishable from reality.
Synthetic Identity Networks: An estimated 85-95% of applicants identified as potential synthetic identities are not flagged by traditional fraud models. Your tools are missing the vast majority of identity fraud because they're looking for stolen identities, not manufactured ones.
Cryptocurrency as the Exit Ramp: Cryptocurrency was involved in $2.8 billion of fraud losses, representing 58% of total fraud payments. Once money moves to crypto, recovery becomes next to impossible. Your fraud tools might catch suspicious activity, but by the time they alert you, the funds have already crossed into an irreversible payment system.
Why Your Tools Are Failing
The fundamental problem isn't that your fraud prevention tools are badly designed. It's that they're fighting an arms race they cannot win, and they're optimized for the wrong battlefield.
The AI Paradox: The same machine learning technologies powering your fraud detection are powering the attacks against you. In 2024, generative AI tactics such as deepfakes and deep audio increased by 118%. Criminals are iterating faster than vendors can update models. By the time your tool learns to catch today's deepfakes, criminals are already using techniques that won't be widely understood for another six months.
The Millisecond Problem: Here's what the fraud prevention vendors don't want you to understand: their tools have to make decisions in milliseconds. When a transaction happens, the machine has a fraction of a second to analyze hundreds of data points and decide: legitimate or fraud?
In that timespan, a machine cannot detect the finer points of a fraudulent transaction. It's looking for patterns, anomalies, statistical outliers. It's not conducting an investigation. It's running a probability calculation. And when you force complex human behavior into binary yes/no decisions measured in milliseconds, you get it wrong. A lot.
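Strip away the marketing and the decision loop looks roughly like this. It's a deliberately minimal sketch: the features, weights, and cutoff are placeholders rather than any product's scoring function, but it shows how "analysis" collapses into a single threshold comparison.

```python
import math

# A minimal sketch of the millisecond decision described above. The model,
# features, and cutoff are placeholders, not any specific vendor's scoring.

def decide(features: list[float], weights: list[float],
           cutoff: float = 0.8) -> str:
    """Collapse hundreds of signals into one probability, then one word."""
    # Whatever model actually runs, it reduces to a score...
    raw = sum(f * w for f, w in zip(features, weights))
    probability = 1 / (1 + math.exp(-raw))  # squash into [0, 1]
    # ...and the "investigation" is this one comparison, in microseconds.
    return "fraud" if probability >= cutoff else "legitimate"

# An unfamiliar IP and device push the traveler in Singapore over the line.
print(decide([1.0, 0.0, 1.0], [0.9, 1.2, 0.7]))  # "fraud"
```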
The Digital Equity Disaster: Digital equity and access should go hand in hand with fraud detection. But many fraud prevention builders don't recognize this, and they'll leave you hanging when a congressman is grilling you on why their constituent was denied a benefit they earned.
Does the homeless veteran on the street have access to a unique smartphone and the ability to take a high-resolution selfie for your liveness detection? Probably not. Does the elderly woman in rural Appalachia have a stable internet connection for your real-time verification process? Unlikely. Does the immigrant applying for benefits have a credit history your tool can verify? Not a chance.
Your fraud tool treats all of these people as suspicious. It flags them. It creates friction. It denies access. And when those denials land on the desk of an elected official, your vendor will blame "edge cases" and offer to sell you another module.
Fraud detection must be tailored to meet your customers where they are. Not where your vendor's ideal user profile says they should be. Not where your AI model was trained to expect them. Where they actually are, with the devices they actually have, using the internet connections they can actually afford.
The Psychology Exploit: Here's what twenty years investigating fraud taught me: the most sophisticated attacks don't try to defeat your technology. They bypass it entirely by exploiting the humans operating the systems.
Consider the Arup case: a finance worker was tricked into wiring $25 million during a deepfake video conference call featuring AI-generated likenesses of the CFO and other senior executives. The fraud tools didn't fail. They were never engaged. The criminals created a scenario where the victim had authorization to make the transfers, so there were no technical red flags.
This is the future of fraud: attacks that exploit process gaps, social dynamics, and human judgment rather than technical vulnerabilities. Companies cite their biggest fraud prevention challenge as employees who don't consistently follow the policies already in place.
The Economic Reality: The deepfake robocall of President Biden that disrupted the 2024 New Hampshire primary cost just $1 to create and took less than 20 minutes. When attack costs approach zero while defense costs remain high, the economic advantage shifts permanently to attackers.
Your fraud vendor is asking you to pay millions annually to defend against attacks that cost criminals dollars to execute. That math doesn't work. It will never work.
Field Manual: Actual Defensive Protocols
Everything I've told you so far is diagnosis. Here's the prescription:
Protocol One: Abandon Blind Trust in Tools
Your fraud detection software is not a solution. It's one input in a decision-making process that must include human judgment, multiple verification layers, and systematic skepticism.
Action: Audit your current fraud prevention stack against this question: "If this tool completely failed, would we know before significant losses occurred?" If the answer is no, you don't have redundant verification systems.
Protocol Two: Build Review Processes for Legitimate Denials
Your fraud tool will generate false positives. That's not a bug; it's an inevitable feature. The question is whether you have processes to identify legitimate users who got flagged and manually override the decision before they give up and go elsewhere.
Action: Track your false positive rate religiously. If you don't know this number, you're flying blind. Institute mandatory human review for any denial that involves benefits, high-value customers, or populations with known access barriers.
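If your case management system can export denial outcomes, the core metric is a few lines of code. A minimal sketch, assuming two fields per record ("reviewed" and "legitimate") that your actual export may name differently:

```python
# A minimal sketch of the false positive tracking Protocol Two calls for.
# The record fields are assumptions; map in whatever your case system exports.

def false_positive_rate(denials: list[dict]) -> float:
    """Share of fraud denials that human review later ruled legitimate.

    Each record needs: 'reviewed' (did a human look at it?) and
    'legitimate' (did review conclude the user was real?).
    """
    reviewed = [d for d in denials if d["reviewed"]]
    if not reviewed:
        raise ValueError("No reviewed denials: you are flying blind.")
    overturned = sum(1 for d in reviewed if d["legitimate"])
    return overturned / len(reviewed)

rate = false_positive_rate([
    {"reviewed": True, "legitimate": True},    # veteran on library VPN
    {"reviewed": True, "legitimate": False},   # actual synthetic identity
    {"reviewed": True, "legitimate": True},    # SSN typo
])
print(f"{rate:.0%} of reviewed denials were legitimate users")  # 67%
```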
Protocol Three: Implement Multi-Channel Verification for High-Value Transactions
Any transaction over $50,000 requires verification through at least two separate communication channels that cannot both be compromised by a single attack vector.
Action: If someone requests a wire transfer via email, verification cannot happen via email response or phone call to a number provided in the email. Use a phone number from your independently maintained records, then verify through a separate channel like a face-to-face conversation or video call initiated by you.
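The invariant is easy to state in code: neither confirming channel may originate from the request itself. A minimal sketch under assumed channel names; the $50,000 threshold comes from the protocol above:

```python
# A minimal sketch of the two-channel rule. Channel names are hypothetical;
# the invariant is what matters: no confirmation may come from the channel
# (or contact details) the request itself supplied.

HIGH_VALUE = 50_000

def verify_transfer(amount: int, request_channel: str,
                    confirmations: set[str]) -> bool:
    """Allow a transfer only if confirmed on two channels independent
    of the channel the request arrived on."""
    if amount < HIGH_VALUE:
        return True  # below threshold, normal controls apply
    independent = confirmations - {request_channel}
    return len(independent) >= 2

# A wire request arrives by email: an email reply plus a callback to a
# number *from the email* both fail. A callback to the number in your own
# records plus a video call you initiate passes.
print(verify_transfer(250_000, "email",
                      {"phone_from_our_records", "video_call_we_initiated"}))
```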
Protocol Four: Create Friction in Authorization Processes
The Arup $25 million loss happened because the authorization process had insufficient friction for high-value transfers. Speed is the enemy of security.
Action: Institute mandatory cooling-off periods for any transaction over $100,000. No same-day execution. This creates time for verification and gives potential victims a chance to recognize manipulation.
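Encoded as policy, the rule is almost trivial, which is exactly why there's no excuse not to have it. A minimal sketch: the $100,000 threshold is the protocol's, the one-day delay is an assumed implementation of "no same-day execution":

```python
from datetime import datetime, timedelta

# A minimal sketch of the cooling-off rule: no same-day execution above
# the limit. Time itself becomes the verification window.

LIMIT = 100_000
COOLING_OFF = timedelta(days=1)  # assumed delay; tune to your risk appetite

def earliest_execution(amount: int, requested_at: datetime) -> datetime:
    """Large transfers earn a mandatory delay before they can execute."""
    if amount > LIMIT:
        return requested_at + COOLING_OFF
    return requested_at

asked = datetime(2025, 11, 24, 9, 30)
print(earliest_execution(250_000, asked))  # tomorrow morning at the earliest
```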
Protocol Five: Train for Psychological Manipulation, Not Technical Fraud
Of victims targeted by a voice-clone scam, 77% confirmed a financial loss. Your team needs to recognize social engineering tactics, not just technical fraud indicators.
Action: Run quarterly red team exercises where you attempt to socially engineer your own staff. Anyone who falls for the test gets mandatory training on manipulation psychology, not software features.
Protocol Six: Build Recovery Infrastructure Before You Need It
The time to plan fraud response is not after a successful attack. Nearly 60% of companies took a payment fraud hit exceeding $5 million in 2024. Recovery speed determines whether you eat the full loss or claw back some funds.
Action: Establish relationships with law enforcement cyber units, have legal counsel experienced in fraud recovery on retainer, and maintain documented escalation procedures that everyone knows without having to look them up.
Protocol Seven: Design for Your Actual Users, Not Ideal Ones
Stop building fraud detection for the customer you wish you had. Build it for the customer sitting in front of you right now: the one with the cracked phone screen, the unreliable internet, the language barrier, the disability that makes your "simple" verification process impossible.
Action: Test your fraud prevention processes with users from your most vulnerable populations before rolling them out. If homeless veterans, elderly users, or immigrants can't successfully navigate your verification, you're creating more problems than you're solving.
The Fraudfather Bottom Line
Your fraud prevention vendor sold you confidence, not competence. They gave you dashboards that make you feel secure while criminals drain your accounts with techniques those tools never see coming. And worse, they gave you tools that lock out the people you're supposed to serve while letting sophisticated fraudsters walk right through.
Fraud losses facilitated by generative AI are projected to climb from $12.3 billion in 2023 to $40 billion by 2027. The threat is accelerating. The tools are not keeping pace. And the confidence gap between what companies believe and what's actually happening is getting people robbed blind.
Here's what you do tomorrow:
First, stop treating fraud prevention as a solved problem because you bought expensive software. While many executives express confidence in their organizations' ability to identify sophisticated fraudsters, nearly the same percentage said their organizations experienced successful attacks, indicating the confidence is misplaced.
Second, build your defensive strategy around the assumption that your tools will fail. Because they will. The question isn't whether fraud will penetrate your technical defenses; it's whether your human processes can catch it before catastrophic loss.
Third, understand that this is not a technology problem with a technology solution. This is a human problem requiring human judgment backed by multiple verification layers, systematic skepticism, and genuine consideration for the people you're supposed to serve.
The criminals already know your tools don't work. The legitimate users you're denying know your tools don't work. It's time you figured it out too.
Warning Signs Your Fraud Tools Are Failing:
Leadership confidence exceeds frontline staff confidence by >20%
Your tools catch test fraud but miss real attacks
False positive rate is unknown or not tracked
Losses are increasing despite "improved" detection rates
Vendor's solution to failures is selling more modules
You can't explain how your fraud detection actually works
Reporters or Congressional offices are calling about denied benefits
Never:
Assume AI detection alone is sufficient
Trust single-channel verification for high-value transactions
Believe vendor claims of 100% fraud prevention
Treat fraud prevention as "solved" because you bought tools
Skip human verification due to confidence in technology
Design fraud detection for ideal users instead of actual users
Ignore false positives as "acceptable losses"
Got a Second? The Dead Drop reaches 5,250+ readers every week including security professionals, executives, and anyone serious about understanding systemic wealth transfers. Know someone who needs this intelligence? Forward this newsletter.


When Good Intentions Become Criminal Opportunity: The Feeding Our Future Breakdown
Seventy-eight defendants and counting. Two hundred fifty million dollars stolen. Over 250 fake meal sites. Claims of serving 91 million meals to hungry children during a pandemic. And the 78th defendant was charged just this morning.
This is Feeding Our Future, the largest (so far) pandemic fraud scheme in American history, and it's still metastasizing.
The case that started with 47 indictments in September 2022 has exploded into a sprawling criminal enterprise that reaches from Minneapolis to Moorhead, from Rochester to Pelican Rapids. Federal prosecutors charged the 78th defendant on November 24, 2025: a man who claimed to be serving 6,000 meals daily in a town with a population of 2,600. The investigation isn't slowing down. Court documents reference unnamed co-conspirators, suggesting a 79th, 80th, and more defendants are coming.
Here's how a program designed to feed hungry children became the perfect crime.

Federal prosecutors allege Abdimajid Nur submitted the bulk of the fraudulent invoices used to support the fake claims that fueled the scheme.
The Operation: Industrial-Scale Fraud
Aimee Bock founded Feeding Our Future in 2016 as a Minnesota nonprofit sponsor for federal child nutrition programs. When COVID hit and the USDA relaxed oversight to quickly get food to desperate families, Bock didn't see a crisis. She saw an opportunity to print money.
Between 2020 and 2022, Feeding Our Future grew from distributing $3.4 million in federal funds to nearly $200 million. They opened more than 250 "meal sites" across Minnesota, claiming to serve 120,000 meals per day. FBI surveillance told a different story: one site claiming 6,000 meals daily served 40 visitors. Another site claimed to feed 1,500 kids per day. A third, in Pelican Rapids, claimed 6,000 daily meals in a town where the entire population is 2,600 people.
Federal prosecutors estimate only 3% of the money went to actual food. The rest? Defendants used $240 million in federal funds to buy luxury vehicles, commercial real estate across Minnesota, Ohio, and Kentucky, property in Kenya, Turkey, and Somalia, and to fund international travel. One defendant paid off his $173,000 mortgage and bought a Porsche. Another wired $400,000 to China. A third purchased Mediterranean coastal property.
The Network: Shell Companies and Kickbacks
This wasn't 78 people acting independently. This was an organized criminal network.
Defendants created shell companies with legitimate-sounding names: S&S Catering, Empire Cuisine & Market, Dua Supplies & Distribution, Afrique Hospitality Group, Shamsia Hopes, Oromia Feeds LLC. They submitted attendance rosters filled with fabricated names: "Man Sincere," "Ron Donald," "John Doe." They created fraudulent meal count sheets, forged invoices, and paid kickbacks to Feeding Our Future employees to keep fraudulent applications flowing.
The scheme spread across the state. Abdiaziz Farah ran Empire Cuisine & Deli in Shakopee, stealing $42 million and personally pocketing $8 million. Salim Said operated Safari Restaurant in Minneapolis, claiming to serve 3.9 million meals. Haji Osman Salad owned Haji's Kitchen and fraudulently obtained $11.4 million. Sahra Nur operated S&S Catering, claiming 1.2 million meals served and stealing $5 million.
Political connections ran deep. An aide to Minneapolis Mayor Jacob Frey pleaded guilty to wire fraud after using political influence to pressure the Minnesota Department of Education not to shut down Feeding Our Future. A Minneapolis council member's wife operated a site that received over $400,000.
The System Failure: Litigation as Shield
When Minnesota's Department of Education tried to stop the bleeding, Feeding Our Future weaponized litigation.
In November 2020, after the state began delaying grant applications over fraud concerns, Feeding Our Future sued, claiming racial discrimination because they primarily served Somali communities. A judge ordered the state to process applications promptly. When the state declared Feeding Our Future "severely deficient" and tried to terminate the partnership in January 2021, the judge held the Department of Education in contempt and fined them $47,500 payable to Feeding Our Future, the organization actively stealing from taxpayers.
The fraud continued for another year while the FBI investigated and built cases. By the time agents raided in January 2022, the damage was done.
The Audacity: Bribing Jurors Mid-Trial
The first trial began in April 2024 with seven defendants facing charges for stealing $40 million. As closing arguments approached, defendants didn't just hope for acquittal. They tried to buy it.
Abdimajid Nur, 23 years old and himself on trial, orchestrated a bribery scheme targeting the youngest juror, the only person of color on the panel. Nur recruited Ladan Ali to deliver $120,000 in cash in a Hallmark gift bag to the juror's home. He provided Ali with photos of the juror's car, a parking ramp map, and detailed surveillance of her daily routine. He instructed another conspirator to videotape the delivery as proof.
The juror reported it immediately. The scheme unraveled. Five people have since pleaded guilty to jury tampering, including Nur, who also faces sentencing for 10 fraud convictions in the underlying case.
The Ongoing Reckoning: Fifty Guilty Pleas and Counting
As of November 2025, more than 50 defendants have pleaded guilty. Seven were convicted at trial, including Bock herself, found guilty on all counts in March 2025 after a six-week trial. More trials are scheduled through 2026.
Sentencing began in October 2024. Mohamed Jama Ismail received 12 years and was ordered to pay $47 million in restitution. Abdiaziz Farah got 28 years. Mukhtar Shariff received 17 and a half years. The sentences are harsh, but recovery of stolen funds remains dismal. Of $250 million stolen, prosecutors have recovered approximately $75 million. The rest disappeared into overseas accounts, unrecoverable luxury expenses, and real estate beyond U.S. jurisdiction.
The case continues to expand. The 77th defendant was charged November 20. The 78th was charged November 24. Court documents reference additional unnamed co-conspirators. Federal prosecutors have made clear: if you touched this fraud in any way, they're coming.
The Bottom Line
This wasn't sophisticated cybercrime or complex financial engineering. This was criminals exploiting pandemic chaos with fake invoices and fictional children while real kids went hungry. They stole a quarter-billion dollars, sued the government to force continued payment while actively defrauding it, then tried to bribe their way out of accountability when caught.
Seventy-eight people. Two hundred fifty meal sites. Zero shame. And the investigation isn't finished.
The system designed to help the most vulnerable became the perfect target for the greediest. And three years after the FBI raids, prosecutors are still finding more criminals to charge.
The Fraudfather brings a unique blend of experience as a former Senior Special Agent, Supervisory Intelligence Operations Officer, and now a recovering Digital Identity & Cybersecurity Executive. He has dedicated his professional career to understanding and countering financial and digital threats.
This newsletter is for informational purposes only and promotes ethical and legal practices.


