Follow the Money to Nowhere

Two-week pilots become $100 million unicorns, the worst-performing state gets carved out of accountability, and systematic fraud wins when enforcement becomes selective

In partnership with

Learn AI in 5 minutes a day

This is the easiest way for a busy person wanting to learn AI in as little time as possible:

  1. Sign up for The Rundown AI newsletter

  2. They send you 5-minute email updates on the latest AI news and how to use it

  3. You learn how to become 2x more productive by leveraging AI

GM, Welcome Back to the Dead Drop. While venture capitalists celebrate AI startups going from zero to $100 million in annual recurring revenue faster than you can say "ChatGPT," I'm watching founders count two-week pilot programs as permanent contracts and wondering which investigative technique works better: following the money or just waiting for gravity to reassert itself.

Vibe Revenue: How Silicon Valley is Reinventing Accounting Fraud for the AI Era

When billions chase artificial intelligence and nobody wants to miss the next OpenAI, founders discovered they don't need actual revenue. They just need numbers that look like revenue for exactly as long as it takes to close the funding round.

I've investigated enough financial fraud to recognize the pattern before the collapse. Healthcare executives billing Medicare for treatments never delivered. Defense contractors inflating progress reports to justify continued funding. Real estate developers manufacturing property valuations through circular transactions between shell companies.

Every scheme shared the same operational reality: create numbers that pass cursory inspection, exploit the time lag between reported performance and verification, and extract maximum value before reality catches up. The sophisticated criminals understood something critical about human psychology in capital markets: nobody wants to be the skeptic who missed the rocket ship.

In 2025, Silicon Valley productized that insight. AI startups began reporting growth metrics so explosive they made cryptocurrency's 2021 run look conservative by comparison. Midjourney: zero to $200 million in annual recurring revenue in under three years. ElevenLabs: zero to nearly $100 million ARR in 20 months. Cursor: zero to $100 million ARR in one year. Lovable: zero to $100 million ARR in three months. Cluely: doubled ARR to $7 million in one week.

Read those timeframes again. One week. Three months. Twenty months.

When I investigated financial fraud, we looked for exactly this pattern: reported performance that defied fundamental business constraints. Because sustainable growth follows physics. It requires time to build infrastructure, develop customer relationships, prove product value, and generate the operational leverage that produces exponential returns.

Unless the growth isn't real. Then it only requires creative accounting and investors desperate to believe.

The $325,000 ARR That Wasn't: How the Fraud Actually Works

A partner at a top-tier venture capital firm described evaluating an early-stage defense tech startup. First meeting, the founder mentioned in passing: "Oh, by the way, we have a contract with this company worth $325,000 in ARR."

Second meeting, the VC asked the obvious question: "Let's go back to that customer, that big contract. How did that deal happen?"

The founder's response revealed everything: "Oh, it was super easy. It was a two-week pilot. And we have it on good authority that they're going to keep paying us that much."

The VC's internal reaction: "I was like, what does that mean? Hold the phone, man. The good authority I subscribe to is a signed piece of paper."

That's not annual recurring revenue. That's a pilot program with an optimistic forecast that may or may not materialize. In traditional accounting, you recognize revenue when service is delivered and payment is contractually obligated. You don't count projected future revenue from trial periods as if it's guaranteed income.

But in AI startup accounting, two weeks of pilot usage becomes $325,000 in annual recurring revenue because nobody wants to report that they're still in the "prove the product actually works" phase when competitors are announcing nine-figure ARR milestones.
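If you want to see how thin the underlying number can be, here's a minimal sketch of the arithmetic, using hypothetical figures consistent with the anecdote: if a two-week pilot were billed at $12,500, simple annualization yields exactly $325,000 in claimed "ARR," even though only the pilot fee has actually been earned.

```python
# Minimal sketch (hypothetical figures): how a short pilot becomes "ARR".
# Assumption: the pilot fee is simply annualized; nothing here reflects a real contract.

PILOT_FEE = 12_500        # hypothetical fee for a two-week pilot
PILOT_WEEKS = 2
WEEKS_PER_YEAR = 52

# What gets announced: the pilot fee extrapolated to a full year
claimed_arr = PILOT_FEE * (WEEKS_PER_YEAR / PILOT_WEEKS)

# What traditional revenue recognition supports: the delivered pilot only
recognized_revenue = PILOT_FEE

print(f"Claimed ARR:         ${claimed_arr:,.0f}")         # $325,000
print(f"Recognized revenue:  ${recognized_revenue:,.0f}")  # $12,500
print(f"Inflation factor:    {claimed_arr / recognized_revenue:.0f}x")  # 26x
```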

The pressure creates fraud. When every week brings announcements of startups hitting massive ARR numbers in impossibly short timeframes, founders face a choice: match those metrics or explain to investors why your growth looks pedestrian by comparison.

One VC explained the psychology: "There is all this pressure from companies like Decagon, Cursor, and Cognition that are just crushing it. There's so much pressure to be the company that went from zero to $100 million in X days."

That pressure manufactures the fraud infrastructure.

Booked ARR: The Contract Clause Nobody Mentions

The second technique for inflating revenue involves counting "booked ARR," which means revenue from signed contracts before customers actually activate service or make payments. Sounds reasonable until you examine what's actually in those contracts.

"Companies are signing contracts with kill provisions," explained one early-stage VC. "So they're claiming booked ARR, but giving their customers an out. They're claiming that million-dollar-a-year contract, but it says in three months you can cancel for no reason. Does that count?"

In traditional SaaS accounting, no. Booked contract value typically converted to recognized revenue at 80-90% rates because contracts had actual commitment periods and meaningful cancellation penalties.

In AI startup accounting, founders count the full contract value as ARR despite giving customers the ability to walk away at any time with zero penalty. That's not recurring revenue. That's a provisional trial arrangement dressed up as a binding commitment.
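Here's a hedged sketch of the gap. The contract values and retention probabilities below are illustrative assumptions, not figures from any real deal; the point is simply that probability-weighting contracts that can be cancelled penalty-free cuts the headline number roughly in half.

```python
# Illustrative sketch: headline "booked ARR" vs. expectation-weighted revenue.
# Contract values and retention probabilities are assumptions for illustration only.

contracts = [
    # (annual contract value, assumed probability the customer stays past the kill window)
    (1_000_000, 0.35),  # signed, but cancellable for any reason after three months
    (500_000, 0.50),    # pilot converted to a contract with a 60-day out
    (250_000, 0.90),    # committed annual term with a real cancellation penalty
]

booked_arr = sum(value for value, _ in contracts)
expected_revenue = sum(value * p_retain for value, p_retain in contracts)

print(f"Headline booked ARR: ${booked_arr:,.0f}")        # $1,750,000
print(f"Expected revenue:    ${expected_revenue:,.0f}")  # $825,000
print(f"Overstatement:       {booked_arr / expected_revenue:.1f}x")  # 2.1x
```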

The fraud works because venture capital due diligence rarely involves deep contract review at early stages. VCs trust founder representations about ARR, check a few references, and move forward based on growth trajectory. By the time anyone examines the actual contract terms, three funding rounds have already priced the company at a billion-dollar valuation based on inflated metrics.

Why Your Portfolio Is Paying for Fake Revenue

Here's where this stops being a Silicon Valley curiosity and becomes your problem: technology now represents over 25% of the S&P 500's total weight. NVIDIA alone accounts for 8% of the entire index. When you hold diversified index funds, pension allocations, or retirement accounts, you're heavily exposed to AI valuations.

Those valuations assume the growth metrics are real. When Cursor reports $100 million in ARR, public market investors use that data point to model how quickly AI adoption is accelerating, what kind of revenue multiples are justified for AI companies, and how to value NVIDIA's position as the infrastructure provider enabling this explosion.

If the ARR numbers are inflated by 50% because founders are counting pilots as permanent contracts and booked deals with kill clauses as guaranteed revenue, every valuation model built on those assumptions is wrong by the same percentage.

The correction doesn't happen gradually. It happens when renewal rates reveal the truth. When those two-week pilots don't convert to annual contracts. When customers exercise their kill clauses. When "booked ARR" becomes "churned ARR."

In traditional SaaS, companies had 80-90% annual retention rates because customers integrated the software into core workflows and switching costs were high. AI tools face the opposite dynamic: customers are experimenting with multiple competing products simultaneously, integration is minimal, and switching costs are near zero.

When a company reports $100 million in ARR but has 60% annual churn because most "customers" were running short-term pilots, the business isn't worth the 10x revenue multiple the market assigned. It's worth substantially less. Maybe nothing.
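To put numbers on that revaluation, here's a toy calculation built from the figures in this section: $100 million of reported ARR and a 10x revenue multiple, valued against 85% retention (traditional SaaS) versus 40% retention (60% annual churn). Applying the same multiple to retained revenue is a simplifying assumption, not a published valuation methodology.

```python
# Toy model: what the same reported ARR is worth under different retention rates.
# The revenue multiple and retention figures are illustrative assumptions.

reported_arr = 100_000_000
revenue_multiple = 10  # the multiple the market assigned to reported ARR


def retained_valuation(arr: float, retention: float, multiple: float) -> float:
    """Value the business on the revenue that actually sticks after a year."""
    return arr * retention * multiple


headline_valuation = reported_arr * revenue_multiple
saas_case = retained_valuation(reported_arr, 0.85, revenue_multiple)      # 80-90% retention
ai_pilot_case = retained_valuation(reported_arr, 0.40, revenue_multiple)  # 60% annual churn

print(f"Headline valuation:         ${headline_valuation:,.0f}")  # $1,000,000,000
print(f"Traditional SaaS retention: ${saas_case:,.0f}")           # $850,000,000
print(f"60% churn pilot economy:    ${ai_pilot_case:,.0f}")       # $400,000,000
```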

That revaluation cascades. AI startup valuations compress. Late-stage investors mark down portfolios. Venture funds report lower returns. Pension allocations to venture decline. Public market comparables reprice. NVIDIA's forward revenue projections decrease because AI companies aren't buying as many chips as circular financing suggested they would.

Your index funds don't care whether you understood the fraud. They reprice based on actual cash flows, and actual cash flows reveal themselves eventually.

The Circular Financing Amplifier

The fraud operates inside a larger circular financing loop that makes the metrics look more credible than they are. NVIDIA has committed up to $100 billion to OpenAI, tied directly to how many chips OpenAI purchases. OpenAI needs that capital because the company isn't profitable despite massive revenue.

Other AI startups use NVIDIA's investment as validation. If the largest chip company is putting $100 billion into AI infrastructure, that proves the market is real. Except NVIDIA isn't just betting on the market; it's creating artificial demand for its own products by financing the customers who buy them.

Think about this: NVIDIA gives OpenAI $100 billion. OpenAI uses a substantial portion to buy NVIDIA chips. NVIDIA records that as revenue. Wall Street sees the revenue growth and values NVIDIA higher. NVIDIA's market cap increases by more than the $100 billion it invested. Rinse and repeat with CoreWeave, Applied Digital, and a dozen other infrastructure companies.
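To see why the loop flatters everyone involved, here's an illustrative sketch. The fraction of invested capital that comes back as chip purchases and the price-to-sales multiple are assumptions chosen only to show the mechanics, not NVIDIA's actual figures.

```python
# Illustrative sketch of a circular financing loop.
# The chip-spend fraction and revenue multiple are assumptions, not reported data.

investment = 100_000_000_000   # capital committed to the customer ($100B)
chip_spend_fraction = 0.6      # assumed share that flows back as chip purchases
revenue_multiple = 20          # assumed price-to-sales multiple applied by the market

revenue_from_loop = investment * chip_spend_fraction
implied_market_cap_gain = revenue_from_loop * revenue_multiple

print(f"Capital out the door:     ${investment / 1e9:,.0f}B")               # $100B
print(f"Revenue booked from loop: ${revenue_from_loop / 1e9:,.0f}B")        # $60B
print(f"Implied market-cap gain:  ${implied_market_cap_gain / 1e9:,.0f}B")  # $1,200B

# Under these assumptions the implied market-cap gain dwarfs the capital invested,
# which is the incentive that keeps the cycle spinning.
```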

This isn't fraud in the legal sense. It's circular financing that creates the appearance of organic market demand when a significant portion of "demand" is self-generated through investments that flow back as revenue.

When I investigated money laundering operations, we called this "layering": moving money through multiple transactions to create the appearance of legitimate business activity. The same $100,000 would flow through six shell companies, each recording it as revenue, until the paper trail suggested $600,000 in total economic activity when only $100,000 actually existed.

NVIDIA's circular financing doesn't quite reach that level, but the principle applies: capital flows create revenue flows that justify valuations that enable more capital flows. The system works until the underlying economics can't support the cycle.

For AI startups, the underlying economics depend on converting pilots to recurring contracts at high retention rates. If that doesn't happen, the revenue isn't recurring, the valuations are inflated, and the circular financing loop breaks.

What Comes Next: The Unit Economics Nobody's Discussing

AI businesses operate nothing like traditional SaaS companies, and the differences all work against sustainable unit economics.

Token-based pricing means unpredictable costs. A few large customers can dramatically increase computational demands and destroy margins overnight. One month your gross margin is 70%, the next month it's 40% because three customers started using inference-heavy features.
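Here's a minimal sketch of that swing with hypothetical numbers: flat subscription revenue against usage-driven inference costs. When a few heavy users double the token bill, gross margin falls from 70% to 40% with no change in headline revenue.

```python
# Hypothetical illustration of token-cost exposure in an AI product's gross margin.

def gross_margin(revenue: float, inference_cost: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - inference_cost) / revenue


monthly_revenue = 1_000_000  # flat subscription revenue, identical both months

quiet_month_cost = 300_000   # typical inference spend
whale_month_cost = 600_000   # three customers start hammering inference-heavy features

print(f"Quiet month margin: {gross_margin(monthly_revenue, quiet_month_cost):.0%}")  # 70%
print(f"Whale month margin: {gross_margin(monthly_revenue, whale_month_cost):.0%}")  # 40%
```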

AI startups don't control their core product. They rent access to large language models from OpenAI, Anthropic, or other providers who can change pricing unilaterally. When those providers raise prices, AI startup margins evaporate instantly. There's no switching cost protection, no proprietary moat, no pricing power.

Customer concentration among "inference whales" creates revenue volatility that makes financial forecasting impossible. Traditional SaaS thrived on large numbers of small customers with predictable usage patterns. AI startups depend on a handful of large customers with unpredictable usage that can swing 10x month-to-month.

Most critically: customers remain in experimental phases. They're testing five competing products simultaneously with 60-day pilots. Whatever "ARR" a startup reports this quarter might be zero next quarter when those pilots end and customers haven't decided which solution to actually deploy at scale.

The venture capital industry knows this. They're not stupid. But they're also not incentivized to care.

Why VCs Enable Fraud: The Capital Has to Go Somewhere

Venture capital grew from 700 firms managing $143 billion in the 1990s to more than 3,000 firms managing over $360 billion today. Some projections suggest venture will exceed $700 billion by 2029.

That's a staggering amount of capital that needs deployment. With AI capturing one in every two VC dollars, the competition to fund potential winners creates overwhelming pressure to identify and back companies demonstrating "traction" early.

VCs aren't evaluating AI startups against traditional financial metrics because traditional metrics would disqualify most deals. Instead, they're evaluating growth velocity: how fast ARR is increasing, regardless of what's actually behind that number.

This creates the perfect fraud ecosystem. Founders know VCs need deals. VCs know Limited Partners demand returns. Limited Partners know alternative allocations aren't producing better results. Everyone agrees to pretend that explosive ARR growth from pilots and provisional contracts represents sustainable business reality.

The fraud isn't a bug in the system. It's a feature that allows capital to flow at the speed required to maintain the cycle.

Until it doesn't. And when these cycles break, they break fast.

The Fraudfather Bottom Line

"The problem is that so much of this is essentially vibe revenue," one VC admitted to Fortune. "It's not Google signing a data center contract. That's real shit. Some startup that's using your product temporarily? That's really not revenue."

When a founder can double ARR to $7 million in a single week, you're not watching innovation. You're watching the setup for a correction.

Three immediate risks for anyone with capital in public markets:

First: AI valuations assume infinite growth at reported rates. When pilots end and actual retention reveals itself, the revaluation will be swift. Late-stage private investors mark down. Public comparables reprice. Your index funds follow.

Second: Circular financing between NVIDIA and customers creates correlation risk. If AI monetization fails, chip demand collapses. If chip demand collapses, NVIDIA's returns evaporate. If NVIDIA reprices, it takes 8% of the S&P 500 down with it.

Third: This fraud infrastructure teaches founders that metrics matter more than fundamentals. When you reward appearance over substance, you guarantee companies optimize for the next funding round rather than sustainable profitability.

The dot-com bubble taught investors that eyeballs don't pay bills. The AI bubble is teaching the same lesson with ARR replacing pageviews as the metric du jour.

Smart investors ignore reported ARR and focus on signed contracts with verified payments, retention rates demonstrating actual product value, and unit economics that work without assuming perfection. Everything else is vibe revenue.

And vibes don't survive contact with reality. They just determine how much your portfolio loses when reality finally shows up.

Got a Second? The Dead Drop delivers fraud intelligence to 5,700+ readers who understand the game: when politicians manipulate monetary policy for wealth extraction, when state officials tolerate $1 billion in welfare fraud, when doctors steal $14.6 billion from healthcare systems, the suburban mom shoplifting mascara isn't the problem; she's the predictable result. While most people argue about petty theft, our readers track the institutional fraud that creates the rationalization framework for crime at every level. Know someone who needs to understand why everyone's becoming a scammer? Forward this newsletter.

President Trump just froze $10 billion in federal funding to California, Colorado, Illinois, Minnesota, and New York over fraud concerns. Meanwhile, Alaska, with the worst payment error rate in America at 24.66%, got a legislative carve-out. When fraud prevention becomes selective based on political affiliation rather than actual performance data, the real criminals win.

When Fraud Becomes a Weapon: The $10 Billion Freeze That Proves Nobody Cares About Accountability

Trump targeted five blue states for systematic fraud while protecting the worst offender in the nation. The data reveals this isn't fraud prevention; it's political theater that guarantees real criminals keep stealing.

The Trump administration announced Monday it's freezing over $10 billion in federal childcare and social services funding to five states: at least $7.35 billion in TANF (Temporary Assistance for Needy Families), $2.4 billion in Child Care Development Fund money, and $869 million from Social Services Block Grants.

The Department of Health and Human Services cited fraud concerns, pointing to Minnesota's Feeding Our Future scandal, where federal prosecutors have convicted defendants of stealing $250 million, with estimates suggesting as much as $9 billion in total. A YouTube video from Nick Shirley, who visited 10 Minnesota childcare centers that had received $111 million and found fewer than half appearing to be open, triggered the administration's action.

Here's what HHS didn't mention: the USDA's FY 2024 SNAP payment error rates, released June 2025, which reveal which states actually have the worst fraud infrastructure in America.

The Error Rates Nobody's Discussing

Alaska: 24.66% error rate. Worst in the nation. Nearly one in four benefit dollars contains eligibility or benefit determination errors.

The five states Trump just targeted:

Colorado: Data not specifically disclosed in recent reports, but below the 10% tier
Illinois: Error rate in the 8-10% range
Minnesota: 14.61% error rate (legitimate concern given Feeding Our Future)
New York: 14.09% error rate
California: Error rate requiring corrective action but not in top tier

Compare that to Alaska's 24.66%. Washington DC hits 17.38%. Georgia reaches 15.65%. All three exceed every one of the five targeted states, Minnesota included.

The national average sits at 10.93%. Only eight states met the 6% federal threshold: Idaho, Nebraska, Nevada, South Dakota, Utah, Vermont, Wisconsin, Wyoming. Forty-four states need corrective action plans.

The Carve-Out That Exposes Everything

The One Big Beautiful Bill Act passed in 2025 mandates states pay portions of SNAP benefits based on error rates. States with 6-8% pay 5% of benefits. States with 8-10% pay 10%. States with 10% or higher pay 15%.

Except Alaska. Senate Republicans added a specific carve-out to win over Alaska Senator Lisa Murkowski: states with error rates that, multiplied by 1.5, exceed 20% can delay cost-sharing until 2029 or 2030.

That protects Alaska's 24.66% error rate while Minnesota's 14.61% gets a $10 billion funding freeze. The math isn't complicated: Alaska's systematic waste gets legislative protection. Minnesota's fraud, already under aggressive federal prosecution with dozens of convictions secured, justifies punishing four other states alongside it.
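To make the asymmetry concrete, here's a small sketch of the cost-sharing tiers and the delay test as described above. The tier boundaries and the 1.5x-over-20% threshold follow this section's description of the bill; applying them to Alaska's 24.66% shows how the carve-out works.

```python
# Sketch of the SNAP cost-sharing tiers and the delay provision described above.
# Tier boundaries and the 1.5x / 20% test follow this article's description of the bill.

def state_cost_share(error_rate_pct: float) -> float:
    """Share of SNAP benefit costs a state pays, based on its payment error rate."""
    if error_rate_pct < 6:
        return 0.00
    if error_rate_pct < 8:
        return 0.05
    if error_rate_pct < 10:
        return 0.10
    return 0.15


def qualifies_for_delay(error_rate_pct: float) -> bool:
    """Carve-out test: error rate multiplied by 1.5 exceeds 20% -> cost-sharing can be delayed."""
    return error_rate_pct * 1.5 > 20


alaska = 24.66
print(f"Alaska's tier:  pays {state_cost_share(alaska):.0%} of benefits")                    # 15%
print(f"Delay eligible: {qualifies_for_delay(alaska)} (24.66 x 1.5 = {alaska * 1.5:.2f}%)")  # True
```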

What the Administration Actually Said

HHS spokesman Andrew Nixon told Axios the five targeted states represent "where our highest suspicion is." The agency implemented a "defend the spend" system requiring all 50 states to submit additional administrative data (attendance records, inspection reports, parent complaints) before they can access childcare and TANF funds.

But certain Minnesota childcare centers face "even more requirements," while Alaska received permanent legislative protection before the bill even passed. That's not fraud prevention prioritized by systematic data. That's selective enforcement driven by political calculation.

Governor responses revealed the chaos: Colorado's Jared Polis said the state hasn't been "officially notified." California Governor Gavin Newsom's administration said it received no guidance. Illinois called it "yet another politically motivated action that confuses families and leaves states with more questions than answers."

Minnesota Governor Tim Walz, who announced Monday he won't seek re-election, acknowledged fraud concerns but claimed the "political gamesmanship we're seeing from Republicans is only making that fight harder."

Trump responded on Truth Social: "Minnesota's Corrupt Governor will possibly leave office before his Term is up but, in any event, will not be running again because he was caught, REDHANDED, along with Ilhan Omar, and others of his Somali friends, stealing Tens of Billions of Taxpayer Dollars."

Why This Destroys Real Accountability

Minnesota's Feeding Our Future fraud is absolutely real. First Assistant US Attorney Joe Thompson called it "staggering, industrial-scale fraud." The prosecutions demonstrate systematic theft enabled by state officials who tolerated criminal behavior to avoid accusations of racial discrimination.

But when Alaska maintains 24.66% error rates and receives protection while New York (14.09%) and Colorado (below 10%) face funding freezes, the enforcement standard isn't fraud prevention. It's political alignment.

Three immediate consequences follow:

States learn political affiliation matters more than actual performance. Alaska can waste one in four benefit dollars and get legislative carve-outs. Blue states with better metrics face funding cuts.

Real fraud gets buried in political theater. The Somali-linked organizations that stole billions benefit when responses look like partisan retaliation rather than systematic law enforcement.

States have zero incentive to improve when enforcement is selective rather than data-driven. Why invest in eligibility verification and fraud detection when the determining factor is whether your governor endorsed the right candidate?

The Fraudfather Bottom Line

When you freeze $10 billion in funding to five states based on one state's fraud and a YouTube video while protecting the worst offender in the nation, you're not preventing fraud. You're weaponizing it.

The actual criminals don't care which party enables their theft. They find jurisdictions where enforcement is inconsistent and accountability is selective. Trump's freeze teaches every state that fraud gets punished when it's politically useful and gets carve-outs when it isn't.

Alaska's 24.66% error rate will continue. Minnesota's prosecutions will proceed. And four states with better metrics than Alaska just learned that data doesn't matter when fraud becomes a political weapon rather than a law enforcement priority.

 

The Fraudfather brings a unique blend of experience as a former Senior Special Agent, Supervisory Intelligence Operations Officer, and now a recovering Digital Identity & Cybersecurity Executive. He has dedicated his professional career to understanding and countering financial and digital threats.

This newsletter is for informational purposes only and promotes ethical and legal practices.