Your AI Chatbot Is Wearing a Wire
How ChatGPT became law enforcement's most productive confidential informant, why Meta's scanning your AI conversations for ad targeting, and the Medicare scam keeping seniors on the phone for hours.


GM, Welcome Back to the Dead Drop.

How ChatGPT Became Law Enforcement's Most Productive Confidential Informant
On the morning of August 28th, 19-year-old Ryan Schaefer allegedly went on a rampage through a Missouri college parking lot, systematically destroying 17 vehicles in 45 minutes. Shattered windows. Ripped-off mirrors. Tens of thousands in damage. Then he went home and did something that would have seemed impossible just three years ago: he confessed to an artificial intelligence.
"How f**ked am I bro?" Schaefer allegedly typed into ChatGPT. "What if I smashed the shit outta multiple cars?"
That conversation became the cornerstone of the criminal case against him. Not a confession to a friend who might have loyalty. Not a social media post made in anger. A private conversation with what millions of people believe is a confidential digital assistant, now entered into evidence in a court of law.
Welcome to the new reality of American criminal justice: your AI chatbot is a confidential informant.
The Digital Donnie Brasco
For those unfamiliar with the reference, Donnie Brasco was the undercover identity of FBI agent Joseph Pistone, who infiltrated the Mafia in the 1970s by gaining the trust of mobsters who believed he was one of them. They confessed crimes to him. Detailed operations. Revealed weaknesses. All while he wore a wire.
ChatGPT and every other large language model now occupy the exact same position in the digital ecosystem. They're embedded in your daily life. They earn your trust. They invite confession. And everything you tell them can, and increasingly will, be used against you in a court of law.
Less than a week after Schaefer's arrest, Jonathan Rinderknecht faced charges for allegedly starting the Palisades Fire in California, a blaze that destroyed thousands of homes and killed twelve people. His alleged digital fingerprint? Requests to ChatGPT to generate images of a burning city.
Two cases in one week. This isn't a trend; it's a paradigm shift in criminal investigation.
The Legal Reality: Zero Expectation of Privacy
Here's what most people don't understand about their ChatGPT conversations: they carry exactly zero legal protection.
When you speak with a lawyer, those conversations are protected by attorney-client privilege. Tell your therapist about intrusive thoughts? Doctor-patient confidentiality shields you. Confess sins to a priest? Clergy privilege exists in most jurisdictions.
Tell ChatGPT you're planning a crime, contemplating violence, or have committed an offense? That's evidence. Freely discoverable. Fully subpoenaed. Completely admissible.
Sam Altman, CEO of OpenAI, acknowledged this reality explicitly earlier this year: "People talk about the most personal shit in their lives to ChatGPT. People use it, young people especially, as a therapist, a life coach, having these relationship problems... And right now, if you talk to a therapist, a lawyer or a doctor about these problems, there's like legal privilege for it."
Note what he didn't say: "And we're working to change that." Because they're not. There is no legal framework protecting your AI conversations, and the tech companies building these tools have no incentive to create one.
How Warrants Are Evolving
Detectives and prosecutors have caught on fast. Search warrants for digital evidence now routinely include specific language requesting AI conversation logs alongside traditional targets like text messages, social media posts, and browser history.
The language is evolving, but current warrant templates now specify:
ChatGPT conversation histories (including deleted chats)
Google Gemini interactions
Claude.ai exchanges
Microsoft Copilot queries
Any AI-generated content requests
Voice conversations with AI assistants
Image generation prompts
Here's the critical detail most users miss: deleted chats aren't actually deleted. They're removed from your interface, but they remain on company servers unless you've specifically opted into a zero-retention policy, and even then, temporary logs exist for fraud prevention and abuse monitoring.
When police request these records from OpenAI, Meta, Google, or Microsoft, they're not asking for some abstract data dump. They're requesting timestamped conversations that often contain:
Detailed descriptions of events (like Schaefer allegedly describing vehicle damage)
Requests for advice on evading consequences
Planning documents for future actions
Searches for methods, tools, or techniques
Evidence of state of mind and intent
In criminal law, proving intent is often the hardest element. AI conversations frequently hand prosecutors intent on a silver platter.
The Psychology of Digital Confession
Why do people confess to AI? The same reason they confess to undercover agents: the illusion of privacy combined with the psychological need to process traumatic or significant events.
Human beings are wired to seek counsel after major incidents. We need to verbalize, analyze, and understand our own actions. For generations, this meant trusted friends, family, therapists, or religious figures. For millions of young people today, it means ChatGPT.
The technology creates a perfect storm of vulnerability:
Perceived Anonymity: Conversations feel private. No human is reading them in real-time. The interface is clean and confidential-looking. Users develop a false sense of security.
Non-Judgmental Response: AI doesn't gasp in horror. It doesn't lecture. It processes your input and provides rational responses, which feels like acceptance rather than the evidence-gathering it actually becomes.
Accessibility: Your AI chatbot is available 24/7, costs nothing, requires no appointment, and never closes for holidays. It's present in exactly the moments when emotional regulation fails and confession becomes most likely.
Sophistication Illusion: Users believe AI conversations are somehow more advanced or protected than simple text messages. They're not. They're actually more dangerous because they're more detailed and more permanent.
Beyond Criminal Justice: The Commercial Surveillance State
The criminal justice implications are just one vector of this problem. The commercial exploitation might be worse.
Meta announced in December that it will begin using all interactions with its AI tools to serve targeted advertisements across Facebook, Instagram, and Threads. There is no opt-out. Voice chats and text exchanges will be scanned to build psychological profiles.
"If you chat with Meta AI about hiking," the company explained, "we may learn that you're interested in hiking. As a result, you might start seeing recommendations for hiking groups, posts from friends about trails, or ads for hiking boots."
That's the sanitized example. Here's the operational reality:
Someone chats with Meta AI about financial stress. They're immediately targeted with predatory loan advertisements, online casino promotions, and cryptocurrency scams. An elderly user discusses estate planning. Within hours, they're bombarded with pitches for overpriced gold coins and reverse mortgages. A person mentions relationship problems. Dating apps, affair websites, and divorce attorneys compete for ad space.
This isn't speculation. It's a documented pattern in search-based and social media ad targeting. AI conversations provide exponentially richer psychological data than search queries ever could.
Mark Zuckerberg himself said users will be able to let Meta AI "know a whole lot about you, and the people you care about, across our apps." This is the same person who once called Facebook users "dumb fucks" for trusting him with their information.
The Fraud Ecosystem
For fraud operators, AI conversation data represents the holy grail: authentic psychological profiles revealing fears, weaknesses, financial situations, decision-making patterns, and emotional vulnerabilities.
Security researchers have already discovered that Perplexity's AI-powered browser could be hijacked to access user data, which is perfect blackmail material. Imagine a fraudster obtaining AI conversations where someone discussed:
Marital infidelity or relationship problems
Financial difficulties or debt
Medical diagnoses or health fears
Career setbacks or workplace conflicts
Substance abuse or addiction struggles
This isn't about stolen credit card numbers. This is about weaponized intimate knowledge.
Dark web services are already offering AI tools explicitly designed for criminal use: no safety guardrails, no content policies, just pure exploitation technology. These tools are being marketed as accomplices, not assistants.
Operational Protocols: Treat AI as Evidence
The bottom line is stark: every AI conversation you have exists as a potential court exhibit, marketing dossier, and extortion leverage point simultaneously.
If you wouldn't say it to a police officer wearing a body camera, don't say it to ChatGPT.
That's the new operational standard. Assume every AI interaction is:
Permanently recorded
Fully discoverable in legal proceedings
Available to corporate advertisers
Potentially accessible to hackers
Never truly deleted
For anything genuinely confidential, you have three protected options:
Licensed attorney (attorney-client privilege)
Licensed therapist (doctor-patient confidentiality)
Recognized clergy (clergy-penitent privilege)
Notice what's not on that list? Every single AI chatbot currently available to consumers.
The Reckoning
We're watching a fundamental shift in how criminal investigations work and how corporate surveillance operates. The Cambridge Analytica scandal forced people to reckon with how social media platforms weaponized their data. This AI moment represents the same inflection point, but with technology that knows your thoughts, not just your clicks.
More than a billion people now use standalone AI applications. Most are unwitting subjects in a massive evidence-gathering operation conducted simultaneously by law enforcement, advertisers, and criminal actors.
The old tech industry adage holds: if you're not paying for the product, you are the product. In the AI era, that needs an update.
You're not just the product. You're the prey.
Got a Second? The Dead Drop reaches 4,900+ readers every week including security professionals, executives, and anyone serious about understanding systemic wealth transfers. Know someone who needs this intelligence? Forward this newsletter.
A W-2, a Laundromat Owner, & a Billionaire Walk Into a Room…
NOVEMBER 2-4 | AUSTIN, TX
At Main Street Over Wall Street 2025, you’ll learn the exact playbook we’ve used to help thousands of “normal” people find, fund, negotiate, and buy profitable businesses that cash flow.
Use code BHP500 to save $500 on your ticket today (this event WILL sell out).
Click here to get your ticket, see the speaker list, schedule, and more.

The $40 Billion Problem: Why Traditional Fraud Prevention Is Already Obsolete
From a cluster of call centers scattered across Canada, a sophisticated criminal network executed what would become a textbook case of AI-enabled fraud. Between 2021 and 2024, they defrauded elderly victims in the United States out of $21 million. The operation wasn't particularly novel in concept, as grandparent scams have existed for decades. What made it devastatingly effective was the technology.
Using voice over internet protocol systems, the fraudsters convinced victims they were speaking with their grandchildren in distress. But this wasn't the work of talented voice actors or lucky coincidences. The criminals had assembled detailed dossiers on each target: ages, addresses, estimated incomes, family structures. They customized each conversation with precision that would make a legitimate call center envious.
This is the new reality of financial fraud in 2025. And according to a comprehensive analysis by MIT Technology Review Insights in partnership with Plaid, we're barely scratching the surface of what's coming.
The AI Acceleration
The proliferation of large language models has fundamentally altered the fraud landscape. Today, criminals can clone a voice using nothing more than an hour of YouTube footage and an $11 monthly subscription. They can generate thousands of sophisticated phishing emails simultaneously, each one customized to exploit specific psychological vulnerabilities. They can create "Frankenstein IDs" by stitching together fragments of stolen personal data, then use credential-stuffing software to test these synthetic identities across thousands of platforms in minutes.
The numbers tell a stark story. In 2023, the United States lost $12.3 billion to fraud. But that figure covers only reported losses; the real number is likely far higher, given that many victims never come forward out of embarrassment or fear. Even using conservative estimates, the trajectory is alarming. Deloitte projects that generative AI will drive fraud losses to $40 billion by 2027 under their "base case" scenario. Their aggressive forecast puts the figure even higher.
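For a sense of how steep that curve is, a quick back-of-the-envelope calculation (mine, not Deloitte's) shows the compound annual growth rate those two endpoints imply:

```python
# Back-of-the-envelope check: growing $12.3B (2023) to $40B (2027)
# implies roughly a third more fraud every year, compounding.
base, target, years = 12.3e9, 40e9, 2027 - 2023

cagr = (target / base) ** (1 / years) - 1
print(f"implied annual growth: {cagr:.1%}")  # ~34.3% per year
```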
"Technology is both catalyzing and transformative," explains John Pitts, head of industry relations and digital trust at Plaid. "Catalyzing in that it has accelerated and made more intense longstanding types of fraud. And transformative in that it has created windows for new, scaled-up types of fraud."
Consider synthetic identity fraud, now the fastest-growing financial crime in the United States, costing banks $6 billion annually. Or investment scams, where reported losses exploded from $1.7 billion in 2021 to $4.6 billion in 2023. The median loss to imposters posing as the Federal Trade Commission soared from $3,000 in 2019 to $7,000 in 2023, more than doubling in just four years.
These aren't isolated incidents. They represent a systemic shift in how fraud operates at scale.
The Traditional Defense Fails
Here's the uncomfortable truth that most financial institutions are only beginning to acknowledge: traditional fraud prevention tools are already obsolete.
Two-factor authentication, the security measure adopted across the industry over the past two decades, can be bypassed with a SIM swap and a convincing phone call to a cellular provider. Classic fraud detection systems focus narrowly on transaction anomalies without access to broader contextual data. Fraudsters exploit this limited viewpoint by moving between different platforms and accounts, legitimizing payments through a distributed approach that appears normal when viewed through a single institutional lens.
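To make that limited viewpoint concrete, here's a minimal sketch with invented amounts and a hypothetical per-institution alert threshold. Each platform's slice of the flow looks benign in isolation; only the pooled, cross-platform view trips an alarm:

```python
# Hypothetical illustration: a $9,000 fraudulent flow split across three
# platforms. Each institution screens only its own transactions, so no
# single slice crosses the (invented) per-institution alert threshold.
ALERT_THRESHOLD = 5_000  # hypothetical anomaly threshold, USD

transfers = [
    {"platform": "bank_a",    "amount": 3_000},
    {"platform": "wallet_b",  "amount": 3_000},
    {"platform": "fintech_c", "amount": 3_000},
]

# Single-institution view: every platform evaluates its slice in isolation.
for t in transfers:
    flagged = t["amount"] >= ALERT_THRESHOLD
    print(f'{t["platform"]}: ${t["amount"]:,} -> flagged={flagged}')  # all False

# Network view: pooling the same data across platforms reveals the pattern.
total = sum(t["amount"] for t in transfers)
print(f"cross-platform total: ${total:,} -> flagged={total >= ALERT_THRESHOLD}")
```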
"This is, to put it bluntly, an arms race," says Pitts. "The fraudsters are deploying AI tools that give them new surfaces, new scale, and new cost reduction in how they commit fraud. If you are still relying on manual human-driven processes for preventing that fraud, then you have absolutely lost that arms race."
The data supports this assessment. Fifty-seven percent of financial services organizations and 66% of lending organizations in North America reported increased fraud levels in the past 12 months. Yet many remain reluctant to overhaul their defensive infrastructure.
"A lot of companies are still using traditional methods of verification because risk teams tend to prefer what they are comfortable with and used to," observes Danica Kleint, product marketing manager for fraud solutions at Plaid. "As fraud tactics become more advanced, risk teams need to layer in signals that are resilient to spoofing. Leveraging unique data sources that fraudsters haven't adapted to and are significantly harder to manipulate."
The Defense-in-Depth Approach
Some leading institutions have recognized the existential nature of this threat and adapted accordingly. JPMorgan Chase has been using large language models for payment validation screening since 2021, achieving a 15% to 20% reduction in account validation rejection rates while simultaneously reducing both fraud and false positives. Wells Fargo embedded AI and machine learning into its fraud defense strategy, layering ML models into authentication systems and deploying neural networks to identify suspicious patterns in customer accounts.
The critical insight driving these implementations: AI-enabled fraud requires AI-enabled defense, but implemented with strategic depth rather than as a simple replacement for existing systems.
Kleint explains the methodology: "It's about leveraging the data that we already have in a different way without adding additional burden to the consumer experience. We're not gathering any net new information, we're just analyzing what we already have in a different way. As you do those types of comparisons across many pieces of data, you start to get very effective prevention."
For example, many organizations already collect biometric data (a selfie taken on a smartphone) and demographic data like birth dates during account creation. AI can instantly cross-reference these data points to detect inconsistencies that would take human reviewers significantly longer to identify, if they caught them at all.
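Here's a minimal sketch of that kind of cross-check, with a stubbed age estimate standing in for a real biometric model and an invented tolerance that a risk team would tune:

```python
# Minimal sketch (not any vendor's API): cross-reference a biometric signal
# (an age estimate derived from the onboarding selfie) against demographic
# data already collected at account creation. The age-estimation model is
# assumed to exist; here its output is just a stubbed value.
from datetime import date

def declared_age(dob: date, today: date) -> int:
    """Age implied by the date of birth the applicant typed in."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def consistency_flag(selfie_age_estimate: float, dob: date,
                     tolerance_years: float = 10.0) -> bool:
    """Flag applications where the selfie and the stated DOB disagree badly.
    `tolerance_years` is a hypothetical threshold a risk team would tune."""
    gap = abs(selfie_age_estimate - declared_age(dob, date.today()))
    return gap > tolerance_years

# A synthetic identity built on a stolen 72-year-old's data, fronted by a
# 25-year-old's selfie, disagrees by decades and gets flagged for review.
print(consistency_flag(selfie_age_estimate=25.0, dob=date(1953, 4, 2)))  # True
```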
But individual institutional defenses, no matter how sophisticated, have inherent limitations. Fraudsters don't operate within the boundaries of a single platform. They access social media to identify victims, use telecommunications systems to initiate contact, leverage multiple payment platforms to obscure money flows, and exploit the fragmented nature of financial services to avoid detection.
The Network Defense
This reality has driven the creation of cross-platform fraud prevention networks. Plaid's Beacon consortium, for instance, shares real-time fraud insights across participating fintech companies and financial institutions, providing visibility into patterns that would be invisible to any single organization. When one institution identifies a fraudulent account or transaction, that intelligence immediately benefits every other network participant.
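In spirit, the mechanics look something like the toy sketch below. This is not Beacon's actual API or protocol, just the general shape: members report one-way fingerprints of identifiers tied to confirmed fraud, and every other member screens new applications against the shared set without the raw identifiers ever circulating in the clear:

```python
# Toy sketch of consortium-style fraud-signal sharing (not Beacon's real
# protocol). Hashing with a shared salt lets members match identifiers
# without passing them around in plaintext.
import hashlib

SHARED_SALT = b"consortium-demo-salt"  # hypothetical; real networks negotiate keys

def fingerprint(identifier: str) -> str:
    """One-way fingerprint of an identifier (email, device ID, account number)."""
    return hashlib.sha256(SHARED_SALT + identifier.lower().encode()).hexdigest()

class Consortium:
    def __init__(self):
        self._flagged: set[str] = set()

    def report_fraud(self, identifier: str) -> None:
        """Called by the member that confirmed the fraud."""
        self._flagged.add(fingerprint(identifier))

    def is_flagged(self, identifier: str) -> bool:
        """Called by every other member at onboarding or transaction time."""
        return fingerprint(identifier) in self._flagged

network = Consortium()
network.report_fraud("mule.account@example.com")       # bank A confirms a mule
print(network.is_flagged("Mule.Account@example.com"))  # fintech B catches it: True
```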
"Fraud is everyone's problem to solve," Pitts emphasizes. "It's a collective team sport that we, the financial ecosystem, need to engage in together. If you are not pursuing a network-based defense where you are sharing information with lots of different companies, you are going to have disproportionate levels of fraud because there are limits to what you can do individually."
The economic impact extends beyond direct losses to institutions and victims. Analysis shows that US productivity growth would have been 0.4% higher in 2023 without fraud losses. That difference might sound modest, but it translates to reduced inflationary pressure and slower price growth across the entire economy. Fraud isn't just a problem for banks and their customers. It's a macroeconomic drag that affects everyone.
The Public-Private Imperative
Private sector collaboration alone won't solve this crisis. The Aspen Institute's National Task Force for Fraud and Scam Prevention, launched in 2024, brings together representatives from government, law enforcement, private sector companies, and civil society organizations to develop a coordinated national strategy.
"We have representation from all of those sector actors at the table, and we're talking through what needs to be done to prevent fraud and scams from harming consumers," explains Kate Griffin, director of the task force. "Sharing information across these silos is a big piece of that puzzle."
Pitts identifies three critical policy changes needed to accelerate private sector anti-fraud efforts:
Amending the Patriot Act's information-sharing exemption to extend beyond financial institutions
Creating a centralized anti-fraud function within government, rather than the current fragmented approach across multiple agencies
Establishing clearer guidance on balancing universal access to banking services with the need to exclude bad actors
"Financial services is a trust and reputation business," he says. "If you erode that trust too much, it hurts everyone."
The Path Forward
Both AI-enabled fraud and the methods to combat it remain in nascent stages. The threat will continue morphing as criminals find new ways to circumvent defensive measures. Griffin frames the challenge realistically: "This work is not dragon-slaying. When you slay a dragon, the quest is over. Fraudsters are criminal actors that will keep innovating and trying to perpetrate crimes. We have to continue to evolve the fight."
The goal isn't total elimination. That's likely impossible. The goal is limiting damage, preventing victims, and making fraud sufficiently difficult and expensive that it becomes an unattractive business model for criminals.
The path forward requires financial institutions to abandon outdated defensive tools, embrace AI-enabled security layers, participate actively in cross-platform data-sharing networks, and work collaboratively with government agencies to shape effective policy responses.
When fraud losses are growing 20% to 25% annually, and criminals can convincingly impersonate banks, government agencies, and credit card companies with alarming ease, something has to give. The question isn't whether the financial services sector will adapt to this new reality. The question is whether it will adapt quickly enough.

The Fraudfather's take on the week's biggest scams, schemes, and financial felonies, with the insider perspective that cuts through the noise.
The Multi-Hour Medicare Trap: When Scammers Play Doctor
A Wisconsin woman we'll call Janet spent hours on the phone believing she'd lost her Medicare benefits. The callers were convincing. Professional. Detailed. They impersonated Medicare officials, pharmacists, even her doctors. And they had one goal: steal her Medicare number and bill the government for medical equipment she never needed.
"It sounded legitimate," Janet recalled. "A man came on. He said, 'I've worked for Medicare for 29 years. We have to update your record.' He just kept pushing that, and I gave in and gave that number."
The operation was theatrical. The fraudster gave detailed instructions about the medical equipment she'd be offered (back braces, knee braces, wrist braces) and told her to "just say yes to everything they say." When she questioned why she needed equipment for conditions she didn't have, the response was simple: cooperate or lose your benefits.
This elaborate scheme represents a growing threat across the country. Pennsylvania Attorney General Dave Sunday warned in September 2025 that fraudsters are sending durable medical equipment that was never prescribed or ordered to Medicare patients, then billing either the patient or Medicare. In one North Carolina case, criminals submitted over $100 million in false medical equipment claims in just four months. A separate Texas operation filed more than $359 million in fraudulent genetic testing claims.
The operational mechanics are straightforward but effective. Scammers offer older adults valuable medical equipment, persuade them to share their Medicare number, then use that information to file high-cost Medicare claims in the beneficiary's name. Back and knee braces are particularly popular targets because Medicare traditionally pays for them and reimbursement amounts haven't been reduced.
Janet eventually contacted Medicare directly, verified it was a scam, and obtained a new Medicare number. Days later, packages started arriving. She refused every delivery at her door. But the psychological damage was done. "So evil, because they're taking your Medicare number and who knows what they'll do with it," she said.
Wisconsin's Senior Medicare Patrol emphasizes critical facts: Medicare does not call you uninvited and ask for personal or private information. You will usually get a written statement in the mail before you get a phone call from a government agency. Medicare already has your information. They will never threaten to terminate your benefits. And they will never demand immediate payment or personal details over the phone.
If you receive suspicious calls or unsolicited medical equipment, refuse the packages and report immediately to Medicare at 1-800-633-4227 or through their online fraud reporting system. Check your Medicare statements regularly for claims you don't recognize. And remember: legitimate healthcare providers don't cold-call with offers of free equipment.
The most dangerous aspect of these scams isn't the financial fraud. It's the sophisticated social engineering that keeps victims on the phone for hours, believing compliance is their only path to maintaining healthcare coverage they've earned.
The Fraudfather combines a unique blend of experiences as a former Senior Special Agent, Supervisory Intelligence Operations Officer, and now a recovering Digital Identity & Cybersecurity Executive. He has dedicated his professional career to understanding and countering financial and digital threats.
This newsletter is for informational purposes only and promotes ethical and legal practices.



