Chirp

Stop Leaking Your Private Data to AI Companies

Real incidents where AI companies exposed user data — from chat histories to credit cards to confidential emails. Your words deserve better.

Published: February 25, 2026

Every time you speak into a cloud-based dictation tool, your words leave your device and land on someone else's server. That server might get hacked. The company might change its terms of service. A bug might expose your data to strangers. These are not hypotheticals — they have already happened, repeatedly, at the biggest AI companies in the world.

If you use voice-to-text for anything personal — medical notes, journal entries, legal documents, business strategy, private messages — you should know exactly what you are risking. Here are four widely reported incidents that show why sending your voice data to the cloud is a gamble you do not need to take.

Want to skip the horror stories? Chirp processes your voice entirely on your device using local AI. No internet, no servers, no cloud accounts. Your words never leave your machine. Download it free →

1. ChatGPT Allegedly Exposed Payment Info and Chat Histories to Strangers

On March 20, 2023, a bug in OpenAI's Redis caching layer reportedly caused ChatGPT to show one user's conversation titles to a completely different user. But the chat titles were allegedly the least of it. According to OpenAI's own incident report, approximately 1.2% of ChatGPT Plus subscribers who were active during a nine-hour window that day may have had their first and last names, email addresses, payment addresses, and the last four digits of their credit card numbers exposed to other users.

OpenAI took ChatGPT offline to patch the issue. CEO Sam Altman reportedly said he felt “awful” about it. Italy's data protection authority responded by temporarily banning ChatGPT entirely, and later fined OpenAI €15 million for alleged GDPR violations. (Infosecurity Magazine, Help Net Security)

The takeaway: Even if you trust a company with your data today, a single caching bug can put your name, email, and payment info in front of a stranger tomorrow, as this incident reportedly did.

2. Samsung Engineers Reportedly Fed Trade Secrets to ChatGPT

According to reports from TechCrunch, Bloomberg, and CNBC, within weeks of Samsung's semiconductor division allowing employees to use ChatGPT, three separate engineers allegedly leaked confidential data to OpenAI's servers:

  • One engineer reportedly pasted proprietary source code from Samsung's semiconductor measurement database to debug an error.
  • A second allegedly entered chip defect detection algorithms to get code optimization suggestions.
  • A third reportedly recorded an internal meeting, transcribed it, and pasted the full confidential transcript into ChatGPT to generate meeting minutes.

Under ChatGPT's default settings at the time, everything submitted could reportedly be used to train future models. Samsung could not retrieve the data from OpenAI's servers. The company subsequently banned ChatGPT entirely and began building an internal alternative.

The takeaway: Once your words hit a cloud AI service, you may lose control of them permanently. There is no “undo send” for data that has already been ingested into a training pipeline.

3. Zoom Quietly Claimed Rights to Train AI on All Your Calls

In March 2023, Zoom reportedly updated its Terms of Service to grant itself a “perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license” to all customer content — including video calls, audio recordings, chat messages, and shared files — for “machine learning, artificial intelligence, training, testing.”

The change flew under the radar for months until it went viral in August 2023, as reported by CBS News and Variety. At the time, Zoom reportedly had over 300 million daily meeting participants. Healthcare providers raised HIPAA concerns. Educators worried about student data. Entertainment professionals flagged intellectual property issues.

Zoom CEO Eric Yuan reportedly admitted it was “a mistake” and a “process failure.” Zoom updated the ToS, but the episode exposed how easily a company can claim ownership of your voice and video data with a quiet policy change.

The takeaway: Cloud services can change their terms at any time. Data you shared under one privacy policy can be reclassified and repurposed without your knowledge.

4. Microsoft Copilot Allegedly Read Confidential Emails for Weeks

In January 2026, a bug in Microsoft 365 Copilot Chat reportedly allowed the AI to read and summarize emails marked with sensitivity labels like “Confidential” — even when organizations had Data Loss Prevention policies explicitly configured to block automated access to that content.

According to reports from TechCrunch, TechRadar, and CX Today, the bug allegedly persisted for weeks before Microsoft began rolling out a fix. Copilot was reportedly ingesting emails from Sent Items and Drafts regardless of their confidentiality classification, and summarizing them for anyone who asked. Microsoft has not disclosed how many users or organizations were affected.

The takeaway: Even enterprise-grade AI tools with dedicated security teams can, according to these reports, bypass the very protections they promise to enforce.

The Pattern Is Clear

These reported incidents are not isolated edge cases. They point to a systemic risk in how cloud AI works. Every time you send data to a remote server, you are trusting that:

  • The company's infrastructure has zero bugs
  • Their caching and data isolation are flawless
  • Their terms of service will never change
  • Their employees will never access your data
  • They will never be hacked, subpoenaed, or acquired

That is a lot of trust to place in companies that have reportedly already stumbled on several of these fronts.

The Fix: Keep Your Data on Your Device

The simplest way to protect your voice data is to never send it anywhere. That is exactly how Chirp works.

Chirp runs a local AI model (whisper.cpp) directly on your Mac or PC. When you press a hotkey and speak, the audio is processed on your hardware and the transcription is pasted wherever your cursor is. The entire loop happens locally:

  • No internet connection required — works on planes, in basements, off-grid
  • No cloud accounts — nothing to sign up for, nothing to get breached
  • No server logs — your audio is never recorded, stored, or transmitted
  • No terms of service changes can retroactively claim your data

Your voice goes in, text comes out, and nothing ever leaves your machine. That is how voice-to-text should work.
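For the technically curious, the local loop above can be sketched using the open-source whisper.cpp command-line tool that Chirp builds on. This is an illustrative sketch, not Chirp's actual code: the `whisper-cli` binary name, model path, and flags are assumptions taken from the whisper.cpp project, and note that nothing in it opens a network connection.

```python
import subprocess
from pathlib import Path

# Illustrative sketch of a fully local transcription loop with the
# whisper.cpp CLI. Binary name, model path, and flags are assumptions
# from the whisper.cpp project, not Chirp's internals.

def build_whisper_command(wav_path: str,
                          model_path: str = "models/ggml-base.en.bin") -> list[str]:
    """Assemble the whisper.cpp invocation. Every argument is a local
    path or flag: no URL, no API key, no network endpoint anywhere."""
    return [
        "./whisper-cli",      # whisper.cpp's CLI binary, built locally
        "-m", model_path,     # ggml model file already on disk
        "-f", wav_path,       # the audio you just recorded
        "--output-txt",       # write <wav_path>.txt next to the input
        "--no-prints",        # suppress progress chatter
    ]

def transcribe_locally(wav_path: str) -> str:
    """Run the transcription on this machine and return the text."""
    subprocess.run(build_whisper_command(wav_path), check=True)
    # whisper.cpp writes its text output beside the input file
    return Path(wav_path + ".txt").read_text().strip()
```

Because the command contains only local file paths, the audio physically cannot leave the machine; that same property holds for any tool built on whisper.cpp.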

Stop Gambling with Your Privacy

The incidents above are just the ones that made the news. For every disclosed breach, there are bugs that were quietly patched, terms that were silently updated, and data access patterns that were never audited.

If you dictate anything you would not post publicly — patient notes, legal strategy, private thoughts, business plans — the safest move is to keep it off the cloud entirely.

Download Chirp and start dictating privately. It takes two minutes to set up and your first 28 transcriptions are free.

Try Chirp Free

28 free transcriptions included. No account required. 100% offline and private. Available for macOS and Windows.

Download Free

Then $9.99/year for unlimited transcriptions