Policy Roast: Three Lines That Just Made You a Volunteer
They did not ask. They updated.
Here is what happened while you were reading other emails.
Google, Meta, X, and a dozen smaller platforms quietly updated their terms of service. Same playbook: buried clause, vague scope, infinite license. The language says you agreed to let them train AI models on everything you posted, uploaded, or typed into their systems.
You did not opt in. You stayed logged in.
Three Lines That Should Scare You
Line 1: "We may use your content to develop and improve our services, including machine learning and artificial intelligence technologies."
Plain English: Everything you post becomes training data. Photos, messages, comments, drafts. If it touched their servers, it is theirs to feed into a model.
Why it matters in litigation: When your proprietary strategy deck, client list, or internal memo gets scraped because someone pasted it into a shared doc, this is Exhibit A. The clause does not distinguish between public posts and private workspaces. Your company's counsel will argue you consented. Plaintiff's counsel will argue you had no meaningful choice.
Line 2: "You grant us a worldwide, royalty-free, perpetual, and irrevocable license to use, reproduce, and create derivative works from your content."
Plain English: They own it forever. Even if you delete your account, the license survives. Even if you never clicked "I agree" on the update, continued use counts as acceptance.
Why it matters in litigation: This is how a former employee's Slack messages become the foundation of a competitor's AI tool. The "irrevocable" part means you cannot claw it back. The "derivative works" part means the output does not have to look like your input. When your trade secrets show up in a model's training corpus, this clause is the defense.
Line 3: "We are not responsible for how third parties use AI-generated content that may resemble or incorporate elements derived from user submissions."
Plain English: If the AI spits out something that looks like your work, that is your problem, not theirs.
Why it matters in litigation: This is the liability waiver. When a generative AI tool produces code that matches your proprietary algorithm, or marketing copy that mirrors your pitch deck, the platform claims no duty to prevent it. You will argue they had a duty to safeguard your content. They will point to this line and argue you assumed the risk.
What Plaintiff's Counsel Will Do
If I were building the complaint, I would:
- Subpoena the update logs. When was the policy changed? How was notice provided? Was there a pop-up, or did "continued use" count as acceptance?
- Circle the scope creep. Compare version 1.0 ("we use your content to provide the service") to version 3.0 ("we use your content to develop AI technologies"). That delta is the evidence of overreach.
- Ask this question in discovery: "Did the company conduct a privacy impact assessment before expanding the license to include AI training?" If the answer is no, that is Exhibit B. If the answer is yes, I want to see it.
The Memo Security Teams Are Not Writing
Most companies do not have a policy that says: "Do not paste proprietary information into third-party platforms that reserve AI training rights."
Most employees do not read terms of service updates.
Most legal teams do not audit the SaaS tools their developers use daily.
That gap is the exposure. When a breach happens, the question is not whether the data was exfiltrated. The question is whether it was licensed away first.
What to Do This Week
- Audit your SaaS stack. If your team uses platforms that updated their ToS in the last six months, read the AI training clauses. Assume the worst-case interpretation.
- Update your acceptable use policy. Add a line: "Do not upload proprietary, confidential, or customer data to any platform that claims a license to use that data for AI training."
- Train your people. One lunch-and-learn: "What happens when you paste code into ChatGPT?" The answer is: you just licensed it.
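The audit step above can be partially automated. Below is a minimal sketch of a keyword pass over exported ToS text; the phrase list is an assumption drawn from the three lines quoted earlier, and a hit means "read this clause with counsel," not "this platform trains on your data." It is illustrative, not legal analysis.

```python
import re

# Phrases that, in the clauses quoted above, tend to signal AI-training
# and broad-license language. Purely heuristic, not exhaustive.
RED_FLAGS = [
    r"machine learning",
    r"artificial intelligence",
    r"train(?:ing)?\s+(?:ai\s+)?models?",
    r"derivative works",
    r"perpetual",
    r"irrevocable",
    r"royalty-free",
]

def flag_clauses(tos_text: str) -> list[str]:
    """Return each sentence of a ToS excerpt containing a red-flag phrase."""
    sentences = re.split(r"(?<=[.!?])\s+", tos_text)
    return [
        s.strip()
        for s in sentences
        if any(re.search(p, s, re.IGNORECASE) for p in RED_FLAGS)
    ]

clause = (
    "You grant us a worldwide, royalty-free, perpetual, and irrevocable "
    "license to use, reproduce, and create derivative works from your "
    "content. We value your privacy."
)
for hit in flag_clauses(clause):
    print(hit)
```

Running this against the Line 2 language flags the license sentence and ignores the filler, which is the point: machines are good at finding the three lines, and humans still have to read them.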
If you want more policy roasts like this, subscribe. I read the fine print so you do not have to.
Question for you: Have you checked whether your company's terms of service include an AI training clause? If not, what are you waiting for?
Sources
- AI Legislative Update: March 6, 2026 - Transparency Coalition
- We Overhauled Our Terms of Service and Privacy Policy - Zed's Blog
- Google, Snap, Meta quietly changing privacy policies for AI training - Reddit/r/privacy
- X's new Terms of Service enforces AI training on all content - Reddit/r/privacy
- Data Privacy, Cybersecurity, AI developments shaping 2026 - Nixon Peabody LLP
- Comprehensive State Privacy Laws 2026: 20 States Have Laws - MultiState