Policy Roast: "Reasonable security" is doing a lot of work in California
California did what policymakers love to do: it shipped a compliance object that sounds precise, then hid the hard part in a word like "reasonable."
This week’s roast targets the CPPA’s cybersecurity audit rule and its companion rules on risk assessments and automated decision-making technology (ADMT). Not because audits are bad, but because vague standards plus penalties equal discovery bait.
Three lines that should scare you
1) "Reasonable" security measures
Quote: The cybersecurity audit rule "outlin[es] 'reasonable' security measures for personal information." (IAPP summary)
Plain English: The state is telling you there is a baseline. It is not just whatever you wrote in your policy.
Why it matters in litigation: "Reasonable" is not a shield. It is a measuring stick. If your controls do not look like the baseline the regulator just hinted at, plaintiff’s counsel will argue negligence per se in everything but name.
2) ADMT opt-outs when tech "replace[s] or substantially replace[s] human decision-making"
Quote: Opt-outs are required when ADMT is used in decisions that "replace or substantially replace human decision-making." (IAPP summary of CPPA regs)
Plain English: If a model meaningfully influences who gets hired, who gets credit, who gets priced, who gets flagged, or who gets throttled, you are now in opt-out territory.
Why it matters in litigation: If you cannot show real human review (who reviewed, what they saw, what they changed, and why), you will litigate intent with screenshots. Your decision record becomes the product.
3) USD200 per-incident fines for data brokers who did not register
Quote: "The fact you did not register doesn't get you off the hook for the USD200 per-incident fine." (Tom Kemp, via IAPP)
Plain English: You can be wrong twice: wrong because you failed to register, and wrong because you failed to process a deletion or opt-out through DROP (the Delete Request and Opt-out Platform).
Why it matters in litigation: Per-incident penalty language is catnip. It turns a boring compliance miss into a damages narrative. Even if a private right of action is not the path, regulators love a clean multiplier.
What I would highlight if I were plaintiff’s counsel
1) Every place your privacy policy says "we use reasonable security" without listing what that means.
2) Every place you say "a human reviews" without evidence of authority to override the system.
3) Every vendor statement that promises compliance but has no audit trail attached.
What to do this week (operator-grade)
1) Define "reasonable" for your org in writing. Map it to an existing control baseline (NIST CSF 2.0, CIS Controls, ISO/IEC 27001). Make it auditable.
2) Make human oversight real. For any significant ADMT-influenced decision, log the reviewer's identity, the inputs they saw, their override capability, and the final rationale.
3) If you are a data broker (or adjacent), inventory your trade names and sites. If you operate multiple brands, assume the registry will treat each as a separate point of accountability.
4) Run a 30-minute deletion drill: can you find, delete, and evidence deletion across your systems and vendors inside the timelines implied by DROP-style expectations?
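The decision-logging step above is concrete enough to sketch in code. Here is a minimal, hedged example of what an append-only decision record might look like; the field names (`reviewer_id`, `inputs_shown`, and so on) are illustrative assumptions, not a regulatory schema, and nothing here is the CPPA's required format.

```python
# Hedged sketch: a minimal decision-record log for ADMT-influenced
# decisions. Field names are illustrative assumptions, not any
# regulator's schema.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    reviewer_id: str            # who reviewed
    inputs_shown: list[str]     # what the reviewer actually saw
    model_recommendation: str   # what the ADMT suggested
    final_decision: str         # what was decided
    overridden: bool            # did the human change the outcome?
    rationale: str              # why
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord,
                 path: str = "decision_log.jsonl") -> None:
    """Append one JSON line per decision (append-only audit trail)."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a human override, with the rationale captured at the time.
rec = DecisionRecord(
    decision_id="APP-1042",
    reviewer_id="analyst.jane",
    inputs_shown=["credit_score", "income", "model_score"],
    model_recommendation="deny",
    final_decision="approve",
    overridden=True,
    rationale="Income verified manually; model used stale data.",
)
log_decision(rec)
```

The point is not the format; it is that each record answers, in order, exactly the questions plaintiff's counsel will ask: who reviewed, what they saw, whether they could (and did) override, and why.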
If you want more policy roasts like this, subscribe. One uncomfortable lesson at a time.
Sources
- New year, new rules: US state privacy requirements coming online as 2026 begins
- California adopts Cybersecurity Audit Rule, outlining 'reasonable' cybersecurity
- CPPA Board finalizes long-awaited ADMT, cyber audit, risk assessment rules
- California Privacy Protection Agency (CPPA)
- Indiana Consumer Data Protection: Consumer Bill of Rights (PDF)
- Attorney General Rayfield Releases One Year Report on Oregon Consumer Privacy Act