Explain This: New York’s RAISE Act (what it actually requires, and where teams will screw it up)

New York’s RAISE Act is not a vibes bill. It is an assignment of duty, and it will define what ‘reasonable’ AI governance looks like when something breaks.

New York just moved the AI compliance goalposts again.

The RAISE Act is an assignment of duty, not a vibes bill. If you build, deploy, or buy high-risk AI, you are going to inherit new documentation obligations and new plaintiff-friendly language about foreseeable harm.

What it is

The RAISE Act (the Responsible AI Safety and Education Act) is New York’s attempt to regulate frontier AI systems through a safety-and-governance frame.

It is built for two audiences at once: regulators who want enforceable obligations, and courts who want a clear yardstick for what was reasonable.

Why it matters

Every AI incident has two tracks.

Track one is technical: what did the model do.

Track two is legal: what did you know, what did you do about it, and what did you write down.

The RAISE Act hardens track two. It pushes organizations toward formal risk assessments, documented controls, and “you knew this could happen” evidence.

Where teams screw up

1) They treat this like a procurement checkbox.

If your vendor risk workflow ends at “SOC 2 attached,” you will not have the artifacts this law implicitly expects when something goes wrong.

2) They confuse model risk with data risk.

Data governance is necessary, but it is not the whole story. Frontier AI failures are often about emergent behavior, misuse, and downstream reliance. That is a different risk register.
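One way to see the difference is to put the two registers side by side. The sketch below is illustrative only; the categories are examples I picked, not a complete taxonomy and not language from the statute.

    # Illustrative contrast between the two registers; every entry is an example, not a checklist.
    data_risk_register = [
        "unauthorized access to training or customer data",
        "retention beyond policy",
        "re-identification of de-identified records",
    ]

    model_risk_register = [
        "emergent behavior not present during evaluation",
        "deliberate misuse by end users",
        "downstream over-reliance on confident wrong output",
    ]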

3) They assume “we did not intend harm” is a defense.

In litigation, intent is rarely the point. Foreseeability is.

If credible experts are warning about a class of harm and your governance docs are empty, that becomes Exhibit A.

What “reasonable” looks like

Reasonable looks like being able to answer five questions without improvising.

  • What AI capability did we deploy, and where.
  • What harms did we anticipate (not just privacy).
  • What controls did we put in place to reduce those harms.
  • What monitoring tells us the controls still work.
  • What triggers a pause, rollback, or customer notice.

If you cannot produce those answers in writing, you are gambling on memory. Courts do not reward that.
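If it helps to make that concrete, here is one minimal way to hold those five answers as a structured, version-controlled record. This is a sketch under my own assumptions, not a template the Act prescribes; every field name and value is illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class AIDeploymentRecord:
        # 1) What AI capability did we deploy, and where
        capability: str
        deployment_surface: str
        # 2) What harms did we anticipate (not just privacy)
        anticipated_harms: list[str] = field(default_factory=list)
        # 3) What controls did we put in place to reduce those harms
        controls: list[str] = field(default_factory=list)
        # 4) What monitoring tells us the controls still work
        monitoring: list[str] = field(default_factory=list)
        # 5) What triggers a pause, rollback, or customer notice
        escalation_triggers: list[str] = field(default_factory=list)

    # An invented example entry, for shape only.
    record = AIDeploymentRecord(
        capability="LLM-assisted support drafting",
        deployment_surface="customer support console",
        anticipated_harms=["confident wrong answers sent to customers",
                           "prompt injection via pasted customer content"],
        controls=["human review before send", "input filtering on pasted content"],
        monitoring=["weekly sampled answer audits", "override-rate dashboard"],
        escalation_triggers=["override rate above agreed threshold",
                             "any incident touching regulated data"],
    )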

What to do this week

1) Inventory “frontier adjacent” AI.

Even if you do not train models, you still deploy them through platforms, copilots, and embedded features. List what you are using, and who owns it.
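The inventory does not need tooling to start; a shared table works. If you want it in code from day one, here is a minimal sketch in Python, with every entry invented for illustration.

    # Minimal inventory sketch: what is in use, through what channel, and who owns it.
    ai_inventory = [
        {"system": "embedded copilot in the CRM", "channel": "vendor feature",
         "owner": "sales ops", "high_risk": False},
        {"system": "hosted LLM API for document drafting", "channel": "platform API",
         "owner": "legal ops", "high_risk": True},
    ]

    # Surface the entries that need the two-page brief described in the next step.
    for entry in ai_inventory:
        if entry["high_risk"]:
            print(f'{entry["system"]} (owner: {entry["owner"]}) needs a signed brief')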

2) Create the two-page artifact.

For each high-risk AI use case, write a two-page brief: intended use, excluded use, top harms, top controls, and escalation path. Make it boring and signed.

3) Add an incident clause to your AI governance.

Define what counts as an AI incident, who is paged, and what evidence you preserve. Model behavior logs are evidence.
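“Model behavior logs are evidence” implies logging individual model interactions, not just infrastructure metrics. A minimal sketch, assuming a Python service that wraps each model call; the field names and flag values are placeholders, not a standard.

    import json
    import logging
    import time

    logger = logging.getLogger("ai_incident_evidence")

    def log_model_interaction(request_id: str, prompt: str, output: str, flags: list[str]) -> None:
        # Preserve what the model was asked and what it answered, with a timestamp,
        # so an incident review does not depend on anyone's memory.
        logger.info(json.dumps({
            "request_id": request_id,
            "timestamp": time.time(),
            "prompt": prompt,    # redact or hash if it contains personal data
            "output": output,
            "flags": flags,      # e.g. ["guardrail_triggered", "human_override"]
        }))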

Subscribe if you want more of the legal side of engineering work.

What is the AI decision your org is making right now that will look indefensible on a deposition transcript?
