Policy Roast: The FTC's AI Enforcement Unfairness Doctrine Is Dangerously Vague

When 'unfair AI practices' means whatever the FTC decides it means this week, compliance becomes a moving target.

The FTC has a new favorite weapon for AI enforcement: the unfairness doctrine. The problem? It's so vague that companies can't tell whether they're compliant until they're already being sued.

The Unfairness Standard Is Deliberately Ambiguous

Under Section 5 of the FTC Act, the commission can pursue "unfair" practices. The statutory three-part test, codified at 15 U.S.C. § 45(n) (the practice causes substantial injury to consumers, the injury is not reasonably avoidable, and it is not outweighed by countervailing benefits), sounds objective until you notice that every prong turns on the commission's own judgment.
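
To see how little that test actually constrains, here's a minimal sketch of it as a decision procedure in Python. Everything in it is hypothetical: the statute names the three prongs but defines none of them, so the predicates are whatever the enforcer plugs in.

```python
from typing import Callable

# Hypothetical stand-in: each prong of the test is just a yes/no judgment
# about a practice. The statute supplies no definition for any of them.
Predicate = Callable[[str], bool]

def unfairness_test(
    practice: str,
    substantial_injury: Predicate,
    reasonably_avoidable: Predicate,
    outweighed_by_benefits: Predicate,
) -> bool:
    """Sketch of the 15 U.S.C. § 45(n) three-part test.

    A practice is "unfair" when it causes substantial injury that
    consumers cannot reasonably avoid and that is not outweighed by
    countervailing benefits. The caller supplies all three predicates,
    which is exactly the problem: the statute doesn't.
    """
    return (
        substantial_injury(practice)
        and not reasonably_avoidable(practice)
        and not outweighed_by_benefits(practice)
    )

# The same hiring algorithm is lawful or unlawful depending entirely
# on which judgments the enforcer plugs in.
lenient = unfairness_test(
    "hiring-algorithm",
    substantial_injury=lambda p: False,     # "no substantial injury here"
    reasonably_avoidable=lambda p: True,    # "applicants could opt out"
    outweighed_by_benefits=lambda p: True,  # "accuracy gains dominate"
)
strict = unfairness_test(
    "hiring-algorithm",
    substantial_injury=lambda p: True,      # "disparate outcomes are injury"
    reasonably_avoidable=lambda p: False,   # "applicants can't avoid it"
    outweighed_by_benefits=lambda p: False, # "no benefit excuses it"
)
print(lenient, strict)  # False True -- identical facts, opposite verdicts
```

Identical facts, opposite verdicts, and nothing in the statute says which set of predicates is the right one.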

For AI systems, this becomes absurd. Is it "unfair" to deploy a hiring algorithm whose outputs correlate with protected characteristics, even if it's more accurate than human reviewers? The FTC's enforcement record says yes. Is it "unfair" to use customer data for model training when your privacy policy permits "service improvement"? Sometimes, depending on how the FTC feels that day.

The commission has brought enforcement actions against AI-powered hiring tools, algorithmic pricing systems, and personalization engines using the same legal reasoning each time: "We think this is unfair, therefore it is." No rulemaking. No advance guidance. Just retroactive enforcement based on vibes.

The Practical Impossibility of Compliance

Here's the compliance trap: the FTC won't tell you what "unfair AI" means until after they've caught you doing it. There are no safe harbors. No pre-approval mechanisms. No binding guidance that protects you if you follow it.

This makes risk management impossible. Legal teams can't price liability when the standard shifts case by case. Engineering teams can't build compliant systems when the requirements are revealed only after the fact. The only winning move is not to innovate, which is probably the point.

Companies are stuck choosing between three bad options:

  1. Build conservative AI systems that underperform (and lose to competitors who take more risk)
  2. Build aggressive AI systems and pray the FTC doesn't notice (gambling on enforcement capacity)
  3. Build aggressive AI systems and budget for settlements (treating FTC fines as a cost of doing business)

None of these options serves consumers or competition. They just create regulatory uncertainty that advantages incumbents who can afford to fight the FTC in court.
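
To see why the choice is a coin flip, here's a back-of-the-envelope sketch of the expected-cost math behind those three options. Every number is invented for illustration; the decisive input, the probability of enforcement, is precisely the one no firm can estimate.

```python
# Back-of-the-envelope risk calculus for the options above. All figures
# are invented (think millions of dollars); options 2 and 3 are the same
# bet with different accounting, so they share one row.

def expected_cost(lost_revenue: float, p_enforcement: float,
                  settlement: float) -> float:
    """Forgone revenue plus the expected settlement if caught."""
    return lost_revenue + p_enforcement * settlement

CONSERVATIVE = dict(lost_revenue=10.0, settlement=0.0)   # option 1
AGGRESSIVE   = dict(lost_revenue=0.0,  settlement=25.0)  # options 2 and 3

for p in (0.1, 0.4, 0.7):  # nobody outside the commission knows this value
    c = expected_cost(p_enforcement=p, **CONSERVATIVE)
    a = expected_cost(p_enforcement=p, **AGGRESSIVE)
    winner = "aggressive" if a < c else "conservative"
    print(f"p(enforcement)={p:.1f}: conservative={c:5.1f}  "
          f"aggressive={a:5.1f}  -> {winner}")
```

In this toy model the rational strategy flips somewhere around p = 0.4, and the FTC alone controls p. A compliance regime where the optimal behavior depends on an unobservable enforcement whim isn't a compliance regime at all.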

What Should Exist Instead

If the FTC actually wanted compliance, it would issue clear, binding guidance on what makes an AI system "unfair" before pursuing enforcement. It would create safe-harbor provisions for companies that follow published best practices. It would distinguish between AI that harms consumers and AI that merely produces outcomes the commission dislikes politically.

But clarity isn't the goal. The unfairness doctrine is useful precisely because it's vague enough to reach any AI system the FTC decides to target, without the constraints of formal rulemaking or new legislative authority.

Until that changes, "AI compliance" will remain an expensive guessing game in which the FTC holds all the cards and changes the rules whenever it likes. Good luck building anything innovative in that environment.
