Can AI flag non-compete clauses that may be unenforceable under the FTC rule and state laws automatically?

Jan 13, 2026

Non-compete agreements are getting a lot of heat right now. Between the FTC’s 2024 rule and a maze of state laws, it’s hard to know what’s safe and what’s going to cause headaches later.

So, can AI automatically spot non-compete language (and the sneaky de facto versions) that probably won’t hold up? Short answer: yes—when it combines clause detection, location-aware rules, and clear explanations. Let’s walk through how that looks in real life.

Here’s what you’ll find below:

  • An easy rundown of the FTC rule and the biggest state trends
  • What typically makes a non-compete—or an overbroad NDA, no-service term, or training repayment provision (TRAP)—fall apart
  • How AI actually flags risk (NLP + policy rules + scoring) with examples
  • The data you’ll want handy for better accuracy and faster review
  • Tricky scenarios like remote employees, contractors, and sale-of-business covenants
  • How this works in ContractAnalyze, plus audits and notice workflows
  • What accuracy and ROI buyers of automated contract review software can expect once it’s up and running

If you’re weighing an enterprise-grade, state-aware non-compete checker for FTC-era compliance, this guide will help you cut review time, reduce risk, and get to consistent answers faster.

Key points

  • AI can flag likely unenforceable non-competes and de facto restraints (like overbroad NDAs, no-service terms, and TRAPs) by pairing clause detection with a jurisdiction-aware rules engine tied to the FTC framework and state law. Expect big cuts in manual review.
  • Accuracy depends on a few basics: governing law and forum, worker location, role and pay, and any sale-of-business context. Good systems catch choice-of-law conflicts, default to stricter rules when data is missing, and handle remote workers and contractors.
  • Explainability is everything: show the exact sentence, the rule it triggers (including blue-pencil vs red-pencil), and offer quick fixes like tighter non-solicits, shorter terms, or garden leave. ContractAnalyze also supports portfolio audits, redlines, and notice automation.
  • ROI shows up fast: fewer unlawful clauses, less outside counsel spend, quicker offers, and cleaner diligence. Ongoing policy updates and re-checks keep your contracts current as wage thresholds and statutes shift.

Executive summary for decision-makers

If you’re racing to clean up restrictive covenants, you’re in good company. With the FTC rule shaking things up and states moving in different directions, AI non-compete clause detection helps you find the real problems fast and move the easy ones out of the way.

Teams typically see a big drop in full legal reviews once high-risk clauses get flagged with reasons and ready-to-use alternatives. The best setups catch obvious non-competes and the disguised stuff buried in NDAs, no-service language, or training repayment terms. Then they line up what matters—duration, geography, worker type, pay band—against current state and federal rules.

Think of the mix you probably have: California bans most post-employment non-competes; Minnesota did too in 2023 (sale-of-business aside). Washington ties enforceability to wage thresholds. Massachusetts wants garden leave or equivalent consideration. It’s a lot, and it changes.

One big win many teams overlook: fix choice-of-law and forum clauses that undercut your carefully drafted restrictions. Get the “plumbing” right, and your risk drops fast.

The legal backdrop in plain English

Quick status check. In April 2024, the FTC adopted a nationwide rule banning most non-competes: new agreements outright, and existing ones except for senior executives. It also called out “de facto non-competes” (like overly broad NDAs or punitive training repayment). In August 2024, a federal court in Texas vacated the rule nationwide; appeals are in motion. In practice, treat the FTC framework as a forward-looking risk lens, and work closely with counsel while courts sort it out.

Day-to-day, state law still decides most outcomes:

  • California Business & Professions Code §16600 voids most employee non-competes (see Edwards v. Arthur Andersen, 2008). Related statutes also limit out-of-state law for California workers.
  • Minnesota banned most non-competes starting July 1, 2023, with sale-of-business exceptions.
  • Washington’s RCW 49.62 ties enforceability to wage thresholds that adjust yearly and includes contractors.
  • Illinois’ Freedom to Work Act (2022) sets wage floors and notice rules for non-competes.
  • Massachusetts requires garden leave (often 50% pay) or agreed consideration, plus notice.
  • D.C. allows non-competes for highly compensated workers subject to notices.

Also relevant: blue-pencil vs red-pencil doctrines. Some states let courts narrow overbroad terms; others toss them entirely. Your AI should know which applies before suggesting edits. And yes—keep counsel looped in; this area moves.

What makes a non-compete likely unenforceable

Courts look at what a clause does, not what it’s called. Common problems include:

  • Scope that’s too broad: language like “in any capacity,” nationwide coverage, or bans that don’t match the person’s actual competitive role.
  • Duration that’s too long: multi-year limits for non-executives are a red flag; many states expect 6–12 months at most.
  • Worker and pay factors: wage thresholds in states like Washington and Illinois; limits for non-exempt or lower-wage workers.
  • Process gaps: missing advance notice or lack of extra consideration for mid-employment covenants.
  • Choice-of-law/forum issues: e.g., California Labor Code §925 restricts forcing out-of-state law on California-based employees.
  • De facto non-competes: an NDA that blocks use of general skills or public info, or a no-service term that bans serving any customer, can function like a non-compete.

One practical move: replace risky non-competes with narrow customer non-solicits tied to accounts the worker actually touched, with clear activity limits. You protect the business and lower the chance of a fight later.

How AI flags risk: the core architecture

Here’s the basic flow. First, models find the right families of language—non-competes, customer and employee non-solicits, no-service terms, NDA breadth, TRAPs, moonlighting, and the boilerplate like governing law and forum. Then they pull facts you care about: duration, geography, scope, worker type, pay references, triggers, and carve-outs.

Next comes the policy brain. It applies state-by-state rules and the FTC framework, checks blue-pencil vs red-pencil, and lines everything up against wage thresholds and notice/consideration rules. You get a risk score with reasons you can point to.

Example: “In any capacity,” “two years,” and “United States” for a non-executive will light up as High risk in a lot of places. And if an NDA blocks use of general skills or public info, expect a de facto non-compete warning with an edit suggestion.
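To make the policy-brain idea concrete, here is a minimal sketch of a jurisdiction-aware risk check. The rule table, state list, field names, and thresholds are all illustrative assumptions, not ContractAnalyze’s actual engine; the facts dictionary stands in for what the extraction step would produce.

```python
# Hypothetical, simplified rule tables -- real systems track far more nuance.
NON_EXEC_MAX_MONTHS = {"WA": 18, "IL": 24, "default": 12}  # illustrative caps
BAN_STATES = {"CA", "MN", "OK", "ND"}  # states that void most employee non-competes

def evaluate_noncompete(facts: dict) -> dict:
    """Return a risk tier plus the reasons that fired."""
    reasons = []
    state = facts.get("worker_state", "default")
    if state in BAN_STATES:
        reasons.append(f"{state} voids most post-employment non-competes")
    cap = NON_EXEC_MAX_MONTHS.get(state, NON_EXEC_MAX_MONTHS["default"])
    if not facts.get("is_executive") and facts.get("duration_months", 0) > cap:
        reasons.append(f"duration exceeds {cap}-month norm for non-executives")
    if facts.get("geography") == "nationwide":
        reasons.append("nationwide scope rarely matches a legitimate interest")
    if "in any capacity" in facts.get("scope_text", "").lower():
        reasons.append("'in any capacity' scope is overbroad")
    tier = "High" if len(reasons) >= 2 else ("Medium" if reasons else "Low")
    return {"risk": tier, "reasons": reasons}
```

The point is the shape of the output: a tier plus human-readable reasons, so every flag comes with receipts rather than a bare score.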

Data the system needs for high accuracy

You don’t need perfect data to start, but a few fields go a long way:

  • Contract details: governing law, forum selection, addresses in headers or signature blocks, effective date, any related exhibits (equity docs can signal seniority).
  • Worker info: location, role, exempt status, compensation band, executive designation. These feed wage threshold checks and worker coverage rules.
  • Transaction context: signs of a sale-of-business (purchase agreements, ownership percentages).
  • Repository context: where the file came from (offer letter vs separation) and known templates.

Example: Washington’s rules shift with annual wage updates. If an employee moved to Washington after signing, enforceability changes. Good systems pick up location clues from HRIS or a recent address block and ask you to confirm before finalizing.

When information is missing, smarter tools don’t guess. They default to the stricter rule, flag the uncertainty, and ask for one piece of data—like pay band or location—to lock in the result.
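The “default to the stricter rule” behavior can be sketched in a few lines. Field names and fallback values here are assumptions for illustration; a real system would pull confirmations from HRIS or a reviewer prompt.

```python
REQUIRED = ("worker_state", "pay_band", "is_executive")

def resolve_with_strict_default(facts: dict) -> dict:
    """Fill missing facts with the most restrictive assumption and flag them."""
    missing = [f for f in REQUIRED if facts.get(f) is None]
    resolved = dict(facts)
    if "worker_state" in missing:
        resolved["worker_state"] = "CA"           # assume the strictest plausible state
    if "pay_band" in missing:
        resolved["pay_band"] = "below_threshold"  # assume wage thresholds are NOT met
    if "is_executive" in missing:
        resolved["is_executive"] = False          # non-executives get the tighter caps
    resolved["needs_confirmation"] = missing      # ask the reviewer for exactly these
    return resolved
```

The key design choice: the tool never silently guesses in the permissive direction, and it asks for only the specific fields it lacked.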

Handling edge cases and complexity

This is where the payoff shows up:

  • Overbroad NDAs: language that bans using “any information related to customers, pricing, or methods,” including public data, often chills lawful work. That’s a de facto non-compete risk.
  • Contractors: many states protect “workers,” not just employees. Misclassification won’t fix an overbroad restraint.
  • Remote teams: a Minnesota-based remote hire means Minnesota rules, even if HQ is in another state. Choice-of-law might not rescue you.
  • TRAPs: training repayment that exceeds real costs or looks like a penalty invites trouble under several regimes, including FTC analysis.
  • Sale-of-business: covenants tied to a real sale with meaningful ownership transfer get different treatment; the system should detect that context.

Watch for “no-service” language that bans serving any customer or prospect. Narrow it to active customers the worker had material contact with in the past year, and in many places you’ve turned a High risk into a Low.

Also keep an eye on blanket moonlighting bans. Without a legitimate reason—conflict, confidentiality, safety—those can function like shadow non-competes.

Explainability and human-in-the-loop review

Lawyers don’t want a black box; they want receipts. Each flag should show the exact sentence, the facts pulled out, and the rule being triggered. That way counsel can agree, tweak, or reject without spinning cycles.

Example: For a Massachusetts employee non-compete with no garden leave, the tool calls out the missing consideration and offers two paths: add 50% pay during the restricted period or swap in a narrow customer non-solicit. Much easier to review when the fix is right there.

Two habits help teams move faster:

  • Capture decisions: if counsel approves a close call (say, a tight non-solicit), save that logic as a reusable policy so you don’t debate it again.
  • Use confidence thresholds: auto-clear Low risk with full logs, and route Medium/High to humans. Over time, measured overrides train the system to your comfort level.
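The confidence-threshold habit boils down to a routing rule like the sketch below. The threshold and record shapes are assumptions; the point is that auto-clears are always logged while everything else lands in a human queue.

```python
AUTO_CLEAR_MIN_CONFIDENCE = 0.90  # illustrative; tune to your override data

def route(flag: dict, audit_log: list, review_queue: list) -> str:
    """flag = {"clause_id": ..., "risk": "Low|Medium|High", "confidence": 0..1}"""
    if flag["risk"] == "Low" and flag["confidence"] >= AUTO_CLEAR_MIN_CONFIDENCE:
        audit_log.append({"clause_id": flag["clause_id"], "action": "auto-cleared",
                          "confidence": flag["confidence"]})
        return "auto-cleared"
    review_queue.append(flag)  # Medium/High, or low-confidence Low, go to counsel
    return "human-review"
```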

Implementation with ContractAnalyze

ContractAnalyze puts this into daily use without piling work on your team. Pre-trained models find the clauses that matter—non-competes, non-solicits, no-service, NDA breadth, TRAPs, moonlighting—and the “plumbing” like governing law and forum. State-by-state packs and an FTC layer handle enforceability now and re-check when thresholds or cases change.

You get a side-by-side view: clause text, the facts we pulled, the rule that fired, and suggested alternatives you can drop into redlines. In practice, you can:

  • Scan your portfolio and see High/Medium risks by state, worker type, and contract stage.
  • Run non-compete audits and, where required, send employee notices with tracking (think California AB 1076/SB 699).
  • Add guardrails to templates and your CLM so business users see clear green/red signals while drafting.
  • Apply playbooks: switch to a narrow non-solicit, add garden leave, trim duration/geography, or fix choice-of-law.

It’s built with governance in mind. Policy versioning and approvals turn counsel’s guidance into rules the system can reuse, so you’re improving with every decision.

Security, privacy, and deployment options

You’ll be handling sensitive employment and pay data, so security needs to be rock solid. ContractAnalyze supports SSO/SAML, role-based access, least-privilege, and encryption in transit and at rest, with customer-managed keys available. If you need it, choose data residency, private cloud, or on-prem.

By default, your data doesn’t train shared models. If you opt into fine-tuning, it’s isolated to your tenant. Two privacy practices clients lean on:

  • Clause-only processing for certain workflows—evaluate restrictive covenants without pulling in the whole contract.
  • Automatic redaction at ingestion (names, SSNs, addresses) while still supporting choice-of-law and forum analysis with normalized location fields.
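As a flavor of redaction-at-ingestion, here is a sketch covering one easy identifier class (SSNs) while keeping a normalized location field for the rules engine. Names and street addresses need NER in practice; this shows only the pattern-based piece, with illustrative field names.

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_and_normalize(text: str, worker_state: str) -> dict:
    """Strip SSNs from clause text but preserve a clean state code for analysis."""
    return {
        "text": SSN_RE.sub("[REDACTED-SSN]", text),
        "worker_state": worker_state.strip().upper(),  # normalized for jurisdiction rules
    }
```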

Everything’s auditable: policy versions, what got flagged, who approved, and when. That helps with internal audits and regulator questions.

Step-by-step walkthrough: from upload to remediation

Here’s how it usually goes:

  1. Ingest: connect your CLM/HRIS/DMS or drop in files. The system recognizes employment, contractor, NDA, offer, separation, and M&A docs.
  2. Detect: find non-competes, non-solicits, no-service, NDA breadth, TRAPs, moonlighting, plus governing law and forum.
  3. Extract: pull duration, geography, scope, worker type, pay band, triggers, and carve-outs into structured fields.
  4. Map jurisdiction: read governing law and forum; infer worker location from metadata and ask for a quick confirmation if unclear.
  5. Evaluate: apply FTC framework and state rules, including wage thresholds, blue-pencil vs red-pencil, and notice/consideration.
  6. Prioritize: queue High/Medium risks for review; auto-clear Low with explanations.
  7. Remediate: use one-click redlines and clause libraries to replace or narrow language.
  8. Notify: where laws require it, generate and track employee notices and amendments to completion.

Example: A Minnesota-based remote contractor with a 24-month, nationwide no-service clause gets flagged as a de facto non-compete under Minnesota’s ban. The tool suggests a 6–12 month customer non-solicit limited to accounts with prior material contact.
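The eight steps above can be sketched as a simple stage pipeline. Each toy stage here is a stand-in (a real detect or evaluate stage is a model or rules service, not a one-liner), and the field values mirror the Minnesota example.

```python
def run_pipeline(doc: dict, stages: list) -> dict:
    """Each stage takes and returns the evolving document record."""
    for stage in stages:
        doc = stage(doc)
    return doc

# Toy stages standing in for detect -> extract -> map jurisdiction -> evaluate:
detect   = lambda d: {**d, "clauses": ["no-service"]}
extract  = lambda d: {**d, "duration_months": 24, "geography": "nationwide"}
map_jur  = lambda d: {**d, "worker_state": "MN"}
evaluate = lambda d: {**d, "risk": "High" if d["worker_state"] == "MN" else "Low"}
```

One record flowing through a fixed stage order is what makes every result reproducible and auditable step by step.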

Accuracy expectations and measurement

After a short calibration on your templates, here’s what teams usually see:

  • Detection: standard non-competes and non-solicits are easy to find; the hidden stuff buried in “Confidentiality” improves fast with a few examples.
  • Classification: High-risk precision in the mid-to-high 80s when location and pay data are available. Many false positives disappear once missing metadata is filled in.
  • Drift control: a legal content service updates thresholds and rules, then re-checks your portfolio so accuracy doesn’t quietly slide.

To measure impact, run a small A/B by state and worker type. Track legal hours saved per 100 agreements, the share of High-risk clauses that get fixed or replaced, and counsel override rate. Over time, you should see fewer High-risk flags because your templates get better—exactly what you want.
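The three metrics above are easy to compute from per-agreement review records. The record shape here is a hypothetical one for illustration.

```python
def review_metrics(records: list) -> dict:
    """records: [{"minutes": float, "risk": str, "remediated": bool, "overridden": bool}]"""
    n = len(records)
    high = [r for r in records if r["risk"] == "High"]
    return {
        "legal_hours_per_100": round(sum(r["minutes"] for r in records) / 60 / n * 100, 1),
        "high_risk_fix_rate": round(sum(r["remediated"] for r in high) / len(high), 2) if high else None,
        "override_rate": round(sum(r["overridden"] for r in records) / n, 2),
    }
```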

Common pitfalls and how to avoid them

  • Judging by headings: overbroad restraints hide inside “Confidentiality” or “Non-Solicitation.” Always check the function.
  • Old wage thresholds: states like Washington, D.C., and Illinois update numbers every year. Automate updates and re-checks.
  • Skipping choice-of-law/forum checks: California Labor Code §925 and similar laws can void out-of-state picks for local workers.
  • Over-flagging narrow non-solicits: focus on active customers and recent contacts to keep low-risk terms green.
  • Missing process details: mid-employment agreements often need extra consideration or notice (Massachusetts, Illinois).
  • Forgetting contractors and interns: many rules cover “workers,” not just employees.
  • Remote work blind spots: execution location isn’t always where the work happens. Sync with HRIS to keep location current.

One line to retire: “anywhere the Company does business.” As you grow, that turns into a sprawling restraint. Use specific territories, counties, or customer lists tied to the role.

ROI and business case

Here’s how buyers usually justify it:

  • Cost: big reduction in review hours for restrictive covenants, which cuts outside counsel spend and improves internal SLAs.
  • Risk: fewer unlawful clauses out in the wild, better recruiting posture in ban states, and lower chance of claims or regulator attention.
  • Speed-to-hire: cleaner templates and guardrails mean faster offers and fewer back-and-forth edits.

Quick math: 3,000 agreements a year at 45 minutes each is a lot of time. If you drop that to 10 minutes on average, you save roughly 1,750 hours. At $200/hour blended legal cost, that’s around $350k a year—before counting disputes you never have.
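That quick math generalizes into a back-of-envelope calculator you can plug your own volumes into:

```python
def annual_savings(agreements: int, minutes_before: float, minutes_after: float,
                   hourly_rate: float) -> tuple:
    """Hours and dollars saved per year from faster covenant review."""
    hours_saved = agreements * (minutes_before - minutes_after) / 60
    return hours_saved, hours_saved * hourly_rate

hours, dollars = annual_savings(3_000, 45, 10, 200)
# 3,000 agreements x 35 minutes saved = 1,750 hours; at $200/hour, $350,000
```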

Bonus: tidy portfolios get better treatment in diligence. Buyers discount for messy covenants and shaky choice-of-law. A dashboard showing exposure by state, worker type, and contract stage turns a liability into something you can be proud of.

Buyer’s checklist and evaluation criteria

When you demo tools, check for:

  • Clause coverage: non-compete, customer/employee non-solicit, no-service, NDA breadth, TRAPs, moonlighting, and the boilerplate (choice-of-law/forum).
  • Jurisdiction engine: current state packs, FTC layer, blue-pencil vs red-pencil, wage thresholds, and auto re-checks.
  • Config: your counsel can adjust rules, risk tiers, and playbooks without filing tickets.
  • Explainability: sentence-level highlights, extracted facts, rule citations, and one-click alternatives.
  • Workflow fit: CLM/HRIS/DMS integrations, intake gatekeeping, drafting guidance, bulk remediation, and notices.
  • Security: SOC 2 posture, SSO/SAML, private cloud/on-prem, zero-train by default, per-tenant fine-tuning.
  • Proof-of-value: short pilot, baseline metrics, and a clear path to rollout.

One good test: throw it a contract with Delaware governing law and a California employee. The best systems raise the conflict, apply the stricter rule by default, and ask for one confirmation to finish the call.
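That Delaware-law / California-employee test reduces to a conflict check like this sketch. The protective-state table and field names are illustrative assumptions; the real analysis turns on statutes like Cal. Labor Code §925 and their carve-outs.

```python
PROTECTIVE_STATES = {"CA"}  # states that can override an out-of-state choice of law

def effective_state(governing_law: str, worker_state: str) -> dict:
    """Raise the conflict, default to the stricter rule, ask for one confirmation."""
    conflict = worker_state in PROTECTIVE_STATES and governing_law != worker_state
    return {
        "conflict": conflict,
        # apply the worker-protective state's rules pending confirmation
        "apply_rules_of": worker_state if conflict else governing_law,
        "confirm": ["worker_state"] if conflict else [],
    }
```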

FAQs

Does the FTC rule wipe out all existing non-competes?
As of late 2024, a federal court vacated the FTC’s rule nationwide, and appeals are pending. Even so, many states already restrict or ban non-competes. Stay close to counsel and keep policies current.

Are customer non-solicits and NDAs still okay?
Often yes—if they’re narrow. Target customers the worker had real contact with, and keep NDAs focused on true confidential information. If an NDA blocks using general skills or public info, it may act like a non-compete.

How do choice-of-law clauses affect enforceability?
They can make or break it. States like California limit employers from forcing out-of-state law or venue on local workers. Good tools flag mismatches and recommend fixes.

What about sale-of-business non-competes?
Those are usually treated differently if tied to a genuine sale with meaningful ownership transfer. The system should spot that context before calling it risky.

How often do laws and thresholds update?
Wage thresholds adjust yearly in several places, and statutes or cases can change anytime. Your rules should auto-update and trigger re-checks.

Next steps

  • Run a quick exposure audit on a representative set of contracts. You’ll see High/Medium risks by state, worker type, and clause family.
  • Align with counsel on policy settings: wage thresholds, acceptable durations, go-to fallback clauses.
  • Roll out in phases: block risky language at intake, add drafting guidance to templates/CLM, and tackle legacy remediation with notices where needed.
  • Set governance: policy versioning, quarterly re-checks as rules shift, and leadership dashboards.

Ready to operationalize a state-aware non-compete checker your legal team will actually use? Start with 200–300 agreements in key jurisdictions. In two weeks, you’ll know your real exposure, your fast wins, and the playbooks to scale.

Conclusion

AI can spot likely unenforceable non-competes—and the de facto versions—by combining clause detection, location-aware rules, and clear explanations. With a few key inputs (law, forum, worker location, pay), teams cut review time, standardize decisions, and keep contracts current as thresholds change. Want to see it on your own docs? Connect your CLM/HRIS and run a two-week exposure audit with ContractAnalyze. You’ll get prioritized risks, clean alternatives, and automation for notices and fixes. Book a demo and put AI non-compete review to work across your portfolio.