
Your AI Agent Doesn't Know Court Rules. Now It Can.

AI agents are answering regulatory questions, scanning contracts for compliance gaps, and monitoring enforcement actions. But they're working without structured data: no enforcement history, no violation taxonomies, no source citations. Ask about a recent CCPA settlement and you get outdated or fabricated information. Ask about a judge's page limits and you get the federal default, not the judge-specific rule. This MCP server fixes both problems.

The hallucination problem in legal AI

AI agents are showing up in legal workflows. Contract review, compliance monitoring, regulatory research, pre-filing checks. The tools are real, the adoption is growing, and the outputs are often good enough to be dangerous. Because "good enough" in legal work means "plausible but uncited," and plausible but uncited is how you get sanctioned.

Ask an AI agent about recent California privacy enforcement and you might get a case from 2022 when the latest is a $2.75M Disney settlement from February 2026. Ask about FTC actions against health data brokers and you get fabricated case names with plausible-sounding fine amounts. The agent doesn't know what it doesn't know, and neither does the person reading its output.

The same problem hits court rules. Ask about filing requirements and you get reasonable defaults from the Federal Rules of Civil Procedure: 25 pages, 12pt font, 14-day opposition deadline. But federal litigation doesn't run on defaults. It runs on 670+ individual judges who each publish their own standing orders, overriding the baseline rules however they see fit.

Try this yourself:

Ask Claude: "What is the page limit for summary judgment briefs in EDNY before Judge Brown?"

Without structured data, you might get "25 pages" (the FRCP default). The actual answer: 20 pages, per Brown's Standing Order §2.B. A five-page difference that gets your motion returned.

Giving agents access to real legal data

MCP (Model Context Protocol) is an open standard from Anthropic that lets AI agents call external tools during a conversation. Instead of relying on training data, the agent makes a structured request to a server, gets back real data, and uses it to answer. The agent asks a question, the MCP server fetches structured data from the source, and the agent responds with sourced answers that link back to official documents.
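Concretely, an MCP tool invocation is a JSON-RPC 2.0 message with method `tools/call`. A minimal sketch of what the agent's client sends to the server (the argument names `jurisdiction` and `start_date` are illustrative assumptions, not the server's documented schema):

```typescript
// Hypothetical MCP tool invocation: a JSON-RPC 2.0 "tools/call" request.
// Argument names here are illustrative, not the server's exact schema.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "search_enforcement_actions",
    arguments: {
      jurisdiction: "CA",
      start_date: "2026-01-01",
    },
  },
};

// The client serializes this over stdio (or HTTP) to the server,
// which replies with a structured result the agent can reason over.
console.log(JSON.stringify(request));
```

The important part is the round trip: the model never answers from memory; it emits this request, waits for the server's result, and grounds its reply in what comes back.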

The server exposes 7 tools across two domains:

  • Enforcement data: 21 jurisdictions (FTC, HHS OCR, state attorneys general), structured actions with fines, violation types, remedies, and source URLs
  • Court rules: 20+ federal districts, 630+ judges, rules traced to specific PDF pages and section numbers
  • search_enforcement_actions: Search by jurisdiction, date, industry, violation type
  • get_enforcement_details: Full event with laws cited, remedies, source quotes
  • get_enforcement_stats: Trends by jurisdiction, violation type, or time period
  • list_courts: Federal district courts with status
  • search_judges: Find judges by district, name, or type
  • get_judge_rules: All rules for a judge (page limits, format, procedures)
  • check_compliance: Validate a document against judge-specific rules

What this looks like in practice

Your in-house legal team uses Claude. Someone asks: "What privacy enforcement actions happened in California this year?"

Without the MCP server, Claude guesses. With it, Claude calls search_enforcement_actions and returns structured data:

[
  {
    "entity_name": "The Walt Disney Company",
    "event_date": "2026-02-11",
    "fine_amount": 2750000,
    "violation_types": ["opt_out_failure"],
    "summary": "Disney's opt-out processes failed to stop the sale of consumer data across devices. $2.75M penalty, largest CCPA settlement in California history.",
    "primary_source_url": "https://oag.ca.gov/news/press-releases/..."
  },
  {
    "entity_name": "Jam City, Inc.",
    "event_date": "2025-11-21",
    "fine_amount": 1400000,
    "violation_types": ["opt_out_failure", "children_data"],
    "summary": "Mobile gaming company failed to offer opt-out across 21 apps. Sold children's data without consent.",
    "primary_source_url": "https://oag.ca.gov/news/press-releases/..."
  }
]

This matters because of what the agent can now do with the data:

  • Every field is structured (not prose that needs parsing). Fine amounts are numbers, not "$2.75 million" strings.
  • Every event links to the official press release. The agent can cite its source, and the lawyer can verify it.
  • Violation types come from a fixed taxonomy, not free text. So the agent can filter and compare across events.
  • Follow-up questions work: "Which of these involved children's data?" or "What were the total fines in California this year?" The agent can compute answers instead of guessing.
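Because fine_amount is a number and violation_types is a fixed taxonomy, those follow-ups are plain computation. A minimal sketch over the two events above:

```typescript
// The two events returned above, trimmed to the fields we compute over.
interface EnforcementAction {
  entity_name: string;
  fine_amount: number;
  violation_types: string[];
}

const actions: EnforcementAction[] = [
  {
    entity_name: "The Walt Disney Company",
    fine_amount: 2_750_000,
    violation_types: ["opt_out_failure"],
  },
  {
    entity_name: "Jam City, Inc.",
    fine_amount: 1_400_000,
    violation_types: ["opt_out_failure", "children_data"],
  },
];

// "Which of these involved children's data?" — filter on the taxonomy value.
const childrenData = actions.filter((a) =>
  a.violation_types.includes("children_data")
);

// "What were the total fines in California this year?" — sum the numbers.
const totalFines = actions.reduce((sum, a) => sum + a.fine_amount, 0);

console.log(childrenData.map((a) => a.entity_name)); // → [ 'Jam City, Inc.' ]
console.log(totalFines); // → 4150000
```

No string parsing, no "$2.75 million" regexes: the agent filters and sums typed fields.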

Get running in 2 minutes

Three steps.

Step 1: Add to your Claude Desktop config

{
  "mcpServers": {
    "court-rules": {
      "command": "npx",
      "args": ["tsx", "apps/mcp/src/index.ts"],
      "env": {
        "SUPABASE_URL": "your-supabase-url",
        "SUPABASE_SERVICE_ROLE_KEY": "your-service-role-key"
      }
    }
  }
}

Step 2: Set environment variables

You need SUPABASE_URL and SUPABASE_SERVICE_ROLE_KEY pointing to a database with court rules and enforcement data.
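For a quick local test, the variables can also be exported in the shell before launching the client (placeholder values shown; substitute your own project's URL and key):

```shell
# Placeholder values — substitute your own Supabase project credentials.
export SUPABASE_URL="https://your-project.supabase.co"
export SUPABASE_SERVICE_ROLE_KEY="your-service-role-key"

# Sanity check: both variables must be non-empty before the server starts.
[ -n "$SUPABASE_URL" ] && [ -n "$SUPABASE_SERVICE_ROLE_KEY" ] && echo "env ok"
```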

Step 3: Try it

Open Claude Desktop and ask:

  • "What enforcement actions happened in California this year?"
  • "What are Judge Brown's page limits for briefs in EDNY?"
  • "Show me enforcement trends by jurisdiction."

The agent will call the appropriate tools, fetch structured data, and respond with sourced answers. No hallucinated case names. No fabricated fines. Every claim traced to an official government source.

How legal workspace platforms use this data

If you're building a legal platform (case management, workspace, compliance tool), the MCP server opens four patterns worth considering.

1. Regulatory feed widget. Surface enforcement events in your workspace hub. Use search_enforcement_actions with jurisdiction and industry filters to show your users what regulators are doing in their practice area. A labor and employment team at a mid-market firm sees FTC actions; a healthcare compliance team sees HHS OCR breaches. The data is already structured and filterable.

2. Auto-matter creation. When a relevant enforcement event drops, create a matter in your legal workspace with the structured data pre-populated. The entity name, jurisdiction, violation types, fine amount, and source URL all map directly to matter fields. Assign it to the right team member based on the jurisdiction or violation type.
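A sketch of that field mapping, assuming a hypothetical matter shape on the platform side (the `Matter` field names are illustrative, not any particular product's schema):

```typescript
// Hypothetical platform-side matter record; field names are illustrative.
interface Matter {
  title: string;
  jurisdiction: string;
  violationTypes: string[];
  fineAmount: number;
  sourceUrl: string;
}

// An enforcement event as returned by search_enforcement_actions.
interface EnforcementEvent {
  entity_name: string;
  jurisdiction: string;
  violation_types: string[];
  fine_amount: number;
  primary_source_url: string;
}

// The structured fields map one-to-one; no prose parsing required.
function toMatter(e: EnforcementEvent): Matter {
  return {
    title: `${e.entity_name} enforcement action`,
    jurisdiction: e.jurisdiction,
    violationTypes: e.violation_types,
    fineAmount: e.fine_amount,
    sourceUrl: e.primary_source_url,
  };
}

const matter = toMatter({
  entity_name: "Jam City, Inc.",
  jurisdiction: "CA",
  violation_types: ["opt_out_failure", "children_data"],
  fine_amount: 1_400_000,
  primary_source_url: "https://oag.ca.gov/news/press-releases/...",
});
console.log(matter.title); // → Jam City, Inc. enforcement action
```

Routing is then a lookup on `matter.jurisdiction` or `matter.violationTypes` against your team assignments.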

3. Pre-filing compliance. Check documents against judge-specific rules before filing. Use check_compliance to catch page limits, format violations, and procedural requirements before they cause rejections. This is the gap between drafting and filing that nobody is checking today.
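What that gate might look like in code, assuming a rule shape derived from get_judge_rules (the response schema here is an assumption; the real server may differ):

```typescript
// Hypothetical rule shape derived from get_judge_rules; the real
// response schema is an assumption here.
interface PageLimitRule {
  documentType: string;
  maxPages: number;
  source: string; // e.g. a standing-order section
}

interface ComplianceResult {
  ok: boolean;
  violations: string[];
}

// Compare a draft's page count against the judge-specific limit.
function checkPageLimit(pages: number, rule: PageLimitRule): ComplianceResult {
  const violations: string[] = [];
  if (pages > rule.maxPages) {
    violations.push(
      `${rule.documentType}: ${pages} pages exceeds ${rule.maxPages}-page limit (${rule.source})`
    );
  }
  return { ok: violations.length === 0, violations };
}

// Judge Brown's summary-judgment limit from the example earlier.
const rule: PageLimitRule = {
  documentType: "summary judgment brief",
  maxPages: 20,
  source: "Standing Order §2.B",
};
console.log(checkPageLimit(25, rule).ok); // → false: the FRCP-default draft fails
console.log(checkPageLimit(18, rule).ok); // → true
```

The same pattern extends to font, spacing, and procedural checks: each rule becomes a predicate the draft must pass before filing.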

4. Agent-powered research. Let AI agents answer regulatory and compliance questions with sourced data instead of hallucinated answers. When a partner asks "what enforcement precedent exists for this type of violation," the agent queries real enforcement data instead of guessing from training data.

What's next

  • More jurisdictions: state courts, international regulatory bodies
  • AI compliance tracking: Colorado AI Act, NYC Local Law 144, EU AI Act enforcement
  • Webhook notifications: get alerted when new enforcement events land in your tracked jurisdictions
  • Contract gap analysis: identify compliance gaps by comparing your agreements against recent enforcement patterns

Try it now

Browse the data, test the tools, and copy the config for your agent.


Data sources:

  • Federal court rules: Individual judge standing orders and local rules from 20+ districts
  • Privacy enforcement: Official press releases from FTC, HHS OCR, and 16+ state attorneys general
  • All data traced to official government sources with direct links