Documentation Index

Fetch the complete documentation index at: https://motherfuckingsideproject.com/docs/llms.txt

Use this file to discover all available pages before exploring further.

How It Works

Understand AgentXP’s detection and transformation pipeline.
AgentXP runs entirely in Next.js middleware. It intercepts every incoming request, checks whether it comes from an AI agent, and either passes it through unchanged or transforms the response to markdown before returning it.

Request flow

Human browser  →  middleware (pass-through)  →  Normal HTML page
AI agent       →  middleware (intercept)     →  Clean markdown
Human visitors are never affected. For non-agent requests the middleware exits after a single conditional check, so there is no measurable overhead on page loads.
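The flow above can be sketched as a single branch. This is an illustrative shape only, assuming a Next.js-style middleware; `handleRequest` and `isAiAgent` are hypothetical names, and the crawler pattern is abbreviated, not AgentXP's actual API:

```typescript
// Hypothetical sketch of the pass-through branch; not AgentXP internals.
type Handling = "html" | "markdown";

function isAiAgent(userAgent: string): boolean {
  // Abbreviated stand-in for the full list of recognized crawlers.
  return /GPTBot|ClaudeBot|PerplexityBot/i.test(userAgent);
}

function handleRequest(userAgent: string): Handling {
  if (!isAiAgent(userAgent)) {
    return "html"; // single check: human traffic passes through untouched
  }
  return "markdown"; // agent traffic receives the transformed response
}
```

Because the human path is decided by one predicate, the cost for regular visitors is a single string test per request.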

Agent detection

AgentXP uses two detection signals:
  1. Accept: text/markdown header — Any agent that explicitly requests markdown is identified with full confidence. This is the definitive signal.
  2. User-Agent pattern matching — AgentXP matches the User-Agent string against the signatures of 20 known AI crawlers, including GPTBot, ClaudeBot, ChatGPT-User, PerplexityBot, Anthropic, Cohere, Google-Extended, Meta-ExternalAgent, and others.
Detection runs in middleware before your page ever renders. Agents identified by User-Agent alone are assigned 90% confidence; agents that send Accept: text/markdown are 100%. For the full list of recognized agents, see Supported agents.
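The two signals and their confidence levels can be sketched as follows. This is a minimal illustration, assuming an abbreviated crawler list; the function name and return shape are hypothetical, but the confidence values mirror the prose (100% for an explicit Accept header, 90% for User-Agent alone):

```typescript
// Illustrative detection sketch; list and API shape are assumptions.
const KNOWN_AGENTS =
  /GPTBot|ClaudeBot|ChatGPT-User|PerplexityBot|Anthropic|Cohere|Google-Extended|Meta-ExternalAgent/i;

interface Detection {
  isAgent: boolean;
  confidence: number; // 1.0 = definitive, 0.9 = UA match only
  agent?: string;
}

function detectAgent(userAgent: string, accept: string): Detection {
  // Signal 1: an explicit request for markdown is definitive.
  if (accept.split(",").some((v) => v.trim().startsWith("text/markdown"))) {
    return { isAgent: true, confidence: 1.0 };
  }
  // Signal 2: User-Agent pattern match is strong but not definitive.
  const match = userAgent.match(KNOWN_AGENTS);
  if (match) {
    return { isAgent: true, confidence: 0.9, agent: match[0] };
  }
  return { isAgent: false, confidence: 0 };
}
```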

Transformation pipeline

When AgentXP identifies an agent, it fetches your page’s HTML from the origin and runs it through a four-stage pipeline:
1. Hidden content expansion

Accordions, tabs, carousels, modals, and tooltips are expanded so the agent sees all content on the page — not just what is visible in the default collapsed state.
2. Content extraction

Navigation, headers, footers, sidebars, ads, cookie banners, chat widgets, and scripts are stripped using Readability. Only the main content remains.
3. HTML-to-markdown conversion

The extracted content is converted to clean markdown using Turndown. Headings, lists, links, code blocks, and tables are preserved in their natural structure.
4. Cleanup and truncation

The markdown is post-processed to remove artifacts. If you have configured a maxTokens limit, the output is truncated at that boundary.
The result is returned as a text/markdown response with headers that report the token estimate, your content permissions, and the detected agent name. See Response headers for details.
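The final truncation stage can be sketched in isolation. The ~4-characters-per-token heuristic and the function names below are assumptions for illustration, not AgentXP's actual estimator:

```typescript
// Hypothetical token estimate and maxTokens truncation; heuristic assumed.
function estimateTokens(markdown: string): number {
  return Math.ceil(markdown.length / 4); // rough heuristic: ~4 chars/token
}

function truncateAtTokens(markdown: string, maxTokens?: number): string {
  if (maxTokens === undefined) return markdown; // no limit configured
  const maxChars = maxTokens * 4;
  if (markdown.length <= maxChars) return markdown;
  // Cut at the last line break before the limit to avoid splitting a line.
  const cut = markdown.lastIndexOf("\n", maxChars);
  return markdown.slice(0, cut > 0 ? cut : maxChars);
}
```

Truncating at a line boundary keeps the emitted markdown well-formed even when the configured limit lands mid-paragraph.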

Special endpoints

AgentXP serves two endpoints automatically — no additional configuration required:
  • /llms.txt — Site index for AI agents, following the llmstxt.org specification
  • /.well-known/agent-experience.json — Machine-readable capabilities manifest listing your site's AgentXP features and permissions
You can populate /llms.txt with a page list via the llmsTxt configuration option. See llms.txt configuration.
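For context, an llms.txt file per the llmstxt.org specification is a markdown document with an H1 title, a blockquote summary, and sections of link lists. The helper below is a hypothetical illustration of that output shape; it is not AgentXP's llmsTxt option, whose exact fields are documented separately:

```typescript
// Hypothetical generator for the llms.txt shape from llmstxt.org.
interface Page {
  title: string;
  url: string;
  description?: string;
}

function buildLlmsTxt(siteName: string, summary: string, pages: Page[]): string {
  const links = pages
    .map((p) => `- [${p.title}](${p.url})${p.description ? `: ${p.description}` : ""}`)
    .join("\n");
  // H1 title, blockquote summary, then an H2 section of links.
  return `# ${siteName}\n\n> ${summary}\n\n## Docs\n\n${links}\n`;
}
```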

What stays unchanged

  • Regular browsers receive normal HTML responses — AgentXP never touches them
  • No changes to your build configuration or output
  • No performance impact on human page loads
  • Your existing middleware, if any, continues to work alongside AgentXP