There's a discovery pattern the modern web hasn't had to think about before: an AI agent visiting your site, looking not for a page to render but for an action to call.
The agent's question isn't "what does this site say?" — that's the RAG question, solved by content. The agent's question is "what can this site do for me on behalf of my user?"
Answering that requires a new kind of advertising. Not marketing. Machine-readable advertising. A tiny set of well-known JSON files that say: here's who I am, here's what I can do, here's how to call each action, here's what auth I need.
Most sites have zero of these files. Every one will need at least one by the end of 2026.
The well-known discovery surface
There are (as of early 2026) six manifest paths that matter, plus a couple of emerging ones:
High priority:
- /.well-known/ai-plugin.json — the ChatGPT-plugin format. Oldest, most widely supported. Describes auth, API location, logo, legal info.
- /.well-known/mcp.json — Model Context Protocol server manifest. Declares tools, resources, and prompts an MCP client can invoke. Used by Claude Desktop, Cursor, and increasingly by OpenAI's Responses API and Gemini's agentic layers.
- /.well-known/agent-card.json — Agent Card format (emerging convergent spec). Describes capabilities, endpoints, auth, rate limits. Used by agent-framework vendors (LangChain, LlamaIndex, CrewAI).
Medium priority:
- /.well-known/openapi.json or /openapi.yaml — OpenAPI spec defining your HTTP API surface. Needed if ai-plugin.json references it.
- /.well-known/ai.txt — your general AI policy (separate from this tool; covered by other tools in the jwatte set).
- /llms.txt — LLM-readable sitemap; also covered separately.
Low priority / emerging:
- /asyncapi.json — event-driven API contracts (rare in the SMB context).
- /.well-known/agents.txt — very early proposal for agent-specific robots-style rules.
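To make the highest-priority format concrete, here is a minimum-viable ai-plugin.json skeleton, generated as a Python dict so the structure is easy to inspect. Field names follow the legacy OpenAI plugin manifest as commonly documented; every value is a placeholder (Example Roofing, example.com, and all URLs are invented for illustration) and should be verified against the current spec before deploying.

```python
import json

# Skeleton of /.well-known/ai-plugin.json (legacy ChatGPT-plugin format).
# All names, URLs, and descriptions below are placeholders.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Roofing",           # placeholder business name
    "name_for_model": "example_roofing",           # identifier the model uses
    "description_for_human": "Quotes and booking for roofing work.",
    "description_for_model": (
        "Search services, estimate a quote, and check availability "
        "for Example Roofing."
    ),
    "auth": {"type": "none"},                      # start public and read-only
    "api": {
        "type": "openapi",
        "url": "https://example.com/.well-known/openapi.json",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "owner@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(manifest, indent=2))
```

Note how the manifest points at the OpenAPI document from the medium-priority list: the plugin format declares *who you are*, and delegates *what you can do* to the referenced spec.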
What the MCP Advertising Audit does
You paste your domain. The tool:
- Probes each of those paths. Reports 200 OK + whether the response is valid JSON.
- Scores your MCP-readiness (weighted by importance) from 0-100.
- Crawls your homepage's JSON-LD and counts Schema.org PotentialAction nodes — because a site that already advertises ReserveAction or SearchAction in schema is ahead of the curve.
- Generates starter snippets for any missing high-priority manifest. You copy, customize with real tools/auth, deploy at the path.
- Emits an AI fix prompt that reasons about which of these you actually need for your specific business type.
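The probe-and-score steps above can be sketched in a few lines. The exact weights the audit uses aren't published here, so the WEIGHTS table below is a hypothetical split that mirrors the high/medium/low priorities; probe() and readiness_score() are illustrative names, not the tool's API.

```python
import json
import urllib.request

# Hypothetical weights mirroring the high/medium/low priority tiers.
WEIGHTS = {
    "/.well-known/ai-plugin.json": 25,
    "/.well-known/mcp.json": 25,
    "/.well-known/agent-card.json": 20,
    "/.well-known/openapi.json": 10,
    "/.well-known/ai.txt": 5,
    "/llms.txt": 5,
    "/asyncapi.json": 5,
    "/.well-known/agents.txt": 5,
}

def probe(domain: str, path: str) -> bool:
    """True when the path serves 200 OK (and parseable JSON for .json paths)."""
    try:
        with urllib.request.urlopen(f"https://{domain}{path}", timeout=5) as resp:
            body = resp.read()
            if path.endswith(".json"):
                json.loads(body)       # raises on invalid JSON
            return resp.status == 200
    except Exception:
        return False

def readiness_score(found: dict[str, bool]) -> int:
    """Weighted 0-100 score from {path: present?} probe results."""
    earned = sum(w for path, w in WEIGHTS.items() if found.get(path))
    return round(100 * earned / sum(WEIGHTS.values()))
```

With these weights, a site serving only mcp.json and llms.txt would score 30 — present but far from ready.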
Who actually needs each manifest
E-commerce sites need ai-plugin.json + OpenAPI more than MCP. Agents need to search products, add to cart, check availability. The ChatGPT plugin format has been optimized for this workflow for two years.
Service businesses (legal, medical, trades) need agent-card.json + MCP. Agents need to book consultations, request quotes, check availability. MCP's tool schema is better for structured, multi-step conversational flows.
Publishers / content sites need llms.txt + agent-card.json. Agents aren't buying from you; they're citing you. Your manifests should optimize for accurate citation and freshness signaling.
SaaS companies need all three, ideally. They're being called by copilot clients that could replace chunks of human SaaS work. First-mover advantage is real here.
The "what tools should I expose" problem
Every SMB owner asks the same three things:
- Which of my actions should be callable by an agent?
- How do I prevent agents from abusing the endpoint?
- What do I get out of it?
For #1: start with public, read-only actions. A quote calculator. A product search. An availability check. A service-area verification. These have zero risk and maximum upside.
Actions requiring auth (booking, ordering, payment) come next. Require API keys or OAuth. Rate-limit aggressively. The MCP spec supports auth; ai-plugin.json supports it; agent-card.json supports it.
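The escalation from public to authenticated actions shows up directly in the manifest's auth block. These are hedged sketches of the auth shapes in the legacy ai-plugin.json format as commonly documented — the type names and fields should be checked against the current spec, and all URLs and scopes are placeholders.

```python
# 1. Public, read-only actions (quote calculator, search): no auth.
auth_public = {"type": "none"}

# 2. Caller supplies its own API key as a bearer token.
auth_user_key = {"type": "user_http", "authorization_type": "bearer"}

# 3. OAuth for actions on a specific user's account (booking,
#    ordering, payment). Endpoints and scope are placeholders.
auth_oauth = {
    "type": "oauth",
    "client_url": "https://example.com/oauth/authorize",
    "authorization_url": "https://example.com/oauth/token",
    "scope": "booking:write",
    "authorization_content_type": "application/json",
}
```

The practical rule: ship with auth_public first, and only graduate an action to key- or OAuth-gated once it mutates state or spends money.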
For #2: three layers of defense.
- Origin allowlist (same pattern the jwatte Netlify Functions use — see feedback_cto_post for the security pattern)
- Per-IP rate limits per tool
- Manifest-declared rate_limit field so honest agents back off before hitting your server-side limits
For #3: the short-term payoff is being cited in agent answers. The medium-term payoff is being the preferred vendor an agent picks when its user says "find me a roofer in Twin Falls." The long-term payoff is survival in a world where AI-mediated commerce is the dominant retail interface.
The 30-day rollout plan
Week 1: Probe current state with this tool. Pick the 2-3 manifests matching your business type.
Week 2: Write an OpenAPI spec for your 2-3 most callable actions (usually: search, quote, contact).
Week 3: Deploy the manifests and a backing MCP server (or HTTP API) that implements the declared tools.
Week 4: Test with Claude Desktop + ChatGPT + one agent framework. Fix the things that break.
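The Week 2 deliverable is smaller than it sounds. A minimum-viable OpenAPI 3.1 document for a single read-only quote action looks like this — built as a Python dict for readability, with every path, name, and field a placeholder for your real business:

```python
import json

# Minimum-viable OpenAPI 3.1 document for one read-only action.
# All names, paths, and URLs are placeholders.
openapi_doc = {
    "openapi": "3.1.0",
    "info": {"title": "Example Roofing API", "version": "1.0.0"},
    "servers": [{"url": "https://example.com/api"}],
    "paths": {
        "/quote": {
            "get": {
                "operationId": "getQuote",
                "summary": "Estimate a roofing quote from square footage.",
                "parameters": [{
                    "name": "sqft",
                    "in": "query",
                    "required": True,
                    "schema": {"type": "number"},
                }],
                "responses": {
                    "200": {
                        "description": "Estimated price range in USD.",
                        "content": {"application/json": {"schema": {
                            "type": "object",
                            "properties": {
                                "low": {"type": "number"},
                                "high": {"type": "number"},
                            },
                        }}},
                    }
                },
            }
        }
    },
}

print(json.dumps(openapi_doc, indent=2))
```

Serve the serialized output at /.well-known/openapi.json and the ai-plugin.json manifest has something real to point at.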
At day 30 you're ahead of ~98% of your competitive set. The window for that asymmetry closes by end of 2026 as manifest authoring gets baked into every SaaS offering.
Why SMBs should care now, not later
Fortune 500 companies can adopt MCP on the order of weeks: by the time they notice the standard exists, they can deploy it in a single sprint. An SMB that adopts first gets to be the answer when an agent asks "find me a local [category] that my user can book directly." There's no rematch on first-mover position; once the pattern of "agents book directly through this vendor" sets in, the early mover becomes the incumbent, and incumbents keep winning.
The technical cost is a day of work. The upside is structural. That ratio is nearly unique in SMB tooling history.
Related reading
- MCP Server Audit — upstream, if you've stood up an MCP server already
- Agentic Commerce Readiness — broader readiness check beyond manifests
- Well-Known Audit — general audit of /.well-known/* paths
- AI Bot Policy Generator — the robots.txt + ai.txt layer that gates everything else
Methodology: the manifest set was curated from the actual well-known paths major agent frameworks probe as of early 2026 (OpenAI Responses API client, Anthropic Claude Desktop MCP client, LangChain Agent Connector). The "starter snippets" are minimum-viable scaffolds, not production-ready — every field marked as placeholder needs real values.
Fact-check notes and sources
- Model Context Protocol spec: modelcontextprotocol.io
- OpenAI ai-plugin.json spec: documented at platform.openai.com (legacy but widely implemented)
- Well-known URI registry (IETF): iana.org/assignments/well-known-uris
- Schema.org PotentialAction: schema.org/potentialAction
This post is informational, not engineering or agent-integration consulting advice. Mentions of Anthropic, OpenAI, Google, LangChain, LlamaIndex, CrewAI, Cursor, and the Model Context Protocol are nominative fair use. No affiliation is implied.