If you ran a URL through the Agent Runtime Readiness audit and the third check came back amber, you saw this:
Host did not return Markdown content when Accept: text/markdown was requested. Enable Cloudflare Markdown for Agents or implement content negotiation at your origin.
The Cloudflare toggle is a 30-second fix if your site sits behind Cloudflare (covered in the original post). On Vercel, the equivalent is a middleware.ts file at your project root. The build is short.
Why Edge Middleware and not an API route
Vercel offers three places to run code: API routes (regional, full Node.js, slower), Edge Functions (per-route, V8-based, faster), and Edge Middleware (runs before every matched request, can rewrite the response in place). For content negotiation you want the third — the request needs to be intercepted before any route handler runs, so the right response is returned regardless of whether the URL is a static page, an SSR route, or an API call.
Middleware runs in the Edge Runtime, which is a V8 isolate with a subset of Node APIs. The trade-off is fast cold start and short execution time at the cost of restricted Node modules. For Markdown for Agents, the restriction is fine — there's nothing here that needs full Node.
Pattern A — Companion file (simplest)
If your project source includes markdown (Next.js MDX, content-from-markdown patterns), publish the source .md files alongside the rendered HTML and serve them on negotiation.
Create middleware.ts at the project root:
```typescript
import { NextResponse, type NextRequest } from "next/server";

export async function middleware(request: NextRequest) {
  // Only negotiate when the client explicitly asks for markdown.
  const accept = request.headers.get("accept") || "";
  if (!/text\/markdown/i.test(accept)) return NextResponse.next();

  // Map the requested path to its companion .md file in public/.
  const url = request.nextUrl.clone();
  if (url.pathname.endsWith("/")) url.pathname += "index.md";
  else if (!url.pathname.endsWith(".md")) url.pathname += ".md";

  const mdResp = await fetch(url.toString());
  if (!mdResp.ok) return NextResponse.next(); // no companion file: serve HTML as usual

  const body = await mdResp.text();
  return new NextResponse(body, {
    status: 200,
    headers: {
      "content-type": "text/markdown; charset=utf-8",
      "vary": "Accept",
      "cache-control": "public, s-maxage=300, stale-while-revalidate=60",
    },
  });
}

export const config = {
  matcher: ["/((?!_next|api|.*\\.(?:png|jpg|webp|svg|ico|css|js)$).*)"],
};
```
The matcher skips Next.js internals and static assets so middleware doesn't run on every image request. Adjust the exclusion list to match your project.
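Before deploying, you can spot-check the exclusion list. Next.js compiles matchers with path-to-regexp, so a plain RegExp is only an approximation of the real behavior, but it is enough to confirm which URL shapes pass through:

```typescript
// Rough approximation of the matcher above. This is a sanity check, not a
// guarantee: Next.js compiles matchers with path-to-regexp, which can differ
// from a raw RegExp in edge cases.
const approxMatcher = /^\/((?!_next|api|.*\.(?:png|jpg|webp|svg|ico|css|js)$).*)$/;

const pages = ["/", "/about", "/blog/some-post/"];                     // should run middleware
const assets = ["/api/health", "/_next/static/chunk.js", "/logo.svg"]; // should be skipped

for (const p of pages) console.log(p, approxMatcher.test(p));  // all true
for (const a of assets) console.log(a, approxMatcher.test(a)); // all false
```

Run it against your own site's URL shapes before trusting the matcher in production.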
For the companion files themselves, copy your source markdown into public/ at build time. In Next.js, add a postbuild script:
```json
{
  "scripts": {
    "postbuild": "cp -r content/blog public/blog"
  }
}
```
Adjust the source path to wherever your markdown lives.
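One wrinkle: if your canonical URLs use trailing slashes, the middleware rewrite looks for index.md inside a directory, and a flat cp -r will not produce that layout. A small build script along these lines (the content/blog and public/blog paths are assumptions; adjust to your project) reshapes each post.md into post/index.md:

```typescript
import { promises as fs } from "node:fs";
import path from "node:path";

// Compute where a source markdown file should land so the trailing-slash
// rewrite ("/blog/post/" -> "/blog/post/index.md") finds a file to serve.
export function companionPath(srcFile: string, srcRoot: string, outRoot: string): string {
  const rel = path.relative(srcRoot, srcFile);  // e.g. "post.md" or "guides/x.md"
  const slug = rel.replace(/\.md$/, "");        // "post" or "guides/x"
  return path.join(outRoot, slug, "index.md");  // "public/blog/post/index.md"
}

// Walk the source tree and copy every .md file into its companion location.
export async function copyCompanions(srcRoot: string, outRoot: string): Promise<void> {
  for (const entry of await fs.readdir(srcRoot, { recursive: true })) {
    const srcFile = path.join(srcRoot, String(entry));
    if (!srcFile.endsWith(".md")) continue;
    const dest = companionPath(srcFile, srcRoot, outRoot);
    await fs.mkdir(path.dirname(dest), { recursive: true });
    await fs.copyFile(srcFile, dest);
  }
}
```

Wire it into the postbuild script (e.g. `node --experimental-strip-types scripts/companions.ts` or via tsx) in place of the plain cp.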
Pattern B — Runtime HTML-to-markdown conversion
If your source is not markdown — a database-backed Next.js app, a CMS pulling JSON, anything that renders to HTML at request time — convert at the middleware layer. Edge Runtime supports turndown via npm:
```typescript
import { NextResponse, type NextRequest } from "next/server";
import TurndownService from "turndown";

export async function middleware(request: NextRequest) {
  const accept = request.headers.get("accept") || "";
  if (!/text\/markdown/i.test(accept)) return NextResponse.next();

  // Re-fetch the page as HTML; the inner request's Accept header keeps this
  // middleware from intercepting it a second time.
  const htmlResp = await fetch(request.nextUrl.toString(), {
    headers: { accept: "text/html" },
  });
  if (!htmlResp.ok) return NextResponse.next();

  const html = await htmlResp.text();
  // Convert only the <main> content so nav/footer chrome stays out of the twin.
  const mainMatch = html.match(/<main[^>]*>([\s\S]*?)<\/main>/i);
  const target = mainMatch ? mainMatch[1] : html;

  const td = new TurndownService({ headingStyle: "atx", codeBlockStyle: "fenced" });
  const md = td.turndown(target);

  return new NextResponse(md, {
    status: 200,
    headers: {
      "content-type": "text/markdown; charset=utf-8",
      "vary": "Accept",
      "cache-control": "public, s-maxage=300, stale-while-revalidate=60",
    },
  });
}

export const config = {
  matcher: ["/((?!_next|api|.*\\.(?:png|jpg|webp|svg|ico|css|js)$).*)"],
};
```
Edit the <main> extraction regex to match your template. Without it, the markdown twin will include nav, footer, and sidebar markup, which is exactly what AI runtimes are trying to skip.
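If your templates vary across routes, an extraction helper with fallbacks is sturdier than a single regex. This is a sketch and the tag names are assumptions; swap in whatever elements wrap your real content:

```typescript
// Try likely content containers in order; as a last resort, strip the obvious
// chrome before handing the page to turndown.
export function extractContent(html: string): string {
  for (const tag of ["main", "article"]) {
    const m = html.match(new RegExp(`<${tag}[^>]*>([\\s\\S]*?)</${tag}>`, "i"));
    if (m) return m[1];
  }
  return html.replace(/<(nav|footer|aside)[^>]*>[\s\S]*?<\/\1>/gi, "");
}
```

A real HTML parser would be more robust, but the Edge Runtime has no DOM, so regex extraction is the pragmatic edge-side option.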
The Vary header is not optional
Vary: Accept tells Vercel's edge cache to keep the markdown and HTML responses as separate entries. Without it, the cache will collapse them and serve the wrong response shape to the wrong client — your browser visitors will get raw markdown, your AI visitors will get HTML, depending on which one warmed the cache first.
Vercel's edge cache respects Vary correctly. Just make sure the header is on every response your middleware emits.
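One way to keep that invariant is a tiny helper that every return path runs through. This is a sketch built on the standard Response type (which NextResponse extends), not a Vercel API:

```typescript
// Ensure Vary includes Accept without clobbering any existing Vary members.
export function withVary(resp: Response): Response {
  const existing = resp.headers.get("vary");
  if (!existing) {
    resp.headers.set("vary", "Accept");
  } else {
    const members = existing.split(",").map((v) => v.trim().toLowerCase());
    if (!members.includes("accept")) {
      resp.headers.set("vary", `${existing}, Accept`);
    }
  }
  return resp;
}
```

Wrap both the markdown response and the pass-through branch, so a cached pass-through cannot shadow the negotiated variant.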
Verifying the fix
After deploying, run:
```shell
curl -s -H "Accept: text/markdown" -i https://yoursite.com/some-page/ | head -10
```
You should see content-type: text/markdown; charset=utf-8 and vary: Accept in the first ten lines, with markdown body following. Re-run the Agent Runtime Readiness audit — the third check should pass.
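If you'd rather automate that check, say in a post-deploy script, a small predicate over the response headers covers both conditions the curl output shows (the helper name is hypothetical):

```typescript
// True when a response looks like a correctly negotiated markdown twin:
// markdown content-type plus a Vary header that lists Accept.
export function looksNegotiated(headers: Headers): boolean {
  const ct = (headers.get("content-type") || "").toLowerCase();
  const vary = (headers.get("vary") || "").toLowerCase();
  return ct.startsWith("text/markdown") &&
    vary.split(",").some((v) => v.trim() === "accept");
}
```

Pair it with a `fetch(url, { headers: { accept: "text/markdown" } })` call against your deployed site.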
Common false negatives:
- Middleware isn't matching the path. Check the matcher regex; the negative lookahead has to actually exclude the things you don't want and pass through the things you do. Test it against your site's URL shapes.
- Turndown errored in the Edge Runtime. Some turndown plugins use Node-only APIs. Stick to the core package; if you need plugins, fall back to Pattern A or move the conversion to a regional API route.
- Cache served stale HTML. If you deployed but curl still returns HTML, Vercel's edge cache may be returning a pre-middleware-deploy response. Force a refresh with a cache-bust query string the first time.
What this costs
Vercel Edge Middleware is billed per invocation. The Hobby plan includes 1 million invocations per month free. Pro and Enterprise tiers have higher limits. The runtime-conversion pattern uses more CPU per call than companion-file serving; under heavy AI-runtime traffic, monitor the middleware execution time in the Vercel dashboard and consider Pattern A if you have markdown source available.
Related reading
- The Original Markdown For Agents Warning Post — what the audit is checking and the Cloudflare path
- Agent Runtime Readiness — the audit tool itself
- The Conversation Has Moved Past The Model — why this matters now
- Netlify Edge Functions Pattern — same fix, different platform
- Origin Server Configs (Nginx / Apache / Caddy) — if you don't have an edge layer
Fact-check notes and sources
- Vercel Edge Middleware reference: vercel.com/docs/functions/edge-middleware
- Edge Runtime API surface (what's available inside middleware): vercel.com/docs/functions/runtimes/edge-runtime
- Vary header semantics for Accept-based negotiation: RFC 9110 §12.5.5
- Turndown library on npm: npmjs.com/package/turndown
- Cloudflare Markdown for Agents reference (the feature this post replicates on Vercel): developers.cloudflare.com/fundamentals/reference/markdown-for-agents/
If you're piecing together your own minimal web stack — host, audit, monitoring, the audit loop end to end — The $20 Dollar Agency walks through that operating model.
This post is informational, not legal or SEO-consulting advice. Mentions of Vercel, Next.js, Cloudflare, and other third parties are nominative fair use; no affiliation is implied.