
Fix The Markdown For Agents Warning On Fastly — Compute@Edge Pattern


If you ran a URL through the Agent Runtime Readiness audit and the third check came back amber, you saw:

Host did not return Markdown content when Accept: text/markdown was requested. Enable Cloudflare Markdown for Agents or implement content negotiation at your origin.

Cloudflare has the toggle (covered in the original post). Fastly does not — the equivalent is a Compute@Edge service that you write and deploy. The handler is short, but there are two Fastly-specific gotchas worth covering up front.

Why Compute@Edge and not VCL

Fastly's two edge programmability layers do different things.

VCL (Varnish Configuration Language) is the older layer. It can read and rewrite request headers, route to different backends based on conditions, set cache TTLs, and modify response headers. It cannot fetch a different body, do async work outside the request flow, or run a body transform like HTML-to-markdown conversion.

Compute@Edge is the modern WebAssembly-based runtime. It supports Rust, JavaScript, Go (via TinyGo), and AssemblyScript. It runs full programs at the edge with full async support, can fetch from any backend, transform any response body, and is the right layer for content negotiation that returns a meaningfully different response.

For the Markdown for Agents fix, Compute@Edge is the only option. VCL can detect the Accept: text/markdown header and route to a different backend, but you'd still need somewhere to do the actual conversion or have pre-rendered markdown to serve.

The Compute@Edge service

The cleanest pattern is a Compute@Edge service that sits in front of your origin, intercepts requests with Accept: text/markdown, and either fetches a .md companion file from origin (Pattern A) or fetches the HTML and converts it (Pattern B).

Pattern A — Companion file

// src/index.js

addEventListener("fetch", (event) => event.respondWith(handler(event)));

async function handler(event) {
  const req = event.request;
  const accept = req.headers.get("accept") || "";

  if (!/text\/markdown/i.test(accept)) {
    // Pass through to origin unmodified
    return fetch(req, { backend: "origin_0" });
  }

  // Rewrite URL to .md companion
  const url = new URL(req.url);
  if (url.pathname.endsWith("/")) url.pathname += "index.md";
  else if (!url.pathname.endsWith(".md")) url.pathname += ".md";

  const mdReq = new Request(url.toString(), req);
  const mdResp = await fetch(mdReq, { backend: "origin_0" });

  if (!mdResp.ok) {
    // Fall back to HTML if companion missing
    return fetch(req, { backend: "origin_0" });
  }

  const mdBody = await mdResp.text();
  return new Response(mdBody, {
    status: 200,
    headers: {
      "content-type": "text/markdown; charset=utf-8",
      "vary": "Accept",
      "cache-control": "public, max-age=300",
    },
  });
}
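The path-rewrite rule inside the handler can be factored into a pure function and tested without the Fastly runtime. A minimal sketch — `mdPath` is a hypothetical helper that mirrors the handler's logic, not part of any SDK:

```javascript
// Map an HTML pathname to its markdown companion:
//   "/docs/"      -> "/docs/index.md"
//   "/docs/intro" -> "/docs/intro.md"
//   "/notes.md"   -> "/notes.md"   (already markdown, unchanged)
function mdPath(pathname) {
  if (pathname.endsWith("/")) return pathname + "index.md";
  if (!pathname.endsWith(".md")) return pathname + ".md";
  return pathname;
}
```

Keeping the rewrite rule in one place makes it easy to extend later, e.g. if your site also serves extensionless `.html` routes.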

Wire origin_0 to your origin host in the Fastly service config. Deploy with fastly compute publish.
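For local testing with fastly compute serve, the same backend can be declared in fastly.toml under the local_server section — a minimal sketch, with origin.example.com standing in for your real origin host:

```toml
# fastly.toml — backend for `fastly compute serve` local testing.
# "origin.example.com" is a placeholder; substitute your origin host.
[local_server]

  [local_server.backends]

    [local_server.backends.origin_0]
      url = "https://origin.example.com"
```

The production backend is still configured on the Fastly service itself; this section only affects the local dev server.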

Pattern B — Runtime conversion

If your origin doesn't expose markdown companion files, convert the HTML response at the edge. The npm turndown package works here because its Node build ships with a pure-JavaScript DOM implementation (domino), so the standard fastly compute build step can bundle it like any other dependency — no browser DOM required:

// src/index.js
import TurndownService from "turndown";

addEventListener("fetch", (event) => event.respondWith(handler(event)));

async function handler(event) {
  const req = event.request;
  const accept = req.headers.get("accept") || "";

  if (!/text\/markdown/i.test(accept)) {
    return fetch(req, { backend: "origin_0" });
  }

  // Fetch HTML from origin
  const htmlReq = new Request(req.url, {
    method: "GET",
    headers: { accept: "text/html" },
  });
  const htmlResp = await fetch(htmlReq, { backend: "origin_0" });
  if (!htmlResp.ok) return htmlResp;

  const html = await htmlResp.text();
  const mainMatch = html.match(/<main[^>]*>([\s\S]*?)<\/main>/i);
  const target = mainMatch ? mainMatch[1] : html;

  const td = new TurndownService({ headingStyle: "atx", codeBlockStyle: "fenced" });
  const md = td.turndown(target);

  return new Response(md, {
    status: 200,
    headers: {
      "content-type": "text/markdown; charset=utf-8",
      "vary": "Accept",
      "cache-control": "public, max-age=300",
    },
  });
}

Deploy with fastly compute publish. The first request for any URL with Accept: text/markdown runs the conversion; subsequent requests for the same URL hit the Fastly cache — provided the cache key separates the two shapes, which the next section covers.
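The main-element extraction is the fragile step in Pattern B — it is a regex, not a real HTML parser, so it only handles a single, non-nested main element. It can at least be unit-tested in isolation; extractMain is a hypothetical helper that mirrors the handler's match logic:

```javascript
// Same regex as the handler: grab the inner HTML of the first <main>
// element, or fall back to the whole document if none is present.
function extractMain(html) {
  const m = html.match(/<main[^>]*>([\s\S]*?)<\/main>/i);
  return m ? m[1] : html;
}
```

If your pages nest main-like wrappers or split content across multiple regions, swap the regex for a proper selector once the happy path works.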

Cache configuration — the part that catches everyone

By default, Fastly's cache key does not include the Accept header. Two requests to the same URL with different Accept values will get the same cached response unless you explicitly fold Accept (or a value derived from it) into the cache key.

In Compute@Edge, you control the cache key per request via the cacheKey option:

const cacheKey = `${req.url}::${accept.includes("text/markdown") ? "md" : "html"}`;
const resp = await fetch(req, { backend: "origin_0", cacheKey });

Without this, your Vary: Accept header is technically present on the response but the upstream Fastly cache has already collapsed both shapes into one entry. The audit will sometimes pass and sometimes fail depending on cache state, which is the worst possible debugging experience.

Set the cache key explicitly. Test by clearing the cache (fastly purge --all) and re-running the audit twice in a row.
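It's worth keeping the shape-bucketing logic as a pure function so that arbitrary Accept strings collapse into exactly two cache entries per URL, rather than one entry per unique Accept value. A sketch — buildCacheKey is a hypothetical helper, not part of the Fastly SDK:

```javascript
// Collapse every Accept value into two cache shapes, "md" and "html",
// then append the shape to the URL to form a stable cache key.
function buildCacheKey(url, accept) {
  const shape = /text\/markdown/i.test(accept || "") ? "md" : "html";
  return `${url}::${shape}`;
}
```

Bucketing matters: keying on the raw Accept string would fragment the cache across every client's slightly different header.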

Verifying the fix

curl -s -H "Accept: text/markdown" -i https://your.fastly.net/some-page/ | head -10

Expect content-type: text/markdown; charset=utf-8 and vary: Accept. Re-run the Agent Runtime Readiness audit — the third check should pass.

If the audit still warns:

  • Cache key didn't differentiate. Add the cacheKey logic above and purge.
  • Compute@Edge service didn't get the request. Check that your Fastly service's domain is the one being audited, not a backend host. The Compute@Edge service has to be the public endpoint.
  • Backend hostname mismatch. If your backend is on Cloudflare or another CDN, the inner CDN may strip or modify the Accept header before your origin sees it. Test with the inner CDN bypassed if possible.

What this costs

Fastly Compute@Edge is billed per request plus CPU time. The companion-file pattern runs in single-digit milliseconds and costs essentially nothing per request. The runtime-conversion pattern burns 50-200 ms of CPU per cache-miss request and is correspondingly more expensive, but still lands under typical CDN-bandwidth costs for moderately trafficked sites.

The Compute@Edge free tier exists but is limited. Check Fastly's current pricing — for a site with light AI-runtime traffic the cost is in dollars per month; for high-traffic sites with the runtime-conversion pattern, the cost can grow meaningfully and Pattern A is the better choice.

Related reading


If you're running Fastly as part of a build-your-own-web stack — host, edge, audit loop end to end — The $20 Dollar Agency covers the operating model behind that.

This post is informational, not legal or SEO-consulting advice. Mentions of Fastly, Cloudflare, and other third parties are nominative fair use; no affiliation is implied.

