If you have spent the last two years polishing schema markup and watching AI Overviews eat the SERP, the next round of work is already named. It is not a single new spec. It is four of them, layered, and each one solves a different problem an AI shopping agent is trying to solve when it lands on your page.
I want to walk through them the way they actually fit together, not the way the launch announcements try to make each one sound like the only thing that matters.
NLWeb is the foundation, and most of you are already partway in
NLWeb is the layer that turns your existing site into something an agent can read without scraping the visible UI. The materials are familiar: schema.org markup, a clean RSS or Atom feed, content that is broken into sections and headings instead of one wall of text.
If you already chased rich results, FAQ schema, and Product structured data, you are doing NLWeb work. You did not have a name for it before, and now you do. The sites that have not invested in structured data are the ones with catching up to do, and the catch-up is mostly mechanical: ship the schema, advertise the feed, keep both honest.
The thing to internalize: NLWeb is not a separate file you publish. It is the posture of a site whose meaning is legible without rendering JavaScript.
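To make that posture concrete, here is a minimal sketch of the kind of Product JSON-LD an agent expects to find in the page head. The field values are placeholders, and which properties count as "complete" varies by surface (Google's Merchant Listings requirements are stricter than base schema.org), so treat this as a starting shape, not a spec.

```python
import json

# Minimal Product JSON-LD sketch. All values are placeholders; check
# schema.org/Product and your target surface's docs for required fields.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Waterproof Shell Jacket",
    "sku": "JKT-001",
    "image": "https://example.com/img/jkt-001.jpg",
    "description": "Three-layer waterproof shell with taped seams.",
    "offers": {
        "@type": "Offer",
        "price": "149.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
        "url": "https://example.com/products/jkt-001",
    },
}

# Embedded as a script tag, the meaning is legible to an agent
# without rendering any JavaScript.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(product_jsonld, indent=2)
    + "\n</script>"
)
print(script_tag)
```

The point is not the Python; it is that the finished tag is static text an agent can read straight out of the HTML.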
WebMCP declares what your site can do, not just what it knows
WebMCP is a W3C draft. The pitch is straightforward: instead of the agent reading your content and guessing how your checkout, your booking flow, or your trial signup works, your site declares those actions in a machine-readable format. Add to cart. Book a demo. Start a trial. Check inventory.
The agent gets a map straight from the source. No clicking through hidden buttons. No fragile selectors. No "the agent broke when you redesigned the cart drawer last Tuesday."
This is the layer that turns your site from a catalog into a set of callable tools. If you are familiar with the Model Context Protocol on the server side, WebMCP is the in-browser cousin: capabilities a website exposes to an agent that has already loaded the page.
The spec is still moving. It is worth tracking now and building a mental model of which actions on your site you would want declared first, because when the spec lands you do not want to be the team still arguing about whether "add to cart" is one verb or three.
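Since the draft is still moving, any concrete syntax here is speculative, but the mental-model exercise of naming an action, its inputs, and their types can start today. A hypothetical sketch, with every field name an assumption on my part rather than anything the WebMCP spec confirms:

```python
import json

# HYPOTHETICAL shape only: the WebMCP draft is still changing, and none
# of these field names are from the spec. The exercise is what matters:
# decide which actions you would declare first and what they take.
add_to_cart = {
    "name": "add_to_cart",  # one verb, not three
    "description": "Add a product variant to the shopping cart.",
    "parameters": {
        "sku": {"type": "string", "required": True},
        "quantity": {"type": "integer", "default": 1},
        "variant": {"type": "string", "required": False},
    },
}

print(json.dumps(add_to_cart, indent=2))
```

Whatever syntax the W3C lands on, a team that has already agreed on this inventory of verbs and parameters will translate it in an afternoon.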
ACP: just the checkout moment, and Stripe is the rail
ACP is OpenAI and Stripe's Agentic Commerce Protocol. The scope is narrow on purpose: the moment money moves. ACP gives an AI agent a standardized way to complete a purchase on a merchant's behalf, with payment credentials, authorization, and security handled inside the protocol rather than improvised on the page.
Practically: if your store already takes Stripe and your product pages are properly marked up, you are already shaped like an ACP-eligible merchant. The protocol does the rest. ChatGPT's instant checkout is the consumer-facing surface today.
This is the protocol that makes "buy me a waterproof jacket" actually pull a card and complete the order.
UCP: the full lifecycle, decentralized, /.well-known/ucp
UCP is Google and Shopify's Universal Commerce Protocol, announced at NRF 2026 with about twenty launch partners including Target, Walmart, Wayfair, Etsy, Mastercard, Visa, and Stripe. The scope is the whole shopping lifecycle, not just checkout: discovery, capability negotiation, real-time inventory, checkout, post-purchase events like tracking and returns.
Two design choices stand out.
First, decentralized. Merchants publish a capability profile at /.well-known/ucp. Agents fetch it, negotiate which capabilities both sides support, then proceed. There is no central onboarding gate and no "Google Merchant Center for UCP" sitting in the middle. You publish, agents read.
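The negotiation step can be pictured as a simple intersection. The profile shape below is illustrative, not the published UCP schema; it only models the "you publish, agents read, both sides proceed on the overlap" flow.

```python
# Illustrative only: not the real UCP profile schema. A real agent
# would fetch https://merchant.example/.well-known/ucp over HTTPS
# rather than use a hardcoded dict.
merchant_profile = {
    "capabilities": ["discovery", "inventory", "checkout", "returns"],
}
agent_supports = {"discovery", "checkout", "post_purchase_tracking"}

# Both sides proceed only with the capabilities they share.
negotiated = sorted(agent_supports & set(merchant_profile["capabilities"]))
print(negotiated)  # ['checkout', 'discovery']
```

Notice what is absent: no registration call, no API key exchange with a central party. The merchant publishes, the agent reads.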
Second, stacks rather than replaces. UCP is built to run alongside MCP, A2A, and AP2 (Agent Payments Protocol). It is the orchestration layer that says: here is what I can do, here is how to call it, here is who you talk to about returns.
If your stack is Shopify, the integration path is short. If you are on a custom store, the spec gives you a target to aim at: a public, machine-readable description of what an agent can do with you.
ACP vs UCP, the way I keep them straight
These get conflated constantly. The clean separation:
- ACP is by OpenAI and Stripe. It covers discovery and checkout. It powers ChatGPT instant checkout. Centralized merchant onboarding.
- UCP is by Google and Shopify. It covers discovery, checkout, and post-purchase. It powers Google AI Mode and Gemini commerce. Decentralized: merchants publish capabilities at /.well-known/ucp.
Both are live in early 2026 with rollouts in progress. They are complementary, not rivals. A serious brand may end up supporting both, one for the ChatGPT ecosystem and one for the Google ecosystem, the same way you keep a Bing Webmaster account next to your Search Console account.
The practical question is not which to pick. It is: which surface matters most to your customers right now, and which one is closer to your existing commerce stack? Start there. Add the other when the traffic justifies it.
What this means for your audit checklist
I updated the Agentic Commerce Readiness Audit to score against all four protocols in a single pass. The new Section 6 in the audit checks:
- /.well-known/ucp — UCP capability profile published or not
- WebMCP markers in the page — <meta name="webmcp">, <link rel="webmcp">, <script type="application/webmcp+json">, or action-attribute markers
- RSS or Atom feed link — the NLWeb foundation signal
- ACP eligibility — derived from Stripe SDK presence + complete Product schema + HTTPS
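A rough sketch of how checks like these could be scripted against raw page HTML. The marker strings are the ones named above; the matching is deliberately naive (string containment, no DOM parsing), so this is a starting point, not the audit's actual implementation.

```python
def section6_signals(html: str) -> dict:
    """Naive string checks for agent-readiness markers in raw HTML.
    A production audit would parse the DOM and also fetch
    /.well-known/ucp over HTTP, which this sketch skips."""
    webmcp_markers = (
        '<meta name="webmcp"',
        '<link rel="webmcp"',
        '<script type="application/webmcp+json"',
    )
    return {
        "webmcp": any(m in html for m in webmcp_markers),
        "feed": ('type="application/rss+xml"' in html
                 or 'type="application/atom+xml"' in html),
        "stripe_sdk": "js.stripe.com" in html,  # one common eligibility hint
    }

page = '<head><link rel="alternate" type="application/rss+xml" href="/feed.xml"></head>'
print(section6_signals(page))  # feed found, no WebMCP markers, no Stripe SDK
```

Even this crude version separates "has the foundation" sites from "has nothing" sites, which is most of what a first-pass score needs.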
The earlier sections still cover Product schema completeness, organization trust, the older agent discovery files (mcp-server.json, agent-card.json, ai-plugin.json, llms.txt, ai.txt), Open Graph product metadata, and one-shot buyability. Six sections, one score, one fix prompt that names which protocol you are short on.
You will not max out all four today. Almost nobody does. The point of the audit is to tell you which gap to close first, given how the protocols stack: NLWeb is the foundation, then WebMCP declares your actions, then ACP and UCP route the actual transactions.
Where to start if you have not done any of this
Same order. Schema first, feed second, then /.well-known/ucp, then think about what actions you would want declared in WebMCP. The discovery files are cheap to publish. The schema work is the part that takes longest and matters most, because every protocol above it assumes the schema below is honest.
If you are running a small site or a personal brand that has not gotten serious about any of this yet, my book The $97 Launch is the cheapest way to get the foundation right without paying an agency to do it for you. It is an entire site-build playbook, schema and feeds included, for the price of a sandwich.
Related reading
- The agentic web is the new shopping surface — the original post when the audit shipped, with the five-section breakdown
- GEO Content Extractability Scorer — the non-commerce half of agent readiness
- MCP Advertising Audit — when you are exposing agent-callable tools, not just content
- Merchant Listings Audit — schema attributes Google Shopping requires
- E-E-A-T schema and structured data — the foundation work that makes all of this possible
Fact-check notes and sources
- NLWeb foundation framing (schema + RSS): Stat / Moz — AI search SEO tactics.
- WebMCP, W3C proposed standard for browser-side agent capability declaration: Stat / Moz — AI search SEO tactics.
- ACP, OpenAI + Stripe Agentic Commerce Protocol: Stripe — Agentic Commerce Protocol and Stat / Moz — AI search SEO tactics.
- UCP announcement at NRF 2026, Google + Shopify, ~20 launch partners: Stat / Moz — AI search SEO tactics.
- Schema.org Product reference: schema.org/Product.
- Model Context Protocol (server-side cousin to WebMCP): modelcontextprotocol.io.
This post is informational, not SEO-consulting, commerce, or legal advice. Mentions of OpenAI, Stripe, Google, Shopify, Microsoft, Perplexity, ChatGPT, Claude, Gemini, and similar products and specifications are nominative fair use. No affiliation is implied.