There are two levers for E-E-A-T. The first is content — author bios, cited sources, primary research, expertise-signaling phrases. The second is infrastructure — the structural artifacts that make your site look like a real organization: editorial policy page, corrections policy, physical address, phone number, external review platform link.
Content E-E-A-T is ongoing work. Infrastructure trust signals are ship-once-benefit-forever. Yet most sites ship six of the twelve and call it done.
The Trust Signal Surface Audit checks all twelve artifacts, drawn from the Trust Project and Google News Initiative trust models, in one pass.
The 12 artifacts
- Author byline on the article page. `rel=author`, `itemprop=author`, or a visible named byline.
- Editorial policy page. `/editorial-policy` or `/policies/editorial`. Describes who writes, how stories are chosen, what biases are disclosed.
- Corrections policy page. `/corrections`. How errors are handled when caught.
- About page depth. Not a stub — actual substance explaining who runs the site and why.
- Visible dateModified on articles. Either a `<time datetime="...">` element or a CSS-class pattern like `.date-updated`.
- Physical address on contact page. Street address or P.O. Box. Real businesses have one.
- Phone number on contact page. E.164-formatted if international.
- Privacy policy page. GDPR + CCPA requirement.
- Terms of service page. Baseline legal infrastructure.
- External review platform link. Trustpilot, BBB, G2, Capterra, etc. on the homepage.
- Authors or team page. Multi-author publications should have a directory.
- HTTPS. Baseline; if you don't have this, fix it today.
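The twelve checks above can be sketched as a single pass over fetched pages. This is a minimal illustration, not the audit tool's actual implementation: every path, regex, and threshold here is an assumption you would tune for your own site.

```python
import re

# (path, pattern) pairs: an artifact counts as present if the page at
# `path` was fetched and its HTML matches `pattern`. Every path and
# regex here is an illustrative assumption, not the audit's real rules.
ARTIFACTS = {
    "author byline":        ("/article",          r'rel="author"|itemprop="author"'),
    "editorial policy":     ("/editorial-policy", r"editorial"),
    "corrections policy":   ("/corrections",      r"correct"),
    "about page depth":     ("/about",            r"(\w+\W+){150}"),  # crude depth proxy: 150+ words
    "visible dateModified": ("/article",          r'<time\s+datetime=|class="[^"]*date-updated'),
    "physical address":     ("/contact",          r"\d+\s+\w+\s+(St|Ave|Rd|Blvd)|P\.?O\.?\s*Box"),
    "phone number":         ("/contact",          r"\+?\d[\d\s().-]{7,}\d"),
    "privacy policy":       ("/privacy",          r"privacy"),
    "terms of service":     ("/terms",            r"terms"),
    "external review link": ("/",                 r"trustpilot\.com|bbb\.org|g2\.com|capterra\.com"),
    "authors/team page":    ("/authors",          r"."),
    "https":                ("_canonical",        r"^https://"),
}

def audit(pages: dict[str, str]) -> list[str]:
    """Return the names of missing artifacts. `pages` maps each path to its
    raw HTML; the "_canonical" key holds the site's canonical URL so the
    HTTPS check can run on the scheme."""
    missing = []
    for name, (path, pattern) in ARTIFACTS.items():
        if not re.search(pattern, pages.get(path, ""), re.IGNORECASE):
            missing.append(name)
    return missing
```

In practice you would fetch each path with your HTTP client of choice and feed the responses into `audit`; anything it returns maps directly onto the checklist above.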
Missing any three of the twelve reads as an amateur operation. Missing any six reads as "this site might be a shell."
Why the infrastructure layer matters for AEO
AI answer engines — Perplexity, ChatGPT Search, Gemini — specifically up-weight sources that have these artifacts present. The reasoning: an organization willing to publish an editorial policy and a corrections policy is more likely to be accountable than one that won't. That accountability correlates with factual reliability, which is the thing the engines are actually trying to measure.
Trust signals aren't just E-E-A-T. They're AEO source-quality inputs.
The fix workflow
For each missing artifact, the AI fix prompt drafts the page copy. Editorial policy and corrections policy are the two that feel like "I don't know what to write" — the prompt emits a template draft you adapt in 20 minutes.
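For illustration, here is a corrections-policy skeleton of the kind such a prompt might emit. The wording and the `$site`/`$email` placeholders are assumptions, not the tool's actual template; adapt the text to your publication.

```python
from string import Template

# Hypothetical starter template -- the placeholder names ($site, $email)
# and the policy wording are illustrative, not the fix prompt's output.
CORRECTIONS_POLICY = Template("""\
Corrections Policy for $site

When we confirm a factual error, we correct the article text, append a
dated correction note explaining what changed, and update the visible
dateModified timestamp. Substantive errors are never silently edited.

Spotted an error? Email $email.
""")

draft = CORRECTIONS_POLICY.substitute(
    site="Example Weekly",
    email="corrections@example.com",
)
print(draft)
```

Twenty minutes of editing turns a skeleton like this into a page that satisfies the corrections-policy check.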
Related reading
- E-E-A-T Audit — content + schema side of E-E-A-T
- E-E-A-T Generator — emits Person + Organization + sameAs
- Entity Citation Radar — Wikipedia + Wikidata presence
Fact-check notes and sources
- The Trust Project: thetrustproject.org
- Google News Initiative: newsinitiative.withgoogle.com
- Google on article structured data: developers.google.com/search/docs/appearance/structured-data/article
The $97 Launch covers setting up trust-signal infrastructure on day one, before you need it. The audit is the pre-launch checklist.