The question isn't whether you used AI to help write your content. The question is whether you're transparent about it, and whether your site's metadata reflects that transparency in ways search engines and regulators can verify.
Google's March 2024 core update and its accompanying spam policies targeted scaled, low-value content, much of it AI-generated, that provided no original value. The EU AI Act requires disclosure of AI-generated content in certain contexts. And readers are getting better at spotting AI writing, which means undisclosed AI content erodes the trust you're trying to build.
What disclosure actually looks like
There are four layers to AI content disclosure, and most sites miss all of them.
Visible disclosure. A note on the page saying something like "This article was drafted with AI assistance and reviewed by [author name]." Simple, honest, and increasingly expected. Some publications put this in the byline area. Others add it at the bottom. The location matters less than the existence.
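As a sketch, a byline disclosure can be a single paragraph; the wording, class name, and URLs below are illustrative, not a standard:

```html
<!-- Illustrative only: class name, wording, and URLs are this sketch's own -->
<p class="ai-disclosure">
  This article was drafted with AI assistance and reviewed by
  <a href="/about/jane-doe">Jane Doe</a>.
  Read our <a href="/ai-policy">AI-use policy</a>.
</p>
```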
Schema.org markup. The creativeWorkStatus property accepts free text, so it can mark content as draft, published, or, by publisher convention, AI-assisted; schema.org has no dedicated AI-disclosure vocabulary yet. You can also set an author or contributor of type SoftwareApplication to identify an AI tool's role. These structured data signals help search engines categorize content accurately.
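Since schema.org defines no official AI-disclosure value, the exact shape below is a convention rather than a standard; a hypothetical Article carrying both signals might look like:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "contributor": {
    "@type": "SoftwareApplication",
    "name": "Example AI writing tool"
  },
  "creativeWorkStatus": "Published"
}
```

Because creativeWorkStatus is free text, some publishers use a custom value such as "AI-assisted, human-reviewed" instead; either way, keep the value consistent across your site so the signal is machine-checkable.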
C2PA content credentials. The Coalition for Content Provenance and Authenticity developed a standard for embedding provenance data directly into content. Images generated by DALL-E, Midjourney, and Adobe Firefly already carry C2PA metadata. If your site publishes AI-generated images, C2PA signing references tell platforms where those images came from.
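Real verification should go through a proper validator such as the open-source c2patool, but as a rough first pass you can scan published image bytes for the JUMBF/C2PA markers that signed files carry. A minimal sketch, assuming the C2PA JPEG embedding conventions; this is a heuristic, not a parser or a signature check:

```python
def looks_c2pa_signed(image_bytes: bytes) -> bool:
    """Rough heuristic: C2PA manifests are embedded in JUMBF boxes,
    so signed files usually contain these ASCII markers.  This does
    NOT verify signatures and cannot detect a tampered manifest."""
    markers = (b"jumb", b"c2pa")
    return all(m in image_bytes for m in markers)

# Synthetic byte strings for illustration (not real images):
fake_signed = b"\xff\xd8 ... jumb ... c2pa ... \xff\xd9"
fake_plain = b"\xff\xd8 ... plain jpeg ... \xff\xd9"
```

A positive hit here only tells you the markers survived your publishing pipeline; a negative result after generation is the interesting signal, because it usually means an optimizer or CDN stripped the metadata.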
AI-use policy page. A dedicated page explaining how your organization uses AI in content creation. This is becoming standard practice for publishers and aligns with the transparency expectations in Google's Search Quality Rater Guidelines.
Why this matters for search
Google has been clear: AI content isn't automatically bad. What's bad is AI content that exists only to manipulate search rankings. The distinction comes down to whether the content provides genuine value and whether the site is transparent about its creation process.
Sites that disclose AI use and demonstrate editorial oversight tend to fare better in quality assessments than sites that try to pass AI content as entirely human-written. The transparency itself is a trust signal.
What the tool checks
The AI Content Disclosure Audit scans a page for all four disclosure layers. It checks for visible AI-disclosure text, schema.org creativeWorkStatus and SoftwareApplication author markup, C2PA signing references in images, and links to an AI-use policy page.
Each missing layer gets flagged with specific guidance on what to add and where. The tool doesn't judge whether you should be using AI. It checks whether you're being transparent about it.
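A toy version of such an audit can be sketched with stdlib-only checks over a page's HTML. The phrases, property names, and URL patterns below are this sketch's assumptions; the actual tool's heuristics are richer:

```python
import json
import re

def audit_ai_disclosure(html: str) -> dict:
    """Flag which of the four disclosure layers appear in a page's HTML."""
    results = {}
    # 1. Visible disclosure text (e.g. "drafted with AI assistance")
    results["visible_disclosure"] = bool(
        re.search(r"(drafted|written|generated)\s+with\s+AI", html, re.I)
    )
    # 2. Schema.org signals inside JSON-LD script blocks
    schema_hit = False
    for block in re.findall(
        r'<script[^>]*application/ld\+json[^>]*>(.*?)</script>', html, re.S
    ):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed JSON-LD rather than failing the audit
        text = json.dumps(data)
        if "creativeWorkStatus" in text or "SoftwareApplication" in text:
            schema_hit = True
    results["schema_markup"] = schema_hit
    # 3. C2PA references (naive marker scan, see caveats above)
    results["c2pa_reference"] = "c2pa" in html.lower()
    # 4. Link to an AI-use policy page (URL pattern is an assumption)
    results["ai_policy_link"] = bool(
        re.search(r'<a[^>]*href="[^"]*ai[-_]?(use[-_]?)?policy[^"]*"', html, re.I)
    )
    return results
```

Each False in the returned dict corresponds to one missing layer and one concrete fix from the list above.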
The practical approach
If you use AI in your workflow, here's what to do:
Write an AI-use policy page. One page, 300-500 words, explaining what AI tools you use, how human oversight works, and what quality standards you apply. Link it from your footer.
Add visible disclosure to AI-assisted content. One sentence in the byline area or at the bottom of the article.
Add creativeWorkStatus to your Article schema. If you're already publishing Article JSON-LD (and you should be), this is one additional property.
For AI-generated images, check whether your generation tool supports C2PA. If it does, preserve the metadata when you publish.
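The schema step above is a one-property change if you already emit Article JSON-LD. A minimal sketch of patching an existing payload; the default value is an assumption, since schema.org leaves creativeWorkStatus as free text:

```python
import json

def add_creative_work_status(jsonld: str, status: str = "Published") -> str:
    """Add creativeWorkStatus to a JSON-LD string if it isn't set already."""
    data = json.loads(jsonld)
    data.setdefault("creativeWorkStatus", status)  # never overwrite an existing value
    return json.dumps(data)
```

If your CMS templates the JSON-LD directly, the equivalent change is adding one line to the template rather than post-processing the output.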
If you're setting up a content operation from the ground up, I cover the full editorial workflow in The $20 Dollar Agency on Kindle, including how to build quality gates around AI-assisted content.
Fact-check notes and sources
- Google's guidance on AI-generated content: Google Search Central Blog, "AI-generated content", February 2023
- EU AI Act transparency obligations: EUR-Lex, Regulation (EU) 2024/1689, Article 50
- C2PA standard: Coalition for Content Provenance and Authenticity, Technical Specification v2.0
- Schema.org creativeWorkStatus: schema.org/creativeWorkStatus
Related reading
- Content credentials and image licensing — C2PA in practice
- E-E-A-T trust signal surface audit — broader trust signals
- Author authority per article — measuring authorship signals
- AI posture consistency — aligning your AI crawler directives
This post is informational, not legal or SEO-consulting advice. Mentions of Google, the EU AI Act, and C2PA are nominative fair use. No affiliation is implied.