Brand360

AI Readiness

20 AI readiness checks

A1

Organization schema

What is it

Verifies the presence of Organization schema markup that defines core business information — name, logo, contact details, social networks, and legal form. This markup creates a digital identity for the organization.
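A check like A1 can be sketched in a few lines: find JSON-LD blocks in the page and collect their declared @type values. The sample page, helper names, and regex below are illustrative assumptions, not Brand360's actual implementation.

```python
import json
import re

# Hypothetical sample page carrying Organization schema in JSON-LD.
SAMPLE_PAGE = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org",
 "@type": "Organization",
 "name": "Example GmbH",
 "logo": "https://example.com/logo.png",
 "sameAs": ["https://www.linkedin.com/company/example"]}
</script>
</head><body>...</body></html>
"""

JSONLD_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def jsonld_types(html: str) -> set:
    """Return the set of @type values declared in JSON-LD blocks."""
    types = set()
    for block in JSONLD_RE.findall(html):
        try:
            data = json.loads(block)
        except ValueError:
            continue  # malformed JSON-LD is skipped, not fatal
        items = data if isinstance(data, list) else [data]
        for item in items:
            t = item.get("@type")
            if isinstance(t, str):
                types.add(t)
    return types

print(jsonld_types(SAMPLE_PAGE))  # {'Organization'}
```

A real checker would also validate required properties (name, logo, sameAs) rather than just the type.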

Why it matters

AI systems use Organization schema to build a knowledge graph about a company. When a user asks ChatGPT or Google AI about your business, structured data ensures accurate and complete responses including logo and contact information.

Real-world example

When Google displays a Knowledge Panel for a company like Deutsche Bank, it draws from Organization schema. Companies without this markup often lack a Knowledge Panel or display incorrect information sourced from third parties.

Verified sources

A2

Product schema

What is it

Checks for Product schema on e-commerce product pages and service listings. It includes price, availability, ratings, and product description in a machine-readable format. For non-e-commerce sites, this check is automatically skipped (N/A).
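In the spirit of this check, a minimal Product JSON-LD object with price, availability, and ratings might look like the following (all values illustrative):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Runner 2",
  "description": "Lightweight trail running shoe.",
  "offers": {
    "@type": "Offer",
    "price": "89.90",
    "priceCurrency": "EUR",
    "availability": "https://schema.org/InStock"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "128"
  }
}
```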

Why it matters

Google Shopping and AI assistants with shopping integrations, such as ChatGPT plugins, use Product schema to compare products and generate recommendations. Without this markup, your products won't appear in AI-driven shopping results.

Real-world example

Amazon has Product schema on every product page, enabling Google to display price, availability, and ratings directly in search results. Online stores without Product schema lose up to 30% of organic traffic from shopping queries.

Verified sources

A3

FAQ schema

What is it

Verifies the presence of FAQPage (Frequently Asked Questions) schema markup on pages that contain a question-and-answer section. For pages without an FAQ section, this check is automatically skipped (N/A).
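The structure this check looks for pairs each Question with an acceptedAnswer. A sketch of that shape, and of how the question-answer pairs could be pulled out, follows; the field names are schema.org's, the extraction helper and sample content are illustrative.

```python
# Hypothetical FAQPage JSON-LD object as it would appear after parsing.
FAQ_JSONLD = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is a CDN?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "A network of servers that caches content close to users.",
            },
        }
    ],
}

def faq_pairs(data: dict) -> list:
    """Return (question, answer) pairs from a FAQPage JSON-LD object."""
    if data.get("@type") != "FAQPage":
        return []
    pairs = []
    for q in data.get("mainEntity", []):
        if q.get("@type") == "Question":
            answer = q.get("acceptedAnswer", {}).get("text", "")
            pairs.append((q.get("name", ""), answer))
    return pairs

print(faq_pairs(FAQ_JSONLD)[0][0])  # What is a CDN?
```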

Why it matters

FAQ schema is a direct source for AI answers. When ChatGPT or Google AI Overviews look for an answer to a question, they prefer content marked with FAQPage schema because it's already in a question-answer format.

Real-world example

Cloudflare has FAQ schema on its product pages, so its answers appear directly in Google results as expandable questions. This increases SERP real estate and click-through rate (CTR).

Verified sources

A4

Review/Rating schema

What is it

Verifies the presence of Review and AggregateRating schema on pages with product or service reviews. For websites without reviews or ratings, this check is automatically skipped (N/A).

Why it matters

AI models use Review schema as a signal of trustworthiness and quality. When AI generates recommendations, it prioritizes sources with verified ratings and reviews from real users.

Real-world example

Booking.com uses AggregateRating schema on all hotels, enabling Google to display star ratings directly in search results. Hotels with visible ratings have a 25% higher click-through rate.

Verified sources

A5

AI bot policy

What is it

Analyzes robots.txt and meta tags for AI crawlers (GPTBot, ClaudeBot, Google-Extended, and others). The check determines whether the site has an explicit policy for AI bots and evaluates how clear the rules are: does robots.txt contain an explicit rule for each AI crawler separately, does a public page explaining the policy exist, and are the rules consistent across robots.txt and meta tags? It also weighs the strategic tradeoff: blocking AI bots may protect content but reduces AI visibility.
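A minimal sketch of how such a check might read robots.txt: does each known AI crawler get an explicit rule? The agent list, the sample file, and the "full Disallow counts as block" shortcut are all illustrative assumptions.

```python
AI_AGENTS = ("GPTBot", "ClaudeBot", "Google-Extended")

# Hypothetical robots.txt: GPTBot blocked, Google-Extended allowed,
# ClaudeBot not mentioned at all.
SAMPLE_ROBOTS = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Disallow: /admin/
"""

def explicit_ai_policy(robots_txt: str) -> dict:
    """Map each AI agent to 'block', 'allow', or None (no explicit rule)."""
    policy = {agent: None for agent in AI_AGENTS}
    current = []        # user-agents the following rule lines apply to
    seen_rule = False   # a User-agent line after rules starts a new group
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()
        if not line:
            current, seen_rule = [], False  # blank line ends the record
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if seen_rule:
                current, seen_rule = [], False
            current.append(value)
        elif field in ("allow", "disallow"):
            seen_rule = True
            # Shortcut: a full 'Disallow: /' counts as a block,
            # anything else as at least partial access.
            verdict = "block" if (field == "disallow" and value == "/") else "allow"
            for agent in current:
                if agent in policy and policy[agent] is None:
                    policy[agent] = verdict
    return policy

print(explicit_ai_policy(SAMPLE_ROBOTS))
```

An agent mapped to None is exactly the "unclear policy" case the check flags: the crawler falls through to the generic `*` record, and its operator's interpretation decides.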

Why it matters

Strategic management of AI bot access is crucial. Complete blocking means your company won't exist in AI answers. Conversely, full access may lead to unwanted training on your content without compensation. An unclear AI bot policy leads to inconsistent crawler behavior.

Real-world example

The New York Times blocked GPTBot in robots.txt, protecting its content from AI training but losing visibility in ChatGPT. Conversely, Stripe allows AI crawlers because it wants to be the primary source of information about payment APIs.

Verified sources

A6

llms.txt existence

What is it

Checks for the existence of the /llms.txt file — a new standard that provides AI models with a structured overview of a website in Markdown format. Only the presence of the file is evaluated — content quality is assessed in a separate check A7.
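For orientation, a small llms.txt in the format the standard proposes might look like this (company and URLs invented for the example):

```markdown
# Example Corp

> Example Corp provides invoicing software for freelancers and small firms.

## Products
- [Invoicing](https://example.com/invoicing.md): create and send invoices
- [Payments](https://example.com/payments.md): accept card payments

## Docs
- [API reference](https://example.com/docs/api.md)
```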

Why it matters

The llms.txt file is designed specifically for LLM models to quickly understand a website's purpose, services, and structure. Unlike robots.txt which controls access, llms.txt actively helps AI understand your content. Its mere existence is a strong signal that the site is AI-ready.

Real-world example

Stripe has one of the best llms.txt files — it contains a product overview, links to documentation, and API references. Thanks to this, ChatGPT and Claude can accurately answer questions about Stripe products.

Verified sources

A7

llms.txt content quality

What is it

Evaluates the content quality of the /llms.txt file — length, website description, number of section links, presence of .md links, and overall informativeness. If llms.txt doesn't exist (A6=fail), this check is automatically N/A.
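The quality criteria named above can be sketched as simple heuristics: is there a summary line, are there enough links, do any point to .md files, is the file non-trivial in length? The thresholds and weights below are invented for the sketch, not the product's actual scoring.

```python
import re

def llms_txt_quality(text: str) -> int:
    """Score 0-4 using illustrative heuristics for llms.txt quality."""
    score = 0
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    if any(line.startswith(">") for line in lines):  # blockquote summary
        score += 1
    links = re.findall(r"\[([^\]]+)\]\(([^)]+)\)", text)
    if len(links) >= 3:                              # enough section links
        score += 1
    if any(url.endswith(".md") for _, url in links): # .md versions offered
        score += 1
    if len(text) >= 200:                             # non-trivial length
        score += 1
    return score

GOOD = """# Example
> Example sells payment APIs for online businesses.

## Docs
- [Quickstart](https://example.com/docs/quickstart.md)
- [API reference](https://example.com/docs/api.md)
- [Pricing](https://example.com/pricing.md)
""" + "Padding text. " * 10

print(llms_txt_quality(GOOD))                  # 4
print(llms_txt_quality("This is our website")) # 0
```

The single-line file from the example below scores zero on every axis, which is the point of separating this check from the existence check A6.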

Why it matters

Having an empty or minimal llms.txt is not enough. AI models need quality content — a website description, links to key sections, .md versions of documentation. A high-quality llms.txt significantly improves the accuracy of AI responses about your site.

Real-world example

OpenAI has a detailed overview at developers.openai.com/api/docs/llms.txt with links to all API sections including .md versions. A low-quality llms.txt with a single line 'This is our website' barely helps AI models at all.

Verified sources

A8

llms-full.txt (bonus)

What is it

Bonus check for the existence of the /llms-full.txt file — an extended version of llms.txt containing full documentation content in a single Markdown file. This check never penalizes — if the file doesn't exist, it's scored as N/A and doesn't affect the overall score.

Why it matters

The llms-full.txt file allows AI models to load entire documentation in a single request without needing to crawl dozens of pages. This dramatically speeds up product comprehension and reduces errors in AI responses. Since this is a specific need of documentation sites, the check works as a bonus — it rewards sites that have it but doesn't penalize those that don't.

Real-world example

OpenAI provides llms-full.txt at developers.openai.com/api/llms-full.txt, containing the entire API documentation in a single file. Competing AI services can thus quickly index the OpenAI API without crawling hundreds of pages.

Verified sources

A9

BLUF (Bottom Line Up Front)

What is it

Evaluates whether the page presents the key information at the beginning of the content — the BLUF principle. AI models extract answers primarily from the first paragraphs, so placing the main point at the top is critical.

Why it matters

AI models assign the highest weight to the first paragraphs of a page when generating responses. If key information is buried in the middle or at the end of the text, AI may overlook it and use less relevant information instead.

Real-world example

Wikipedia uses the BLUF principle on every article — the first paragraph always contains the definition and the most important facts. That's why AI models cite Wikipedia so often — the key information is always at the beginning.

Verified sources

A10

Content structure (lists, tables)

What is it

Analyzes content structure — use of bulleted and numbered lists, tables, and structured formats. Well-structured content with lists and tables is easier for AI models to parse.
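One way such an analysis could start is by counting list and table elements in the markup. The parser below is a minimal sketch using only the standard library; real scoring would weight the counts against content length.

```python
from html.parser import HTMLParser

class StructureCounter(HTMLParser):
    """Count structural elements relevant to content parseability."""
    COUNTED = ("ul", "ol", "table")

    def __init__(self):
        super().__init__()
        self.counts = {tag: 0 for tag in self.COUNTED}

    def handle_starttag(self, tag, attrs):
        if tag in self.COUNTED:
            self.counts[tag] += 1

def structure_counts(html: str) -> dict:
    parser = StructureCounter()
    parser.feed(html)
    return parser.counts

sample = "<article><ul><li>a</li></ul><table><tr><td>1</td></tr></table></article>"
print(structure_counts(sample))  # {'ul': 1, 'ol': 0, 'table': 1}
```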

Why it matters

AI models process structured content more efficiently than continuous text. Lists enable extraction of bullet-point answers, and tables provide comparable data in a clear format.

Real-world example

Cloudflare documentation uses code blocks, lists, and tables on every page. That's why AI assistants can accurately answer technical questions about Cloudflare products with specific parameters from tables.

Verified sources

A11

FAQ section on page

What is it

Checks for the presence of an FAQ section directly in the page content (not just schema markup). For pages where an FAQ doesn't make sense (e.g., contact pages), the check is skipped (N/A).

Why it matters

FAQ sections are an ideal source for AI answers because they contain question-answer pairs in natural language. AI models can directly use these pairs as responses to user queries.

Real-world example

Shopify has an FAQ section on every product page with real customer questions. These answers regularly appear in Google AI Overviews and ChatGPT because they precisely match common user queries.

Verified sources

A12

Definitions/glossary patterns

What is it

Detects definition patterns and glossary sections on the page — for example, 'Term: definition' format, definition lists (dl/dt/dd), or dedicated glossary pages. For pages without specialized terminology, this check is skipped (N/A).

Why it matters

AI models actively search for term definitions to build their knowledge base. Pages with clearly marked definitions become an authoritative source for AI when explaining technical terms.

Real-world example

MDN Web Docs (Mozilla) uses a consistent definition pattern for every web API and CSS property. That's why when you ask ChatGPT about CSS flexbox, the answer often comes from MDN definitions.

Verified sources

A13

Knowledge chunkability & content depth

What is it

Evaluates how well the page content can be divided into standalone knowledge blocks (chunks). Each chunk should contain one complete idea with a clear heading and context. Content uniqueness, depth, and clarity are also assessed.

Why it matters

RAG (Retrieval-Augmented Generation) systems split web content into chunks before storing them in a vector database. If content is poorly segmented, chunks lose context and AI generates inaccurate or incomplete responses.

Real-world example

Stripe API documentation has each endpoint in a separate section with a heading, description, parameters, and examples. This allows AI systems to accurately extract information about a specific endpoint without contamination from other sections.

Verified sources

A14

Reference / evidence signals

What is it

Checks for the presence of references, citations, and evidence in the content — external links to studies, statistics with source attribution, expert quotes. These signals increase content credibility for AI systems.

Why it matters

AI models evaluate source credibility based on references and evidence. Content backed by verifiable sources has a higher chance of being cited in AI responses.

Real-world example

Articles on HubSpot Blog always contain links to research, statistics, and case studies. That's why HubSpot articles are among the most frequently cited sources in AI responses to marketing questions.

Verified sources

A15

Freshness signals

What is it

Detects content freshness signals — publication date, last updated date, document version. For homepages, this check is skipped (N/A) since homepages typically don't have a publication date.
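Detection of such signals can be sketched as pattern matching over metadata and visible text. The patterns below cover an `article:modified_time` meta tag and a "Last updated" line in ISO date format; both are illustrative, and a production check would handle many more date formats.

```python
import re
from datetime import date

DATE_RE = re.compile(
    r"(?:last updated|updated|published)[:\s]*(\d{4}-\d{2}-\d{2})", re.I)
META_RE = re.compile(
    r'<meta[^>]+article:modified_time[^>]+content="(\d{4}-\d{2}-\d{2})', re.I)

def freshness_date(html: str):
    """Return the page's freshness date, or None if no signal is found."""
    m = META_RE.search(html) or DATE_RE.search(html)
    return date.fromisoformat(m.group(1)) if m else None

page = "<p>Last updated: 2025-06-01</p>"
print(freshness_date(page))  # 2025-06-01
```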

Why it matters

AI models prefer current content and use freshness signals to decide which source to cite. Outdated content without an update date has lower priority in AI responses.

Real-world example

Google Cloud documentation displays a 'Last updated' date on every page. When AI compares two sources on the same topic, it prefers the one with a more recent update date.

Verified sources

A16

Entity & author completeness

What is it

Checks the completeness of author and entity information on the page — author name, bio, contact, social profiles, and organization affiliation. Evaluates E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness).

Why it matters

AI systems build a knowledge graph of authors and organizations. Complete author profiles increase content credibility, and AI models prefer to cite content from identifiable experts in a given field.

Real-world example

Articles on Mayo Clinic always include the doctor's name, specialization, and credentials. That's why AI models prioritize Mayo Clinic over anonymous health websites for medical questions.

Verified sources

A17

AI-friendly content formats & feeds

What is it

Evaluates whether the site provides content in formats optimized for AI processing — Markdown page versions, RSS feed with full content, API or data feeds. For sites where an API doesn't make sense, the API portion of the check is skipped (N/A).

Why it matters

AI crawlers and agents prefer clean content without navigational noise. Sites that provide Markdown or clean content versions are processed more efficiently and have a higher chance of being included in AI responses.

Real-world example

Cloudflare offers a Markdown for Agents feature — when an AI agent sends a request with Accept: text/markdown, it receives a clean Markdown version of the page instead of full HTML. Stripe provides a comprehensive REST API with an OpenAPI specification.

Verified sources

A18

Linkability of key facts

What is it

Checks whether key facts and information on the page have direct URLs (anchor links) — whether headings contain ID attributes, whether deep links to specific sections exist, and whether specific information can be shared directly.
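A linkability check of this kind can be reduced to a ratio: what fraction of h2/h3 headings carry an id attribute that a deep link can target? A minimal standard-library sketch (the sample markup is invented):

```python
from html.parser import HTMLParser

class HeadingIds(HTMLParser):
    """Count h2/h3 headings and how many of them have an id attribute."""
    def __init__(self):
        super().__init__()
        self.total = 0
        self.with_id = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("h2", "h3"):
            self.total += 1
            if any(name == "id" and value for name, value in attrs):
                self.with_id += 1

def anchor_coverage(html: str) -> float:
    """Fraction of h2/h3 headings that are deep-linkable."""
    p = HeadingIds()
    p.feed(html)
    return p.with_id / p.total if p.total else 0.0

html = '<h2 id="pricing">Pricing</h2><h2>Contact</h2>'
print(anchor_coverage(html))  # 0.5
```

Here only the "Pricing" section can be cited as example.com/page#pricing; "Contact" can be referenced only as the whole page.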

Why it matters

AI systems need to link to specific facts, not just entire pages. When AI cites a source, a deep link to the specific section increases response credibility and allows the user to quickly verify the information.

Real-world example

GitHub documentation automatically generates anchor links for every heading, enabling precise citation. When ChatGPT references GitHub docs, the user goes directly to the relevant section.

Verified sources

A19

Changelog / release notes

What is it

Detects a changelog or release notes on the site — change history, new features, fixed bugs. For sites where a changelog doesn't make sense (restaurants, local services), the check is skipped (N/A).

Why it matters

A changelog is a key freshness signal for AI models. A regularly updated changelog signals an actively maintained product, and AI models use it to verify how current their information about the product is.

Real-world example

Vercel has a public changelog at vercel.com/changelog with dates and detailed change descriptions. When a user asks AI about the latest Vercel features, AI can provide a current answer precisely because of the structured changelog.

Verified sources

A20

Semantic HTML (article, section, nav, aside)

What is it

Checks the use of semantic HTML5 elements — article, section, nav, aside, main, header, footer. These elements provide AI models with contextual information about the role of each part of the page.
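A check like this can simply record which of the listed elements actually appear in the markup. The sketch below does exactly that with the standard-library parser; the sample page is invented.

```python
from html.parser import HTMLParser

SEMANTIC = ("article", "section", "nav", "aside", "main", "header", "footer")

class SemanticTags(HTMLParser):
    """Record which semantic HTML5 elements the page uses."""
    def __init__(self):
        super().__init__()
        self.seen = set()

    def handle_starttag(self, tag, attrs):
        if tag in SEMANTIC:
            self.seen.add(tag)

def semantic_tags_used(html: str) -> set:
    p = SemanticTags()
    p.feed(html)
    return p.seen

page = "<main><nav>...</nav><article><p>Text</p></article></main>"
print(sorted(semantic_tags_used(page)))  # ['article', 'main', 'nav']
```

A page built entirely from generic div elements would return an empty set, giving an AI crawler no structural hint about which part is the main content.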

Why it matters

AI crawlers use semantic HTML elements to identify the main content of a page. The article element marks primary content, nav marks navigation, and aside marks secondary content — this helps AI ignore noise and extract relevant content.

Real-world example

Web.dev (Google) consistently uses semantic HTML — main content is in article, navigation in nav, and related links in aside. AI crawlers can thus efficiently extract only the educational content without navigational noise.

Verified sources

Try auditing your website

Test your website against all checks and find out what to improve.

Start analysis