Fan-out Queries – what, how & purpose

Understand the internal queries LLMs use to build their answers.

Written by Apolline Vanneste
Updated this week

🎯 Why this view exists

To build a response, models often generate multiple internal queries, known as fan-out queries, to explore the topic before producing the final answer.

These fan-out queries largely determine:

  • which angles of the topic are explored,

  • which domains and URLs are retrieved,

  • and whether your domain or brand appears later as a Source or a Link.

The Fan-out Queries view is designed to make this internal exploration visible, so you can better understand how generative answers are constructed.
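
To make this concrete, here is a purely illustrative sketch in Python: the prompt, queries, and intents below are invented examples, not data from any real collect.

```python
# Purely illustrative: one user prompt decomposed by a model into
# several internal fan-out queries. All strings are invented examples.
prompt = "What is the best project management tool for small teams?"

fan_out_queries = [
    {"query": "best project management software for small teams", "intent": "exploratory"},
    {"query": "project management tools comparison",              "intent": "comparative"},
    {"query": "project management tool pricing for small teams",  "intent": "decision-making"},
]

# Each fan-out query drives its own retrieval step; the URLs returned
# for these queries determine which domains can later surface as
# Sources or Links in the final answer.
for fq in fan_out_queries:
    print(f'{fq["intent"]:>16}: {fq["query"]}')
```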


📊 What this view helps you analyse

The Fan-out Queries view helps you analyse:

  • how LLMs decompose a prompt into multiple internal queries,

  • how stable or volatile these fan-out queries are,

  • how this behaviour differs across engines,

  • and how your domain and competitors are positioned within this fan-out layer.

It acts as a bridge between LLM reasoning and search visibility signals.

#1 Overview tab: identifying global fan-out patterns

The Overview provides a high-level dashboard to quickly identify patterns and trends across fan-out queries.

Here, you can observe:

  • Coverage: among fan-out queries where URLs were retrieved, the percentage in which your domain appears

  • Total QFO: the total number of fan-out queries generated across tracked prompts

  • Average QFO: the average number of fan-out queries per prompt, by engine
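
A minimal sketch of how these three metrics relate, using a hypothetical record structure (not the product's actual schema):

```python
# Minimal sketch of Coverage, Total QFO and Average QFO. The record
# structure is hypothetical, not the product's actual data model.
from collections import Counter

records = [
    # one record per fan-out query observed in a collect
    {"prompt": "p1", "engine": "A", "urls_retrieved": True,  "domain_present": True},
    {"prompt": "p1", "engine": "A", "urls_retrieved": True,  "domain_present": False},
    {"prompt": "p2", "engine": "A", "urls_retrieved": False, "domain_present": False},
    {"prompt": "p2", "engine": "B", "urls_retrieved": True,  "domain_present": True},
]

total_qfo = len(records)  # total fan-out queries across tracked prompts

# Coverage only counts fan-out queries where URLs were actually retrieved.
with_urls = [r for r in records if r["urls_retrieved"]]
coverage = sum(r["domain_present"] for r in with_urls) / len(with_urls)

# Average QFO: mean number of fan-out queries per (prompt, engine) pair.
pairs = Counter((r["prompt"], r["engine"]) for r in records)
avg_qfo = total_qfo / len(pairs)

print(f"Total QFO: {total_qfo}, Coverage: {coverage:.0%}, Average QFO: {avg_qfo:.2f}")
```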

You can also analyse coverage distribution:

  • by theme or topic

  • by intent type (exploratory, comparative, decision-making, etc.)
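
Building on the same hypothetical records, coverage can be broken down by intent type along these lines:

```python
# Same idea, broken down by intent type. All field values are invented.
from collections import defaultdict

records = [
    {"intent": "exploratory",     "urls_retrieved": True, "domain_present": True},
    {"intent": "exploratory",     "urls_retrieved": True, "domain_present": False},
    {"intent": "comparative",     "urls_retrieved": True, "domain_present": True},
    {"intent": "decision-making", "urls_retrieved": True, "domain_present": False},
]

hits, totals = defaultdict(int), defaultdict(int)
for r in records:
    if r["urls_retrieved"]:  # coverage is only defined where URLs were retrieved
        totals[r["intent"]] += 1
        hits[r["intent"]] += r["domain_present"]

for intent in totals:
    print(f"{intent}: {hits[intent] / totals[intent]:.0%} coverage")
```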

This view is designed to answer questions like:

  • How broad is the fan-out explored by the model?

  • On which topics or intent types is my coverage stronger or weaker?

  • Do different engines behave differently at a high level?

#2 Fan-out tab: analysing individual fan-out queries

This view provides the detailed list of fan-out queries, either:

  • shown as a flat list ("Grouped by: default")

  • or grouped by prompt, to compare fan-out patterns across engines.

For each fan-out query, you can see:

  • the fan-out query itself

  • the intent(s) associated with it

  • its stability over the last 12 collects for a given prompt and engine: one bar if the fan-out query was present in that collect, no bar if it was absent (see the sketch after this list)

  • whether your domain is present in the listed URLs provided by the model

  • the full list of URLs, ordered as returned by the API
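
As referenced above, here is a minimal sketch of how the stability bars and the domain-presence check can be read, assuming one presence flag per collect (all data below is invented):

```python
# Sketch of the stability indicator: one flag per collect over the
# last 12 collects for a given prompt and engine. All data is invented.
from urllib.parse import urlparse

last_12_collects = [True, True, False, True, True, True,
                    True, False, True, True, True, True]

# '|' if the fan-out query was present in that collect, '.' if absent
# (the UI shows a bar / no bar).
stability_bars = "".join("|" if present else "." for present in last_12_collects)
stability_rate = sum(last_12_collects) / len(last_12_collects)

# Domain presence: does your domain appear among the URLs the model
# retrieved for this fan-out query? (URLs are hypothetical.)
retrieved_urls = [
    "https://example-competitor.com/guide",
    "https://yourdomain.com/blog/post",
]
domain_present = any(urlparse(u).netloc == "yourdomain.com" for u in retrieved_urls)

print(stability_bars)  # ||.||||.||||
print(f"present in {stability_rate:.0%} of collects; domain present: {domain_present}")
```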

This makes it possible to:

  • understand which internal queries the model repeatedly relies on,

  • distinguish structural fan-out queries from more occasional ones,

  • and observe relative positioning versus competitors within a given fan-out.

Reminder: Depending on the model and API capabilities, fan-out visibility may be partial: it reflects the retrievable part of the model’s exploration, not its full internal reasoning.

#3 Fan-out at prompt level: understanding how a single prompt is decomposed

From the Rankings or Search Queries views, you can access the fan-out details described above for a specific prompt.

This perspective helps you understand:

  • how a single prompt is broken down into multiple internal questions,

  • which types of fan-out queries the model prioritises for that prompt,

  • and how your presence varies across these internal queries.

It provides a focused lens on prompt-level reasoning, complementary to the global overview.
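
A small sketch of this prompt-level grouping, again with invented data:

```python
# Sketch of prompt-level decomposition: grouping hypothetical fan-out
# records by prompt and engine. All strings are invented examples.
from collections import defaultdict

records = [
    {"prompt": "best CRM for startups", "engine": "A", "query": "CRM comparison for startups"},
    {"prompt": "best CRM for startups", "engine": "A", "query": "CRM pricing plans"},
    {"prompt": "best CRM for startups", "engine": "B", "query": "top rated CRM tools"},
]

by_prompt_engine = defaultdict(list)
for r in records:
    by_prompt_engine[(r["prompt"], r["engine"])].append(r["query"])

for (prompt, engine), queries in by_prompt_engine.items():
    print(f"{prompt!r} on engine {engine}: {len(queries)} fan-out queries")
    for q in queries:
        print(f"  - {q}")
```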


💡 Reading cues: how to interpret fan-out data

These patterns are signals to read, not rules.

  • Fan-out queries that are stable over time often reflect core questions the model consistently asks itself.

  • Differences across engines highlight engine-specific reasoning strategies.

  • Fan-out queries where competitors are consistently present help explain why certain domains repeatedly influence answers.

  • Recurring fan-out queries can also be read as indicators of topics the model repeatedly expects to find content about when building answers.

Fan-out data should be read as a map of the model’s internal exploration, not as a traditional ranking report. It highlights patterns of exploration, not direct causality between a single query and final visibility.


✨ How to use this view effectively

Start by identifying fan-out queries that appear consistently across engines and collects. These recurring queries often reflect the core questions the model relies on to explore a topic and determine which domains influence the final answer.
