
Brand Analysis: FAQs

This article answers frequently asked questions about the Brand Analysis module on the Share of Model™ platform.

Written by Ishita Mishra
Updated over 3 months ago

1. What is the Brand Mention Rate (BMR)?
The Brand Mention Rate (BMR) measures how often a brand is mentioned by large language models (LLMs) in response to a query about brands in a specific category. A high BMR indicates strong visibility and market presence, while a low BMR suggests the need for better content strategies or SEO improvements.
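As a rough illustration, here is a minimal Python sketch of how a mention rate can be computed from a batch of LLM responses. The response texts, brand names, and helper function are hypothetical, not the platform's actual pipeline:

```python
# Minimal sketch: Brand Mention Rate as the share of LLM responses
# that mention a given brand. The data below is made up, and a real
# pipeline would also normalize brand-name variants (see question 26).
responses = [
    "Top picks: Brand X, Brand Y, Brand Z",
    "I'd recommend Brand Y or Brand W",
    "Brand X is a popular choice",
]

def mention_rate(brand: str, responses: list[str]) -> float:
    mentions = sum(brand.lower() in r.lower() for r in responses)
    return mentions / len(responses)

print(f"BMR: {mention_rate('Brand X', responses):.0%}")  # BMR: 67%
```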

2. What is Share of Voice (SOV)?
Share of Voice (SOV) combines two key factors: Brand Mention Rate and Average Position in LLM-generated content. SOV provides a more comprehensive view of brand visibility by considering both frequency and prominence of mentions. A higher SOV reflects more frequent and higher-ranking mentions.

3. How can I track my brand's visibility over time?
The Brand Mention Rate Over Time chart tracks the evolution of your brand’s visibility across LLMs. This chart helps monitor trends, such as increases or decreases in your brand’s mentions over specific periods, allowing you to evaluate the impact of marketing efforts or external factors.

4. What does the Mention Rate Top Brands chart show?
The Mention Rate Top Brands chart compares the top 25 brands in your category, ranked by how frequently they are mentioned in LLM responses. This provides insights into industry trends and helps benchmark your brand’s performance against competitors.

5. How is the "Share of Voice by Model" graph calculated?
The graph follows a two-step approach, illustrated in the sketch at the end of this answer:

  1. Identify the Top 10 Brands Overall – We determine the top 10 brands with the highest Share of Voice (SOV) across all LLMs selected in the filters.

  2. Calculate SOV for Each LLM – For these top 10 brands, we compute their individual SOV scores across each LLM.

  • Why is this method used?
    This approach ensures consistency in brand representation across the different bar charts. If we selected the top 10 brands separately for each LLM, the brands displayed in the graph could vary between models, making direct comparison difficult. By fixing the top 10 brands based on overall SOV, users can accurately compare their performance across different LLMs.

  • What happens if a brand is in the top 10 overall but not for a specific LLM?
    A brand may rank in the top 10 overall but have a lower ranking (or even fall outside the top 10) for a specific LLM. In such cases, it will still appear in the graph, because inclusion is determined by the overall ranking; this keeps the brand set consistent across models.
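To make the two-step approach concrete, here is a hedged sketch over invented data; the `sov` scores and variable names are illustrative, not the platform's internals:

```python
# Sketch of the two-step "Share of Voice by Model" logic:
# (1) rank brands by overall SOV, (2) report that same fixed brand
# set under each LLM, even where a brand ranks lower there.
# sov[llm][brand] holds hypothetical per-model SOV scores.
sov = {
    "llm_a": {"Brand X": 0.30, "Brand Y": 0.25, "Brand Z": 0.05},
    "llm_b": {"Brand X": 0.10, "Brand Y": 0.35, "Brand Z": 0.20},
}

# Step 1: overall SOV per brand (equal weight per LLM, see question 24).
brands = {b for scores in sov.values() for b in scores}
overall = {b: sum(m.get(b, 0.0) for m in sov.values()) / len(sov) for b in brands}
top = sorted(overall, key=overall.get, reverse=True)[:10]  # top 10 overall

# Step 2: per-LLM scores for the fixed brand set. "Brand Z" still
# appears for llm_a even though it scores poorly there.
for llm, scores in sov.items():
    print(llm, [(b, scores.get(b, 0.0)) for b in top])
```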

6. What is the significance of the Perceived Strengths chart?
The Perceived Strengths chart identifies the key advantages associated with your brand as perceived by LLMs. This data is grouped into clusters, helping you understand where your brand excels compared to competitors and providing actionable insights for enhancing your marketing strategies.

7. How are Perceived Weaknesses evaluated?
The Perceived Weaknesses chart visualizes the key disadvantages associated with your brand as identified by LLMs. Weaknesses are grouped into clusters, making it easier to identify areas of improvement and to benchmark against competitors' shortcomings.

8. What is the purpose of the Share of Voice Over Time chart?
The Share of Voice Over Time chart helps track how your brand's visibility changes in comparison to competitors over a period. It combines BMR and Average Position to offer insights into shifts in brand prominence, revealing the impact of campaigns or external events.

9. How does the Brand Attributes chart work?
The Brand Attributes chart provides a quantifiable evaluation of how LLMs perceive specific attributes of your brand and competitors. Each attribute is scored from -5 to +5, reflecting how strongly LLMs associate the brand with that attribute, so you can compare how multiple brands are perceived on attributes such as quality or customer satisfaction.

10. How are strengths and weaknesses compared across different LLM models?
Charts for Perceived Strengths and Perceived Weaknesses by LLM models show how each LLM perceives your brand relative to competitors. They highlight which aspects of your brand are most frequently recognized and where improvement is needed.

11. How is Awareness translated for LLMs?
Translating "brand awareness" into an LLM context involves measuring how often a brand is mentioned (textual mentions and frequency), the sentiment behind those mentions (contextual relevance), and its association with key features (top-of-mind awareness). LLMs can assess how prominently a brand is discussed in relation to competitors (competitor comparisons) and analyze user engagement through queries. Metrics such as mention rates, sentiment analysis, and share of voice provide quantifiable insights into brand visibility, helping determine how "top-of-mind" a brand is in various contexts.


12. How are the various metrics on SoM platform calculated?

  • Brand awareness is measured by asking large language models (LLMs) to list brands within a specific category, resulting in two key metrics: Mention Rate and Share of Voice (SOV).

    • Mention Rate is the percentage of times a brand appears in LLM responses. For example, if Brand X is mentioned 55 times out of 100 responses, its Mention Rate is 55%.

    • SOV goes further by considering not just how often a brand is mentioned but also its position in the response. Brands that appear in higher positions (like 1st or 2nd) get more weight, contributing to a higher SOV compared to brands that appear lower.

  • For Brand Perception, LLMs rate brands on attributes (scored -5 to +5) such as quality or customer service, with the metric showing the average score for each attribute. Additionally, LLMs provide perceived strengths and weaknesses, which are grouped into themes or clusters, revealing the percentage of pros or cons for each brand in key areas.
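As a small illustration of the perception metrics in the last bullet, here is a hypothetical sketch; the ratings, pros, and cluster assignments are made up:

```python
# Sketch: average attribute score (-5 to +5) and each cluster's share
# of a brand's pros, per the Brand Perception description above.
from collections import Counter
from statistics import mean

quality_ratings = [4, 3, 5, 2, 4]       # hypothetical per-response scores
print(mean(quality_ratings))             # 3.6 = average "quality" score

pros = ["durable", "long-lasting", "cheap", "sturdy"]
clusters = {"durable": "Durability", "long-lasting": "Durability",
            "sturdy": "Durability", "cheap": "Price"}
counts = Counter(clusters[p] for p in pros)
for cluster, n in counts.items():
    print(cluster, f"{n / len(pros):.0%}")  # Durability 75%, Price 25%
```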

Understanding BMR and SOV helps brands assess their visibility and prominence in the market compared to competitors. Additionally, analyzing brand perception through attributes evaluation and perceived strengths/weaknesses provides insights into consumer sentiment and areas for strategic improvement. These metrics enable brands to refine their marketing strategies, enhance their digital presence, and better position themselves within their industry.

13. How are clusters selected for brand strengths and weaknesses by LLMs?
The system for selecting clusters works in four steps (a minimal sketch follows the list):

  1. Attribute embeddings are generated for each pro and con, which represent the meaning of words as numerical values.

  2. A clustering algorithm groups similar words into themes based on their embeddings.

  3. Once the clusters are formed, an LLM is asked to name each cluster, summarizing the theme of the grouped pros and cons.

  4. Calculation: We sum the occurrences of each attribute belonging to a cluster to measure its size.

This process helps categorize brand strengths and weaknesses efficiently.
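A minimal sketch of this pipeline, assuming scikit-learn's KMeans for the grouping step; the `embed` and `ask_llm` functions are placeholder stand-ins, not the platform's actual models:

```python
# Sketch of the clustering flow: embed each pro/con, group by
# embedding similarity, have an LLM name each group, then size each
# cluster by summing attribute occurrences.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def embed(text: str) -> list[float]:
    # Placeholder embedding; a real pipeline uses a text-embedding model.
    return [len(text), text.count("-"), text.count("e")]

def ask_llm(prompt: str) -> str:
    # Placeholder for the LLM call that names a cluster.
    return "Theme of: " + prompt

pros = ["durable", "long-lasting", "sturdy", "cheap", "affordable"]

vectors = np.array([embed(p) for p in pros])                   # step 1
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)  # step 2
names = {c: ask_llm(", ".join(p for p, l in zip(pros, labels) if l == c))
         for c in set(labels)}                                 # step 3
sizes = Counter(names[l] for l in labels)                      # step 4
print(sizes)
```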

14. How are the auto attributes chosen for brand evaluation?
Auto attributes are generated by LLMs. We ask the LLMs to suggest important themes or concepts to consider when buying or investing in the specific category defined for the analysis. These suggestions are then used to evaluate the brand.

15. Why is my Brand Mention Rate Over Time line chart flat?
The chart is displayed in weekly or monthly periods. For newer analyses, there may be only a few data points available, which can make the line appear flat over time due to limited data.

16. How long will it take for me to have representative data?
It depends on the metric: for brand perception and awareness, data tends to stabilize within a few days or weeks. For metrics like pros/cons clusters and attributes, LLM responses vary more, so while early insights can be valuable, clusters usually become more representative after about a month.

17. Why are we focused on the current LLMs, and will more be included in the future?
Currently, the analysis focuses on popular LLMs such as ChatGPT (GPT-3.5 and GPT-4 Turbo), Gemini 1.5 Pro, Meta Llama 3.1 Turbo, and Claude 3.5 Sonnet. However, new and upgraded models will be added to the analysis as they become available, based on priority.

18. Why is there no historical data for my SOV over time chart?
Historical data isn’t available because LLM data collection starts only from the day the analysis is launched, unlike media data which may have past records. It’s recommended to review the SOV chart after at least a month to allow enough data to accumulate.

19. How many times do we prompt the model per day?
Models are prompted multiple times per day, enough to identify and remove outlying or inconsistent responses.

20. How do previous prompts affect model responses? Is "context window bias" removed?
In the Share of Model™ platform, each prompt is handled individually without any back-and-forth conversation, ensuring there’s no "context window bias." Since we use a single prompt-and-answer setup for each query, previous prompts don’t affect current responses, and context windows remain unaffected.
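For illustration only, here is a sketch of what "single prompt-and-answer" means mechanically; the request structure is a generic chat-completion shape, not the platform's actual client:

```python
# Sketch: every query is issued as a fresh single-turn request, so no
# earlier prompt can enter the model's context window.
def build_request(prompt: str) -> dict:
    # The messages list always holds exactly one user turn:
    # no prior exchanges are carried over between queries.
    return {"messages": [{"role": "user", "content": prompt}]}

r1 = build_request("What brands come to mind for running shoes?")
r2 = build_request("What brands come to mind for trail shoes?")
assert len(r2["messages"]) == 1  # independent of r1
```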

21. What do the numbers on the In-depth Exploration of Perceived Positive and Perceived Negative represent?
The numbers indicate how often each attribute occurs. We sum the occurrences of each attribute belonging to a cluster to measure its size. Depending on the filters applied, these counts can reflect all brands (including competitors) or focus on a specific brand. This process helps categorize brand strengths and weaknesses efficiently.

22. Are there examples of prompts used to compile SoM reports?
We have built prompt engineering pipelines to industrialize the generation and versioning of prompts for each report. As a result, we have several variations of prompts for each analysis, enabling us to obtain relevant results.

  • For instance, for the Awareness report we use prompts similar to “What brands come to mind when you think of {category}?”

  • For Category Perception, we use prompts such as “Which is better, {brand} or {competitor 1}?”
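A toy sketch of how such templated prompt variations might be generated and versioned; only the two quoted templates come from this article, and the mechanism shown is illustrative:

```python
# Sketch: versioned prompt templates per report, filled in with the
# category or brands defined for the analysis.
AWARENESS_TEMPLATES = [
    "What brands come to mind when you think of {category}?",
    # ...further vetted variations would be versioned here
]
CATEGORY_PERCEPTION_TEMPLATES = [
    "Which is better, {brand} or {competitor}?",
]

prompts = [t.format(category="running shoes") for t in AWARENESS_TEMPLATES]
prompts += [t.format(brand="Brand X", competitor="Brand Y")
            for t in CATEGORY_PERCEPTION_TEMPLATES]
print(prompts)
```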

23. How does selecting a country, such as the USA, affect the prompt during the analysis?
Selecting a country simply adds a "pre-prompt" to the main prompt (question). For example, if the USA is selected, the prompt is adjusted to something like, "I am from the USA... What brands come to your mind when we are talking about {category}?" This helps tailor the analysis to the selected region.
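Continuing the toy templating sketch above, the country selection can be modeled as a simple prefix; the exact pre-prompt wording beyond the quoted USA example is an assumption:

```python
# Sketch: a selected country prepends a short pre-prompt to the main
# question, per the USA example quoted above.
PRE_PROMPTS = {"USA": "I am from the USA."}  # other countries analogous

def localize(prompt: str, country: str | None = None) -> str:
    pre = PRE_PROMPTS.get(country)
    return f"{pre} {prompt}" if pre else prompt

print(localize("What brands come to your mind when we are talking about {category}?", "USA"))
```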

24. How is weight distributed in "share of" charts without a breakdown by LLMs?
In "share of" charts without a breakdown between different LLMs, the weight is distributed equally across all LLMs.

25. Why is a brand visible in the "Share of Voice by Model" graph but not in the "Average Position vs Mention Rate" graph?
This occurs because the weighting is not linear; higher positions receive significantly more weight. For example, a first-place position earns 25 points, second place earns 10 points, third earns 5 points, and fourth earns 4 points.
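Using the point values quoted above, here is a minimal sketch of the non-linear weighting; the weight assumed for positions beyond fourth is not specified in this article:

```python
# Non-linear position weights from the answer above:
# 1st = 25, 2nd = 10, 3rd = 5, 4th = 4 points.
WEIGHTS = {1: 25, 2: 10, 3: 5, 4: 4}

def position_points(position: int) -> int:
    return WEIGHTS.get(position, 1)  # >4th: assumed small default

# One 1st-place mention (25 pts) outweighs two 3rd-place mentions
# (5 + 5 = 10 pts), so a brand can rank high on SOV while looking
# unremarkable on mention-rate or average-position views.
print(position_points(1), 2 * position_points(3))  # 25 10
```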

26. How effectively can Share of Model manage brand-specific nuances, such as recognizing acronyms, spelling variations, or common errors?
This is an area we are continuously working to improve. For sections of SoM where we don't ask for specific brands but can get any brand as a response (e.g., Brand Awareness), we have a system in place to group variations of the same brand name, such as “AWS” and “Amazon Web Services.” For sections where we specifically ask for information about a single brand (e.g., Brand Perception or Category Perception), this isn't an issue. In summary, while LLMs occasionally refer to the same brand in different ways, our system effectively manages these variations.
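As a rough illustration of that grouping, here is a hypothetical normalization sketch; the alias table is invented for the example:

```python
# Sketch: map acronyms and spelling variants to one canonical brand
# before counting mentions. The alias table is illustrative.
ALIASES = {
    "aws": "Amazon Web Services",
    "amazon web services": "Amazon Web Services",
}

def canonical(brand: str) -> str:
    return ALIASES.get(brand.strip().lower(), brand.strip())

assert canonical("AWS") == canonical("Amazon Web Services")
```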

27. How do we ensure that LLMs are questioned as if they were consumers asking, rather than marketers?
Share of Model is not a consumer survey; it is a tool for measuring LLMs. Therefore, we do not frame questions exactly from a consumer’s perspective, nor do we replicate a customer’s exact thought process or reasoning. Instead, we design prompts to assess how LLMs interpret and respond to queries in a way that aligns with real-world consumer interactions.
