You can test your brand's AI visibility right now: open ChatGPT, Claude, Gemini, or Perplexity and ask the questions your customers actually ask. But a single test rarely shows the full picture. AI visibility is uneven by nature, and a structured approach across four prompt types shows exactly where you appear and where you don't.
What AI Visibility Actually Means for Your Brand
AI visibility is whether your brand appears in AI-generated answers to questions your buyers are already asking. It is not a Google ranking metric or a social media measure.
When a shopper asks ChatGPT "best protein powder for muscle recovery," the AI names one or two brands and moves on. According to McKinsey, 40 to 55 percent of consumers in key sectors including apparel, wellness, and beauty now use AI-based search to make purchasing decisions.
AI visibility is tracked through two metrics. Citation frequency measures how often your brand appears across a set of relevant prompts. Share of voice is your brand's citations as a percentage of all citations in your category.
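The two metrics above are simple to compute once you have collected AI answers and noted which brands each one cites. Here is a minimal sketch; the brand names and answer data are illustrative, not real audit results.

```python
# Minimal sketch of the two AI visibility metrics, assuming you have
# already collected AI answers for a prompt set and extracted the set
# of brands each answer cites. All data here is illustrative.
from collections import Counter

def citation_frequency(answers, brand):
    """Fraction of answers in which the brand is cited at all."""
    hits = sum(1 for cited in answers if brand in cited)
    return hits / len(answers)

def share_of_voice(answers, brand):
    """The brand's citations as a share of all citations in the set."""
    counts = Counter(b for cited in answers for b in cited)
    total = sum(counts.values())
    return counts[brand] / total if total else 0.0

# Each entry: the brands cited in one AI answer (illustrative data)
answers = [
    {"BrandA", "BrandB"},
    {"BrandB", "BrandC"},
    {"BrandA"},
    {"BrandB"},
]

print(citation_frequency(answers, "BrandA"))  # 0.5 (cited in 2 of 4 answers)
print(share_of_voice(answers, "BrandA"))      # 2 of 6 total citations, about 0.33
```

Note that the two numbers can diverge: a brand mentioned once in every answer has high citation frequency but can still hold a small share of voice in a crowded category.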
For a deeper look at how this is measured, see What Is AI Citation Tracking and Why Brands Need It Now.
Why Testing One Prompt Type Isn't Enough
Testing with a single prompt gives an incomplete picture. A brand can appear in a comparison query and be completely absent in a broad discovery query.
A brand visible in comparison queries is reaching buyers in evaluation mode. A brand visible in broad discovery queries is reaching buyers before they have a shortlist. Those are different buyer moments. Gaps in either one cost real conversions.
The only way to see the full picture is to test across the same mix of query types your buyers actually use.
The 4 Prompt Types That Show Where You Stand
Four prompt types cover the AI-driven buyer journey, from first discovery through to direct brand knowledge. Each one surfaces a different kind of gap.
1. Broad prompts
These test top-of-funnel discovery. Example: "best skincare brands" or "affordable city hotels in Manila." No location, no brand in mind. Just exploration. Many brands underperform here without realising it because they have never been seen outside their own niche.
2. Specific prompts
These test more defined intent: location, audience, use case, or a particular value angle. Example: "best protein powder for women over 40" or "hotels near a concert venue with good amenities." A brand that appears in broad queries but drops in specific ones has a positioning gap. The brand is known in general, but not clearly relevant in the moments that matter most.
3. Comparison prompts
These sit at the decision stage. Example: "[your brand] vs [competitor]" or "which is better for sensitive skin, X or Y." Comparison is a high-intent moment. A brand absent here is invisible right when buyers are closest to converting.
4. Brand-direct prompts
These test something different from the other three. Instead of asking whether you get recommended, they ask whether AI actually knows you. Example: "Is [brand] a good option for [use case]?" or "What is [brand] known for?" The results show whether AI describes you accurately and whether the tone is positive or neutral. They also reveal whether your own website is being cited as a source. For newer brands still building domain authority, this category is often the most revealing.
The pattern across most audits: a brand appears in one or two prompt types and drops in others. That uneven coverage is the actual problem, not total absence.
What This Looks Like in Practice
Here is a prompt set built for ibis Styles Manila Araneta City, a mid-range hotel in Quezon City. These 13 prompts cover all four types and reflect the questions real travellers ask AI when planning a stay.
Broad
- What are the best budget-friendly hotels in Metro Manila for a staycation?
- Which affordable city hotels in the Philippines are good for a weekend trip?
- What are the best hotels in Manila near entertainment districts?
Specific
- What are the best hotels in Quezon City for a staycation?
- Which hotels near Smart Araneta Coliseum are best for concerts and events?
- Which family-friendly hotels in Quezon City have a pool and good location?
- What are the best affordable or mid-range hotels in Quezon City for weekend stays?
- Which hotels in Quezon City have rooftop pool bars or rooftop dining?
- Which hotels in Araneta City are good for small events, meetings, or private gatherings?
- What hotels near Cubao offer a good mix of workspace, food, and overnight stay?
Comparison
- How does ibis Styles Manila Araneta City compare with Novotel Manila Araneta City?
- ibis Styles Manila Araneta City vs Seda Vertis North: which is better for a 2-night stay?
Brand-direct
- Is ibis Styles Manila Araneta City a good hotel for a Quezon City staycation?
Running this set across ChatGPT, Gemini, Claude, and Perplexity shows exactly where ibis appears, where it drops, and whether each platform describes the brand accurately when asked directly.
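Structurally, an audit like this is a loop over prompt types and platforms that records where the brand appears. The sketch below shows the shape of that tally; `ask_model` is a hypothetical placeholder for whichever API client you actually use per platform, and the substring match is a deliberately naive stand-in for real citation extraction.

```python
# Sketch of running a prompt set across platforms and tallying where a
# brand appears. `ask_model` is a hypothetical placeholder: swap in the
# real API client for each platform. Prompts are from the ibis example.
def ask_model(platform, prompt):
    """Placeholder: return the model's answer text for this prompt."""
    raise NotImplementedError("wire up a real API client per platform")

PROMPTS = {
    "broad": ["What are the best budget-friendly hotels in Metro Manila for a staycation?"],
    "specific": ["Which hotels near Smart Araneta Coliseum are best for concerts and events?"],
    "comparison": ["How does ibis Styles Manila Araneta City compare with Novotel Manila Araneta City?"],
    "brand-direct": ["Is ibis Styles Manila Araneta City a good hotel for a Quezon City staycation?"],
}
PLATFORMS = ["ChatGPT", "Gemini", "Claude", "Perplexity"]

def visibility_matrix(brand, ask=ask_model):
    """Appearance rate for each (prompt type, platform) pair."""
    matrix = {}
    for ptype, prompts in PROMPTS.items():
        for platform in PLATFORMS:
            # Naive check: does the answer text mention the brand at all?
            hits = sum(1 for p in prompts
                       if brand.lower() in ask(platform, p).lower())
            matrix[(ptype, platform)] = hits / len(prompts)
    return matrix
```

Reading the matrix row by row shows prompt-type gaps (the positioning problem); reading it column by column shows platform gaps (often a crawler or source problem).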
How to Read What You Find
Brands are rarely completely invisible. The typical pattern is uneven coverage: present in some prompt types and query contexts, missing in others.
Run the same prompts on ChatGPT, Gemini, Claude, and Perplexity separately. Results often differ. A brand may appear consistently on Gemini but barely show on Perplexity. Each model draws from different data sources and weights content signals differently.
Three things to look for:
- Where you appear: These are your current strengths. Build on them to maintain and extend that position.
- Where you drop out: These are your clearest improvement opportunities. Content strategy should target these first.
- Where you underperform by platform: This often points to a technical crawler or source gap specific to that model.
A note on what this measures: visibility scores are not a judgment on brand quality. They measure how strongly the brand is represented in tested AI recommendation scenarios. A well-known brand with weak AI representation is simply missing the signals those models need.
To see how a structured audit works from start to finish, read What's Inside an OmniGro AI Visibility Assessment.
FAQs
How many prompts do I need to test my brand's AI visibility?
Start with 12 to 20 prompts spread across all four types. That is enough to reveal meaningful patterns. A full audit runs 100 or more prompts across multiple platforms over a defined period.
Should I test the same prompts on every AI platform?
Yes. Run identical prompts on ChatGPT, Gemini, Claude, and Perplexity. Differences in results show platform-level gaps and tell you where to focus first.
What if my brand appears in some prompts but not others?
That is the most common outcome. It means you have a base to build from. The next step is identifying which prompt types show gaps and which content or source signals need to change.
AI gave a different answer when I tested the same prompt twice. Which result is right?
Both, and neither in isolation. Generative AI is non-deterministic: the same prompt can return different answers on different attempts. A single test is a data point, not a verdict. Consistent appearance across repeated runs signals real visibility. That is why periodic testing matters more than a one-off check. Running the same prompts regularly removes false positives and negatives and shows the actual pattern. OmniGro's assessment runs prompts across multiple sessions for that reason.
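Repeated-run testing can be sketched as an appearance rate: run the same prompt several times and measure the fraction of runs that mention the brand. `run_prompt` is a hypothetical placeholder for a single API call, and the 0.7 threshold is an illustrative choice, not a standard.

```python
# Sketch of repeated-run testing for a non-deterministic model.
# `run_prompt` is a hypothetical placeholder for one API call; the
# threshold value is illustrative, not an industry standard.
def appearance_rate(prompt, brand, run_prompt, runs=5):
    """Fraction of repeated runs whose answer mentions the brand."""
    hits = sum(1 for _ in range(runs)
               if brand.lower() in run_prompt(prompt).lower())
    return hits / runs

def is_consistently_visible(prompt, brand, run_prompt, runs=5, threshold=0.7):
    """Treat the brand as visible only above a chosen appearance rate."""
    return appearance_rate(prompt, brand, run_prompt, runs) >= threshold
```

A brand at a 20 percent appearance rate got lucky once; a brand at 80 percent or higher is genuinely embedded in that model's answer pattern for the prompt.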
Does Google ranking affect AI visibility?
Partially. Google rankings can influence Gemini, which pulls from Google Search. But AI-specific signals, including structured answers, entity consistency, and third-party citations, operate independently of rankings. See why some brands appear in AI answers and others do not.
How often should I retest?
Monthly is a practical baseline. Weekly is better in competitive categories. OmniGro's AI Citation Tracking runs automated prompt monitoring every 6 to 12 hours so brands never miss a shift.
Conclusion
Four prompt types. Every platform. Regular cadence. That is the method for building a clear, honest picture of AI visibility. Most brands find they are not absent overall, just uneven. Uneven is fixable. Start by mapping where the gaps actually are.
If you want to see how this works in practice, our free AI Visibility Assessment covers all four prompt types plus citation source analysis, platform-by-platform breakdown, and a prioritised action plan. See what's inside an OmniGro assessment.
