Bloxx
LLM visibility audit · admiralbusiness.com · 13 May 2026
Prepared for
Admiral Pioneer · CTO + CEO
Prepared by
Bloxx · Charlie Bailey
Date
May 2026
LLM VISIBILITY AUDIT

Admiral Business is barely visible in the AI surfaces where SMEs are now researching insurance.

Across high-intent UK business-insurance queries run through Claude, GPT, Gemini and Perplexity, Admiral Business appears in 0.58% of competitor rankings. Simply Business appears in 4.85%, Hiscox in 3.90%, AXA in 2.88%. Admiral sits 37th in the competitor table, behind regional brokers we would expect Admiral to outrank. When Admiral does surface, sentiment is 14 positive against 1 negative. The data describes a presence problem rather than a perception problem, and presence problems in AI surfaces are addressable with content and structure.

01 · HEADLINES

The numbers

Queries spanning tradespeople, builders, contractors and designers across Leeds, London, Manchester, Birmingham and Cardiff, run repeatedly through Claude, GPT, Gemini and Perplexity, with thousands of individual rankings extracted from LLM responses.

Admiral's share of rankings
0.58%
17 appearances in 2,950 rankings
Position in competitor table
#37
Behind regional brokers like Coversure Leeds
Sentiment when present
14:1
Positive against negative ratio
Queries Admiral missed entirely
13 / 20
Zero rankings across all four LLMs

What stands out beyond the headline number

Where Admiral wins
Cardiff + designer

When Admiral does surface, it ranks well: best rank 1, average rank 4.8, average visibility score 52.7. The Cardiff-designer overlap produces rank #1 across Claude, Gemini and Perplexity. Geography plus niche is the unlock.

Where Admiral disappears
Everywhere else

0 appearances across the highest-volume queries: tradespeople Leeds (447 rankings), contractors London (310), builders Leeds (221), builders London (184), contractors Manchester (174), and builders Manchester (127).

Who's filling the gap
AXA, Simply Business, regional brokers

Simply Business leads on volume (143 mentions). AXA dominates the position Admiral should occupy: it is the top competitor on 1,919 weighted prompts where Admiral is missing. Smaller local sites like Coversure Leeds and Edison Ives outrank Admiral on local queries.

02 · WHERE ADMIRAL SITS

The competitor table

Top 20 brands by share of LLM rankings across the test query set, plus Admiral's actual position. Provider breadth = how many of the four LLMs surface the brand at all.

# Brand Mentions Share Avg rank Best Avg visibility Provider breadth Sentiment (+ / · / −)

Admiral Business highlighted in amber. "Provider breadth" of 4 means the brand was surfaced by all of Claude, GPT, Gemini and Perplexity. Admiral has breadth 4 despite only 17 mentions, which means no LLM is structurally hostile. Admiral can land on each surface when the prompt is right.

Provider-level breakdown

Two features stand out in the per-provider data. Anthropic is the surface most likely to mention Admiral today (9 of 17 mentions). Across the others, Admiral lands two or three times each. The pattern suggests Admiral's content is being read by some retrieval pipelines and missed by others, rather than being absent everywhere.

Provider Total mentions sampled Avg competitor rank Admiral appearances Admiral share

03 · QUERY-BY-QUERY

Where Admiral appears, and where it doesn't

Each row is one query, run five times across four LLMs. The "Admiral" column shows whether Admiral was returned in any ranking position. The detail view for each row lists the top three competitors and the providers that ran.

Query Rankings sampled Admiral appearances Admiral avg rank Status

The Cardiff signal, in detail

Cardiff-based designer queries are the standout. On "I am a designer in Cardiff, what is the best insurance?" Admiral lands rank 1 on Perplexity (twice), rank 1 on Anthropic, rank 1 on Gemini, and rank 5 on Anthropic. Six appearances on a single query, averaging rank 1.67. The reasons LLMs give are a model for what AI-citation content looks like when it works.

The reasons LLMs cited Admiral on Cardiff designer queries
  • Perplexity, rank 1: "Specialist professional indemnity insurance for graphic designers with transparent pricing starting at £7.08/month."
  • Gemini, rank 1: "Cardiff-based with specialized insurance for designers and a local South Wales support team."
  • Anthropic, rank 1: "South Wales-based specialist insurer for designers with competitive pricing from £7.08/month."
  • Perplexity, rank 1 (separate run): "Specializes in professional indemnity for graphic designers starting at £7.08/month, ideal for Cardiff-based designers."

Three signals do the work: visible pricing (£7.08/month), geographic specificity (Cardiff, South Wales), and vertical specificity (graphic designers, professional indemnity). All three are content choices, not brand or budget.

04 · SOURCES LLMs CITE

What's in the citation diet

When LLMs answer these queries, they cite a long tail of websites. The pattern of which sources keep appearing tells us where the AI surfaces are doing their reading and what kind of content they treat as canonical.

Top citation sources
Three patterns matter

1. Direct insurer sites win. simplybusiness.co.uk, axa.co.uk, markeluk.com, rhinotradeinsurance.com. LLMs prefer canonical brand pages.

2. Niche local sites punch above their weight. coversure.co.uk (Leeds), insurancebrokercompany.co.uk (Leeds), edinsure.co.uk (Leeds), kingsbridge.co.uk. Local plus niche beats generic plus national for many queries.

3. Comparison and review sites carry weight. insurancebusinessmag.com, glassdoor.co.uk, gocompare.com, trustatrader.com. These represent PR and reviews work, not on-site SEO.

Admiral's citation footprint
#52 in the source-domain table

admiralbusiness.com gets cited across the sample, but its weighted influence is roughly 10× lower than simplybusiness.co.uk.

Two Admiral pages do almost all the work: /who-we-cover/builders-insurance and /who-we-cover/tradespeople-insurance. They get pulled in as supporting citations but rarely as the canonical "best of" answer source.

Translation: Admiral's content is good enough to be cited as evidence, not good enough to be cited as the answer.

Top source domains by weighted influence

Excludes opaque wrappers (e.g. internal AI-search proxies that don't represent a real public source).

05 · CONTENT GAPS

What's actually missing

Specific content gaps were identified by analysing every prompt where Admiral was outranked. Each gap is tagged by category and weighted by how many prompts it would have helped Admiral surface on.

Gap categories, weighted by prompts affected

Trust signals and authority content dominate. Both categories are addressable through structured content: schema-marked reviews, named-author expertise blocks, dated review banners, and answer-block summaries for long-form guides.
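
To make "schema-marked reviews" and "named-author expertise blocks" concrete, here is a minimal JSON-LD sketch of the kind of markup involved. All values below are illustrative placeholders, not Admiral's actual ratings, authors or dates:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Builders insurance",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "1200"
  }
}
</script>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Builders insurance guide",
  "author": { "@type": "Person", "name": "Jane Example", "jobTitle": "Commercial insurance specialist" },
  "dateModified": "2026-05-01"
}
</script>
```

The pattern, not the values, is the point: ratings, named authors and freshness dates become machine-readable instead of living only in page copy.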

A sample of the highest-impact missing pages

All flagged as high priority, ordered by the number of test prompts each page would have helped Admiral rank on.

Category Suggested page Top competitor filling the gap Prompts affected

06 · WHY COMPETITORS WIN

The reasons LLMs actually give

Whenever an LLM ranks a competitor, it gives a reason. Every reason in the sample was extracted and tagged. The pattern is consistent across all four providers.

Most cited reason
Expertise

"Specialist insurer for...", "30k+ customers", "decades of experience with...". LLMs cite expertise when sites have specialist landing pages, named authors, and topic-cluster depth.

What wins it: deep per-niche pages with named expert authors, "trusted by X customers" microcopy, and case studies.

Second most cited
Reviews

"4.5/5 on Trustpilot", "9/10 Feefo", "Feefo Platinum Trusted Service Award 2024". Trustpilot, Feefo and Glassdoor appear repeatedly in citation sources.

What wins it: structured-data review badges on landing pages, embedded Trustpilot widgets, and a curated review wall.

Third most cited
Pricing

"From £7.32/month", "public liability from £85/year", "covers 33,200 builders". Concrete £ numbers in body copy are quoted back verbatim.

What wins it: a visible "from £X / month" anchor on every niche landing page. The £7.08 number is the single biggest reason Admiral wins Cardiff designer queries.
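
As a sketch of what that anchor could look like on-page (the £7.08 figure is from the Cardiff data; the markup and wording around it are illustrative):

```html
<p>Professional indemnity for designers from <strong>£7.08/month</strong>.</p>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "Professional indemnity insurance for designers",
  "areaServed": "Cardiff",
  "offers": { "@type": "Offer", "price": "7.08", "priceCurrency": "GBP" }
}
</script>
```

The visible copy and the structured data carry the same number, so both human readers and retrieval pipelines can quote it back.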

Reason categories, with sentiment

Reason category Positive Neutral Negative Total

Two warnings worth noting. (a) Credentials is the only category where negative outweighs positive. When accreditations are not visible on-page, LLMs occasionally flag this as a deduction. (b) Pricing can swing negative if numbers are stated badly, for example quoted as "expensive" without context. The answer is anchor numbers tied to specific cover types, not a generic price-on-application stance.

07 · HOW LLM AUTHORITY BUILDS

How does authority build over time with LLMs, and how dynamic are they?

LLMs decide what to cite via two distinct loops running on very different timescales. Strategy needs to address both, but the fast loop is where Admiral can move in the next four months.

Slow loop · 12+ months
Training-data influence

LLMs are trained on a snapshot of the web up to a cutoff date. Once trained, the model's baseline associations are frozen until the next major version. Current training cutoffs for GPT, Claude and Gemini sit roughly 6 to 18 months behind today.

Content published this week cannot influence what a base model "knows" about Admiral until the next training corpus is crawled, which is currently a 12+ month feedback loop for major models.

What this means for Admiral: the content shipped in 2026 is what populates the 2027 training run. That work is not optional, but it pays back over the long tail.

Fast loop · 4 to 12 weeks
Real-time retrieval

Increasingly the dominant mechanism. ChatGPT Search, Perplexity, Claude with web access, Gemini, Bing Copilot, and Google AI Overviews all pull live web results into their answers at inference time.

Visibility here behaves much more like classic SEO. Get indexed, accumulate authoritative signals, and the retrieval layer surfaces your URLs. Most of these systems use Google's index as the backbone, so on top of standard ranking factors (links, schema, depth, mentions on reputable sites), the AI-specific levers are explicit answer-block patterns, llms.txt, and robots.txt allow rules for AI crawlers.
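
As an illustration, the crawler-access levers are small text files at the site root. The user agents below are the ones the major AI providers currently document; verify each provider's documentation before shipping, as bot names change:

```text
# robots.txt — explicitly allow the main AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```

```text
# llms.txt — hypothetical excerpt, following the draft llms.txt convention
# Admiral Business
> UK business insurance for tradespeople, builders, contractors and designers.
- [Builders insurance](https://admiralbusiness.com/who-we-cover/builders-insurance)
- [Tradespeople insurance](https://admiralbusiness.com/who-we-cover/tradespeople-insurance)
```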

What this means for Admiral: a piece of content live in May 2026 can be cited in answers in June 2026. The Cardiff-designer wins in this dataset are the proof.

A realistic timeline, from first crawl to stable citations

Weeks 4 to 12

First crawl + index

Googlebot indexation, first AI-bot crawls (Bloxx's Bot Insights captures this per page). First impressions in Google Search Console. Early long-tail rankings.

Weeks 12 to 20

First citations

First repeatable AI-citation hits. Citations are still volatile at this stage, with daily fluctuations of ±30% normal. Sample size needs to be large enough to give confidence.

Months 5 to 9

Stable citation patterns

Citation share stabilises into a repeatable band on each provider. Cross-provider breadth (showing up on 3+ LLMs for the same query) becomes the right success metric.
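
If cross-provider breadth becomes the success metric, it is cheap to compute from the ranking logs. A minimal Python sketch, assuming a simple list-of-dicts log format (the field names here are invented for illustration, not Bloxx's actual schema):

```python
from collections import defaultdict

def provider_breadth(rankings):
    """Count how many distinct LLM providers surfaced each brand.

    `rankings` is a list of dicts with hypothetical keys:
    {"query": ..., "provider": ..., "brand": ..., "rank": ...}
    """
    seen = defaultdict(set)
    for r in rankings:
        seen[r["brand"]].add(r["provider"])
    return {brand: len(providers) for brand, providers in seen.items()}

sample = [
    {"query": "designer cardiff", "provider": "perplexity", "brand": "Admiral", "rank": 1},
    {"query": "designer cardiff", "provider": "gemini", "brand": "Admiral", "rank": 1},
    {"query": "designer cardiff", "provider": "anthropic", "brand": "Admiral", "rank": 1},
    {"query": "builders leeds", "provider": "gpt", "brand": "Simply Business", "rank": 1},
]

print(provider_breadth(sample))  # Admiral surfaces on 3 distinct providers in this sample
```

A brand showing up on 3+ of the 4 providers for the same query is the breadth signal worth tracking.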

08 · OPTIONS

Three paths forward

Each path is a defensible answer to the data above. The trade-offs sit in time-to-signal, regulatory drag, and how much ambition is being committed to up front.

Microsites in practice — Admiral's own playbook

The microsite strategy is not theoretical for Admiral. It is one of Admiral's most successful historical patterns.

Spin-out, standalone

Confused.com

Launched by Admiral in 2001 as a price-comparison microsite. Grew into a standalone business, separately operated, eventually sold for £390m. The largest sub-brand exit in UK insurance.

Active sub-brand

Veygo

Admiral's microsite for short-term, learner-driver and under-25s cover. Tests a customer cohort the main Admiral brand cannot serve at the same price point. Still operating, still under Admiral ownership.

Demographic sub-brands

Diamond, Bell, Elephant

Long-running Admiral sub-brands targeting specific demographic and regional segments. Distinct positioning, shared infrastructure underneath. The pattern works.

The same pattern recurs across adjacent regulated UK financial services. first direct as HSBC's digital-only sub-brand, category-leading on customer satisfaction. Quote Me Happy as Aviva's digital self-service microsite. Churchill, Privilege and Green Flag as three distinct sub-brands sitting under Direct Line Group with one capital base. Sub-brands let an incumbent serve segments and behaviours the main brand cannot reach efficiently.

What this dataset adds is that for AI-first surfaces, sub-brands carry an asymmetric advantage on LLM citation. They sidestep the regulatory and brand drag of the parent without losing the parent's reinsurance, claims and capital position. The "Cardiff designer" signal in this report is exactly this pattern in miniature: Admiral wins at rank 1 on a narrow vertical because brand, geography and niche overlap cleanly.

Exit options at maturity, well understood

Option 1

Standalone

Microsite operates as its own brand long-term (the Confused.com pattern). Useful where the microsite's positioning sits outside the main brand's appetite or category.

Option 2

Subdomain

Integrated under the main brand once proven (e.g. trades.admiral.com). Carries the main brand's authority while keeping content lane separate from the brand site's regulatory cycle.

Option 3

Subfolder + 301

Full consolidation. Microsite content migrates into admiral.com/trades, the original microsite 301-redirects, all authority and citations transfer to the main domain. Decision is data-led at month 4.
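
The 301 step itself is mechanical. In nginx, for example, the retired microsite's server block reduces to a single rule (hostnames here are illustrative, not a committed domain plan):

```nginx
server {
    listen 443 ssl;
    server_name trades-microsite.example;
    # Permanent redirect: every microsite path maps 1:1 into the main-domain subfolder,
    # so accumulated authority and citations follow the content.
    return 301 https://admiral.com/trades$request_uri;
}
```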

The three paths, side by side

Path A · Freelance support, in-house | Path B · Bloxx 3-microsite pilot | Path C · Embedded venture studio
What it is | Charlie inside Admiral, optimising admiralbusiness.com alongside the in-house team and current agency. Focus on fixing the existing site, shipping new niche pages and instrumenting bot tracking. | Charlie embedded 4 days/week inside Admiral Pioneer as operator-in-residence. Continuous testing across journeys, use cases, demographics and sub-brands at platform speed. Multiple microsites in flight, main-site fixes folded in alongside. Bloxx as the substrate, operating model handed to Admiral by end of season.
Time to first live page | 4 to 8 weeks (procurement, agency coordination, sign-off) | 1 to 3 weeks per microsite, with 2 to 3 in flight concurrently. Cadence accelerates once the first launch is templated.
Time to first AI-citation signal | 3 to 6 months on main domain (assuming clean robots, working schema) | 2 to 4 months per microsite, with the first signals coming in while later launches are still in build.
Investment | £22.5k over 4 months (proposed). Existing agency retainer and platform spend continue. | Indicative £15k/month retainer (4 days/week embedded plus full Bloxx platform), separate from an Admiral-discretion studio budget for domains, paid testing and content scaling. 6-month studio season. Final shape sized with Pioneer.
Procurement | Internal freelance, fastest. Days rather than weeks. | TBD. Either an external-vendor route via Pioneer or an internal contractor / freelancer wrapper, whichever Admiral can get live faster.
What changes on admiralbusiness.com | Material change. New pages, structural fixes, schema, llms.txt, footer copyright update. | Material changes folded in. Main-site structural fixes (schema, llms.txt, footer, weak per-niche pages) can run in parallel with microsite launches because the operator is embedded. Best of both.
Regulatory / legal surface | Maximum. Every change goes through Admiral compliance, agency review, legal sign-off. | Lowest of the three. Pioneer wrapper, microsite separation and templated sign-off reduce the per-page legal lift sharply.
Authority capture if you stop | Stays with Admiral. All work sits on admiralbusiness.com. | Stays with Admiral. Domains in Admiral name. Bloxx platform handover available, no fee. Operating model documented and transferable.
Risk if it doesn't work | Spend sunk into a slow domain with structural issues. Hard to attribute wins or losses to specific changes against the agency's work. | Highest gross commitment. Mitigated by per-microsite independence (each is its own test bed) and the month-3 decision gate.
Wasted-effort risk if folded back in | None. Work is on the destination domain. | Same Phase 3 patterns apply per microsite. Volume increases surface area but not the per-microsite fold-in risk.
What you actually learn | How well admiralbusiness.com responds to a focused, instrumented rebuild. Useful, but a single data point. | Whether Admiral can sustain a continuous category-exploration engine at platform speed. Each launch refines the playbook for the next.

The case for each path

Path A

Make the main site work

It is the destination Admiral cares about, and admiralbusiness.com has structural and content problems that need fixing regardless. Freelance procurement is fast. The work compounds on the domain Admiral already owns and markets.

Strongest if: Admiral has high confidence in the existing positioning and wants speed of fix over speed of experiment.

Path B

Run three independent experiments

The Cardiff-designer signal is real and isolated, and the rest of the surface is uncontested for Admiral. Microsites allow three distinct strategies (vertical hub, niche depth, use-case) to be tested without the regulatory drag of the main brand. The platform already exists.

Strongest if: Admiral is willing to trade slightly slower procurement for genuinely separable test cells and a faster cadence than the main brand can support.

Path C

Run the venture studio

Embed inside Admiral Pioneer 4 days/week as operator-in-residence. A continuous test bench across journeys, use cases, demographic segments and sub-brands. Speed of launch from idea to live page measured in days, iteration cycles measured in hours. Main-site fixes fold in alongside. Bloxx is the substrate, Admiral keeps the playbook. The Confused.com pattern, productised.

Strongest if: Pioneer's strategic prize is the operating model itself, not any one microsite win, and there is appetite to commit to a continuous experimentation engine across a 6-month season.

09 · RECOMMENDATION

Path B as the entry, Path C as the escalation

The data is consistent with all three paths working. Path B is where it lands today. Path C is where it can land at 12-month scale if Phase 1 proves the model.

Path C can also be committed to up front, collapsing Path B into a longer studio season. That makes sense only if Pioneer's strategic prize is the operating model itself rather than any one microsite win. Otherwise the lower-risk move is to validate the Path B hypothesis first, then size Path C against actual evidence at month 4. Either is defensible.

The caveats that apply across paths

A simple decision frame

Question 1

Can Admiral procure an external vendor in under 6 weeks?

If yes, Path B or Path C is on the table. If no, Path A is the right answer by default. Pioneer wrapping the contract typically resolves this in Admiral's case.

Question 2

Is the strategic prize a microsite, or the engine that produces microsites?

If the prize is one or two winning microsites, Path B is sufficient. If the prize is the continuous category-exploration engine itself, Path C is the right level of commitment.

Question 3

How much regulatory drag is acceptable per page?

If the main-brand sign-off cycle is genuinely under 15 working days draft-to-publish, Path A is workable. If it is typically longer, Path B or Path C carry the velocity advantage.

10 · WHAT WAS TESTED

The scope behind the numbers

A snapshot of the test design, for context. The audit can be expanded on request: see the optional UK AI Landscape Report referenced in the pilot proposal.

What was tested

Queries, niches, locations

High-intent UK business-insurance queries spanning four niches (tradespeople, builders, contractors, designers) across five cities (Leeds, London, Manchester, Birmingham, Cardiff). Each query was run repeatedly across Claude, GPT, Gemini and Perplexity using four prompt shapes, ranging from open-ended ("Best insurance for builders in London") to first-person ("I am a designer in Cardiff, what is the best insurance?").

Cities chosen deliberately to expose how geography and niche interact: Cardiff is Admiral's HQ, the others are not.

What is out of scope

And what would extend it

This audit covers business-insurance queries only. Admiral's car insurance and consumer side are out of scope and would be covered by an extension.

For a fuller picture, the optional UK AI Landscape Report (referenced in the pilot proposal) covers significantly more prompts, more competitors, and broader UK financial-services categories.