AI Competitor Analysis: The Workflow We Run for Clients (2026)

[Illustration: three competitor company shapes compared side by side by a single operator, with a chart connecting their positioning to an empty space labeled "unclaimed territory"]

Competitor analysis usually dies in browser tabs.

A marketer opens fifteen competitor sites, screenshots a few homepages, copies some pricing into a Google Doc, makes notes about positioning that sound smart in the moment, and never opens the doc again. Two weeks later somebody on the team asks “what are competitors doing on email?” and the whole thing starts over from scratch.

AI competitor analysis solves the tab-thrash problem. Done right, it turns competitor research into a repeatable workflow that produces a brief instead of a graveyard of tabs. Done wrong, it produces a slop summary that sounds intelligent but doesn’t drive any decisions.

This post is the AI competitor analysis workflow we run for clients at ravitz.co. The tools, the prompts, the specific outputs that actually get used, and the brand teardowns we built using exactly this pattern.

For related operator workflows in this cluster, our 30 ChatGPT prompts for marketers post includes the competitor-teardown prompt as #3, and our SEO automation post covers the SERP-based competitor angle.

The problem with most competitor analysis

Three patterns make most competitor analysis useless:

It’s purely observational. Lists what competitors do, makes no recommendation about what you should do differently. A list of facts is not analysis.

It’s narrow. Looks at one competitor at a time, never compares across the set to find positioning gaps. The insight is in the deltas, not the individual data points.

It’s stale. Done once during annual planning, never refreshed. Competitors move quarterly; once-a-year analysis is barely informative.

A working AI competitor analysis workflow addresses all three: it produces recommendations, it compares across the set, and it’s cheap enough to refresh on a regular cadence.

What AI is actually good for here

AI shifts the cost structure of competitor research dramatically. The mechanical work (visiting sites, extracting positioning language, comparing pricing, tagging content patterns) used to be the bulk of the time spent. AI handles all of it in minutes.

What’s left is the work AI is bad at: judgment about which gaps are commercially meaningful, which moves to counter, which to ignore, which to use as proof points for your own strategy.

The right model is: AI gathers and synthesizes, the human makes calls about what to do with the synthesis. We covered the same pattern from a different angle in our AI agents for marketing post under the “research and customer-voice agents” category.

The competitor analysis workflow we run

Six steps. End-to-end takes 60-90 minutes for 3-5 competitors once the workflow is in place. The first run is longer because you’re building the prompt library.

Step 1: pick the competitor set

The instinct is to analyze every plausible competitor. The discipline is to pick 3-5 that matter.

Two types of competitors usually deserve a slot:

  • Direct competitors: companies whose ICP overlaps 70%+ with yours and whose product is in the same category
  • Adjacent competitors: companies your customers also evaluate but who aren’t in the same category (e.g. for a marketing consultancy, the adjacents might be a freelance writer collective and a marketing agency)

Skip aspirational competitors (companies 10x bigger than you whose strategy doesn’t translate), tangential mentions (companies that just keep coming up in conversation), and dead competitors (companies the market has moved past).

Step 2: collect the raw inputs

Per competitor, the high-leverage inputs:

  • Homepage
  • Pricing page (if public)
  • Top 3 feature or service pages
  • About page
  • Top 5 blog posts ranked by traffic (use Ahrefs site explorer or Similarweb for this)
  • Two of their recent ads (if running paid, find via Meta Ad Library or LinkedIn Ad Library)
  • Last 3 LinkedIn posts from the company page
  • One sales conversation that mentions them (from your CRM)

Save URLs to a single file. The AI can fetch most of them; the rest get pasted in as content.
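One way to keep that single file consistent per competitor is to label every source before it goes into the prompt, so the AI can cite a page for each claim. A minimal sketch, assuming you've pasted each page's text into a dict keyed by source label (the labels and placeholder content below are illustrative, not part of the workflow's required format):

```python
# Sketch: assemble labeled raw inputs for one competitor into a single
# block ready to paste under "Inputs:" in the extraction prompt.
# Source labels and page contents are placeholders you'd fill in yourself.

def build_input_block(competitor: str, sources: dict[str, str]) -> str:
    """Label every source so the AI can cite a page for each claim."""
    parts = [f"=== {competitor} ==="]
    for label, content in sources.items():
        parts.append(f"--- SOURCE: {label} ---")
        parts.append(content.strip())
    return "\n".join(parts)

sources = {
    "homepage": "Hero headline and positioning copy pasted here...",
    "pricing": "Tiers and price framing pasted here...",
    "about": "Founding story and audience signals pasted here...",
}
block = build_input_block("competitor.com", sources)
```

The labels are what make "cite the source page for each claim" enforceable later.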

Step 3: run the positioning extraction prompt

Per competitor, run this prompt in Claude or ChatGPT:

You are auditing [competitor.com] for a marketing team building positioning
at [our company]. Below are inputs from their homepage, pricing, feature
pages, blog, and ads.
Extract:
- The single sentence of positioning they're claiming (your synthesis, not
a quote)
- The 3 loudest proof points they emphasize
- The specific audience they're targeting (job titles, company size,
industry)
- The audience they're clearly NOT targeting (read between the lines)
- The 5 words/phrases they repeat across surfaces (their vocabulary)
- The 3 objections they seem to be preempting in their copy
- The price point or price range, and how they frame it
- One thing they're saying that nobody else in the category is saying
- One thing they should be saying but aren't
Inputs:
[paste each page or URL with the source labeled]
Keep the output under 400 words. Cite the source page for each claim.

That’s the per-competitor brief. You’ll have 3-5 of these at the end of step 3.
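Because step 4 depends on the briefs being structured identically, it helps to fill one template per competitor rather than rewriting the prompt each time. A sketch of that, with a heavily abbreviated stand-in for the prompt above (the template text and names here are placeholders):

```python
# Sketch: fill the step 3 prompt from one template so every competitor's
# brief comes back structurally identical (which step 4 depends on).
# This abbreviated template is a placeholder, not the full prompt.
TEMPLATE = (
    "You are auditing {competitor} for a marketing team building positioning "
    "at {company}.\n"
    "Extract: positioning sentence, proof points, target audience, "
    "vocabulary, preempted objections, price framing.\n"
    "Inputs:\n{inputs}\n"
    "Keep the output under 400 words. Cite the source page for each claim."
)

def build_prompt(competitor: str, inputs: str, company: str = "ravitz.co") -> str:
    return TEMPLATE.format(competitor=competitor, company=company, inputs=inputs)

prompt = build_prompt("competitor.com", "--- SOURCE: homepage ---\n...")
```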

Step 4: run the cross-competitor comparison prompt

Now the synthesis. Take the 3-5 per-competitor briefs from step 3 and run:

Below are positioning briefs for [N] competitors in the [category] space.
Compare them and identify:
- The shared claims (things 3+ competitors say the same way). These are
category baseline and we can't differentiate on them.
- The contested claims (things competitors say differently). These are
where positioning fights happen.
- The unclaimed territory (things nobody is saying that the audience
cares about). These are our differentiation opportunities.
- The most crowded positioning slot. Avoid this if possible.
- The most differentiated positioning currently held, and by whom
Briefs:
[paste outputs from step 3]
For the unclaimed territory section, be specific. "Nobody says they're
fast" is not specific. "Nobody anchors their pricing to outcomes instead
of features" is specific.

This step produces the actual insight. The shared claims tell you what’s table stakes. The contested claims tell you where the fights are. The unclaimed territory tells you where the opportunity is.
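The shared/contested/unclaimed split is, mechanically, just an overlap count across the set. A sketch of the logic, assuming you've tagged each brief with the claims it makes (the competitor names and claims below are hypothetical):

```python
# Sketch of the step 4 split: count how many competitors make each claim.
# 3+ = category baseline, 2 = contested, 1 = candidate differentiation.
from collections import Counter

briefs = {
    "acme.com":    {"fast onboarding", "enterprise-grade", "outcome pricing"},
    "globex.com":  {"fast onboarding", "enterprise-grade", "ai-native"},
    "initech.com": {"fast onboarding", "white-glove service"},
}

counts = Counter(claim for claims in briefs.values() for claim in claims)
shared = {c for c, n in counts.items() if n >= 3}        # category baseline
contested = {c for c, n in counts.items() if 1 < n < 3}  # positioning fights
unique = {c for c, n in counts.items() if n == 1}        # differentiation leads
```

The AI does this counting implicitly; making the buckets explicit keeps the output auditable.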

Step 5: run the content gap analysis

For SEO-driven competitive analysis, add this step:

Below are the top-performing pages from [competitor.com] (by traffic).
For each, identify:
- The keyword theme it ranks for
- The format (listicle, how-to, comparison, case study, etc.)
- The depth signal (word count, internal links, recency)
Then group them and tell me:
- Which topics they own (3+ ranking pages on the same theme)
- Which topics they barely touch (1-2 thin pages, easy to outflank)
- Which topics nobody in this set ranks for that the audience clearly
searches for
Inputs:
[paste competitor top pages from Ahrefs/Similarweb export]

This output feeds directly into the content plan. The “barely touch” and “nobody ranks” lists are your fastest SEO wins.
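The "own" versus "barely touch" grouping can be sketched the same way, assuming a hypothetical export where each row is (competitor, page URL, keyword theme) pulled from the Ahrefs/Similarweb data:

```python
# Sketch of the step 5 grouping: topics with 3+ ranking pages are "owned",
# topics with 1-2 thin pages are easy to outflank. Rows are hypothetical.
from collections import defaultdict

pages = [
    ("acme.com",   "/blog/email-basics", "email marketing"),
    ("acme.com",   "/blog/email-drip",   "email marketing"),
    ("acme.com",   "/blog/email-tools",  "email marketing"),
    ("acme.com",   "/blog/seo-audit",    "seo"),
    ("globex.com", "/guides/seo",        "seo"),
]

theme_pages = defaultdict(list)
for competitor, url, theme in pages:
    theme_pages[theme].append((competitor, url))

owned = {t for t, p in theme_pages.items() if len(p) >= 3}  # topics they own
thin = {t for t, p in theme_pages.items() if len(p) <= 2}   # easy to outflank
```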

Step 6: write the recommendation memo

The AI outputs are inputs. The human writes the memo.

A working competitor analysis memo answers four questions:

  1. What’s table stakes in this category (we have to ship these to be considered)?
  2. Where are the active fights (and which side do we want to be on)?
  3. Where’s the unclaimed territory (and what’s our move into it)?
  4. Where are competitors weak that we should attack next quarter?

Three pages, max. The memo is what the team uses, not the raw competitor briefs. The briefs are the receipts.

What this workflow produced for ravitz.co

Concrete examples, since most posts on this topic stay theoretical.

Two of the teardowns on this blog used exactly the workflow above:

Our Liquid Death AI marketing playbook teardown ran the steps above on Liquid Death plus four water-category competitors. The “unclaimed territory” output became the post’s core argument: most water brands compete on purity, Liquid Death competes on humor, the rest of the category should pick a different non-purity axis to compete on.

Our Duolingo social media AI teardown ran the same workflow on Duolingo plus three language-app competitors. The unclaimed territory was character-driven brand voice. Nobody else in the set was doing it, and Duolingo’s social engagement is 10-20x category baseline as a result.

The teardowns work because the underlying workflow produces specific findings, not generic observations. Every claim is traceable to source material.

Tools we use

For an AI competitor analysis workflow, the tools split into three categories:

Data sources. Ahrefs for competitor SEO data (top pages, keywords, backlinks). Similarweb for traffic and channel mix. Meta Ad Library and LinkedIn Ad Library for current paid creative. The competitor’s own site for positioning copy.

AI synthesis. Claude or ChatGPT for the prompts in steps 3-5. We use Claude for the comparison step because it tends to push back on weak claims; ChatGPT is fine for the per-competitor extraction.

Memo + storage. Notion or Google Docs for the final memo. Keep the raw briefs in a separate folder linked from the memo. The receipts matter for re-running the workflow next quarter.

For the broader stack context, our complete AI marketing stack post covers where competitive intelligence fits among the other categories.

The mistakes that ruin competitor analysis

Four patterns we keep seeing:

Analyzing too many competitors. Five is the cap. Ten produces noise, not signal. The marginal competitor adds analytical complexity without adding insight.

Skipping the cross-competitor comparison step. Per-competitor briefs are interesting; cross-competitor synthesis is where the strategy comes from. Teams that stop after step 3 end up with descriptive analysis, not strategic analysis.

Treating the AI output as the memo. The recommendation memo is a human document. The AI produces evidence. The human picks what to do about it. We covered the broader pattern in our AI marketing ROI post: AI does the synthesis layer, humans own the judgment layer.

Not refreshing. Quarterly is the right cadence for most B2B categories. Monthly for fast-moving consumer brands. Never is the most common cadence, and the most damaging.

When to skip AI competitor analysis

Not every team needs this. Skip the workflow if:

  • You don’t have 3+ real competitors yet (early-stage, niche markets)
  • You already know the strategic gaps and just need execution, not more research
  • You did this 60 days ago and the category hasn’t moved (analysis for its own sake)
  • You’re using competitor analysis to procrastinate on shipping (most common reason)

The output of competitor analysis should be a decision, not another document. If you can name the decision in advance, the analysis is worth doing. If you can’t, you’re stalling.

If your team wants help running this kind of operator-grade competitive analysis on your stack, our services page explains how we work, and you can get in touch here.

FAQ

Can I do this without Ahrefs or Similarweb? Yes for steps 1-4. Those just need the competitor websites, which are public. The content gap analysis in step 5 is much weaker without ranking data. You can still eyeball top blog content from the competitor’s own site, but you lose the traffic-weighted “what’s actually working for them” signal.

How long should the per-competitor brief be? Under 400 words. The instinct is to make them comprehensive; the discipline is to keep them scannable. The cross-competitor comparison step in step 4 is where depth pays off.

Should I use the same prompt for every competitor? Yes for consistency. The whole point of the cross-comparison in step 4 is that the briefs are structured identically. Changing prompts per competitor breaks the comparison.

How do I find competitor ads if they’re not public? Meta Ad Library and LinkedIn Ad Library are public, and Google’s Ads Transparency Center covers Google Ads; tools like SpyFu or Ahrefs’ Paid Search section add historical ad copy on top of that. For B2B competitors with low ad volume, the LinkedIn library is usually enough.

What if the AI invents claims about competitors that aren’t accurate? Same problem as any AI synthesis workflow. The audit pass: ctrl-F each AI claim in the source corpus. If you can’t find the supporting text in the raw inputs, the AI made it up. We covered the audit pattern more thoroughly in our AI persona generator post.
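The ctrl-F pass is easy to automate once the claims and corpus are in text form. A minimal sketch, with hypothetical claims and corpus, that flags any cited text the AI could not have taken from the inputs:

```python
# Sketch of the audit pass: flag any AI-cited quote that can't be found
# verbatim in the raw source corpus. Claims and corpus are hypothetical.

def unverified_claims(claims: dict[str, str], corpus: str) -> list[str]:
    """Return claim labels whose cited text is absent from the corpus."""
    normalized = " ".join(corpus.lower().split())
    return [
        label for label, quote in claims.items()
        if " ".join(quote.lower().split()) not in normalized
    ]

corpus = "Acme helps growth teams ship outcome-priced campaigns."
claims = {
    "positioning": "outcome-priced campaigns",
    "proof point": "trusted by 500 brands",  # not in the corpus: flag it
}
flagged = unverified_claims(claims, corpus)  # ['proof point']
```

Anything flagged goes back to the source material for a manual check before it reaches the memo.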
