
Claude for Competitor Research: A Practical Workflow for Marketers

Claude for competitor research works best when you bring the data. This practical workflow covers what to collect, how to structure it, and the prompt patterns that consistently extract positioning signals, ICP hooks, and creative whitespace from competitor ad copy, landing pages, and reviews.


You can tear down five competitor brands in an afternoon with Claude. The bottleneck is not the model — it's the data you give it. Feed it thin summaries and you get thin analysis. Feed it the actual ad copy, the landing page text, the pricing page HTML — and you get something genuinely useful: a structured read on what each competitor is betting on, where their positioning is tight, and where they've left whitespace.

This is a practical workflow for using Claude for competitor research. It covers what data to collect, how to structure it, prompt patterns that surface real signals, and a worked example with a DTC skincare brand analyzing three competitors.

TL;DR: Claude excels at competitor research when you give it raw, structured data — ad copy, landing pages, pricing, positioning — rather than asking it to research from scratch. The workflow is: collect data → format it → run structured prompts → extract signals. For ad creative patterns specifically, pairing Claude's analysis with a structured data source like adlibrary gives you coverage Claude alone can't provide.

Most competitive research fails at the synthesis step. You end up with a folder of screenshots, a sheet of pricing data, and a vague sense that Competitor A is more premium. Claude's long context window (up to 200k tokens) changes the synthesis problem entirely.

You can drop in 10 competitor landing pages at once and ask for a cross-brand positioning map. You can paste six months of ad copy from three brands and ask which emotional hooks they're clustering around. You can feed a 40-page market report alongside a competitor's full product catalog and ask where the gaps are.

What Claude cannot do — and this matters for calibrating expectations — is browse the internet live, pull current ad data, or tell you which creatives are actually running right now. For that, you need a dedicated ad intelligence source. Claude is the analyst; you are the data collector. This distinction shapes the entire workflow.

What data to collect before you open Claude

The quality of your Claude for competitor research output is a direct function of input quality. Here's the data worth pulling:

Ad creative and copy: Export or manually collect ad headlines, body copy, and CTA text from Meta Ad Library, TikTok Creative Center, or a tool like adlibrary. At the analysis stage, the raw text matters more than the visual — Claude can analyze images you upload, but text scales far better across dozens of ads. For a deeper playbook on ad collection, the competitor ad research strategy guide covers sourcing at scale.

Landing pages: Scrape or copy-paste the full text of your competitors' primary landing pages, product pages, and checkout flows. Strip HTML, remove navigation and footers, keep only the persuasive content. You want the actual copy, not a summary.
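Stripping HTML down to persuasive copy is mechanical enough to script. Below is a minimal sketch using Python's standard-library `html.parser` that drops tags unlikely to hold marketing copy (nav, footer, scripts) and keeps the visible text. The tag skip-list and function names are illustrative assumptions — tune them to the pages you're scraping.

```python
from html.parser import HTMLParser


class CopyExtractor(HTMLParser):
    """Collects visible text while skipping tags that rarely hold persuasive copy."""

    SKIP = {"script", "style", "nav", "footer", "header", "noscript"}

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # >0 while inside a skipped tag
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())


def strip_page(html_text):
    """Return only the visible, persuasive text from a page's HTML."""
    parser = CopyExtractor()
    parser.feed(html_text)
    return "\n".join(parser.chunks)
```

Run each saved page through `strip_page` before pasting — you keep the copy, lose the boilerplate, and save context tokens.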

Pricing pages: Capture the full pricing structure including plan names, feature lists, and any framing language ("Most Popular," "Best for teams"). Framing language is often more revealing than the numbers.

Organic content: Recent blog post titles and intros, YouTube video descriptions, email subject lines if you're subscribed. This surfaces positioning signals that paid ads don't always show.

Reviews: Pull 50–100 recent reviews from G2, Trustpilot, or Amazon. Customer language is often better for understanding ICP pain points than any marketing copy the brand produces.

Organize all of this into plaintext files or a single structured document. You'll be pasting it into Claude's context window. The more specific and verbatim the data, the more specific and useful the analysis.
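Assembling the collected files into one structured document is worth scripting too. This sketch assumes a hypothetical nested-dict layout (brand → source label → text) — adapt it to however you actually stored the files:

```python
def build_research_doc(competitors):
    """Concatenate collected plaintext into one labeled document for Claude.

    competitors: {brand_name: {source_label: text}} — an assumed layout;
    swap in your own file-loading logic as needed.
    """
    blocks = []
    for brand, sources in competitors.items():
        for source, text in sources.items():
            blocks.append(
                f"=== COMPETITOR: {brand} ===\n"
                f"SOURCE: {source}\n\n"
                f"{text.strip()}"
            )
    return "\n\n".join(blocks)
```

The labeled separators matter: they are what lets Claude keep brands distinct once everything is in one context window.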

How to set up Claude for competitor research prompts that actually work

Don't ask Claude to "analyze" — ask it to execute a specific analytical task against specific data. Vague prompts return vague analysis. Here are three prompt patterns that consistently produce useful output:

Pattern 1: Positioning map extraction

You are a brand strategist. I'm going to give you the homepage copy and primary landing page copy for three competitors in [category].

For each competitor, extract:
1. Primary value proposition (1 sentence, their actual words)
2. Core emotional hook (fear / aspiration / identity / social proof)
3. Target customer as implied by the copy (not demographics — describe the situation/problem they're addressing)
4. What they are explicitly NOT positioning around (what's absent that you'd expect to see)

Then produce a 2x2 positioning map contrasting the brands on [axis 1] vs [axis 2].

[PASTE COMPETITOR COPY HERE]

Pattern 2: Ad copy hook analysis

Below is ad copy from three DTC competitors in [category], labeled by brand. Each entry is one ad.

Tasks:
1. Categorize each ad by primary hook type: pain-point, aspiration, social proof, curiosity, offer-led, identity
2. Identify which hook types each brand is over-indexing on
3. Identify any hook type none of the brands are using (the whitespace)
4. For each brand, write one sentence describing their implied cold-traffic thesis — what they think makes a stranger stop scrolling

[AD COPY DATA HERE]

Pattern 3: Review mining for ICP signals

Below are [N] customer reviews for [Competitor].

Extract:
1. The top 5 recurring pain points customers mention before discovering the product
2. The top 5 outcomes customers report after using it
3. Any repeated phrases or exact language customers use to describe the problem (ICP signal phrases)
4. Any friction points — especially about onboarding, pricing, or missing features

Do not summarize broadly. Quote specific language where possible.

[REVIEWS PASTE HERE]

These prompts work because they give Claude a defined output format and a specific analytical lens. The modern competitor ad research guide has additional frameworks for structuring the research phase before you reach the analysis stage.

Feeding competitor data into Claude: formats and practical limits

Context window limits are the practical constraint. Claude 3.7 Sonnet handles 200k tokens — roughly 150,000 words. Enough for most competitive tasks, but you'll hit limits if you're careless.

What compresses well: Landing page text, ad copy, pricing pages. Strip HTML, remove navigation and boilerplate.

What doesn't compress: Full blog archives, image-heavy pages, complex PDFs. Summarize these or extract only the most relevant sections.

Structure your input clearly. Use separators and labels:

=== COMPETITOR: BrandName ===
SOURCE: homepage
URL: https://...

[full page text]

=== COMPETITOR: BrandName2 ===
SOURCE: pricing page
...

Claude uses these structural cues to keep brands distinct across long context. Without them, it starts blending analysis between brands — a common failure mode when working with large multi-brand documents.
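Before pasting a long multi-brand document, it's worth auditing that every block is labeled the way you intended — a mislabeled block is exactly how brand-blending starts. A small sketch, assuming the `=== COMPETITOR: ... ===` separator format shown above:

```python
import re


def audit_blocks(doc):
    """Count labeled source blocks per brand so missing or mislabeled
    sections surface before the document goes into Claude."""
    brands = re.findall(r"^=== COMPETITOR: (.+?) ===$", doc, flags=re.M)
    counts = {}
    for brand in brands:
        counts[brand] = counts.get(brand, 0) + 1
    return counts
```

If a brand you collected three pages for shows a count of two, go find the block with the missing or typo'd header before running any prompts.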

For creative intelligence specifically, copy text is necessary but not sufficient. Ad creatives are primarily visual — thumbnails, video hooks, design choices — and Claude can't analyze images it hasn't seen. If you can describe the visual (or supply structured image descriptions), you get better output. This is the natural integration point for a dedicated ad library.


Worked example: Solari Skin analyzes three competitors

Solari Skin is a fictional DTC brand selling a $120/month vitamin C serum subscription to women 28–42 who've been burned by overpromised skincare before. They want to understand how three competitors — Naturium, Good Molecules, and Topicals — are positioning and advertising before writing their next campaign.

Step 1: Data collection (30 minutes)

From Meta Ad Library, they export the last 90 days of ad copy for all three brands — roughly 40 ads total. They copy-paste the homepage, product page, and "Our Story" page for each brand into a single text document. They pull 75 Sephora reviews for each brand.

Step 2: Positioning prompt

They run Pattern 1 (positioning map) with all three homepage texts. Claude returns:

  • Naturium: "Science-backed skincare that doesn't cost a fortune." Hook: fairness/value. Target: frustrated by luxury pricing, not anti-science.
  • Good Molecules: "No frills, no fillers, no nonsense." Hook: identity (the discerning ingredient-reader). Target: someone who has learned to read labels.
  • Topicals: "Skincare for the 87% with skin conditions everyone else ignores." Hook: belonging/representation. Target: someone who's felt excluded by the category.

Absent from all three: clinical result specificity. No brand is making quantified efficacy claims. That's whitespace.

Step 3: Ad hook analysis

Running Pattern 2 against the 40 ads, Claude identifies that all three brands heavily cluster on identity hooks ("for people who are tired of X"). Pain-point hooks are almost entirely absent. Aspiration ("you could look like") is rare. No brand is running offer-led ads in their current mix.

Step 4: Review mining

Pattern 3 against the Naturium and Good Molecules reviews surfaces repeated phrases: "I finally found something that doesn't break me out," "wish the packaging was cleaner," "confused about layering order." These are ICP signal phrases Solari can now test as hooks.
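Recurring exact phrases can also be counted mechanically as a sanity check on Claude's qualitative read. A minimal sketch that tallies n-grams across reviews with the standard library (the n-gram size and minimum count are arbitrary defaults):

```python
import re
from collections import Counter


def repeated_phrases(reviews, n=3, min_count=2):
    """Surface exact n-grams that recur across reviews — a cheap first
    pass before asking Claude for the qualitative interpretation."""
    counts = Counter()
    for review in reviews:
        words = re.findall(r"[a-z']+", review.lower())
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return [(" ".join(gram), c) for gram, c in counts.most_common() if c >= min_count]
```

Phrases that both the counter and Claude flag independently are the strongest ICP signal candidates to test as hooks.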

Output: Solari's next campaign brief — anchored in clinical specificity, with a pain-point hook ("found the serum that doesn't lie about results"), targeting the ingredient-aware customer that Good Molecules has primed but never fully converted on efficacy.

The brand is fictional, but the workflow isn't: that's a two-hour session producing a brief most agencies would take two weeks to assemble.

Where Claude's long context shines vs where you need structured data

Claude is exceptional at synthesis, pattern extraction, and generating positioning hypotheses from messy text. It holds up well against 20,000 words of mixed competitor content without losing coherence. The competitor ad research creative intelligence guide gives more context on why synthesis is the highest-value step in competitive workflows.

Where Claude has hard limits:

| Task | Claude alone | With structured ad data |
| --- | --- | --- |
| Positioning analysis from landing pages | Excellent | Not needed |
| Hook categorization from copy | Excellent | Not needed |
| Identifying which creatives are currently running | Cannot do | Full coverage |
| Volume/spend signals | Cannot do | Partial (some tools) |
| Creative lifespan / fatigue patterns | Cannot do | Full coverage |
| Visual format analysis (video vs image vs carousel) | Cannot do | Full coverage |

For ad intelligence tasks in the bottom half of that table, you need a data layer with actual creative indexes. Claude then becomes the analyst on top of that data — extracting patterns, writing positioning hypotheses, drafting briefs from what the data shows. This is how adlibrary fits into a Claude-based research workflow: structured creative data in, Claude-powered synthesis out. More on that integration in the how to use Claude for marketing playbook.

If you're benchmarking your ad budget against competitor spend estimates, the ROAS calculator can help set a baseline before you build campaign hypotheses from competitive data. The full spy-on-competitors guide at adlibrary covers the data sourcing side end-to-end.

What Claude doesn't replace in competitive research

Claude won't tell you when a competitor's swipe-file ad launched or how long it's been running. It won't give you a competitor's estimated spend. It can't tell you which creative is driving their performance — only which hook types appear in their ad copy.

It also won't catch what's absent from a competitor's ad library if you haven't looked there yourself. Claude reasons over the data you give it. If your sample is biased toward recent ads, your analysis will be too.

For competitive research on paid channels specifically, the how to see what Shopify apps a store is using post is a useful companion — tech stack signals often correlate with ad sophistication and budget allocation.

Claude is a research accelerator, not a research source. The distinction matters. Keep it downstream of your data collection.

Frequently Asked Questions

Can Claude browse the internet to research competitors?

No. Claude does not have live internet access in standard API and Claude.ai usage. It can only analyze data you provide directly in the conversation. For live competitor ad data, use a dedicated tool like adlibrary or Meta's own Ad Library, then paste the relevant text into Claude for analysis.

How much competitor data can I fit into one Claude session?

Claude 3.7 Sonnet supports a 200k token context window — roughly 150,000 words of plain text. That's enough for 10–15 competitor landing pages, 200+ ad copy samples, and 300+ customer reviews in a single session. Anthropic's model documentation has the exact context limits per model tier.

What are the best Claude prompts for competitor ad analysis?

Structure your prompts around specific tasks: hook categorization, positioning extraction, ICP signal mining, or whitespace identification. Avoid open-ended "analyze this" requests. Always specify the output format you want (list, table, 2x2 matrix) and the analytical lens (emotional hook type, positioning axis, ICP profile). The prompts in the "How to set up Claude" section above are ready to copy-paste.

Can Claude analyze competitor images and video ads?

Claude can analyze images if you upload them directly (images count toward token usage). For video, Claude cannot process video files — you'd need to describe the visual or provide a transcript. For systematic visual analysis of large ad libraries, a structured creative intelligence platform is more practical than Claude alone.

Is Claude for competitor research better than ChatGPT?

Claude's longer context window and stronger instruction-following make it better suited for the synthesis step — especially when working with large multi-brand documents. ChatGPT with Browsing has the advantage of live web access. For a detailed breakdown, the Claude vs ChatGPT for marketers post covers the trade-offs by workflow type.


The best competitive research session you'll run with Claude is the one where you show up with too much data. Compress it, structure it, and let the model find the pattern. The model is fast. The research is on you.
