Claude for Customer Research: ICPs, VoC Mining, and Persona Development
Use Claude to extract ICPs, voice-of-customer signals, and jobs-to-be-done from reviews and transcripts. Includes copy-pasteable prompt patterns for customer research workflows.

Fifty Amazon reviews will tell you more about your ICP than six months of demographic overlays. That's not a hypothetical — it's a pattern that shows up every time someone actually reads the reviews instead of summarizing them at the category level. The problem is scale: reading 300 reviews, 50 support tickets, and a dozen sales call transcripts by hand takes days. Claude does it in minutes.
This post covers the practical workflows: how to structure raw customer inputs for Claude, what prompts surface the most useful signals, and how to turn unstructured voice-of-customer data into ICPs, personas, and jobs-to-be-done frameworks that actually change what you say in ads.
TL;DR: Claude for customer research means feeding raw reviews, transcripts, and support tickets into structured prompts to extract ICP signals, pain points, and jobs-to-be-done. The output — personas, positioning gaps, competitor weaknesses — is more accurate than survey-derived data and takes a fraction of the time.
Why most ICP work produces the wrong answer
Most ICP development starts with a survey or a set of firmographic filters. Company size 50–500. Industry: SaaS. Title: VP Marketing. That's not an ICP — that's a list of potential customers. The ICP emerges from why they bought, what they were failing at before, and what language they used to describe both.
The problem with surveys: people answer how they think they should, not how they actually behaved. The problem with firmographics: they describe the container, not the person. The problem with interviews: n=12 is barely signal.
Reviews, transcripts, and support tickets are different. People wrote them under real conditions — frustrated, delighted, confused, comparing. That emotional residue is exactly what you need to build an ICP that drives ad angles. Claude excels at pulling structure out of that residue at scale.
Prompts for ICP development with Claude
The core pattern: give Claude a batch of raw inputs, define the output schema, and ask it to cite evidence. The citation requirement is load-bearing — without it, Claude will confabulate confident-sounding patterns that aren't actually in the data.
Here's a reusable prompt for ICP extraction from reviews:
You are a B2B customer research analyst. I am giving you [N] customer reviews for [PRODUCT/CATEGORY].
Your task:
1. Identify 2-4 distinct customer archetypes based on WHO is writing these reviews (role, company context, use case)
2. For each archetype: name it, describe the trigger that prompted purchase, the primary pain point before purchase, and the primary job they hired this product to do
3. Pull 2-3 verbatim quotes per archetype that best represent their language
4. Note any patterns in what these customers say competitors/alternatives failed to do
Format your response as structured JSON with keys: archetype_name, trigger, pain_point, jtbd, quotes[], competitor_gaps[].
Cite the review number for every claim. Do not infer patterns not present in the reviews.
REVIEWS:
[PASTE REVIEWS HERE — numbered 1 through N]
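If you run this through the API instead of pasting by hand, the template can be filled programmatically. A minimal sketch (the `ICP_PROMPT` constant and `build_icp_prompt` helper are illustrative names for this post, not a published API; the elided middle of the template is the task instructions and JSON schema above):

```python
# Illustrative sketch: fill the ICP extraction prompt template above.
ICP_PROMPT = (
    "You are a B2B customer research analyst. "
    "I am giving you {n} customer reviews for {product}.\n"
    "...\n"  # task instructions and JSON schema from the prompt above
    "REVIEWS:\n{reviews}"
)

def build_icp_prompt(reviews: list[str], product: str) -> str:
    # Number reviews 1..N so Claude can cite them by number, as the
    # prompt's citation requirement demands.
    numbered = "\n".join(f"{i}. {text}" for i, text in enumerate(reviews, start=1))
    return ICP_PROMPT.format(n=len(reviews), product=product, reviews=numbered)
```

The numbering step is what makes the citation requirement enforceable: a claim that cites "Review 7" can be checked against line 7 of the input.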
Run this across 50–100 reviews in batches of 20–25. Merge the archetypes across batches — Claude will often name similar types differently, so you'll need a consolidation pass.
For that pass:
Here are ICP archetypes extracted from 4 separate batches of customer reviews. Your task: merge overlapping archetypes into a final set of 3-4 distinct ICPs. Where archetypes overlap, combine their quotes and evidence. Where they diverge meaningfully, keep them separate.
Preserve all verbatim quotes and their source batch numbers. Output the same JSON schema as the input.
BATCH ARCHETYPES:
[PASTE BATCH OUTPUTS]
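The batch-then-merge loop is easy to script. A sketch of the batching step (the helper name is illustrative):

```python
def batch_reviews(reviews, size=25):
    """Split reviews into batches of at most `size` for separate prompt runs."""
    return [reviews[i:i + size] for i in range(0, len(reviews), size)]

# 60 reviews -> three prompt runs of 25, 25, and 10 reviews each;
# the per-batch outputs then feed the consolidation prompt above.
batches = batch_reviews([f"review {i}" for i in range(60)])
```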
Mining customer reviews with Claude
The review-mining workflow runs in three steps: collect, structure, extract.
Collect. Pull reviews from G2, Capterra, Amazon, Trustpilot, or the App Store depending on your category. For B2B SaaS: G2 is the best raw source because reviewers identify their role and company size. Export or scrape to a plain text file — one review per numbered line.
Structure. Before sending to Claude, strip reviewer names and PII. Keep: review text, star rating, reviewer role if available. Star rating matters — 3-star reviews are often the most diagnostic because they contain both what worked and what didn't.
Extract. Use the prompt above. Add this instruction for reviews with ratings: "Pay particular attention to 3-star reviews — they contain nuanced signal about where the product meets and fails expectations."
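The Structure step can be partly automated. A rough PII pass, assuming a simple regex redaction (this catches emails and phone-like strings; reviewer names still need a manual check):

```python
import re

def structure_review(text, rating=None, role=None):
    """Redact obvious PII and prepend rating/role metadata for one review line."""
    clean = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email]", text)    # email addresses
    clean = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[phone]", clean)      # phone-like runs
    prefix = " ".join(p for p in (
        f"({rating}★)" if rating is not None else None,
        f"[{role}]" if role else None,
    ) if p)
    return f"{prefix} {clean}".strip()
```

The output keeps exactly what the workflow needs per line: rating, role, text.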
For cold traffic ad copy, the most actionable output from review mining is competitor language: what customers say the alternatives got wrong. This becomes your hook. "Unlike [X] which requires a 3-week onboarding" is a real ad angle. You can't write that angle from demographic data.
Worked example: extracting ICPs from raw reviews
Here are five real-sounding reviews for a hypothetical B2B prospecting tool:
Review 1 (5★, Head of Sales, 80-person Series B): "We were drowning in manual list building before this. The intent signals are actually useful — not just 'visited your website' but real buying signals. Took about a week to get the team using it consistently."
Review 2 (3★, SDR, Enterprise): "The data quality is better than ZoomInfo for SMB contacts but the enterprise database is weak. Good for mid-market prospecting, not useful for our top 200 accounts."
Review 3 (4★, Founder, 12-person agency): "I use this instead of hiring a researcher. $200/month vs. $4000/month for a person. The LinkedIn enrichment is solid. Wish the CRM sync was cleaner."
Review 4 (2★, RevOps Manager): "Promised intent data that turned out to be just web traffic. If you already have 6sense or Bombora, there's no incremental value. Oversold in the demo."
Review 5 (5★, VP Marketing, B2B SaaS): "Finally replaced our patchwork of tools. We use it for ICP scoring and prioritizing accounts for paid campaigns. The audience export to LinkedIn is the killer feature for us."
Running these through the ICP extraction prompt, Claude would output something like:
- Archetype 1 — "The Lean Sales Lead" (Reviews 1, 3): Trigger: team scaling faster than research capacity. JTBD: replace manual list-building without hiring. Pain: time cost of prospecting. Quote: "I use this instead of hiring a researcher."
- Archetype 2 — "The Paid Media Operator" (Review 5): Trigger: fragmented tool stack. JTBD: ICP scoring into paid channel targeting. Pain: disconnected data between CRM and ad platforms. Quote: "The audience export to LinkedIn is the killer feature."
- Archetype 3 — "The Oversold Evaluator" (Reviews 2, 4): Trigger: intent data hype cycle. JTBD: differentiated signal vs. existing stack. Pain: demo-to-reality gap. Competitor gap: "If you already have 6sense or Bombora, there's no incremental value."
Five reviews, three distinct ICPs. Each one drives a different ad angle, landing page, and objection to address.
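When you request structured JSON, as the extraction prompt does, the response sometimes arrives wrapped in a markdown code fence. A defensive parse sketch for pulling the archetype list back into Python:

```python
import json

def parse_archetypes(response_text):
    """Parse JSON output, tolerating an optional markdown code fence around it."""
    body = response_text.strip()
    if body.startswith("```"):
        body = body.split("\n", 1)[1]    # drop the opening fence line
        body = body.rsplit("```", 1)[0]  # drop the closing fence
    return json.loads(body)
```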
Using Claude for jobs-to-be-done analysis
Jobs-to-be-done (JTBD) is a framework for understanding why customers hire a product — not what it does, but what progress it enables in their lives. The classic framing: people don't buy a drill, they buy a hole. The framework is powerful precisely because it shifts focus from features to outcomes.
Claude is well-suited to JTBD analysis from qualitative data because the framework requires pattern-matching across narrative accounts — something LLMs do well at scale. The prompt:
Analyze the following customer reviews/transcripts through a jobs-to-be-done lens.
For each distinct "job" you identify:
1. State the job in the format: "When [situation], I want to [motivation], so I can [expected outcome]."
2. List the "functional job" (practical task), "emotional job" (how they want to feel), and "social job" (how they want to appear to others) if present in the data.
3. Quote the review text that surfaces this job.
4. Note which archetype/customer type this job belongs to (if you can tell from context).
Focus only on jobs explicitly or strongly implied in the text. Do not extrapolate.
DATA:
[PASTE REVIEWS/TRANSCRIPTS]
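A quick sanity check that the returned job statements actually follow the canonical format (the pattern is a loose regex, not a linguistic parser, so treat failures as flags for review rather than hard rejections):

```python
import re

CANONICAL_JTBD = re.compile(
    r"^when\s.+?,\s*i want to\s.+?,\s*so (?:that\s)?i can\s.+", re.IGNORECASE
)

def is_canonical_jtbd(statement: str) -> bool:
    """True if the statement matches 'When ..., I want to ..., so I can ...'."""
    return bool(CANONICAL_JTBD.match(statement.strip().strip('"')))
```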
The output from this prompt is more useful for creative briefs than for media buying. It tells you what emotional and social jobs your product fills — information that shapes the hook and the narrative arc of an ad, not the targeting parameters.
For ad targeting, pair JTBD output with the audience segmentation work from the ICP extraction prompt. The ICP tells you who to reach. The JTBD tells you what to say when you reach them.
Extracting competitor customer gaps from reviews
Competitor review mining is one of the highest-ROI research activities in B2B marketing, and Claude makes it tractable. The workflow: pull 50–100 reviews of your top 2-3 competitors, run the ICP extraction prompt, then run a second pass specifically for gaps.
I have extracted ICP archetypes and complaints from reviews of [COMPETITOR A] and [COMPETITOR B].
Your task: identify customer segments that are underserved or explicitly dissatisfied with these products. For each gap:
1. Describe the customer segment (who they are, what they're trying to do)
2. State the specific failure mode they experience with the competitor
3. Quote the review language they use to describe the problem
4. Assess whether this gap is structural (product architecture) or executional (support/onboarding/pricing)
This is positioning research — I want to find durable wedges, not tactical complaints.
COMPETITOR REVIEW ARCHETYPES:
[PASTE EXTRACTED DATA]
Structural gaps are the valuable ones. If a competitor's enterprise weakness is baked into their data model, it won't be fixed in the next release. That's a durable wedge for positioning and for precision audience targeting.
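If you keep the second-pass output in a JSON style, filtering for durable wedges is one line. The `gap_type` key here is an assumed field name matching the structural/executional assessment the prompt asks for:

```python
def durable_wedges(gaps):
    """Keep only structural gaps; executional ones can be fixed in a release."""
    return [g for g in gaps if g.get("gap_type") == "structural"]

gaps = [
    {"segment": "enterprise SDRs", "gap_type": "structural"},
    {"segment": "trial users", "gap_type": "executional"},
]
```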
This kind of analysis pairs well with ad intelligence: knowing what a competitor's customers complain about tells you where to attack creatively. See analyzing competitor blogs for ad insights for the complementary workflow on content-level signals.

What Claude doesn't replace in customer research
Claude is not a substitute for direct customer conversations. It surfaces patterns in existing data — it can't ask follow-up questions, probe for underlying motivation, or catch the hesitation in someone's voice before they say "the onboarding was fine."
The use cases where Claude adds the most value:
- Scale: when you have more data than you can read
- Pattern consolidation: merging outputs from multiple sources into coherent archetypes
- Language extraction: pulling the exact phrasing customers use (essential for ad copy)
The use cases where it falls short:
- New category research where no review corpus exists
- Nuanced B2B relationships where the buyer and user are different people with different jobs
- Detecting what customers don't say — the conspicuous absence of certain language
Think of Claude as a research accelerator, not a research replacement. It compresses weeks of qualitative analysis into hours. What you do with that compression — the synthesis, the positioning judgment, the creative bet — still requires a human.
For a broader view of where Claude fits in the marketing stack, the Claude for marketing 2026 playbook covers the full toolkit. And if you're running this research specifically to inform ad creative, algorithmic ad targeting and creative assets shows how ICP data translates into targeting logic.
Connecting customer research to ad data
The most direct application of Claude-powered customer research is building creative briefs that actually reflect how buyers think. ICP language becomes headline copy. JTBD emotional jobs become visual direction. Competitor gaps become objection-handling in ad body text.
Where AdLibrary fits: once you know the ICP and their language, you can validate whether competitor ads are speaking to the same segments — or leaving them unaddressed. The platform's creative intelligence layer shows you what's running across competitors, giving you the data layer to test whether your research-derived angles are actually whitespace in the market.
For the targeting mechanics of putting that research into practice on paid channels, see the AdLibrary creative strategist workflow use case.
Frequently asked questions
Can Claude analyze customer reviews for ICP development? Yes. Claude handles batch review analysis well — paste 20–25 reviews per prompt run, ask it to extract archetypes with verbatim quotes, and require citation of the source review for every pattern it identifies. The citation requirement prevents confabulation and makes the output auditable.
How many reviews do you need to extract a reliable ICP with Claude? Thirty reviews is a minimum viable dataset for a single-product B2B company. At fewer than 30 you're working with anecdote, not pattern. For categories with high review volume (SaaS tools, consumer products), 100–200 reviews across multiple batches gives you stable archetypes with meaningful competitive gaps.
Can Claude replace user interviews for persona development? No. Claude excels at scale and pattern extraction from existing text, but it cannot probe for underlying motivation, ask follow-up questions, or detect what customers conspicuously avoid mentioning. Treat it as a way to arrive at interviews with better hypotheses, not as a substitute for them.
What's the best source of raw data for Claude-powered customer research? For B2B SaaS: G2 and Capterra reviews, because reviewers self-identify role and company context. For consumer: Amazon and app store reviews. For existing customers: support tickets and NPS verbatims. Sales call transcripts are the highest-signal source when available — they capture objections and competitor comparisons in real time.
How does Claude handle voice-of-customer analysis for jobs-to-be-done? Well, with the right prompt structure. The key is asking Claude to frame each job in the canonical JTBD format ("When [situation], I want to [motivation], so I can [outcome]") and to distinguish functional, emotional, and social jobs. Without that structure, Claude defaults to feature summaries rather than jobs — which is less useful for creative work.
The sharpest creative briefs are built on customer language, not marketing assumptions. Claude makes collecting that language tractable at scale. The constraint isn't the tool — it's whether you're willing to read the reviews at all.
For more on building the full research-to-creative workflow, see how to use Claude for marketing and the external Anthropic prompting guide for structuring complex analytical tasks.
Related: NN Group research on qualitative analysis methods for grounding AI-extracted patterns in validated research practice.
Related Articles
Precision Audience Targeting and Creative Iteration for High-Converting Meta Campaigns
Learn advanced Meta ad targeting strategies including custom audiences, lookalikes, and practical workflows for campaign optimization.

Algorithmic Ad Targeting: Creative Is the New Targeting Layer
Post-iOS-14, the algorithm targets from creative signals, not demographic checkboxes. What actually moves CPA — and the new operating model.

Analyzing Competitor Blogs for Advertising and Creative Insights
Learn to analyze competitor blog content, including topics, formats, and tone, to develop data-informed hypotheses for your own advertising campaigns.

How to Use Claude for Marketing: The 2026 Playbook for Teams and Solo Operators
Claude workflows for performance marketers: competitor teardowns, ICP research, ad copy with hypotheses, email sequences. Honest on where not to use it.

Claude for Ad Copywriting: Prompts, Workflows, and Real Examples
Five prompt patterns for Claude ad copywriting that produce testable output — hook generator, pain amplification, UGC scripts, and platform-native rewrites. Includes a worked example.