Claude Code for Ad Creative Analysis at Scale
Automate ad creative teardowns at scale using Claude Code and the adlibrary API. Fetch, enrich, cluster, and report on 1,000+ competitor ads in a single session.

Reading 500 competitor ads manually takes a week. A Claude Code session does it before lunch.
That is not a hypothetical. It is what happens when you pair a capable LLM with a structured data source and give it a clear extraction task. Most teams still treat ad creative analysis as a human-hours problem. It is actually a pattern-recognition problem, and pattern-recognition at scale is exactly what Claude Code does well.
This post walks through a concrete workflow: pull a large ad set from the adlibrary API, enrich each creative with structured metadata, cluster by hook pattern, and produce a teardown report you can act on. The script outline here is structural rather than copy-paste-ready, but every step maps to a real API call and a real Claude prompt.
TL;DR: Claude Code for ad creative analysis means automating the fetch, enrich, cluster, and report pipeline. Pull thousands of ads via the adlibrary API, extract hook type, format, and emotional angle with a structured prompt, then cluster by pattern to surface what is actually working in your category -- in hours, not weeks.
Why manual creative teardowns do not scale
A typical competitive audit involves pulling 50-100 ads per competitor, noting the hook, format, offer framing, and call to action, then looking for patterns. Done manually, that is 20 minutes per ad at minimum. At 200 ads -- a light week of data -- you are at 60+ hours before analysis even starts.
The output is also fragile. One analyst codes "urgency" differently than another. Definitions shift between sessions. By the time the spreadsheet is clean, the market has moved.
Structured LLM extraction fixes both problems. The model applies the same taxonomy every time, processes 1,000 ads in the same session, and outputs machine-readable JSON you can sort, filter, and cluster without another human in the loop.
What Claude Code does in a creative analysis workflow
Claude Code is an agentic coding assistant that can read files, execute scripts, call APIs, and iterate on structured outputs -- all in a single terminal session. For creative intelligence work, that means:
- Fetching ad sets from an API with pagination
- Writing enrichment prompts that return typed JSON (not free text)
- Running those prompts across a full dataset with rate-limit handling
- Grouping outputs by extracted taxonomy fields
- Rendering a structured Markdown or CSV report
The key shift is treating each ad as a data row rather than a document. Claude does not "read" ads -- it extracts schema-defined fields from them, which is a much more reliable and repeatable task.
For a deeper look at how Claude Code integrates with marketing APIs in general, see Claude Code for agentic marketing with the adlibrary API.
The four-step pipeline for Claude Code ad creative analysis
Step 1 -- Fetch
Pull ads from the adlibrary API access endpoint with filters for advertiser, platform, and date range. A minimal fetch loop in Python looks like this:
```python
# fetch.py -- structural outline
import os

import requests

BASE = "https://adlibrary.com/api"
API_KEY = os.environ["ADLIBRARY_API_KEY"]  # set in your shell; never hardcode keys
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def fetch_ads(advertiser_slug: str, limit: int = 1000) -> list:
    """Page through the /ads endpoint until `limit` ads are collected."""
    ads, page = [], 1
    while len(ads) < limit:
        resp = requests.get(
            f"{BASE}/ads",
            params={"advertiser": advertiser_slug, "page": page, "limit": 50},
            headers=HEADERS,
        )
        resp.raise_for_status()  # fail loudly on auth or quota errors
        batch = resp.json().get("docs", [])
        if not batch:
            break
        ads.extend(batch)
        page += 1
    return ads[:limit]
```
For a category sweep, run fetch_ads() across a list of competitor slugs and concatenate. A 1,000-ad set typically downloads in under 90 seconds.
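The category sweep described above can be sketched as a thin wrapper. `category_sweep` and the `advertiser` tag field are illustrative names, not part of the adlibrary API; the fetch function is passed in as a parameter so the loop stays testable:

```python
def category_sweep(fetch_fn, slugs: list, per_advertiser: int = 200) -> list:
    """Run a fetch function across competitor slugs and tag each ad with its source."""
    all_ads = []
    for slug in slugs:
        for ad in fetch_fn(slug, limit=per_advertiser):
            ad.setdefault("advertiser", slug)  # remember which competitor this came from
            all_ads.append(ad)
    return all_ads
```

Passing `fetch_ads` as `fetch_fn` gives you the concatenated set in one call.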
Step 2 -- Enrich
Pass each ad to Claude with a fixed extraction prompt. The prompt must return typed JSON -- not prose -- so downstream code can process it without another parse step.
```python
# enrich.py -- structural outline
import json

EXTRACTION_PROMPT = """
You are an ad creative analyst. Extract the following fields as JSON only.
No explanation, no markdown wrapper. Raw JSON object only.
Fields:
- hook_type: one of [social_proof, urgency, curiosity, pain_point, aspiration, offer, demonstration]
- hook_text: the first 10 words of the primary copy
- format: one of [static_image, video, carousel, ugc, testimonial]
- offer_framing: one of [discount, free_trial, guarantee, bonus, none]
- emotional_angle: one of [fear, greed, desire, belonging, status, relief]
- cta: the call-to-action text if present, else null
Ad data: {ad_json}
"""

def enrich_ad(ad: dict, client) -> dict:
    """Send one ad through the extraction prompt and merge the typed fields back in."""
    response = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=256,
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(ad_json=json.dumps(ad))}],
    )
    extracted = json.loads(response.content[0].text)
    return {**ad, **extracted}
```
Run this in batches of 20. At 1,000 ads, total enrichment time is 15-25 minutes depending on model and rate limits. See the Anthropic Claude Code documentation for more on running agentic sessions with tool access.
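A minimal batching loop, sketched with a pluggable `enrich_fn` so failed rows can be retried later. `enrich_in_batches`, the batch size, and the pause are illustrative placeholders to tune against your actual rate limits, not a prescribed implementation:

```python
import time

def enrich_in_batches(ads: list, enrich_fn, batch_size: int = 20, pause_s: float = 1.0):
    """Enrich ads in fixed-size batches; collect failures instead of aborting the run."""
    enriched, failed = [], []
    for start in range(0, len(ads), batch_size):
        for ad in ads[start:start + batch_size]:
            try:
                enriched.append(enrich_fn(ad))
            except Exception:
                failed.append(ad)  # queue for a retry pass or manual review
        if start + batch_size < len(ads):
            time.sleep(pause_s)  # crude rate-limit spacing between batches
    return enriched, failed
```

Wiring `enrich_ad` in as `enrich_fn` keeps the API logic and the batching logic independently testable.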
Step 3 -- Cluster
Once every ad has structured metadata, clustering is a SQL-style group-by. No ML required.
```python
# cluster.py -- structural outline
from collections import Counter, defaultdict

def most_common(values: list, n: int = 3) -> list:
    """Return the n most frequent values in descending order of frequency."""
    return [value for value, _ in Counter(values).most_common(n)]

def cluster_by_hook(enriched_ads: list) -> dict:
    clusters = defaultdict(list)
    for ad in enriched_ads:
        clusters[ad["hook_type"]].append(ad)
    return dict(clusters)

def cluster_summary(clusters: dict) -> list:
    return [
        {
            "hook_type": hook,
            "count": len(ads),
            "top_formats": most_common([a["format"] for a in ads], n=3),
            "top_emotional_angles": most_common([a["emotional_angle"] for a in ads], n=3),
            "example_hooks": [a["hook_text"] for a in ads[:3]],
        }
        for hook, ads in sorted(clusters.items(), key=lambda x: -len(x[1]))
    ]
The result is a ranked table: which hook types dominate in your category, which formats carry each hook, and what the top examples look like. That is your competitive signal, unfiltered.
Step 4 -- Report
Claude Code renders the cluster summary into a structured Markdown teardown. Ask it to identify the dominant pattern, the whitespace (hook types with low competitor saturation), and three creative hypotheses to test.
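One possible rendering step, assuming the row shape produced by `cluster_summary` in Step 3. `render_report` is an illustrative helper, not a fixed API; in practice Claude Code can generate richer Markdown directly from the summary:

```python
def render_report(summary: list) -> str:
    """Render cluster-summary rows as a Markdown teardown table."""
    lines = [
        "# Creative Teardown",
        "",
        "| Hook type | Count | Top formats | Example hook |",
        "|---|---|---|---|",
    ]
    for row in summary:
        example = row["example_hooks"][0] if row["example_hooks"] else ""
        lines.append(
            f"| {row['hook_type']} | {row['count']} | {', '.join(row['top_formats'])} | {example} |"
        )
    return "\n".join(lines)
```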
For context on how to act on these patterns, see building data-driven creative testing hypotheses from competitor ad research and analyzing high-performing ad creative: a practical framework.

A worked example: DTC skincare category sweep
Run the pipeline across 12 DTC skincare advertisers, pulling their last 90 days of in-market ads -- roughly 1,400 ads total. After enrichment, the cluster summary looks like:
| Hook type | Count | Share | Top format | Top emotional angle |
|---|---|---|---|---|
| social_proof | 412 | 29% | ugc | belonging |
| pain_point | 387 | 28% | video | fear |
| curiosity | 210 | 15% | static_image | desire |
| aspiration | 198 | 14% | static_image | status |
| demonstration | 143 | 10% | video | relief |
| urgency | 50 | 4% | static_image | greed |
Immediate read: social proof and pain-point hooks dominate at roughly equal weight. Urgency is underused -- only 4% share, suggesting it is either exhausted (check ad fatigue signals on the long-running urgency ads) or genuinely available as whitespace.
The aspiration cluster runs almost entirely on static images with a status angle, but conversion signals in the public ad data tend to be weakest here. That is a creative hypothesis: static aspiration may be holding frequency without generating response.
This kind of analysis -- normally a week of analyst time -- took about 40 minutes of compute and a 20-minute Claude Code session to interpret.
What Claude Code for ad creative analysis does not replace
The pipeline produces structured signals, not strategy. A few things it cannot do on its own:
It cannot weight by spend. Public ad library data does not expose impression volume. An ad that ran for two days and an ad that ran for six months look identical in the raw data. You need to infer longevity from first-seen / last-seen dates as a proxy for performance.
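Assuming each ad record carries ISO-formatted `first_seen` and `last_seen` fields (field names vary by ad library payload), the longevity proxy is a one-line date diff:

```python
from datetime import date

def longevity_days(ad: dict) -> int:
    """Days between first and last sighting -- a crude proxy for sustained spend."""
    first = date.fromisoformat(ad["first_seen"])
    last = date.fromisoformat(ad["last_seen"])
    return (last - first).days
```

Sorting or filtering clusters on this value separates long-running workhorses from two-day tests.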
It cannot validate the extraction. If you feed Claude an ad with ambiguous copy, it will still return a hook_type -- but it may be wrong. Spot-check 5% of rows before trusting cluster totals.
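The 5% spot-check can be drawn as a seeded random sample so the QA set is reproducible across sessions; `spot_check_sample` is a sketch, not part of the pipeline above:

```python
import random

def spot_check_sample(enriched_ads: list, frac: float = 0.05, seed: int = 42) -> list:
    """Deterministic random sample for manual QA of the extracted labels."""
    k = max(1, round(len(enriched_ads) * frac))
    return random.Random(seed).sample(enriched_ads, k)
```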
It does not tell you why. The cluster shows that pain-point hooks dominate. It does not tell you whether that is a category norm, an audience-specific preference, or a holdover from one brand that everyone is copying. That interpretation is still human work.
For the full framework on interpreting competitor creative patterns strategically, see how to analyze competitor ad creative strategies and the AI impact on ad creative research and testing.
How ad intelligence data makes the pipeline more useful
The fetch step is only as good as the underlying data. A narrow, incomplete ad set produces misleading cluster results -- if you are missing 40% of a competitor's in-market ads, the hook distribution is wrong.
adlibrary indexes ads across Meta, TikTok, YouTube, and Google, with coverage going back 24+ months for most advertisers. That depth matters: you can run the pipeline on a 90-day window and know the sample is representative, not cherry-picked. The ROAS calculator can help you model expected return thresholds for the creative types the analysis surfaces.
The API access endpoint supports advertiser-level queries, category queries, and keyword queries -- so the fetch step can pull by ICP match rather than just by known competitor names. For more on how Anthropic approaches agentic tool use, the model card and usage policies provide useful context on capability boundaries.
Frequently Asked Questions
Can Claude Code automate ad creative analysis end to end? Yes, with caveats. The fetch, enrich, cluster, and report pipeline runs fully in a Claude Code session. What it cannot automate is the strategic interpretation: deciding which patterns to act on, which represent genuine whitespace, and how to adapt findings to your specific brand positioning. The analysis is automated; the judgment is not.
How many ads can the pipeline process in a single session? In practice, 500-2,000 ads is the comfortable range for a single Claude Code session. Enrichment is the bottleneck -- each ad requires its own API call. At 1,000 ads with batching and rate-limit handling, expect 15-30 minutes of enrichment runtime. For larger sets, split into multiple runs by advertiser or time window.
What does the extraction prompt output look like?
Each ad returns a JSON object with fields like hook_type, format, offer_framing, emotional_angle, and cta. The prompt returns only JSON so the output loads directly into a dataframe without another parse step.
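As an illustration (values invented for a hypothetical skincare ad), a single enriched row might look like:

```json
{
  "hook_type": "pain_point",
  "hook_text": "Tired of breakouts that come back every single month no",
  "format": "video",
  "offer_framing": "free_trial",
  "emotional_angle": "relief",
  "cta": "Start your free trial"
}
```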
Does this work for video ads, not just static images? Yes. For video ads, you pass the transcript or ad copy into the extraction prompt. The hook and angle taxonomy works on copy regardless of format. For deeper video-specific analysis, you would need frame-level data, which the current pipeline does not handle.
How is this different from a standard competitive research workflow? Standard competitive research involves manually browsing an ad library, taking notes, and building a spreadsheet. The Claude Code pipeline replaces that with structured API extraction and automated clustering -- deriving the same insights from a 10-50x larger sample in a fraction of the time. See how to use Claude for marketing in 2026 for a broader view of where these workflows fit.
The bottleneck in creative strategy has never been access to ads -- it has been the cost of reading them at scale. That cost is now effectively zero. The teams that build this pipeline first will have a durable signal advantage over everyone still counting hooks by hand.
Related Articles
Claude Code, Agentic Workflows, and the Future of Vibe Marketing
Analyze the impact of Claude Code on the agentic market and learn how to use it with the AdLibrary API to master vibe marketing workflows.

Analyzing High-Performing Ad Creative: A Framework for Marketers
A guide to deconstructing high-performing digital ads. Learn to analyze emotional appeal, social proof, and visual strategy to build better campaign hypotheses.
Building Data-Driven Creative Testing Hypotheses from Competitor Ad Research
Leverage ad intelligence tools to structure competitor creative analysis, isolate key variables, and build data-driven campaign hypotheses.

A Guide to Analyzing Competitor Ad Creative Strategies
Learn a step-by-step process for researching competitor ads, analyzing creative elements, and developing data-informed hypotheses for your next campaign.