Claude Code + adlibrary API: End-to-End Competitor Intelligence Workflows
Run five Claude Code workflows against the adlibrary API for automated competitor monitoring: Slack alerts, bulk teardowns, hook extraction across 500 ads, monthly landscape reports, and new entrant detection.

A Claude Code script pointed at the adlibrary API turns competitor research from a weekly chore into a passive monitoring function. You stop opening ad libraries manually. You start receiving structured reports, Slack alerts, and category-level breakdowns — generated on a schedule, without touching a browser.
This post documents five concrete workflows: scheduled competitor monitoring with Slack alerts, bulk creative teardowns of an entire category, hook pattern extraction across 500 ads, monthly category landscape reports, and automated new competitor detection. Each includes working code you can run today.
TL;DR: The Claude Code adlibrary API combination lets you automate every layer of competitor ad intelligence — from daily change detection to deep pattern analysis — with scripts that run unattended and deliver structured outputs to wherever your team works.
Workflow 1: Scheduled competitor monitoring with Slack alerts
The most common use case is also the one most teams never fully automate. They check manually, sporadically, and miss the windows when a competitor's ad volume spikes.
Here's a complete cron-ready script. It pulls the last 24 hours of ads from a target brand, passes them to Claude for change detection, and pushes a structured summary to Slack.
```javascript
#!/usr/bin/env node
// competitor-monitor.mjs — run via: node competitor-monitor.mjs
// Schedule: cron 0 9 * * * /usr/local/bin/node /scripts/competitor-monitor.mjs
import fetch from 'node-fetch';
import Anthropic from '@anthropic-ai/sdk';

const ADLIBRARY_KEY = process.env.ADLIBRARY_KEY;
const SLACK_WEBHOOK = process.env.SLACK_WEBHOOK;
const BRAND_SLUG = 'rivals-brand-name'; // target brand slug

async function fetchRecentAds(brandSlug, hours = 24) {
  const since = new Date(Date.now() - hours * 3600 * 1000).toISOString();
  const res = await fetch(
    `https://adlibrary.com/api/ads?brand=${brandSlug}&createdAfter=${since}&limit=50`,
    { headers: { Authorization: `Bearer ${ADLIBRARY_KEY}` } }
  );
  if (!res.ok) throw new Error(`adlibrary API error: ${res.status}`);
  return res.json();
}

async function analyzeWithClaude(ads) {
  const client = new Anthropic();
  const msg = await client.messages.create({
    model: 'claude-opus-4-5',
    max_tokens: 1024,
    messages: [{
      role: 'user',
      content: `Analyze these ${ads.length} new competitor ads from the last 24 hours.
Output JSON with: { "volume_change": "up|down|flat", "new_angles": ["..."], "dominant_format": "video|static|carousel", "urgency": "high|medium|low", "summary": "2 sentences max" }
Ads: ${JSON.stringify(ads.map(a => ({ headline: a.headline, body: a.body, format: a.format })))}`,
    }],
  });
  return JSON.parse(msg.content[0].text);
}

async function sendSlackAlert(analysis, adCount) {
  const color = analysis.urgency === 'high' ? '#E84631' : '#36a64f';
  await fetch(SLACK_WEBHOOK, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      attachments: [{
        color,
        title: `Competitor Monitor: ${adCount} new ads detected`,
        text: analysis.summary,
        fields: [
          { title: 'Volume', value: analysis.volume_change, short: true },
          { title: 'Format', value: analysis.dominant_format, short: true },
          { title: 'New Angles', value: analysis.new_angles.join(', ') },
        ],
      }],
    }),
  });
}

const ads = await fetchRecentAds(BRAND_SLUG);
if (ads.docs?.length > 0) {
  const analysis = await analyzeWithClaude(ads.docs);
  await sendSlackAlert(analysis, ads.docs.length);
  console.log('Alert sent:', analysis.urgency);
} else {
  console.log('No new ads in last 24h');
}
```
Drop this in a cron job. Set ADLIBRARY_KEY and SLACK_WEBHOOK as environment variables. Your team wakes up to a Slack message every morning with a structured signal on what a competitor did overnight.
The urgency field is the decision gate. High urgency means new angles appeared — someone on the team needs to look at those creatives today. Medium or flat means the monitoring worked and nothing changed.
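If you want the script to stay silent on uneventful days, that gate can be a small pure function. A sketch, with the caveat that shouldAlert and its thresholds are our own additions, not part of the script above:

```javascript
// shouldAlert: decide whether an analysis payload warrants a Slack ping.
// Hypothetical helper: the script above always posts when ads exist; this
// variant suppresses "flat, low-urgency" days so the channel stays quiet.
function shouldAlert(analysis, adCount) {
  if (adCount === 0) return false;                    // nothing to report
  if (analysis.urgency === 'high') return true;       // new angles: always ping
  if (analysis.volume_change !== 'flat') return true; // spend signal moved
  return analysis.new_angles.length > 0;              // anything novel at all
}
```

Wire it in front of sendSlackAlert so only signal-bearing days reach the channel.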
Workflow 2: Bulk creative teardown of a category
Point this at a category instead of a single brand and you get a full competitive landscape in one run. The pattern: pull all ads from a category via the adlibrary API, batch them into chunks of 20, run Claude's analysis in parallel, then consolidate.
```javascript
// category-teardown.mjs
import Anthropic from '@anthropic-ai/sdk';
import fetch from 'node-fetch';

const client = new Anthropic();

async function getCategoryAds(categorySlug, limit = 200) {
  const res = await fetch(
    `https://adlibrary.com/api/ads?category=${categorySlug}&limit=${limit}&sort=-createdAt`,
    { headers: { Authorization: `Bearer ${process.env.ADLIBRARY_KEY}` } }
  );
  const data = await res.json();
  return data.docs || [];
}

function chunkArray(arr, size) {
  return Array.from({ length: Math.ceil(arr.length / size) }, (_, i) =>
    arr.slice(i * size, i * size + size)
  );
}

async function analyzeChunk(ads, chunkIndex) {
  const msg = await client.messages.create({
    model: 'claude-opus-4-5',
    max_tokens: 2048,
    messages: [{
      role: 'user',
      content: `Analyze this batch of ${ads.length} ads from the same product category.
For each ad, extract:
- hook_type: question | pain_point | social_proof | curiosity | direct_offer
- cta_pattern: urgency | soft | none
- primary_claim: one phrase max
- format: video | static | carousel
Return a JSON array. One object per ad.
Ads: ${JSON.stringify(ads.map((a, i) => ({ id: i, headline: a.headline, body: a.body?.slice(0, 200) })))}`,
    }],
  });
  return JSON.parse(msg.content[0].text);
}

const ads = await getCategoryAds('meal-delivery');
const chunks = chunkArray(ads, 20);

// Run in parallel — respect rate limits with a small concurrency cap
const results = [];
for (let i = 0; i < chunks.length; i += 3) {
  const batch = chunks.slice(i, i + 3);
  const batchResults = await Promise.all(batch.map((chunk, j) => analyzeChunk(chunk, i + j)));
  results.push(...batchResults.flat());
}

// Aggregate
const hookCounts = results.reduce((acc, r) => {
  acc[r.hook_type] = (acc[r.hook_type] || 0) + 1;
  return acc;
}, {});

console.log('Category teardown complete');
console.log('Hook distribution:', hookCounts);
console.log(`Total analyzed: ${results.length} ads`);
```
A run against 200 ads in the meal delivery category takes about 90 seconds and costs under $0.50 in Claude API calls. The output tells you exactly which hook types the category is over-indexed on — which is where the whitespace is.
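The over-indexing call can be mechanical rather than eyeballed. A sketch of a helper that turns the hookCounts object above into a ranked whitespace list; findWhitespace and the 10% threshold are illustrative assumptions, not part of the script:

```javascript
// findWhitespace: rank hook types by share and surface the under-used ones.
// Hypothetical helper on top of the hookCounts aggregate built above.
function findWhitespace(hookCounts, threshold = 0.1) {
  const total = Object.values(hookCounts).reduce((a, b) => a + b, 0);
  return Object.entries(hookCounts)
    .map(([hook, count]) => ({ hook, share: count / total }))
    .filter(({ share }) => share < threshold) // under-indexed = whitespace
    .sort((a, b) => a.share - b.share);       // rarest first
}
```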
Workflow 3: Hook pattern extraction across 500 ads
This is the workflow that produces the most durable competitive signal. Hook patterns cluster. When you extract them at scale, you can see which mechanisms the whole category has converged on and which are genuinely underused.
The extraction prompt matters more than the infrastructure here. Use a tight taxonomy and force single-label outputs:
```javascript
// hook-extractor.mjs
const HOOK_PROMPT = `You are extracting hook patterns from ad copy.
Classify the opening hook into exactly ONE of:
- PAIN_AGITATE: opens by naming a specific painful situation
- SOCIAL_PROOF: opens with numbers, results, or third-party validation
- CURIOSITY_GAP: withholds information to create tension
- DIRECT_OFFER: leads with price, discount, or concrete value
- IDENTITY_CLAIM: opens with who the ad is for ("For founders who...")
- CONTRARIAN: opens by challenging a common belief
- SCENARIO: opens with a specific real-world moment
Return JSON: { "hook_type": "<TYPE>", "hook_text": "<first sentence of headline/copy>", "confidence": 0-1 }
Ad headline: {HEADLINE}
Ad body (first 100 chars): {BODY}`;

async function extractHook(client, ad) {
  const prompt = HOOK_PROMPT
    .replace('{HEADLINE}', ad.headline || '')
    .replace('{BODY}', (ad.body || '').slice(0, 100));
  const msg = await client.messages.create({
    model: 'claude-haiku-4-5', // Use Haiku for extraction — cheaper, fast enough
    max_tokens: 256,
    messages: [{ role: 'user', content: prompt }],
  });
  return JSON.parse(msg.content[0].text);
}
```
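Fanning extractHook out across 500 ads needs a concurrency cap so you stay under API rate limits. A minimal limiter sketch; mapWithConcurrency is our own helper, not part of the Anthropic SDK:

```javascript
// mapWithConcurrency: run an async worker over items with at most `limit`
// calls in flight, preserving result order. Because JS is single-threaded,
// the `next++` index claim is safe: no await sits between read and increment.
async function mapWithConcurrency(items, limit, worker) {
  const results = new Array(items.length);
  let next = 0;
  async function lane() {
    while (next < items.length) {
      const i = next++; // claim the next unprocessed index
      results[i] = await worker(items[i], i);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, lane));
  return results;
}
```

Usage would look like `const hooks = await mapWithConcurrency(ads, 5, ad => extractHook(client, ad));`, with the lane count tuned to your rate limit tier.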
At 500 ads with Haiku, this costs around $0.15 total. The output is a labeled dataset. Run a simple frequency count and you have a defensible map of which hooks the category is saturating.
For practical creative intelligence: when SOCIAL_PROOF and DIRECT_OFFER account for 70% of a category's hooks, PAIN_AGITATE hooks have an outsized chance of breaking through cold traffic. That's not a heuristic — it's data from the actual in-market ad set.
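The frequency count is only a few lines over the labeled dataset. A sketch, assuming each extracted record carries the hook_type field from the prompt above; hookSaturation and the 30% cutoff are our assumptions:

```javascript
// hookSaturation: collapse labeled hook records into share-of-voice and
// flag the hook types that dominate the category (the 70% case above).
function hookSaturation(records) {
  const counts = {};
  for (const r of records) counts[r.hook_type] = (counts[r.hook_type] || 0) + 1;
  const total = records.length;
  const shares = Object.fromEntries(
    Object.entries(counts).map(([k, v]) => [k, v / total])
  );
  // "saturated" = any single hook type holding 30%+ of the in-market ad set
  const saturated = Object.entries(shares)
    .filter(([, s]) => s >= 0.3)
    .map(([k]) => k);
  return { shares, saturated };
}
```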

Workflow 4: Monthly category landscape report
This one is scheduled, not reactive. It runs on the first of each month, pulls the prior month's ad data across your tracked categories, and outputs a markdown report you can drop into Notion or email directly.
The structure is straightforward: volume by brand, hook distribution by category, new entrants vs. established players, format shifts. The Claude prompt here asks for synthesis, not just extraction:
```javascript
// monthly-report.mjs
async function generateCategoryReport(client, categoryData) {
  const msg = await client.messages.create({
    model: 'claude-opus-4-5',
    max_tokens: 4096,
    messages: [{
      role: 'user',
      content: `Generate a monthly competitive landscape report for the "${categoryData.name}" category.
Data from the past 30 days:
- Total ads tracked: ${categoryData.totalAds}
- Brands active: ${categoryData.brands.length}
- Brand volume breakdown: ${JSON.stringify(categoryData.brandVolumes)}
- Hook type distribution: ${JSON.stringify(categoryData.hookDistribution)}
- New brands (first appearance this month): ${categoryData.newBrands.join(', ')}
- Dominant formats: ${JSON.stringify(categoryData.formatBreakdown)}
Write a structured markdown report with sections:
1. Executive summary (3 bullet points max)
2. Volume shifts (who increased/decreased spend signals)
3. Creative patterns (which hooks are rising/falling)
4. New entrants analysis
5. Whitespace opportunities (what the category is NOT doing)
Be direct. Name specific brands. Flag the 2-3 most actionable findings.`,
    }],
  });
  return msg.content[0].text;
}
```
The "whitespace opportunities" section is where this report earns its keep. It's the gap between what 80% of ads in the category are doing and what none of them are doing. That's where your ad intelligence becomes positioning intelligence.
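generateCategoryReport expects a pre-aggregated categoryData object. One way to assemble it from raw ad objects, assuming the same ad field shapes used elsewhere in this post; buildCategoryData is a hypothetical helper, and the hook distribution would come from the Workflow 3 extraction pass:

```javascript
// buildCategoryData: fold raw ad objects into the aggregate shape that
// generateCategoryReport consumes. knownBrands is last month's roster,
// used to separate new entrants from established players.
function buildCategoryData(name, ads, knownBrands = new Set()) {
  const brandVolumes = {};
  const formatBreakdown = {};
  for (const ad of ads) {
    const slug = ad.brand?.slug || 'unknown';
    brandVolumes[slug] = (brandVolumes[slug] || 0) + 1;
    formatBreakdown[ad.format] = (formatBreakdown[ad.format] || 0) + 1;
  }
  const brands = Object.keys(brandVolumes);
  return {
    name,
    totalAds: ads.length,
    brands,
    brandVolumes,
    formatBreakdown,
    newBrands: brands.filter((b) => !knownBrands.has(b)),
    hookDistribution: {}, // filled in by the Workflow 3 extraction pass
  };
}
```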
Workflow 5: New competitor detection
The hardest thing to catch manually is a new entrant. They don't appear in your existing brand list. By the time you notice them on your own, they've been running for weeks.
This workflow queries the adlibrary API for ads in your category and compares against a stored list of known brands. New brands get flagged automatically.
```javascript
// new-competitor-detector.mjs
import { readFileSync, writeFileSync } from 'fs';
import Anthropic from '@anthropic-ai/sdk';
import fetch from 'node-fetch';

const STATE_FILE = '/tmp/known-brands.json';

function loadKnownBrands() {
  try {
    return new Set(JSON.parse(readFileSync(STATE_FILE, 'utf8')));
  } catch {
    return new Set();
  }
}

function saveKnownBrands(brands) {
  writeFileSync(STATE_FILE, JSON.stringify([...brands]));
}

async function detectNewEntrants(categorySlug) {
  const client = new Anthropic();
  const knownBrands = loadKnownBrands();
  const res = await fetch(
    `https://adlibrary.com/api/ads?category=${categorySlug}&limit=500&sort=-createdAt`,
    { headers: { Authorization: `Bearer ${process.env.ADLIBRARY_KEY}` } }
  );
  const data = await res.json();
  const currentBrands = new Set(data.docs?.map(ad => ad.brand?.slug).filter(Boolean));
  const newBrands = [...currentBrands].filter(b => !knownBrands.has(b));
  if (newBrands.length > 0) {
    // Get their ads for Claude to profile
    const newBrandAds = data.docs.filter(ad => newBrands.includes(ad.brand?.slug));
    const msg = await client.messages.create({
      model: 'claude-opus-4-5',
      max_tokens: 1024,
      messages: [{
        role: 'user',
        content: `Profile these new competitors just detected in the "${categorySlug}" category.
For each brand, give: positioning angle, target audience (1 sentence), aggression level (1-5), threat level (low/medium/high).
New brands and their ads: ${JSON.stringify(
          newBrands.map(brand => ({
            brand,
            ads: newBrandAds
              .filter(a => a.brand?.slug === brand)
              .slice(0, 5)
              .map(a => ({ headline: a.headline, body: a.body?.slice(0, 150) })),
          }))
        )}`,
      }],
    });
    console.log('New competitors detected:', newBrands);
    console.log('Analysis:', msg.content[0].text);
  }
  saveKnownBrands(new Set([...knownBrands, ...currentBrands]));
  return newBrands;
}

await detectNewEntrants('meal-delivery');
```
Run this weekly. The state file grows as you encounter brands. The detection logic is O(1) per brand — new entrants surface immediately.
What this setup doesn't replace
These workflows produce automation outputs, not creative judgment. Claude can tell you that 68% of a category's ads use PAIN_AGITATE hooks. It cannot tell you whether your specific brand has the positioning credibility to run that hook effectively.
Similarly, new competitor detection tells you a brand appeared. It doesn't tell you whether they're venture-backed with a six-month runway or a drop-shipper testing a product. The profile analysis gets you close, but a human still needs to spend 15 minutes on their site.
The other limit is recency. The adlibrary API surfaces ads that were active and tracked. Very new creatives — running for less than 48 hours — may not yet appear. For time-sensitive monitoring of a direct competitor, daily scrapes with a 24-hour lookback window are more reliable than weekly batch runs.
Use these scripts as the filter layer. The output narrows your weekly competitive review from "scan 200 ads" to "look at these 8 flagged items." That compression is where the real value sits.
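That filter layer can be literal code rather than a habit. A sketch of compressing a full analysis run down to the flagged items; flagForReview, its scoring, and both thresholds are invented for illustration:

```javascript
// flagForReview: compress a full analysis run down to the handful of items
// a human should actually open. An item qualifies if Claude marked it high
// urgency or classified it with low confidence. Thresholds are illustrative.
function flagForReview(items, maxItems = 8) {
  const score = (it) =>
    (it.urgency === 'high' ? 2 : 0) + ((it.confidence ?? 1) < 0.6 ? 1 : 0);
  return items
    .filter((it) => score(it) > 0)       // drop everything unremarkable
    .sort((a, b) => score(b) - score(a)) // most review-worthy first
    .slice(0, maxItems);
}
```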
How the data layer fits together
All five workflows pull from the same source: the adlibrary API, covered in detail in the API documentation and implementation guide. The API returns structured ad objects with headline, body, format, brand metadata, and temporal data. Claude handles the interpretation layer on top.
The pattern — data layer feeds Claude, Claude produces structured JSON, JSON feeds your tooling — is the same one described in Claude Code for agentic marketing workflows with the adlibrary API. This post extends it with production-grade implementations.
For the creative analysis side, Claude Code for ad creative analysis at scale covers the prompt engineering in more depth than we have space for here. And if you're building the competitor research function from scratch, Claude Code for competitor research automation covers the full stack, including how to structure your data pipeline.
The API access feature gives you the data endpoints these scripts depend on. Try the ad spend estimator to size the budget headroom your competitor intelligence should be protecting.
For reference on what makes an AI agent reliable in production versus in a demo, see Anthropic's Claude Code documentation and model overview.
Frequently Asked Questions
Can Claude Code automate competitor ad monitoring without manual checks? Yes. The scheduled monitoring workflow above runs fully unattended via cron, pulls new ads from the adlibrary API, analyzes them with Claude, and pushes structured alerts to Slack. No manual browser visits required. Teams typically set urgency thresholds so only high-signal changes trigger a human review.
How many ads can Claude analyze in a single session? Claude Opus handles batches of 50-100 ads per API call comfortably. For larger volumes, the chunking approach in Workflow 2 processes 200+ ads in parallel batches of 20, completing in under two minutes. The hook extraction workflow (Workflow 3) uses Claude Haiku for cost efficiency — 500 ads costs roughly $0.15.
What Claude Code adlibrary API workflow is best for finding whitespace in a category? The monthly category landscape report (Workflow 4) is the most direct route. It aggregates hook distribution, format breakdown, and volume shifts across all active brands, then instructs Claude to identify what the category is NOT doing. That negative space is the whitespace. Run it monthly so you're comparing against a baseline, not just a snapshot.
Does this work for multi-platform monitoring? The adlibrary API returns ads across platforms — Meta, Google, TikTok, and others — in a unified format. The scripts above work against any platform's ad set by adjusting the platform filter parameter. Multi-platform competitor analysis is covered in Claude for analyzing ad data.
How do you handle false positives in new competitor detection? The state file approach in Workflow 5 accumulates known brands over time. Early runs will flag many brands as "new" until the state stabilizes. After 2-3 weekly runs against the same category, the false positive rate drops to near zero — only genuine new entrants trigger alerts. For noisy categories, add a minimum ad count threshold: only flag brands with 3+ ads in the detection window.
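That threshold drops into Workflow 5 as one extra filter before alerting. A sketch; filterByAdCount is a hypothetical helper, and 3 ads is the illustrative minimum:

```javascript
// filterByAdCount: only treat a brand as a genuine new entrant once it has
// at least `minAds` ads in the detection window, filtering one-off noise.
function filterByAdCount(newBrands, ads, minAds = 3) {
  const counts = {};
  for (const ad of ads) {
    const slug = ad.brand?.slug;
    if (slug) counts[slug] = (counts[slug] || 0) + 1;
  }
  return newBrands.filter((b) => (counts[b] || 0) >= minAds);
}
```

In detectNewEntrants, this would run on newBrands before the Claude profiling call, so single-ad brands never trigger an alert.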
The competitive research function used to justify a dedicated analyst. With these five workflows running on a schedule, a single practitioner can monitor four categories, track 20 competitors, and produce monthly landscape reports — before 9am on Monday.
The data is in the API. Claude handles the interpretation. You just need to wire them together.
Related Articles
Claude Code, Agentic Workflows, and the Future of Vibe Marketing
Analyze the impact of Claude Code on the agentic market and learn how to use it with the AdLibrary API to master vibe marketing workflows.

Claude Code for Competitor Research: Automating Ad Teardowns, LP Audits, and Content Gap Analysis
Automate competitor ad teardowns, LP audits, and content gap analysis with Claude Code. Pull adlibrary API data and generate weekly reports in under 30 minutes.

Claude Code for Ad Creative Analysis at Scale
Automate ad creative teardowns at scale using Claude Code and the adlibrary API. Fetch, enrich, cluster, and report on 1,000+ competitor ads in a single session.

Claude for Analyzing Ad Data: Patterns, Hypotheses, and Creative Teardowns
Use Claude's 1M-token context to analyze hundreds of competitor ads at once — extract hook patterns, generate testable hypotheses, and run bulk creative teardowns in a single session.