Claude Code for Competitor Research: Automating Ad Teardowns, LP Audits, and Content Gap Analysis
Automate competitor ad teardowns, LP audits, and content gap analysis with Claude Code. Pull adlibrary API data and generate weekly reports in under 30 minutes.

A manual competitor teardown takes four hours. Claude Code does it in twenty minutes while you're on a call. That's not a hypothetical—it's the difference between running competitor intelligence as a quarterly ritual and running it every week as a repeatable operation.
Claude Code for competitor research changes the equation because it combines a capable LLM with real shell access and file system control. You're not prompting a chat interface to condense a page you pasted in. You're scripting an agent that fetches landing pages, pulls ad records from an API, structures the output, and writes a Markdown report—all in one session.
TL;DR: Claude Code turns competitor research into a repeatable script. Set up a project folder, give it shell access, and it can scrape landing pages, query the adlibrary API for in-market ads, and generate structured teardown reports in under 30 minutes. This post covers the exact setup, sample commands, and a reusable prompt structure.
Why manual competitor teardowns don't scale
Most marketing teams do competitor research the same way: someone opens fifteen browser tabs, screenshots some ads, copies hero copy into a doc, and writes up notes. The output is one person's filtered interpretation, produced at irregular intervals, and rarely structured enough to compare week-over-week.
The bottleneck isn't analytical capacity. It's the mechanical extraction work—fetching pages, cataloguing ad creative, formatting observations. That's the part that takes three hours before any thinking happens.
Automating that layer with Claude Code for competitor research doesn't replace the analyst. It eliminates the data-grunt work so the analyst spends their time on the signal, not the retrieval.
Setting up a competitor research project folder with Claude Code
Start with a consistent folder structure. Claude Code works best when it has a clear working directory and a set of reference files to read from and write to.
```shell
mkdir -p ~/research/competitor-research/{inputs,outputs,scripts}
cd ~/research/competitor-research

# inputs/  — competitor URLs, keywords, brand names
# outputs/ — structured teardown reports (Markdown or JSON)
# scripts/ — reusable shell + prompt scripts
```
Create an inputs/competitors.txt with one URL per line—homepage, pricing page, and the key landing page per competitor. Then open Claude Code in that directory:
```shell
claude
```
Claude Code reads the working directory automatically. From there you can give it a standing instruction set that runs every session—a "research mode" prompt saved in scripts/research-init.md:
```markdown
## Research session instructions

Working dir: ~/research/competitor-research
Available tools: bash, file read/write, curl

For each competitor URL in inputs/competitors.txt:
1. Fetch the page HTML with curl
2. Extract: headline, subheadline, primary CTA text, social proof signals, pricing mentions
3. Write structured JSON to outputs/<domain>-lp-audit-<date>.json
4. Append a 3-bullet Markdown summary to outputs/weekly-report.md

Flag any LP that loads a paywall, redirect, or error.
```
Load it at session start by asking Claude Code to read scripts/research-init.md, or move the standing instructions into a CLAUDE.md file, which Claude Code reads automatically when it starts in that directory. Either way, it will operate within those parameters for the full session.
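Step 3 of those instructions asks for a structured JSON record per landing page. A minimal sketch of what that record and its writer might look like (the field names follow the init prompt above; the `write_lp_audit` helper itself is hypothetical):

```python
import json
import datetime
import pathlib

def write_lp_audit(domain, fields, outdir="outputs"):
    """Write one structured LP audit record (schema from the init prompt)."""
    record = {
        "domain": domain,
        "date": datetime.date.today().isoformat(),
        "headline": fields.get("headline"),
        "subheadline": fields.get("subheadline"),
        "primary_cta": fields.get("primary_cta"),
        "social_proof": fields.get("social_proof", []),
        "pricing_mentions": fields.get("pricing_mentions", []),
    }
    path = pathlib.Path(outdir) / f"{domain}-lp-audit-{record['date']}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(record, indent=2))
    return path

# Example values, not real competitor data.
p = write_lp_audit("competitor.com", {
    "headline": "Ship freight for less",
    "primary_cta": "Get a quote",
})
```

Keeping the schema stable across weeks is what makes the later diff-and-compare steps possible.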
Scraping competitor landing pages with shell access
Claude Code's shell access is what makes this practical. A standard competitor LP scrape looks like this in a session:
```shell
# Claude Code runs this inside the session
curl -sL "https://competitor.com/pricing" | python3 -c "
import sys
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []
        self.skip = False
    def handle_starttag(self, tag, attrs):
        if tag in ('script', 'style', 'nav', 'footer'):
            self.skip = True
    def handle_endtag(self, tag):
        if tag in ('script', 'style', 'nav', 'footer'):
            self.skip = False
    def handle_data(self, data):
        if not self.skip and data.strip():
            self.text.append(data.strip())

p = TextExtractor()
p.feed(sys.stdin.read())
print('\n'.join(p.text[:200]))
" > outputs/competitor-pricing-raw.txt
```
Claude Code will then read that raw text, extract the structured fields you specified, and write output. No manual copy-paste. No context window stuffed with full HTML—just the relevant text handed off cleanly.
For competitor ad research strategy, this pattern scales to ten competitors in a single session because each extraction takes seconds and the outputs accumulate in structured files Claude Code can reference.
Pulling ad records from the adlibrary API
Landing pages tell you one story. Active ads tell you another. The adlibrary API exposes in-market ad records—creative, copy, platform, run dates—that you can pull programmatically and feed directly into a teardown session.
A basic ad pull for a competitor looks like this:
```shell
# Pull last 30 days of ads for a competitor domain
curl -s "https://adlibrary.com/api/ads?search=competitor-brand&limit=50&sort=-firstSeen" \
  -H "Authorization: Bearer YOUR_API_KEY" | python3 -c "
import json, sys

ads = json.load(sys.stdin)
for ad in ads.get('docs', []):
    print(json.dumps({
        'id': ad.get('id'),
        'headline': ad.get('headline'),
        'body': ad.get('body'),
        'platform': ad.get('platform'),
        'firstSeen': ad.get('firstSeen'),
        'lastSeen': ad.get('lastSeen'),
        'mediaType': ad.get('mediaType')
    }, indent=2))
" > outputs/competitor-ads-raw.json
```
Once that output is written, you instruct Claude Code:
```
Read outputs/competitor-ads-raw.json.
Identify:
1. The top 3 hooks by repeat pattern (cluster similar openings)
2. Any angle that's run for 30+ days (likely a control)
3. Media type distribution (video vs. static)
4. Any claims or guarantees that appear in multiple ads

Write findings to outputs/ad-teardown-<date>.md with one section per finding.
```
That's ad intelligence extraction—structured, repeatable, and fast. What previously required a human reading through fifty ads and taking notes now takes two minutes of compute plus thirty seconds of review.
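A rough sketch of what that clustering amounts to, in plain Python, with inline sample records standing in for outputs/competitor-ads-raw.json (the headlines and dates are invented for illustration):

```python
from collections import Counter
from datetime import date

# Stand-in records for outputs/competitor-ads-raw.json (hypothetical data).
ads = [
    {"headline": "Stop overpaying for freight", "firstSeen": "2024-01-01", "lastSeen": "2024-02-15"},
    {"headline": "Stop overpaying for shipping", "firstSeen": "2024-02-01", "lastSeen": "2024-02-10"},
    {"headline": "Cut freight costs by 30%", "firstSeen": "2024-02-05", "lastSeen": "2024-02-07"},
]

def hook_key(headline, n=2):
    """Cluster by the first n words: a crude proxy for similar openings."""
    return " ".join(headline.lower().split()[:n])

def run_days(ad):
    """Days between first and last sighting; long runs suggest a control."""
    return (date.fromisoformat(ad["lastSeen"]) - date.fromisoformat(ad["firstSeen"])).days

hooks = Counter(hook_key(a["headline"]) for a in ads)
controls = [a for a in ads if run_days(a) >= 30]
```

Claude Code does fuzzier clustering than a two-word prefix match, but the output shape is the same: hook frequencies plus a shortlist of long-running candidates.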
For a broader look at how this fits into a workflow, how to use Claude for marketing covers the agentic layer in more depth.
Generating weekly teardown reports automatically
The real value compounds when this runs on a schedule. A weekly teardown report pulls together LP changes, new ads, and creative pattern shifts into a single Markdown file your team can review in five minutes.
Here's the prompt structure that generates a publishable teardown:
```markdown
## Weekly competitor teardown — week of <DATE>

Read all files in outputs/ modified in the last 7 days.
Synthesize into a report with the following sections:

### Hook patterns this week
- List the 3-5 recurring opening lines or angles across competitor ads
- Note if any are new vs. repeating from prior weeks

### LP changes detected
- Compare current LP audit against last week's (check outputs/ for prior audit)
- Flag headline changes, new social proof, pricing updates, added/removed CTAs

### Content whitespace
- Identify topics or angles competitors are NOT covering
- Cross-reference with what they're actively spending behind in ads

### Recommended swipe file additions
- List 3-5 specific ad copy snippets or structural patterns worth tracking
- Explain why each is signal, not noise

Output to outputs/weekly-teardown-<DATE>.md
```
That prompt structure is a swipe file for the research process itself. Run it every Friday, review Monday morning. The output is consistent enough to compare across weeks, which is what turns one-off observations into pattern detection.
This approach to automation in competitor research isn't about replacing judgment—it's about making sure your judgment is applied to patterns that emerged over eight weeks of data, not a snapshot from one afternoon.
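To actually run it on a schedule, a crontab entry is enough. This is a sketch: it assumes the weekly prompt is saved at scripts/weekly-report.md and that your Claude Code install supports non-interactive print mode via `claude -p`; adjust paths and flags to your setup.

```shell
# Hypothetical crontab entry: run the teardown every Friday at 09:00.
# Assumes `claude -p` (non-interactive print mode) and the prompt file
# scripts/weekly-report.md exist on this machine.
0 9 * * 5 cd ~/research/competitor-research && claude -p "$(cat scripts/weekly-report.md)" >> outputs/cron.log 2>&1
```

Review the generated file Monday morning; the cron log captures any failed runs.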

What this doesn't replace: the analyst's job
Claude Code handles extraction and structuring. It doesn't handle interpretation.
When you see a competitor's control ad has been running for 47 days on a cold-traffic angle about "paying too much for freight shipping," that's a signal. What it signals—whether to attack that positioning, mirror the concern, or differentiate away from it entirely—requires understanding your ICP, your margins, and your current creative portfolio. Claude Code can surface that 47-day runtime. It cannot tell you what to do with it.
The same applies to LP audits. If a competitor rewrites their hero headline from a feature claim to an outcome claim, Claude Code will detect and flag the change. Whether that shift signals they found a better angle or they're flailing and testing desperately—that's your read, not the model's.
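The week-over-week comparison behind that detection can be as simple as a field diff over two audit records. A sketch, assuming the LP audit schema described earlier (headline, subheadline, primary CTA) and invented example values:

```python
def diff_audits(prev, curr, fields=("headline", "subheadline", "primary_cta")):
    """Flag week-over-week changes between two LP audit records."""
    changes = {}
    for field in fields:
        if prev.get(field) != curr.get(field):
            changes[field] = {"was": prev.get(field), "now": curr.get(field)}
    return changes

# Hypothetical records from two consecutive weekly audits.
last_week = {"headline": "Enterprise freight management software", "primary_cta": "Request demo"}
this_week = {"headline": "Cut your shipping costs by 30%", "primary_cta": "Request demo"}

changes = diff_audits(last_week, this_week)
```

Here the diff would surface the headline shift from a feature claim to an outcome claim while leaving the unchanged CTA out of the report.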
For reverse-engineering competitor ad funnels, the mechanical layer (what's running, what's changed, how long has it been live) is exactly what you should automate. The interpretive layer (why it's working, what it implies for your strategy) is where your team adds value.
The creative intelligence in this workflow isn't the script. It's what you do after the report lands.
How to structure a competitor research session from scratch
If you're starting from zero, here's the complete session sequence:
1. Create the project folder with `inputs/`, `outputs/`, `scripts/` subdirectories
2. Populate `inputs/competitors.txt` with 3-5 competitor URLs (homepage + pricing + hero LP)
3. Save `research-init.md` in `scripts/` with your standing extraction rules
4. Open Claude Code in the project directory: `claude`
5. Load the init file: ask Claude Code to read `scripts/research-init.md`
6. Run the LP scrape: ask Claude Code to fetch and extract each competitor LP
7. Pull ad records via the adlibrary API for each competitor brand name
8. Generate the teardown report: use the weekly report prompt structure above
9. Review and annotate: add your interpretive notes to the generated Markdown
First run takes about 25 minutes. Subsequent weekly runs take under 10 minutes because the scripts are already written and the folder structure is in place.
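The one-time setup in steps 1-3 can itself be scripted so the first run is reproducible. A sketch; the competitor URLs are placeholders you'd replace with real ones:

```shell
#!/bin/sh
# One-time bootstrap for the research folder (paths from the walkthrough above).
base="$HOME/research/competitor-research"
mkdir -p "$base/inputs" "$base/outputs" "$base/scripts"

# Placeholder competitor URLs; replace with real targets.
cat > "$base/inputs/competitors.txt" <<'EOF'
https://competitor-a.com
https://competitor-a.com/pricing
https://competitor-b.com
EOF
```

After this, only the init prompt in `scripts/` remains to be written by hand.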
The same project-folder approach carries over to adjacent workflows, such as Shopify store intelligence, which relies on similar scripting patterns.
The ad spend estimator can help you gauge how much budget competitors might be running behind the patterns you identify.
Using adlibrary as the data layer for competitor intelligence
The limiting factor in any Claude Code competitor research setup is data quality. Shell-scraped landing pages give you copy signals. Publicly scraped ad libraries give you a partial picture.
The adlibrary API adds structured, deduplicated ad records with metadata—platform, format, first/last seen dates, creative content—that turns raw ad watching into trackable intelligence. Claude Code can query that API, filter for recency, cluster by angle, and write the output in whatever format your workflow needs.
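Filtering for recency is trivial once records carry first-seen dates. A sketch in plain Python, with the reference date pinned so the example is deterministic (in practice you would use today's date):

```python
from datetime import date, timedelta

def recent(ads, days=30, today=date(2024, 3, 1)):
    """Keep ads first seen within the lookback window."""
    cutoff = today - timedelta(days=days)
    return [a for a in ads if date.fromisoformat(a["firstSeen"]) >= cutoff]

# Hypothetical records: one fresh, one stale.
ads = [{"firstSeen": "2024-02-20"}, {"firstSeen": "2023-12-01"}]
fresh = recent(ads)
```

Claude Code can apply the same filter ad hoc inside a session, but keeping it as a script means every weekly run uses the identical window.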
That combination—a capable agent with shell access plus a structured ad intelligence data source—is what makes the difference between a research script that pulls some HTML and one that generates a brief your creative team actually acts on.
If you're building this out, Claude Code for ad copywriting workflows covers the downstream step: taking those competitor signals and using them to brief creative.
Frequently Asked Questions
Can Claude Code actually scrape competitor websites on its own?
Yes. Claude Code has shell access, which means it can run curl and wget commands to fetch page HTML, pipe output through parsing scripts, and write structured results to files. It's not doing anything a human couldn't do manually in a terminal—it's just doing it faster and with consistent output formatting. Be aware of each site's terms of service for automated access.
How does Claude Code for competitor research differ from using ChatGPT?
ChatGPT operates in a chat interface without shell access or file system control. You'd need to paste content in manually and the session doesn't persist to disk. Claude Code runs as an agent in your local terminal, reads and writes files, executes shell commands, and can chain multi-step workflows within a single session. For repeatable research automation, the difference is significant.
What data does the adlibrary API return for competitor ads?
The adlibrary API returns ad records including headline, body copy, media type (video/image/carousel), platform, first seen date, last seen date, and brand/advertiser data. This gives you enough structure to track creative patterns over time, identify long-running controls, and detect angle shifts—all of which are harder to track from manual ad library browsing.
How often should you run automated competitor teardowns?
Weekly is the useful cadence for most teams. Daily is noise unless you're in a fast-moving auction where competitor creative rotates constantly. Monthly misses too much. Weekly reports give you enough signal to see patterns emerge and enough lag to distinguish a test from a control.
Does this work for non-ecommerce competitors?
Yes. The pattern works for any competitor with a landing page and active paid advertising. The LP audit component is format-agnostic—it extracts headline, CTA, social proof, and pricing mentions regardless of whether the product is physical, SaaS, or a service. The ad record pull works for any brand running paid campaigns on platforms adlibrary tracks.
The bottleneck in competitor research has never been ideas or analysis frameworks. It's always been the extraction step—the mechanical work of gathering what's actually out there. That's the part Claude Code eliminates. Once the scripts exist, the research runs itself.
Build the folder. Write the init prompt. Let the agent do the collection. Spend your time on the part that actually requires a brain.
Related Articles
Claude Code, Agentic Workflows, and the Future of Vibe Marketing
Analyze the impact of Claude Code on the agentic market and learn how to use it with the AdLibrary API to master vibe marketing workflows.
Claude for Ad Copywriting: Prompts, Workflows, and Real Examples
Five prompt patterns for Claude ad copywriting that produce testable output — hook generator, pain amplification, UGC scripts, and platform-native rewrites. Includes a worked example.