
Claude Code Prompts for Marketing Workflows: A Copy-Paste Library

25+ Claude Code prompts for marketing — competitor research, ad analysis, SEO audits, report generation, and creative batch ops. Copy-paste with setup notes and expected outputs.

[Image: Claude Code prompt library organized in terminal folders — research, SEO, ad-ops, and reporting categories]

Most LLM prompts assume you're typing into a chat box. Claude Code prompts are different — they assume file access, shell execution, and the ability to install dependencies. That changes what's possible by an order of magnitude.

A prompt in Claude Code isn't just a text instruction. It can read your ad account export, run a Python script, hit an API, parse hundreds of URLs, and write structured output to a file — all in sequence, with context persisting across steps. Marketers who've been using the Claude.ai web interface are often surprised when they first see what the Code environment can do.

This is a curated library of Claude Code prompts organized by marketing workflow. Each prompt includes setup context and a note on what to expect from the output.

TL;DR: Claude Code prompts for marketing go beyond chat — they read files, run scripts, and call APIs. This library covers 25+ copy-paste prompts across competitor research, ad data analysis, SEO audits, report generation, creative batch operations, and data pulls, with setup notes for each.

Why Claude Code prompts work differently from chat prompts

When you run a prompt in Claude Code, you're working inside a terminal agent that can interact with your file system and execute code. The difference from Claude.ai or any other chat interface is structural.

A chat prompt has to be self-contained. You paste data in, get text back, copy it out. Claude Code prompts can reference files by path, pipe data between steps, install libraries mid-session, and persist context across an entire workflow. That means you don't have to manually move data between tools — Claude moves it.

This also changes how you write prompts. Good Claude Code prompts specify:

  • Where the input data lives (file path, API endpoint, folder structure)
  • What transformation needs to happen
  • Where output should go and in what format
  • Any constraints or dependencies that need to be installed

For a deeper orientation to the environment itself, see Claude Code, Agentic Workflows, and the Future of Vibe Marketing.

Competitor research prompts

These prompts assume you've pulled competitor ad data — either via AdLibrary's API access, a manual CSV export, or scraped URLs. Claude Code reads the data and does the analysis.

Prompt 1 — Extract creative patterns from competitor ad copy

Read all .txt files in ./competitor-ads/ (each file is one competitor's ad copy, filename = brand name).
For each competitor:
1. Count unique hooks (first sentence of each ad)
2. Identify the 3 most-used emotional triggers (fear, aspiration, social proof, urgency, curiosity)
3. Note any recurring offer structures (% off, free trial, money-back)
4. Flag any seasonal or event-based creative angles

Output: markdown table with one row per competitor. Save to ./analysis/competitor-creative-patterns.md

Expected output: A structured table with pattern frequencies across brands. Useful for spotting whitespace in positioning — angles competitors aren't using.
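Under the hood, the hook-counting step in Prompt 1 reduces to splitting each ad on its first sentence boundary and tallying duplicates. A minimal sketch of that logic, using hypothetical sample copy in place of the ./competitor-ads/ files:

```python
import re
from collections import Counter

def first_sentence(ad_text: str) -> str:
    """Return the first sentence of an ad — a rough proxy for its hook."""
    return re.split(r"(?<=[.!?])\s+", ad_text.strip(), maxsplit=1)[0]

# Hypothetical sample standing in for one competitor's .txt file,
# with one ad per blank-line-separated block.
raw = """Tired of bland coffee? Try our single-origin roast.

Tired of bland coffee? Our beans ship within 24 hours.

Join 10,000 happy customers. Free shipping on every order."""

ads = [block for block in raw.split("\n\n") if block.strip()]
hooks = Counter(first_sentence(ad) for ad in ads)
unique_hooks = len(hooks)  # two distinct hooks in this sample
```

In practice Claude Code writes and runs something like this itself; the value of the prompt is specifying what counts as a "hook" so the split rule is unambiguous.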

Prompt 2 — Competitor landing page audit batch

I have a list of competitor landing page URLs in ./urls.txt (one per line).
For each URL:
1. curl the page, extract the H1, subheadline, and first CTA button text
2. Note the primary value prop from meta description
3. Flag any trust signals (testimonials, logos, badges) in the first viewport

Compile into ./analysis/lp-audit.csv with columns: url, h1, subheadline, cta, value_prop, trust_signals
Rate-limit to 1 request per 2 seconds. Skip 404s.

Expected output: A CSV ready for import into a spreadsheet. No manual copy-paste from 30 browser tabs.
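The extraction step in Prompt 2 is plain text parsing once each page is fetched. A dependency-free sketch of the H1 and CTA pulls, run here against an inline sample page rather than a live curl (the page content and the `btn` class are hypothetical):

```python
import re

def extract_h1(html: str) -> str:
    """Pull the first <h1> text, stripping any nested tags."""
    m = re.search(r"<h1[^>]*>(.*?)</h1>", html, re.IGNORECASE | re.DOTALL)
    return re.sub(r"<[^>]+>", "", m.group(1)).strip() if m else ""

def extract_cta(html: str) -> str:
    """Pull the text of the first button-like link (class containing 'btn')."""
    m = re.search(r'<a[^>]*class="[^"]*btn[^"]*"[^>]*>(.*?)</a>',
                  html, re.IGNORECASE | re.DOTALL)
    return re.sub(r"<[^>]+>", "", m.group(1)).strip() if m else ""

sample = """<html><head><title>Acme</title></head>
<body><h1>Ship faster with <em>Acme</em></h1>
<a class="btn">Start free trial</a></body></html>"""

h1 = extract_h1(sample)
cta = extract_cta(sample)
```

In the full workflow, each fetch would be followed by a two-second pause to honor the rate limit specified in the prompt.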

For a full research workflow context, the competitor ad research strategy guide covers how to structure the inputs these prompts consume.

Ad data analysis prompts

These prompts work on exported ad account data — Meta Ads CSV exports, Google Ads reports, or any structured performance file.

Prompt 3 — Surface top and bottom 10% creatives

Read ./meta-ads-export.csv (columns: ad_name, spend, impressions, clicks, conversions, revenue).
Calculate ROAS = revenue / spend and CTR = clicks / impressions for each ad.
Identify:
- Top 10% by ROAS (minimum $500 spend)
- Bottom 10% by ROAS (minimum $500 spend)
- Top 10% by CTR
- Creatives with high CTR but low ROAS (engagement traps)

Output: ./analysis/creative-performance-tiers.md with a summary and four tables.
Flag any ad name patterns or keywords that correlate with top vs bottom performance.

Expected output: Performance tiers with pattern signals. The "engagement trap" flag alone makes this worth running every week.
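The tiering logic Claude Code typically generates for Prompt 3 amounts to a couple of derived columns and a sort. A sketch on hypothetical rows standing in for the CSV (the $500 spend floor comes from the prompt; the 1.5 breakeven ROAS and 2% CTR thresholds for the engagement-trap flag are assumptions you'd tune to your margins):

```python
rows = [
    {"ad_name": "ugc_hook_a", "spend": 900.0,  "clicks": 450, "impressions": 30000, "revenue": 3600.0},
    {"ad_name": "static_b",   "spend": 700.0,  "clicks": 120, "impressions": 25000, "revenue": 700.0},
    {"ad_name": "video_c",    "spend": 1200.0, "clicks": 980, "impressions": 40000, "revenue": 1500.0},
]

MIN_SPEND = 500.0       # from the prompt
BREAKEVEN_ROAS = 1.5    # assumption
TRAP_CTR = 0.02         # assumption

for r in rows:
    r["roas"] = r["revenue"] / r["spend"]
    r["ctr"] = r["clicks"] / r["impressions"]

qualified = sorted((r for r in rows if r["spend"] >= MIN_SPEND),
                   key=lambda r: r["roas"], reverse=True)
top = qualified[0]["ad_name"]      # best ROAS above the spend floor
bottom = qualified[-1]["ad_name"]  # worst ROAS above the spend floor

# Engagement traps: clicks without conversions to pay for them.
traps = [r["ad_name"] for r in rows
         if r["ctr"] > TRAP_CTR and r["roas"] < BREAKEVEN_ROAS]
```

On real exports the same logic runs over hundreds of rows; the prompt's job is pinning down the thresholds so the tiers are reproducible week to week.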

Prompt 4 — Frequency and fatigue scan

Read ./meta-ads-export.csv. Add a column: fatigue_signal = 1 if CTR dropped >20% week-over-week AND frequency > 3.
Group by ad set. For each ad set with ≥1 fatigue_signal:
- List the flagged creatives
- Calculate how much spend landed after the fatigue signal triggered
- Suggest whether the ad set needs new creative or audience expansion based on the pattern

Save to ./analysis/fatigue-report.md

Expected output: A prioritized list of ad sets bleeding spend on tired creative. Actionable in one refresh cycle.
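The fatigue rule in Prompt 4 is a two-condition boolean. A sketch of the column logic, with both thresholds taken straight from the prompt:

```python
def fatigue_signal(ctr_this_week: float, ctr_last_week: float,
                   frequency: float) -> int:
    """1 if CTR fell more than 20% week-over-week AND frequency exceeds 3."""
    dropped = (ctr_last_week > 0
               and (ctr_last_week - ctr_this_week) / ctr_last_week > 0.20)
    return 1 if (dropped and frequency > 3) else 0
```

Claude Code would apply this per creative, then group the flagged rows by ad set and sum the spend that landed after the first flagged week.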

Prompt 5 — Monthly performance summary with narrative

Read ./reports/monthly-data.csv (weekly rows with: week, channel, spend, revenue, new_customers).
Calculate:
- Total and weekly spend/revenue/ROAS by channel
- Week-over-week trend for each metric
- Best and worst performing week with context

Then write a 200-word executive summary paragraph in plain English — no bullet points, no tables.
Voice: direct, data-first, flag what needs attention.
Save the full analysis to ./reports/monthly-summary.md

Expected output: A ready-to-paste exec summary with the supporting data tables underneath. The human in the loop just adds the strategic context.

The structure behind these prompts is standard prompt engineering; the discipline applies even in code-execution contexts, and precise constraint syntax matters just as much here.

[Image: prompt template card with structured sections for context, task, constraints, and output, displayed in a terminal]

SEO audit prompts

Claude Code can run SEO analysis that would normally require a paid tool subscription — by scripting against live pages or structured crawl exports.

Prompt 6 — On-page SEO audit for a content batch

I have a list of blog post URLs in ./posts.txt (one per line).
For each URL:
1. Fetch the page, extract: title tag, meta description, H1, first 100 words, word count estimate
2. Check if the title and H1 contain the target keyword (infer from URL slug)
3. Flag: title >60 chars, meta desc >155 chars, missing meta desc, H1 count ≠ 1
4. Check internal link count (links pointing to same domain) and external link count

Output ./seo-audit/on-page-report.csv. Rate-limit 1 req/3 sec.

Expected output: A full on-page audit without Screaming Frog. Good for auditing 20-100 pages at once.
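The flags in Prompt 6 translate directly into length and count checks. A sketch of the per-page rule set (the flag labels are hypothetical names for the CSV column values):

```python
def onpage_flags(title: str, meta_desc: str, h1_count: int) -> list[str]:
    """Apply the audit thresholds from the prompt to one page's fields."""
    flags = []
    if len(title) > 60:
        flags.append("title_too_long")
    if not meta_desc.strip():
        flags.append("missing_meta_desc")
    elif len(meta_desc) > 155:
        flags.append("meta_desc_too_long")
    if h1_count != 1:
        flags.append("h1_count_off")
    return flags
```

The fetch-and-extract half of the prompt feeds this function one row per URL; the flags column is what makes the resulting CSV scannable at a glance.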

Prompt 7 — Internal linking gap analysis

Read ./sitemap.xml. Extract all post/article URLs.
For each URL, fetch the page and extract all internal links (same domain).
Build a graph: node = URL, edge = internal link.
Find:
- Pages with zero inbound internal links (orphan pages)
- Pages with >10 inbound links (authority hubs)
- Pages published in the last 90 days with <3 inbound links

Output: ./seo-audit/internal-links.md with orphan list, hub list, and new-page gap list.

Expected output: An internal linking audit that would take hours to do manually. The new-page gap list is the highest-signal output — those pages need links added now.
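The graph step in Prompt 7 only needs an inbound-link count per URL; no graph library is required. A sketch on a hypothetical four-page site:

```python
# Hypothetical adjacency map: page -> internal links found on it.
links = {
    "/home": ["/pricing", "/blog/a"],
    "/pricing": ["/home"],
    "/blog/a": ["/pricing"],
    "/blog/new-post": [],  # recently published, not yet linked from anywhere
}

# Count inbound links for every known page.
inbound = {page: 0 for page in links}
for outs in links.values():
    for target in outs:
        inbound[target] = inbound.get(target, 0) + 1

orphans = sorted(p for p, n in inbound.items() if n == 0)  # zero inbound
hubs = sorted(p for p, n in inbound.items() if n > 10)     # authority hubs
```

The 90-day new-page filter from the prompt is the same count restricted to pages with a recent publish date pulled from the sitemap's lastmod field.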

The SEO glossary entry covers the strategic rationale for internal linking if you need it for reporting context.

Report generation prompts

These prompts turn raw data into formatted reports — for clients, for leadership, or for your own records.

Prompt 8 — Client-ready weekly ad report

Read ./data/this-week.csv and ./data/last-week.csv (same structure: channel, spend, impressions, clicks, conversions, revenue).
Calculate week-over-week changes for all metrics.
Write a structured markdown report with:
- Executive summary (3 sentences, plain English, lead with the most important number)
- Performance table: this week vs last week vs % change
- Three observations (what worked, what didn't, what to watch)
- Recommended actions for next week (2-3 bullet points maximum)

Tone: direct, client-facing, no jargon. Save to ./reports/weekly-report-[current date].md

Prompt 9 — Quarterly creative performance retrospective

Read ./data/q1-ads.csv (columns: week, ad_name, format, spend, roas, ctr, hook_type [tagged manually]).
For each hook_type:
- Aggregate total spend, average ROAS, average CTR
- Identify the single best-performing ad

Write a narrative retrospective: which hook types drove performance, which formats underperformed, and what the data suggests for Q2 creative strategy.
200-300 words. Save to ./reports/q1-retrospective.md

Expected output: A narrative tied to actual numbers. This feeds directly into your creative brief process without an extra analysis layer.
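The aggregation in Prompt 9 is a group-by on hook_type. A stdlib sketch on hypothetical rows (in practice Claude Code often reaches for pandas here, but the logic is identical):

```python
from collections import defaultdict

# Hypothetical rows standing in for q1-ads.csv.
rows = [
    {"hook_type": "question", "spend": 400.0, "roas": 2.0},
    {"hook_type": "question", "spend": 600.0, "roas": 3.0},
    {"hook_type": "stat",     "spend": 500.0, "roas": 1.2},
]

agg = defaultdict(lambda: {"spend": 0.0, "roas_sum": 0.0, "n": 0})
for r in rows:
    a = agg[r["hook_type"]]
    a["spend"] += r["spend"]
    a["roas_sum"] += r["roas"]
    a["n"] += 1

summary = {k: {"total_spend": v["spend"], "avg_roas": v["roas_sum"] / v["n"]}
           for k, v in agg.items()}
```

The narrative layer sits on top of this table: the numbers come from the aggregation, the Q2 recommendation comes from the model's read of them.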

Creative batch operations

When you need to generate or transform copy at volume — not one prompt, but a systematic operation across a set of inputs.

Prompt 10 — Batch ad copy generation from product feed

Read ./products.csv (columns: product_name, price, key_benefit, target_audience, proof_point).
For each product, generate 3 Facebook ad headlines (max 40 chars) and 3 primary text variants (max 125 chars).
Angle 1: benefit-first
Angle 2: social proof / credibility
Angle 3: urgency / scarcity

Output: ./copy/ad-copy-batch.csv with columns: product_name, angle, headline, primary_text.
Do not repeat the same opening word across variants for the same product.

Expected output: A populated copy sheet with 3 angles per product. For a 50-product catalog, this replaces a full day of copywriting.

Prompt 11 — Existing ad copy refresh for creative fatigue

Read ./running-ads.csv (columns: ad_id, headline, primary_text, running_days, ctr_trend).
For ads where running_days > 21 OR ctr_trend = "declining":
- Rewrite the headline with a different opening word and emotional angle
- Preserve the core offer but shift the framing
- Keep character counts within 10% of original

Output: ./copy/refreshed-ads.csv with original and refreshed versions side by side.

For more on the craft principles behind this kind of ad copy, the ad copy glossary entry covers the framework.

To see how these prompts compare to non-Code Claude prompts for copywriting, the Claude for ad copywriting prompts guide is the right companion read. And the broader 50 Claude prompts for marketers library covers the non-Code equivalents for everything here.

Data pull and API integration prompts

These prompts require API keys or access credentials — set them as environment variables before running.

Prompt 12 — Pull competitor ad data via AdLibrary API

Using the AdLibrary API (base URL in $ADLIBRARY_BASE, key in $ADLIBRARY_KEY):
1. For each brand in ./brands.txt, fetch the last 90 days of ads
2. For each ad: extract creative format, headline, CTA, first run date, last active date
3. Identify which ads ran for >14 days (signal: survived the test phase)
4. Save to ./data/competitor-ads-[brand].json and a summary CSV at ./data/competitor-summary.csv

Install requests if not available. Rate-limit per API docs.

Expected output: Structured competitor creative data without manual research. The 14-day filter is a signal — ads that survive that long are working.
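The >14-day filter in step 3 is simple date arithmetic once the ads are fetched. A sketch of that check; the commented request shows the rough shape of the fetch, but the endpoint path and parameter names are assumptions, so verify them against the API docs before running:

```python
from datetime import date

def survived_test_phase(first_run: str, last_active: str,
                        min_days: int = 14) -> bool:
    """True if an ad ran longer than min_days (ISO date strings)."""
    delta = date.fromisoformat(last_active) - date.fromisoformat(first_run)
    return delta.days > min_days

# The fetch itself would look roughly like this (hypothetical endpoint shape):
#   resp = requests.get(
#       f"{os.environ['ADLIBRARY_BASE']}/ads",
#       params={"brand": brand, "days": 90},
#       headers={"Authorization": f"Bearer {os.environ['ADLIBRARY_KEY']}"},
#   )
```

Keeping credentials in environment variables, as the section notes, means the prompt text itself never contains a key.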

Prompt 13 — Automated search console data pull

Using the Google Search Console API (credentials in ./gsc-credentials.json, property URL in $GSC_PROPERTY):
Pull the last 90 days of query data for pages matching /blog/*.
For each page:
- Top 10 queries by impressions
- Average position, CTR
- Flag queries ranked 8-20 with >100 impressions (quick-win candidates)

Output ./seo/gsc-quick-wins.csv sorted by opportunity score (impressions × (1 - CTR)).

Expected output: A prioritized list of ranking-adjacent keywords where a content update could move the needle fast.
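The opportunity score in Prompt 13 is a one-line formula; sorting by it puts high-impression, low-CTR queries at the top of the file. A sketch on hypothetical query rows:

```python
def opportunity_score(impressions: int, ctr: float) -> float:
    """impressions x (1 - CTR): the prompt's quick-win ranking metric."""
    return impressions * (1.0 - ctr)

# Hypothetical rows already filtered to position 8-20 with >100 impressions.
queries = [
    {"query": "meta ads fatigue", "impressions": 1200, "ctr": 0.01},
    {"query": "roas benchmark",   "impressions": 300,  "ctr": 0.05},
]

ranked = sorted(queries,
                key=lambda q: opportunity_score(q["impressions"], q["ctr"]),
                reverse=True)
top_query = ranked[0]["query"]
```

The score rewards queries where lots of people see you but few click, which is exactly where a title or content update pays off fastest.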

What these prompts don't replace

Claude Code prompts are high-leverage for structured analysis, batch operations, and report generation. They're not a replacement for strategic judgment.

The prompts above produce outputs — patterns, numbers, drafts. A human still needs to decide which patterns matter, which numbers warrant action, and which copy variants match the brand voice. Claude Code compresses the distance between raw data and a formatted deliverable; it doesn't close it.

There are also hard ceilings. For analysis that requires context spanning months of brand history, visual design judgment, or real-time channel awareness (a sudden CPM spike, a platform policy update), you need human oversight in the loop. These prompts work best when scoped tightly and run as part of a repeatable workflow.

For teams running ad intelligence at scale, AdLibrary's AI ad enrichment feature can feed structured competitive data directly into these workflows — the data layer that makes prompts like #12 above available without building your own scraper.

The best Claude Code marketing setups treat these prompts as components, not one-shot solutions. Run prompt 3 on Monday. Pipe its output into prompt 8 on Friday. The compounding is where the real efficiency lives.

For a broader orientation to the Claude ecosystem beyond Code, the how to use Claude for marketing playbook and Claude vs ChatGPT for marketers cover the strategic context.

Refer to Anthropic's Claude Code documentation for environment setup, and the official Claude Code page for current capability updates.

Frequently Asked Questions

Do I need coding experience to use Claude Code prompts for marketing?

No, but you need to be comfortable with a terminal. You run Claude Code from the command line, and prompts like the ones above can reference files by path and execute code without you writing any code yourself. The learning curve is installing the tool and understanding how to structure file paths — the prompts do the rest. Start with the simpler analysis prompts (Prompts 3 and 6) before moving to API integration.

Can Claude Code read my Meta Ads Manager exports?

Yes. Meta exports ad data as CSV, and Claude Code reads CSV files natively. Export from Ads Manager, place the file in a folder, and reference it by path in the prompt. Prompts 3, 4, and 5 above are built specifically for this. The main variable is which columns your export includes — you may need to adjust column names in the prompt to match your export headers.

How is Claude Code different from using Claude.ai for marketing prompts?

Claude.ai is a chat interface — you paste data in, get text back. Claude Code runs in your terminal with direct access to your file system and can execute code. That means it can process entire folders of files, install Python libraries, hit APIs, and write structured output without you manually moving data between steps. For anything involving more than a few data points, Claude Code is significantly faster.

Are these Claude Code prompts reusable across different clients or accounts?

Yes — that's the main architectural advantage. Structure your prompts to read from parameterized paths (e.g., ./client-a/data/ vs ./client-b/data/) and you can run the same prompt logic across multiple accounts. Most agencies using Claude Code for reporting build a prompt library organized by workflow, then adapt the file paths per client. Prompts 8 and 9 are the most portable starting points.

What's the best way to structure a Claude Code prompt for marketing analysis?

Follow a four-part structure: (1) data source — specify the exact file path and format; (2) transformation — what calculations, extractions, or rewrites are needed; (3) constraints — rate limits, character counts, filters; (4) output — file name, format, and structure. Prompts that skip the output specification often produce useful analysis but in a format you can't easily use. Be explicit about where results go and what shape they take.

Good automation in marketing isn't about replacing judgment — it's about not wasting good judgment on tasks that don't need it. These prompts handle the retrieval and formatting. You handle the decisions.
