Claude for Ad Copywriting: Prompts, Workflows, and Real Examples
Most AI-generated ad copy fails not because the model is weak, but because the input is. This guide covers the prompt patterns, brief structures, and workflow techniques that get Claude producing direct-response copy that actually competes in-feed.

Most ad copy written by AI is bad for a specific, fixable reason: the model never saw the customer. It saw a product. That gap is why Claude for ad copywriting succeeds or fails based entirely on your brief, not on the model's capabilities.
A team testing Claude-generated Facebook ads without a structured brief will get the same flat, benefit-stacking copy everyone else gets. A team that feeds Claude ICP pain, brand voice rules, and real proof points will get first-draft copy that's ready to test. The difference is entirely in the input.
TL;DR: Claude writes strong ad copy when given a specific audience, a named pain, a proof point, and explicit tone constraints. Without those inputs, it defaults to generic benefit stacking. This guide covers five prompt patterns, a full worked example, and the platform-native rewrite workflow that turns good Facebook copy into Google Search and LinkedIn variants.
Why Claude for ad copywriting fails by default
The failure mode is structural, not random. When you prompt Claude with "write me a Facebook ad for my SaaS product," it produces AIDA-by-rote: grab Attention, build Interest, create Desire, call to Action. Technically correct. Competitively useless.
Three specific failure patterns show up consistently:
Generic framing. Without a specific ICP and a specific pain, Claude writes for everyone and reaches no one. "Save time and money" is not a hook. It is a filler phrase that your cold traffic scrolls past in 0.4 seconds.
Missing tension. The best ad copy creates a gap — a before and an after, a problem and a relief. LLMs default to positive framing and skip the friction entirely. No friction means no click.
No brand voice. Claude defaults to polished, professional, centrist prose. If your brand is sharp, irreverent, or technically dense, you will not get there without explicit instruction.
The fix is not a better model. It is a better brief.
The input quality principle
Good ad copy is a function of: specific brief + brand voice rules + social proof or data points.
Remove any one and the output degrades predictably. Add all three and the model has enough raw material to make real decisions.
Think of Claude less as a copywriter and more as a junior writer who is fast, tireless, and direction-dependent. A junior writer without a brief writes fluff. The same writer with a tight brief, a style guide, and three customer testimonials produces solid first drafts.
For a deeper look at how creative testing connects to this input principle, see how to structure creative tests around AI tools.
Claude prompts for Facebook ads that actually work
Five prompt patterns cover roughly 90% of paid social use cases. Each is copy-pasteable and annotated.
1. The hook generator
Use when you need 10-15 opening lines to test. The goal is variation across emotional angles, not iteration on one framing.
You are an expert direct-response copywriter specializing in paid social.
PRODUCT: [product name and one-sentence description]
TARGET AUDIENCE: [specific demographic + psychographic descriptor]
CORE PAIN: [the specific problem this product solves]
PROOF POINT: [one stat, testimonial, or result]
Generate 12 opening hooks for a Facebook ad (first 2 lines visible before "See more").
Vary across these emotional angles: fear of missing out, social proof, contrarian claim,
identity statement, surprising stat, negative outcome avoided.
Format: numbered list, one hook per line, no explanations.
Constraint: no hook may use the words "transform," "oversell," or "discover."
Forcing variation across angles prevents the model from clustering around one framing. The banned words list stops the three most common LLM filler words. Cross-reference your hooks against Meta's advertising policies before running.
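If you run this pattern across many products, the template is worth assembling programmatically. A minimal Python sketch, assuming a simple brief dict (the field names are illustrative, not a standard), plus a post-generation filter as a backstop for the banned-words constraint:

```python
# Fill the hook-generator template from a structured brief.
# Field names (product, audience, pain, proof) are illustrative, not a standard.

BANNED = {"transform", "oversell", "discover"}

def build_hook_prompt(brief: dict, n_hooks: int = 12) -> str:
    """Assemble the hook-generator prompt from brief fields."""
    return "\n".join([
        "You are an expert direct-response copywriter specializing in paid social.",
        f"PRODUCT: {brief['product']}",
        f"TARGET AUDIENCE: {brief['audience']}",
        f"CORE PAIN: {brief['pain']}",
        f"PROOF POINT: {brief['proof']}",
        f"Generate {n_hooks} opening hooks for a Facebook ad.",
        "Format: numbered list, one hook per line, no explanations.",
        f"Constraint: no hook may use the words {', '.join(sorted(BANNED))}.",
    ])

def filter_banned(hooks: list[str]) -> list[str]:
    """Drop any generated hook that slipped a banned word past the prompt."""
    return [h for h in hooks if not any(b in h.lower() for b in BANNED)]
```

The filter matters because constraint-following is probabilistic: a banned word will occasionally slip through, and catching it in code is cheaper than catching it in review.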
2. Pain-point amplification
Useful for mid-funnel ads targeting audiences who have category awareness but haven't converted. Leads with the problem, not the product.
You are writing a Facebook ad for [product] targeting [audience].
IDENTIFIED PAIN: [describe the specific frustration in the customer's own language]
WHAT THEY HAVE TRIED: [list 2-3 alternatives they have already attempted]
WHY THOSE FAILED: [specific reason each falls short]
Write a 150-word ad that:
1. Opens by naming the pain in language the customer would use themselves
2. Acknowledges the failed alternatives without dismissing them
3. Positions the product as a different mechanism, not just a better version
4. Ends with a low-commitment CTA ("See how it works" not "Buy now")
Tone: direct, not aggressive. Empathetic, not weak.
The "different mechanism" instruction stops the copy from reading as a comparison ad. It positions the product as solving a different problem than what alternatives were solving — which is how you bypass ad fatigue from audiences who have seen every competitor's angle.
3. Facebook ad from landing page
Fastest workflow for teams who have good landing page copy and need paid social extensions.
I will paste a landing page below. Extract the core value proposition
and write 3 Facebook ad variants from it.
RULES:
- Do not invent claims not present in the source material
- Each variant must have: primary text (max 125 words), headline (max 27 chars), description (max 27 chars)
- Variant 1: leads with the outcome
- Variant 2: leads with the pain
- Variant 3: social proof lead (use any testimonials or stats from the page)
OUTPUT FORMAT:
[Variant 1]
Primary: ...
Headline: ...
Description: ...
--- LANDING PAGE BELOW ---
[paste full landing page text here]
The character limits match Facebook's actual ad creative specs. Building them into the prompt means you are editing for messaging, not cutting for character count after the fact.
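Those limits can double as a post-generation gate. A small sketch, assuming the limits stated in the prompt above; verify them against Meta's current specs before relying on this:

```python
# Check a generated variant against the limits baked into the prompt.
# Limits mirror the prompt above; verify against Meta's current specs before use.

LIMITS = {"primary_words": 125, "headline_chars": 27, "description_chars": 27}

def validate_variant(primary: str, headline: str, description: str) -> list[str]:
    """Return a list of limit violations; an empty list means the variant fits."""
    errors = []
    words = len(primary.split())
    if words > LIMITS["primary_words"]:
        errors.append(f"primary: {words} words (max {LIMITS['primary_words']})")
    if len(headline) > LIMITS["headline_chars"]:
        errors.append(f"headline: {len(headline)} chars (max {LIMITS['headline_chars']})")
    if len(description) > LIMITS["description_chars"]:
        errors.append(f"description: {len(description)} chars (max {LIMITS['description_chars']})")
    return errors
```

Running every variant through a check like this turns "does it fit the placement" into a pass/fail signal instead of a manual counting exercise.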
4. UGC-style script
For video ads mimicking user-generated content. UGC scripts fail when they sound scripted — this prompt explicitly constrains formal language.
Write a 30-second UGC-style video ad script for [product].
SPEAKER PERSONA: [age, occupation, specific situation they were in]
TRIGGER MOMENT: [what specific event made them try the product]
BEFORE STATE: [concrete description of life before the product]
AFTER STATE: [concrete description of life after — specific, not hyperbolic]
FORMAT: action/spoken word format
- [Action]: describes what the viewer sees on screen
- [VO]: what the speaker says
TONE RULES:
- No scripted phrases ("I was struggling with...", "That's when I found...")
- Use incomplete sentences and natural restarts
- Maximum one superlative in the entire script
- End on product shot, not on the speaker
5. A/B headline matrix
When running structured split tests, you need headlines that vary on one dimension at a time.
Generate a headline matrix for [product] with the following structure:
Rows = 4 value angles: [list 4 distinct value props]
Columns = 3 tones: direct/benefit-focused | curiosity gap | social proof
Output as a 4x3 table. Each cell = one headline (max 60 characters).
Headlines in the same row address the same underlying value.
Headlines in the same column use the same rhetorical device.
This is for Google Search and Meta. Do not use trademark symbols.
The result is 12 headlines with controlled variation, exactly what you need for a proper creative testing cycle. See how to structure creative tests around the AIDA framework for sequencing.
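When the model returns the 12 headlines as a flat numbered list, mapping them back to their row and column labels keeps the test plan readable. A sketch, assuming row-major order and illustrative angle names:

```python
# Map 12 headlines (row-major order) back into the 4x3 angle-by-tone matrix.
ANGLES = ["outcome", "pain", "speed", "trust"]   # illustrative value angles
TONES = ["direct", "curiosity_gap", "social_proof"]

def to_matrix(headlines: list[str]) -> dict[tuple[str, str], str]:
    """Label each headline with its (angle, tone) cell."""
    assert len(headlines) == len(ANGLES) * len(TONES), "expected 12 headlines"
    return {
        (ANGLES[i // len(TONES)], TONES[i % len(TONES)]): h
        for i, h in enumerate(headlines)
    }
```

The labelled cells make it trivial to report results per angle or per tone once the split test has run.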

Worked example: B2B analytics tool
Here is a concrete pass from brief to copy, with notes on what the output does and does not do.
Product: A B2B analytics platform that surfaces churn signals in SaaS customer data.
Brief inputs:
- Audience: VP of Customer Success, Series A-C SaaS, 50-500 employees
- Pain: customers churn 90 days in without visible warning
- Proof: "We caught 3 accounts worth $180k before they churned" — real customer quote
- Tone: data-driven, no hype
Prompt (Pain-Point Amplification pattern):
You are writing a LinkedIn ad for a B2B analytics platform targeting VP Customer Success
at Series A-C SaaS companies (50-500 employees).
IDENTIFIED PAIN: Customers churn 90 days in with no visible warning. By the time
CS catches it, the renewal conversation is already lost.
WHAT THEY HAVE TRIED: Health scores, QBRs, manual account reviews
WHY THOSE FAILED: Health scores are lagging indicators. QBRs only catch what
customers tell you. Manual reviews don't scale past 50 accounts.
Write a 130-word LinkedIn ad. Opens by naming the pain, acknowledges why current
methods fall short, and positions this tool as a leading indicator system.
Use this customer quote: "We caught 3 accounts worth $180k before they churned."
CTA: "See the signal, not the summary."
Tone: direct, credible, zero marketing jargon.
Output:
By the time your health score turns red, you have already lost the account.
QBRs tell you what customers are willing to say. Health scores tell you what happened last quarter. Neither tells you what is about to happen.
[Product] surfaces behavioral signals — login drop-off, feature abandonment, support escalation patterns — 30 to 60 days before a churn decision is made.
"We caught 3 accounts worth $180k before they churned." — VP CS, Series B SaaS
This is a leading indicator system. Not another dashboard of things you already know.
See the signal, not the summary.
Why this works: The opening creates tension without hyperbole. The middle paragraph names three specific signals rather than speaking abstractly about "data." The customer quote provides validation at the exact moment the reader is deciding whether to believe the claim.
What it gets wrong: "Behavioral signals" is slightly jargon-heavy for a non-technical VP. A revision pass replaces it with specifics: "login patterns, feature drop-off, support ticket volume."
Using Claude for UGC scripts and creative variations
UGC-style video is where the output quality gap between good and bad briefs is most visible. A prompt that says "write a UGC ad for my skincare product" produces something that sounds like it was read off a cue card. A prompt with a specific speaker persona, a named trigger moment, and a concrete before/after produces something that sounds like a real person talking.
The same principle applies to Facebook ad creative. Claude is not generating the visual — it is generating the hook, the spoken copy, and the emotional arc. Give it a character, not a product.
For a broader view of how UGC strategy plays into paid social creative, see scaling ad creatives with UGC automation and AI UGC video ads strategy.
Brand voice prompting
Brand voice is the hardest thing to transmit. The most common mistake is describing it abstractly ("we are bold and direct") rather than demonstrating it concretely.
Three approaches that work:
1. Provide samples, not adjectives. Paste 3-5 sentences of on-brand copy and say "Match this register." Claude will pattern-match far more accurately than it will interpret "conversational but professional."
2. Anti-voice list. Tell the model what your brand does not sound like. "We never use corporate hedging phrases. We do not use exclamation marks." Constraints are more precise than descriptions.
3. Character prompt. "Write as if the narrator is a 38-year-old performance marketing director who has been burned by bad tools and is explaining this to a colleague over lunch." Persona prompts expose register and specificity that abstract adjectives cannot.
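The three approaches combine naturally into one reusable preamble you prepend to any of the prompt patterns above. A sketch; the section labels are illustrative:

```python
# Compose a brand-voice preamble from on-brand samples, an anti-voice list,
# and an optional character persona. Labels are illustrative conventions.

def voice_block(samples: list[str], never_use: list[str], persona: str = "") -> str:
    """Build a voice section to prepend to any ad-copy prompt."""
    lines = ["MATCH THIS REGISTER (sample copy):"]
    lines += [f'- "{s}"' for s in samples]
    lines.append("NEVER USE:")
    lines += [f"- {p}" for p in never_use]
    if persona:
        lines.append(f"WRITE AS: {persona}")
    return "\n".join(lines)
```

Storing the samples and anti-voice list in one place means every prompt pattern inherits the same voice instead of each writer re-describing it from memory.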
Compliance and regulated categories
For regulated categories — finance, healthcare, supplements, legal — build compliance constraints directly into the prompt rather than post-editing.
COMPLIANCE CONSTRAINTS:
- No income claims or guarantees
- No before/after claims for health outcomes
- All efficacy references must use "may" or "can" language
- Do not reference specific competitors by name
- Include: "Results may vary" if using customer testimonials
This does not replace legal review. It reduces the number of rounds to get copy that passes review. Per Anthropic's prompt engineering guide, structured constraint sections in prompts produce significantly more consistent output than post-hoc editing.
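The same constraints can be mirrored in a pre-review lint pass so obviously risky phrasing never reaches legal. A sketch with an illustrative pattern list; the real rules come from your legal team, not from this code:

```python
import re

# Illustrative risk patterns only; the authoritative list comes from legal review.
RISK_PATTERNS = [
    r"\bguarantee[ds]?\b",                        # guarantees
    r"\bwill (cure|heal|earn)\b",                 # outcome/income certainty
    r"\$\d[\d,]*\s*(per|a)\s+(day|week|month)",   # income claims
]

def lint_copy(text: str, has_testimonial: bool = False) -> list[str]:
    """Flag risky phrasing and a missing testimonial disclaimer."""
    flags = [p for p in RISK_PATTERNS if re.search(p, text, re.IGNORECASE)]
    if has_testimonial and "results may vary" not in text.lower():
        flags.append("missing 'Results may vary' disclaimer")
    return flags
```

A lint pass like this catches the mechanical violations; nuanced claims still need a human reviewer.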
Platform-native rewrites with Claude
Copy that works on Facebook does not work on Google Search. Format constraints, intent signals, and creative conventions are different enough that a rewrite is structural, not cosmetic.
Take the following Facebook ad primary text and rewrite it for three platforms.
Each rewrite must respect native format and intent:
GOOGLE SEARCH: headline 1 (30 chars) + headline 2 (30 chars) + description (90 chars).
Search intent is high. Lead with the outcome, not the story.
LINKEDIN: 150 words. Professional register. Problem-solution structure.
First line must work as a standalone hook in the feed.
TWITTER/X: 240 characters. Punchy. No hashtags. First 90 characters must stand alone
if truncated.
--- ORIGINAL FACEBOOK AD BELOW ---
[paste here]
For format specs and benchmark data, Meta's ad format guide details what character counts get truncated per placement. Build those limits into every prompt upfront.
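Those specs are easy to encode as a final gate before upload. A sketch matching the limits in the rewrite prompt, plus a crude heuristic for the standalone-hook rule (a sentence boundary inside the first 90 characters); treat both as starting points, not official specs:

```python
# Per-platform length checks matching the rewrite prompt's specs above.
LIMITS = {
    "google_headline": 30,
    "google_description": 90,
    "x_post": 240,
}

def fits(field: str, text: str) -> bool:
    """True if the text fits the platform field limit."""
    return len(text) <= LIMITS[field]

def hook_survives_truncation(text: str, cutoff: int = 90) -> bool:
    """Heuristic: the first `cutoff` chars should contain a sentence boundary."""
    return any(p in text[:cutoff] for p in (".", "!", "?"))
```

The heuristic only checks punctuation, so it will miss hooks that end mid-thought at a period; a human skim of the truncated preview is still worth the ten seconds.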
What Claude doesn't replace
Claude is not a creative strategist. It cannot tell you which audience segment to prioritize, whether your offer is competitive, or whether your landing page will convert. It does not know your ROAS calculator breakeven point or your margin structure.
It also will not catch subtle compliance violations without explicit prompting. A model trained on general internet text will occasionally produce claims that are technically permissible but wrong for your specific regulatory context.
The workflow is: research → strategy → brief → Claude → edit → test. Claude sits at step four. Treating it as step one is why most teams are disappointed.
Feeding Claude with real ad data
All of the prompt patterns above improve when you add real competitive creative as context. Pull active ads from adlibrary — filter by competitor, category, or landing page URL — and paste the top-performing creative as reference material before your brief. Claude identifies the hooks, formats, and emotional angles your competitors are investing in. Your output is calibrated against what is actually running in-market rather than against training data.
This is the difference between prompting in a vacuum and prompting with signal. See competitor ad research strategy and classic sales letters that defined direct response for how to build that research workflow. The how to use Claude for marketing guide covers the broader workflow integration.
The prompt is not the product. The brief behind the prompt is.
Frequently Asked Questions
Can Claude write Facebook ads?
Yes. Claude produces strong Facebook ad copy when given specific brief inputs: a named audience, a concrete pain point, a proof point (stat or testimonial), and explicit tone constraints. Without those inputs, it defaults to generic benefit-stacking that performs poorly in-feed. The five prompt patterns in this guide are designed to supply those inputs systematically.
What prompts work best for Claude ad copywriting?
The hook generator and pain-point amplification patterns produce the most consistently usable output. The hook generator is best for early-stage testing when you need variation across emotional angles. Pain-point amplification works for mid-funnel audiences with category awareness who haven't converted. Both require a specific ICP definition and at least one real proof point.
Is Claude better than ChatGPT for writing ads?
The output quality difference between Claude and ChatGPT on structured ad copy tasks is smaller than the difference between a well-structured brief and a vague one. Claude tends to follow multi-part constraints more precisely in a single pass, which matters for format-constrained placements like Google Search headlines. For a direct comparison see Claude vs ChatGPT for marketers.
How do I stop Claude from writing generic ad copy?
Three interventions fix generic output: (1) specify the ICP at the psychographic level, not just demographic; (2) include a real proof point — a stat, a customer quote, a before/after result; (3) add an anti-voice list of phrases your brand never uses. Constraints produce better output than positive descriptions in every case.
Can Claude write UGC-style video scripts?
Yes, with the right structure. The UGC script prompt pattern in this guide includes a speaker persona, trigger moment, and explicit tone rules that prevent the scripted phrasing that makes most AI-generated UGC sound fake. The key constraint is banning stock opener phrases like "I was struggling with..." and limiting superlatives to one per script.
Further Reading

Claude vs ChatGPT for Marketers: Which LLM Fits Your Workflow
Task-by-task comparison of Claude and ChatGPT for marketers. Long-form writing, ad copy, competitive research, context windows, and opinionated picks by role.

Evaluating AI Tools for Ad Creative Generation and Rapid Testing
Speed up your ad creative workflow with AI. Compare top tools for generating ad variations, multi-platform formatting, and conversion scoring.

The Anatomy of High-Engagement Facebook Ad Creatives
Explore the structural principles of high-performing social ads, focusing on pattern interrupts, curiosity gaps, and editorial-style creative formats.
Building Data-Driven Creative Testing Hypotheses from Competitor Ad Research
Leverage ad intelligence tools to structure competitor creative analysis, isolate key variables, and build data-driven campaign hypotheses.

5 Classic Sales Letters That Defined Direct Response Copywriting
Analyze 5 famous sales letters including The Wall Street Journal and John Caples' piano ad to understand timeless copywriting and creative strategy.