Pre-launch competitor scan: a 30-minute checklist for media buyers
A 30-minute pre-launch competitor scan in 6 blocks: scope, filter proven runners, tag hooks, find gaps, check placement skew, write your launch hypothesis.

A pre-launch competitor scan takes 30 minutes and prevents three months of wasted creative budget. Before you write a single brief, open your ad intelligence layer and see what's already winning in your vertical — formats, hooks, run lengths, platform distribution. That data is the foundation your launch creative should respond to, not discover on week six. The pre-launch competitor scan is the research step most media buyers skip; this checklist makes it repeatable.
TL;DR: Run a pre-launch competitor scan in six 5-minute blocks: search the category, filter for 60+ day runners, tag dominant hooks and formats, pull the angles you don't have, note platform and placement skew, then write your launch hypothesis. Use AdLibrary's unified ad search to scope by vertical in step one. The scan takes 30 minutes; skipping it costs far more.
Why most launches skip the pre-launch competitor scan (and what it costs)
The usual excuse is speed. Launch is in two weeks. The creative team has concepts. Why slow down to study what competitors are doing?
Here is what that reasoning misses: your competitors have already run the creative experiments you're about to fund. Their ad accounts have eaten the learning-phase cost, the iteration rounds, the failed angles. What survived is visible in their current active ad set. Ignoring it is paying tuition on lessons already bought.
The cost shows up in two places. First, CPL in week one is predictably high when you enter a category cold, without knowing which hooks your cold traffic audience already responds to. Second, creative iteration cycles take longer because you're discovering angle gaps reactively rather than pre-empting them.
Having a talented creative team does not substitute for the pre-launch competitor scan. Creative skill decides execution quality; the scan decides which direction to execute in. The best art director in your vertical cannot tell you from memory whether the dominant format in your category shifted from static to UGC video in the last 90 days. The data can. Practitioners who skip the scan tend to learn that distinction the expensive way.
A 30-minute competitive intelligence investment before briefing compresses the learning phase and gets your first creative batch closer to the target — by design.
The pre-launch competitor scan structure: 6 blocks of 5 minutes
Each block has one job. Do them in order. Don't spend more than 5 minutes on any single block — the goal is a defensible launch hypothesis, not a PhD-level audit.
| Block | Task | Output |
|---|---|---|
| 1 | Search the category, set the scope | Active competitor list (5–10 brands) |
| 2 | Filter by longevity (60+ day runners) | Proven creative shortlist |
| 3 | Tag dominant hooks and formats | Hook taxonomy (3–5 types) |
| 4 | Pull the angles you do not have yet | Gap list |
| 5 | Note the platform and placement skew | Channel priority signal |
| 6 | Write the launch hypothesis and exit criteria | One-page brief |
Work in a shared doc so the creative team sees your reasoning — the hypothesis line at the end is the contract between research and execution.
Block 1: search the category, set the scope
Step zero before any paid campaign: open AdLibrary's unified ad search and search by your product vertical — not by brand name. This is where the pre-launch competitor scan actually starts. Begin with the category keyword your ICP would search, then layer in the two or three closest competitors by name.
Why category-first? Brand-first searches anchor you to brands you already know, which means you'll miss the challenger brands running aggressive test-and-scale campaigns. In most DTC and SaaS verticals, the challenger cohort has higher creative velocity than established players and is therefore a better signal for what's actively working right now.
Your output after block 1: a list of 5–10 active advertisers in your space, filtered to brands that have run ads in the last 30 days. Save this list — it is your analysis scope for the rest of the pre-launch competitor scan.
Practical scope rule: cap at 10 brands. If you're in a crowded vertical (supplements, SaaS, fintech), pick the 3 dominant incumbents, the 2–3 fastest-growing challengers by ad volume, and the 1–2 most direct competitors by offer. That sample covers the creative pattern space without overwhelming the remaining 25 minutes.
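If you track candidates in a spreadsheet or a script, the scope rule above reduces to a quota-capped selection. A minimal Python sketch, assuming a hypothetical candidate list tagged by role — the brand names and quotas are placeholders, not from any real export:

```python
# Hypothetical candidate pool tagged by role; cap the scan at 10 brands.
candidates = [
    ("BrandA", "incumbent"), ("BrandB", "incumbent"), ("BrandC", "incumbent"),
    ("BrandD", "challenger"), ("BrandE", "challenger"), ("BrandF", "challenger"),
    ("BrandG", "direct"), ("BrandH", "direct"),
    ("BrandI", "challenger"), ("BrandJ", "incumbent"), ("BrandK", "direct"),
]

# 3 dominant incumbents, 2-3 fastest challengers, 1-2 most direct competitors.
QUOTAS = {"incumbent": 3, "challenger": 3, "direct": 2}
CAP = 10

scope, taken = [], {role: 0 for role in QUOTAS}
for brand, role in candidates:
    if taken[role] < QUOTAS[role] and len(scope) < CAP:
        scope.append(brand)
        taken[role] += 1

print(scope)
```

The quota values are the knobs: in a crowded vertical you tighten them; in a thin one you may not fill them at all.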
For cross-platform scope — if your launch is running on Meta, TikTok, and YouTube simultaneously — unified search across platforms at this step saves you from building a Meta-only hypothesis when your audience spends more time elsewhere.
Block 2: filter by longevity (60+ day runners)
Ad longevity is the single most reliable signal you have from external data. An ad that has run for 60 or more days without being paused has survived the advertiser's performance threshold. It is generating enough return to justify continued spend.
Apply the longevity filter before studying any individual creative. This removes the noise — test ads, failed launches, seasonal promos that ran for two weeks — and leaves you with the proven creative core. This is the step that separates a rigorous pre-launch competitor scan from a casual browse.
AdLibrary's ad timeline analysis shows the first-seen and last-seen dates for each ad, which lets you sort by run length and identify the 60-day-plus survivors immediately. These are the creatives worth reverse-engineering. According to Meta's own research on creative performance, ads that survive past their initial learning phase represent less than 20% of tested creatives — meaning these long-runners are genuinely selected-for material.
What you're looking for at this step:
- Format of long runners. If the 60-day survivors in your category are all UGC video and you planned to launch with polished static, that's a signal worth reconsidering before spending on production.
- Hook repetition. If three different competitors are running variations of the same opening hook — same first-line structure, different brand — that hook has been validated by multiple accounts with independent data.
- Offer consistency. Long-running ads often reveal category-standard offers. If everyone leads with "free trial" and you're planning to lead with a discount, you need a specific reason why you're departing from the proven pattern.
Save the 10–15 longest-running ads to your working list using saved ads. This becomes your reference set for blocks 3 and 4.
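If your tool lets you export the shortlist, the longevity filter is a few lines of code. A sketch, assuming a hypothetical export of one record per ad with ISO first-seen and last-seen dates — the field names and data are illustrative, not a documented AdLibrary export format:

```python
from datetime import date

# Hypothetical export: one dict per ad with ISO first/last-seen dates.
ads = [
    {"brand": "CompetitorA", "ad_id": "a1", "first_seen": "2025-01-05", "last_seen": "2025-03-20"},
    {"brand": "CompetitorB", "ad_id": "b1", "first_seen": "2025-03-01", "last_seen": "2025-03-10"},
    {"brand": "CompetitorC", "ad_id": "c1", "first_seen": "2024-12-01", "last_seen": "2025-03-22"},
]

def run_days(ad):
    """Run length in days between first-seen and last-seen dates."""
    first = date.fromisoformat(ad["first_seen"])
    last = date.fromisoformat(ad["last_seen"])
    return (last - first).days

# Keep only the 60-day-plus survivors, longest runners first.
survivors = sorted(
    (ad for ad in ads if run_days(ad) >= 60),
    key=run_days,
    reverse=True,
)

for ad in survivors:
    print(ad["brand"], ad["ad_id"], run_days(ad), "days")
```

The two-week seasonal promo (9 days here) drops out automatically; what remains is the proven creative core the rest of the scan studies.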
Block 3: tag the dominant hooks and formats
Open your saved shortlist and tag each ad by two dimensions: hook type and format.
Hook taxonomy for most DTC and SaaS verticals:
- Pain-first: opens with the problem the audience recognizes ("Still losing 3 hours a day to X?")
- Proof-anchor: opens with a concrete result ("47% more conversions in 30 days")
- Curiosity gap: opens with a question or incomplete statement that demands resolution
- Contrast: before/after structure, typically visual
- Authority: opens with a credential, a number, or a named entity
Tag each saved ad with one primary hook type. After 10–15 tags, a pattern emerges. In most mature categories, two hook types account for 60–70% of long-running ads. See the TikTok Creative Center's analysis of hook patterns for vertical-specific benchmarks.
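Once the tags exist, the distribution check is mechanical. A sketch, assuming hypothetical tags from a 12-ad shortlist (the labels mirror the taxonomy above):

```python
from collections import Counter

# Hypothetical tags: one primary hook type per saved ad.
tags = [
    "pain-first", "proof-anchor", "pain-first", "curiosity-gap",
    "proof-anchor", "pain-first", "contrast", "proof-anchor",
    "pain-first", "authority", "proof-anchor", "pain-first",
]

counts = Counter(tags)
total = len(tags)

# Share of the shortlist held by the two most common hook types.
top_two = counts.most_common(2)
top_two_share = sum(n for _, n in top_two) / total

for hook, n in counts.most_common():
    print(f"{hook:>15}: {n:2d} ({n / total:.0%})")
print(f"Top two hooks cover {top_two_share:.0%} of long-runners")
```

In this made-up sample the top two hook types cover 75% of the shortlist, which is the kind of concentration a mature category typically shows.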
Format taxonomy:
- Static single image
- Carousel
- UGC video (handheld, lo-fi aesthetic, direct address to camera)
- Polished video (produced, voice-over or narrative structure)
- Text-on-screen / motion graphic
Format distribution tells you where the category has converged. Convergence means your audience is trained to respond to that format — but it also means differentiation is possible with a deliberate departure.
Note both the dominant format and whether any format minority is producing long-runners. A single brand running static image in a category flooded with video, sustaining that ad for 90 days, is worth studying as a possible whitespace signal. For a deeper walkthrough of this tagging process, the media buyer workflow use case shows how to build a structured research session.
The pre-launch competitor scan is not a passive exercise. You are building a hypothesis, and block 3 is where the pattern becomes visible.
Block 4: pull the angles you do not have yet
This block is the most actionable part of the pre-launch competitor scan. Review your tagged shortlist and list every angle you see that is absent from your current creative brief.
Angles come in four types:
Audience-specific angles. Ads that speak directly to a named persona ("For founders who...") or a named pain context ("When your [specific situation] happens...") rather than a generic audience.
Feature or mechanism angles. Specific claims about how something works — the mechanism rather than the outcome. "Our algorithm re-weights toward highest-converting variants automatically" is a mechanism claim. "Better results" is not.
Social proof formats. Screenshots, named testimonials, specific numbers, user counts, review platform badges. Note which proof formats your competitors use most. If your brief has no proof element and 80% of long-running competitor ads lead with a proof hook, your creative will start at a structural disadvantage.
Objection-handling angles. Ads that directly address a known barrier to purchase — price, complexity, trust, time, switching cost. These are underrepresented in most launch briefs because naming the barrier explicitly requires knowing the category well.
After this block, you should have a gap list of 3–7 angles worth testing. These become the B and C variants in your launch batch — while your A variant executes the dominant proven pattern, your B/C variants test the gaps you identified.
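The A/B/C assignment can be written down as a small structure so the brief and the ad account use the same labels. A sketch with placeholder angle names (the dominant pattern and gap list here are invented examples):

```python
# Hypothetical launch batch: the A variant executes the proven dominant
# pattern; B and C test the top angles from the gap list.
dominant_pattern = {"format": "ugc_video", "hook": "pain-first"}
gap_list = ["objection:price", "mechanism:auto-reweighting", "persona:founders"]

batch = [{"variant": "A", "role": "proven pattern", **dominant_pattern}]
# Assign the top gap angles to B, C in priority order.
for slot, angle in zip("BC", gap_list):
    batch.append({
        "variant": slot,
        "role": "gap test",
        "format": dominant_pattern["format"],  # hold format constant, vary angle
        "hook": angle,
    })

for v in batch:
    print(v["variant"], v["role"], v["hook"])
```

Holding format constant across variants is a deliberate choice in this sketch: it isolates the angle as the tested variable.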
For structured reading on how to organize creative angles into a testable batch, the ad creative testing guide covers the batch structure in detail. The IAB's digital advertising best practices also offer a reference framework for pre-launch review protocols.
Block 5: note the platform and placement skew
Spend five minutes on distribution data before you close the research session.
Look at your shortlisted long-runners and note:
- Which platforms are they active on? (Meta only, Meta + TikTok, cross-platform)
- Which placements dominate? (Feed vs Stories vs Reels vs Audience Network)
- Any brands running on platforms you weren't planning to use?
Platform skew has two implications for your pre-launch competitor scan findings. First, if 100% of competitors are on Meta only and you're planning Meta + TikTok, you have no competitive creative reference for TikTok in your category — your TikTok batch needs to be built from native TikTok patterns. Second, if competitors are running heavy on Reels and you were planning Feed-only, your format decisions need to account for a placement your audience is actively using.
Placement skew inside Meta is especially useful. If all the long-runners are Feed creatives, your vertical may have an audience that doesn't convert from Stories. That's cheaper to know now than to discover after a two-week Stories-only test.
Note the platform and placement distribution in your brief. This becomes an input to your campaign structure and bid strategy decisions at setup.
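The platform coverage check is a set difference. A sketch with hypothetical platform data for the shortlisted brands:

```python
# Hypothetical plan vs. observed competitor distribution.
planned_platforms = {"meta", "tiktok"}
competitor_platforms = {
    "CompetitorA": {"meta"},
    "CompetitorB": {"meta", "youtube"},
    "CompetitorC": {"meta"},
}

observed = set().union(*competitor_platforms.values())

# Platforms you plan to use with no competitive creative reference:
no_reference = planned_platforms - observed
# Platforms competitors use that your plan omits:
unplanned = observed - planned_platforms

print("No competitive reference on:", sorted(no_reference))
print("Competitors active where you are not:", sorted(unplanned))
```

In this example, TikTok lands in the "no reference" set, which is exactly the case where your TikTok batch has to be built from native platform patterns rather than competitor creative.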
For how placement data interacts with bidding efficiency, the frequency cap calculator is useful when your pre-launch scan reveals a category with high ad density — a sign that frequency capping decisions matter from day one. The Meta Ads benchmarks study also provides baseline CPM and CTR by vertical that anchors your placement projections.
Block 6: write the launch hypothesis and exit criteria
This is the deliverable. Everything in blocks 1–5 feeds into two sentences and a table.
The launch hypothesis format:
"Based on [N] competitors analyzed, the dominant pattern in [category] is [format + hook type]. We will lead with [our primary creative approach] as our A variant and test [gap angle 1] and [gap angle 2] as B/C variants. Hypothesis: the [gap angle] will outperform the dominant pattern by [metric] at [spend threshold] because [specific reason]."
This forces you to make a falsifiable prediction before you spend. If you can't write the "because" clause, you haven't yet identified the insight — go back to block 4.
Exit criteria table:
| Metric | Green | Yellow | Red |
|---|---|---|---|
| CPL (week 1) | Under [€X] | [€X]–[€Y] | Above [€Y] |
| Hook CTR | Above [%] | [%]–[%] | Below [%] |
| Format winner | Clear by day 7 | Emerging | No signal |
| Gap angle CPL vs A variant | Within 20% | 21–50% | Over 50% worse |
Set these thresholds before launch, based on the CPL ranges visible in your category's long-running ads. The learning phase calculator helps you set the spend threshold at which each number becomes statistically meaningful.
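The exit criteria table translates directly into decision rules. A sketch with placeholder thresholds; the euro and CTR values below are invented, and yours should come from the CPL ranges observed in your category's long-runners:

```python
# Hypothetical week-one readout against pre-set exit criteria.
CPL_GREEN, CPL_RED = 12.0, 20.0      # EUR: green under 12, red above 20
CTR_GREEN, CTR_RED = 0.015, 0.008    # hook CTR: green above 1.5%, red below 0.8%

def grade_cpl(cpl):
    if cpl < CPL_GREEN:
        return "green"
    return "red" if cpl > CPL_RED else "yellow"

def grade_ctr(ctr):
    if ctr > CTR_GREEN:
        return "green"
    return "red" if ctr < CTR_RED else "yellow"

def grade_gap_vs_a(gap_cpl, a_cpl):
    """Gap-angle CPL relative to the A variant: within 20% worse is green."""
    worse = (gap_cpl - a_cpl) / a_cpl
    if worse <= 0.20:
        return "green"
    return "red" if worse > 0.50 else "yellow"

print(grade_cpl(14.5))             # yellow
print(grade_ctr(0.019))            # green
print(grade_gap_vs_a(16.0, 13.0))  # ~23% worse: yellow
```

Encoding the thresholds before launch is the point: week-one optimization then becomes reading colors, not relitigating targets.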
Once the hypothesis and exit criteria are written, the pre-launch competitor scan is complete. The creative brief can be written. The batch structure is defined. The decision rules for week-one optimization are in place.
Build this into your standard pre-launch protocol. Every launch — new product, new market, new channel — gets 30 minutes of structured research before a brief is written. The discipline compounds: each scan sharpens your pattern recognition for the next one.
For how this research feeds directly into your live account management, the media buyer workflow shows the full cycle from pre-launch research through in-flight optimization. The EMQ calculator also connects directly here — once your launch hypothesis is written, you can benchmark the expected message quality of your primary hook before spend begins.
If you're picking a launch stack, the best ad launch tools for 2026 guide walks through the comparison.
From the buyer side, 9 best Meta advertising software for media buyers in 2026 compares the options by buyer-workflow fit.

Frequently asked questions about pre-launch competitor scans
How long does a proper pre-launch competitor scan take?
A structured pre-launch competitor scan takes 30 minutes when you follow the 6-block format. Block 1 through Block 6, five minutes each. The output is a launch hypothesis and exit criteria table — enough to brief creative and set optimization rules before spend begins. Longer research sessions produce diminishing returns unless you're entering a highly fragmented or unusual category.
What is the minimum number of competitor ads to review before launch?
Review the 10–15 longest-running ads from 5–10 competitor brands. Longevity (60+ day runners) is the filter — not total volume. Quality of signal matters more than quantity. A shortlist of 12 proven ads tells you more than 100 recent ads that may still be in their learning phase.
Can a pre-launch competitor scan replace creative testing?
No. The pre-launch competitor scan informs your starting hypothesis and reduces the cost of the first learning cycle — it does not eliminate testing. The goal is to enter the category with a first creative batch that is positioned against the competitive pattern. You still need to run your own data to find your specific audience's optimal angle. The scan compresses the path to that data; it does not replace it.
What data does AdLibrary provide for a pre-launch competitor scan?
AdLibrary gives you cross-platform search by vertical, ad timeline analysis for longevity filtering, saved ads for building your working reference list, and placement distribution data. The unified ad search covers Meta, TikTok, and other platforms in a single interface. This covers blocks 1 through 5 of the pre-launch competitor scan without switching tools.
How often should you repeat the competitor scan after launch?
Run a full pre-launch competitor scan before every new launch and a lighter refresh every four to six weeks during active campaigns. Competitor creative velocity is highest in Q4 and at major seasonal windows. Category creative patterns can shift meaningfully in 60 days — a format or hook type that was minority in your last scan may have become dominant by the next one.
The 30 minutes are the cheapest part of your launch
The creative brief, the production budget, the learning-phase spend — all of it is downstream of the competitive context. A pre-launch competitor scan locks in that context before you spend a single euro on production. Every launch starts with 30 minutes on the research layer; that's the protocol.