Brand Lift: The Only Metric That Shows Whether Your Ads Are Actually Building a Brand
Brand lift is the only measurement that tells you what ROAS can’t — whether your ads are shifting awareness, recall, and purchase intent. Most teams ignore it until a brand health crisis forces the question. Here’s how Meta Brand Lift, Google Brand Lift, and survey studies work, what they actually cost, and how to read the results.

TL;DR
Brand lift is the only measurement that tells you whether your ads are actually working on the top of the funnel — not just whether they converted last-click. Most teams ignore it until a brand health crisis forces the question. Meta Brand Lift, Google Brand Lift, and third-party survey studies all exist; they differ in cost, statistical power, and the question you're actually trying to answer. If you're spending $50K/month on awareness campaigns and measuring success with ROAS, you're flying blind on everything that matters for long-term growth.
What brand lift actually measures (and what it doesn't)
Performance marketing taught an entire generation of media buyers to trust only what they can attribute. Clicks. Purchases. CPAs. The problem is that none of those metrics tell you whether your ads are building anything — brand recall, purchase intent, message association — the invisible stack that makes your future conversion rate cheaper to achieve.
Brand lift measures the incremental change in brand perception caused by ad exposure. The methodology is elegant and long-established: split your audience into an exposed group (saw the ads) and a control group (was held out), then survey both and compare the difference. The delta is lift. A 5-point increase in purchase intent means people who saw your ads were 5 percentage points more likely to say they'd consider your brand than people who didn't.
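To make the arithmetic concrete, here is a minimal sketch of the exposed-vs-control calculation in Python, with hypothetical survey counts (the platforms compute this for you; the function name and numbers are ours):

```python
def lift_points(exposed_yes: int, exposed_n: int,
                control_yes: int, control_n: int) -> float:
    """Percentage-point lift: exposed positive rate minus control positive rate."""
    return 100 * (exposed_yes / exposed_n - control_yes / control_n)

# Hypothetical study: 31% of exposed respondents say they'd consider the
# brand vs. 26% of control, i.e. a 5-point lift in purchase intent.
print(lift_points(exposed_yes=310, exposed_n=1000,
                  control_yes=260, control_n=1000))  # 5.0
```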
What it doesn't measure: bottom-funnel conversion, click-through, or any behavior-based signal. Brand lift is survey-only. That is its strength and its limitation. The strength: it gets at mental states that no pixel can read. The limitation: surveys are self-reported, and there's a gap between what people say and what people do.
The academic foundation here matters. Les Binet and Peter Field's IPA Effectiveness Databank research — covering 1,400+ campaigns over 30 years — established that brand-building and activation work on different timescales. Brand campaigns compound over 18–24 months. Performance ads peak in 6–13 weeks. Teams that run only activation are milking past brand equity; teams that run only brand are building awareness with no bottom-funnel harvest. The 60/40 rule (60% brand, 40% activation by budget) emerges from this data — but the ratio only makes sense if you're measuring both. Brand lift closes the measurement loop on the brand side.
The five study types: how they compare
| Study type | Who runs it | Min spend (est.) | Sample size needed | Lag time | What you learn |
|---|---|---|---|---|---|
| Meta Brand Lift | Meta (within Ads Manager) | ~$30K–$100K+ per study | 15,000–20,000 impressions minimum, 500+ surveyed | 2–4 weeks | Recall, awareness, purchase intent — tied to Meta creative |
| Google Brand Lift | Google (within Campaign Manager / DV360) | ~$50K+ per study | Similar to Meta; video campaigns preferred | 2–4 weeks | Ad recall, brand awareness, product consideration; YouTube-native |
| Nielsen Brand Effect | Nielsen (third-party) | $50K–$200K+ | Larger panels; cross-channel | 4–8 weeks | Cross-platform lift; brand equity tracking |
| Third-party survey tools (Lucid, Kantar, Dynata) | Advertiser-commissioned | $10K–$50K+ | Flexible; custom panel | 4–12 weeks | Custom attributes; competitive benchmarking |
| DIY post-hoc survey | Internal (e.g., Typeform + Ads Manager targeting) | $500–$5K (survey cost) | 200–500 usable responses | 1–3 weeks | Directional only; lower validity; no holdout guarantee |
Key takeaways from this table: Meta's and Google's native studies are the most actionable for direct response advertisers because they're tied directly to specific creative and campaigns. Third-party studies are more credible for board-level brand tracking but cost more and take longer. DIY post-hoc surveys are useful for direction but lack the statistical power to be conclusive — the holdout group is approximated, not true random assignment.
The choice depends on your objective. If you want to know whether your specific Meta video creative is driving recall, use Meta Brand Lift. If you want a cross-channel view of brand health trajectory, use Nielsen or a panel vendor. If you're directionally trying to understand whether anyone remembers you after a campaign, a DIY survey gets you there cheaply.
Required spend and sample size by platform
The most common question: "Can we run a brand lift study?" The honest answer is that the platforms won't tell you their exact minimums, and they vary by category, audience size, and campaign objective. Here are realistic ranges based on documented case studies and practitioner experience:
| Platform | Category | Estimated min spend | Required impressions | Detection threshold |
|---|---|---|---|---|
| Meta Brand Lift | eCommerce / DTC | $30K–$50K per study period | 15K–20K | ~2–3pt lift in recall |
| Meta Brand Lift | B2B / Enterprise | $50K–$150K+ | 20K–30K | ~3–5pt lift (smaller audiences) |
| Google Brand Lift | CPG / Retail | $50K+ | 1M+ impressions (YouTube) | ~2pt lift in recall |
| Google Brand Lift | SaaS / B2B | $75K+ | Depends on targeting depth | ~3–4pt lift |
| Lucid / Dynata panel | Any category | $10K–$25K (survey only, no media buy) | 500 exposed + 500 control | ~4–6pt detectable |
| Nielsen Brand Effect | CPG / large brand | $75K–$200K | Cross-platform; panel-sized | ~1–2pt (higher precision) |
The spend minimums are not media thresholds — they're statistical power thresholds. The platform needs enough exposed users and enough completed surveys to detect a lift signal above statistical noise. If your campaign doesn't reach the minimum, the study result will be "inconclusive" (which means wasted study budget on top of media budget). The platforms have become more conservative about running studies they know will be underpowered, but they don't always flag this upfront.
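To see why the minimums are power thresholds, here is a back-of-envelope sample size check using the standard two-proportion normal-approximation formula. The baseline, target lift, and confidence levels are illustrative assumptions, not platform parameters:

```python
from scipy.stats import norm

def surveys_per_group(baseline: float, lift_pp: float,
                      alpha: float = 0.10, power: float = 0.80) -> int:
    """Completed surveys needed per group to detect lift_pp points of lift."""
    p1, p2 = baseline, baseline + lift_pp / 100
    z_a = norm.ppf(1 - alpha / 2)  # two-sided significance
    z_b = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_a + z_b) ** 2 * variance / (p2 - p1) ** 2) + 1

# Detecting a 3-point recall lift off a 20% baseline at 90% confidence
# takes roughly 2,300 completed surveys per group:
print(surveys_per_group(baseline=0.20, lift_pp=3))  # 2316
```

Halve the lift you're trying to detect and the required sample roughly quadruples, which is why small campaigns so often come back inconclusive.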
The category column matters because audience size affects minimum impressions needed. A DTC brand targeting 35-year-old US women has a massive addressable pool; a B2B software company targeting IT decision-makers in DACH has a small one. Smaller audiences require longer flight times or higher frequencies to reach the impression minimum — both of which affect the study validity.
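A rough flight-length sketch makes the audience-size point concrete. All inputs (weekly reach rate, frequency, impression minimum) are hypothetical:

```python
def weeks_to_minimum(audience: int, weekly_reach_rate: float,
                     freq_per_week: float, min_impressions: int) -> float:
    """Weeks of flight needed to accumulate a study's impression minimum."""
    weekly_impressions = audience * weekly_reach_rate * freq_per_week
    return min_impressions / weekly_impressions

# Broad DTC audience: 40M addressable, 5% weekly reach, 2x frequency.
print(weeks_to_minimum(40_000_000, 0.05, 2, 1_000_000))  # 0.25 weeks
# Niche B2B audience: 200K addressable, 10% weekly reach, 2x frequency.
print(weeks_to_minimum(200_000, 0.10, 2, 1_000_000))     # 25.0 weeks
```

The niche audience either runs for months or raises frequency, and both carry the validity risks noted above.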
How Meta Brand Lift works (the mechanics)
Meta's Brand Lift product lives inside Ads Manager and runs alongside your normal campaign. When you set it up, Meta randomly splits your target audience into two groups at the account level: an exposed group that will see your ads, and a holdout group that won't. The split is maintained throughout the study period.
After sufficient impressions accumulate, Meta surveys both groups directly inside Facebook or Instagram via a short poll served in an ad placement. Questions are limited to: ad recall ("Have you seen an ad for [Brand] in the past few days?"), brand awareness ("How familiar are you with [Brand]?"), and purchase intent ("How likely are you to purchase from [Brand]?"). You choose which questions to include; Meta recommends 1–2 per study.
The output is a "lift score" — the percentage-point difference between exposed and control group responses. Meta also reports estimated cost per lifted user, which is useful for comparing creative efficiency across studies.
Limitations to know going in: Meta surveys are in-platform only, so you're measuring lift among Facebook/Instagram users specifically, and the survey sample may skew toward heavy app users, which can introduce selection bias. Studies require a 14–28 day minimum flight. Creative variations can be tested against each other (what Meta calls "split brand lift"); this is the most useful application for agencies and creative teams who want to know which version of the campaign drives more recall.
The Meta advertising transparency library doesn't surface brand lift data publicly, but the Meta for Business Brand Lift documentation covers the full setup. Meta also publishes a Brand Lift Best Practices guide with recommended campaign lengths and creative formats.
How Google Brand Lift works
Google's equivalent product, documented in Google's Ads Help Center, runs on YouTube, DV360, and some Display campaigns. The mechanics mirror Meta's: exposed vs. control split, survey delivery, lift calculation.
Google's surveys are served as skippable interstitials within the YouTube experience — a different placement context than Meta's in-feed poll. The question pool is similar: ad recall, brand awareness, consideration, favorability, and purchase intent.
Where Google's product differs materially from Meta's:
Video-first. Google Brand Lift is optimized for YouTube video creative. Display-only campaigns tend to generate weaker lift signals, partly because the survey delivery mechanism is video-adjacent. If you're running a Google campaign that's predominantly display, expect lower statistical power.
Reach requirement. Google requires significant reach to run a study: the guidance in Google Ads Help specifies that campaigns must be large enough to reach the required survey sample size, which in practice means a substantial impression volume, especially for niche B2B audiences.
Brand Awareness vs. Lower-Funnel metrics. Google's lift metrics lean toward awareness and consideration — they don't surface purchase intent data as prominently as Meta. This is consistent with YouTube's upper-funnel positioning.
Absolute Brand Lift vs. Relative Lift. Google reports both absolute lift (percentage-point change) and relative lift (percentage change from baseline). Relative lift makes small-brand studies look better; absolute lift is more honest for goal-setting.
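A two-line worked example shows how different the two framings look on the same hypothetical result:

```python
control, exposed = 0.10, 0.12  # 10% baseline awareness vs. 12% among exposed

absolute_lift = (exposed - control) * 100             # percentage points
relative_lift = (exposed - control) / control * 100   # % change from baseline

print(f"{absolute_lift:.1f}pt absolute")  # 2.0pt absolute
print(f"{relative_lift:.0f}% relative")   # 20% relative
```

The same study can be reported as "2 points" or as "20% lift"; set goals in absolute points.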
The key insight for teams running both Meta and Google awareness campaigns: the two studies are not directly comparable. They measure different audiences (Meta social users vs. Google/YouTube users), in different survey contexts, with different question phrasings. Treat them as separate signals for separate inventory, not as a combined brand health score.
What Ehrenberg-Bass says (and why it changes how you interpret results)
The Ehrenberg-Bass Institute at the University of South Australia, led by Byron Sharp, has spent decades studying how brands grow. Their findings in How Brands Grow create a framework for interpreting brand lift data that most brand managers miss.
The core Ehrenberg-Bass insight: brands grow by increasing mental and physical availability among light buyers. Most of a brand's growth comes from occasional buyers — not loyalists. This means your brand lift study should be designed to measure penetration-level metrics (awareness, recall among light category buyers) rather than depth-of-loyalty metrics (purchase intent among existing customers).
Practical implication: if your brand lift study is running against a custom audience of existing customers, you're measuring in-group reinforcement, not growth. The lift will look high. But it's not telling you whether you're reaching new buyers. The most useful brand lift studies sample broadly against a category-level audience — people in your buying category who may or may not know your brand.
This is also why brand lift "success" benchmarks require context. A 10-point lift in ad recall among a retargeting audience is almost meaningless. A 3-point lift in brand awareness among a cold, category-matched audience is significant. The denominator matters as much as the numerator.
The IPA / Binet and Field data reinforces this. Their report The Long and the Short of It (available on the IPA website) establishes that brand campaigns need 6+ months to generate compounding effects on market share. A single 4-week brand lift study is a snapshot of a long process, not a verdict.
Reading brand lift results: what the numbers actually mean
You get the study back. It says: Ad Recall +4.2 points, Purchase Intent +1.8 points, Brand Awareness +0.9 points. What do you do with this?
Layer 1: Statistical significance. Meta and Google report confidence intervals. A 4.2-point lift that's not statistically significant at 90% confidence is noise. Read the confidence interval, not just the headline number. Most brand lift platforms provide significance flags — check them.
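For readers who want to sanity-check a headline number themselves, here is a minimal two-proportion confidence interval sketch with hypothetical survey counts; the platforms' significance flags are doing essentially this calculation:

```python
from math import sqrt
from scipy.stats import norm

def lift_ci(exp_yes: int, exp_n: int, ctl_yes: int, ctl_n: int,
            conf: float = 0.90) -> tuple[float, float, float]:
    """Point lift plus a normal-approximation confidence interval, in points."""
    p1, p2 = exp_yes / exp_n, ctl_yes / ctl_n
    se = sqrt(p1 * (1 - p1) / exp_n + p2 * (1 - p2) / ctl_n)
    z = norm.ppf(1 - (1 - conf) / 2)
    lift = (p1 - p2) * 100
    return lift, lift - z * se * 100, lift + z * se * 100

lift, lo, hi = lift_ci(342, 1500, 279, 1500)
print(f"lift {lift:.1f}pt, 90% CI [{lo:.1f}, {hi:.1f}]")
# lift 4.2pt, 90% CI [1.8, 6.6] -> significant; if the interval
# included 0, the headline lift would be noise.
```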
Layer 2: Absolute vs. baseline. The same lift reads differently at different baselines. A 4.2-point lift from a 10% awareness baseline is a 42% relative gain; a 1-point lift from a 60% baseline is under 2% relative. If you're a challenger brand, even small point lifts in awareness are meaningful because the pool of aware prospects is thin and every relative gain is large. If you're a category leader, a 0.5-point lift in purchase intent might represent millions of potential buyers because the addressable audience is enormous. Read every lift against both the baseline and the audience size.
Layer 3: Frequency and creative breakdown. Meta's split brand lift allows you to compare creative A vs. creative B. If your brand-story video generates 6-point recall lift and your product-feature video generates 2-point recall lift, you've learned something actionable about what drives memorability vs. conversion consideration. This is the most underused feature of Meta Brand Lift.
Layer 4: Cost per lifted user. This metric normalizes lift across campaign sizes. It tells you: for every dollar spent, how many people moved on the brand metric? Tracking this over time, and across campaigns, gives you a brand-building efficiency curve — the closest thing to a ROAS equivalent for upper-funnel.
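The calculation itself is simple; a sketch with hypothetical inputs:

```python
spend = 60_000              # study-period media spend ($)
people_reached = 2_000_000  # unique exposed users
lift_pp = 4.2               # percentage-point lift on the chosen metric

lifted_users = people_reached * lift_pp / 100  # est. people who moved
print(f"${spend / lifted_users:.2f} per lifted user")  # $0.71
```

A campaign with half the lift at half the spend scores the same, which is exactly the normalization you want when comparing creative across quarters.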
Layer 5: Trend over time. A single brand lift study is directional. Four quarterly studies showing a rising recall trend is evidence. Plan for a measurement cadence, not a one-time measurement event.
Step 0: Adlibrary detects competitor brand-lift creative before you run a study
Here's the competitive edge most teams miss: you don't have to wait for your own brand lift study to understand what drives recall vs. conversion in your category. Your competitors have already run hundreds of variations in the market. The evidence is in their ad library.
Adlibrary's saved ads combined with AI ad enrichment surface the pattern. When you save competitor ads and run them through the enrichment layer, the AI classifies each creative by its primary objective signal: is it optimized for recall (brand storytelling, emotional hooks, product-free brand shots) or for conversion (price, CTA, feature lists, urgency)? The creative that has been running the longest (weeks, sometimes months) at high frequency against a broad audience is the creative driving brand-level metrics, not just conversions. Platforms reward recall-optimized creative with lower CPMs over time, so long-flight, broad-audience creative is the proxy signal.
This means that before you spend $50K on a brand lift study to test your own creative hypotheses, you can mine competitor ad libraries to understand which creative formats, visual styles, and message angles have demonstrated brand-building staying power in your category. You're not copying. You're doing creative research with a richer evidence base than any focus group can provide.
The practical workflow: use Adlibrary's search to pull all competitor ads in your category that have been active for 30+ days. Filter for broad audience targeting (national or large interest-based) which signals top-funnel intent. Use the AI enrichment to classify by creative objective. The ones classified as brand/recall rather than conversion/DR are your reference architecture for brand lift creative.
This shortcut doesn't replace a brand lift study. It means you enter the study with a stronger hypothesis — and you get a better result because your creative was already optimized against category-level evidence rather than internal assumptions.
Setting up your first brand lift study: a step-by-step
Step 1: Define the measurement objective before you book the study. Recall, awareness, and purchase intent answer different questions. Recall tells you whether your creative was memorable. Awareness tells you whether people know your brand exists. Purchase intent tells you whether they'd consider buying. Most studies try to measure all three; the most useful studies focus on one primary metric and treat others as secondary signals.
Step 2: Set the campaign up as a brand lift-compatible objective. On Meta, this means using Awareness or Reach objectives, not Conversions. On Google, this typically means YouTube Reach or Brand Awareness campaigns, not Video Views optimized for completion. The campaign objective gates which brand lift products are available to you.
Step 3: Lock in the holdout. The study is only as valid as the randomization. Don't change audience targeting mid-flight, and don't launch retargeting campaigns against the same audience during the study period; both contaminate the holdout.
Step 4: Run long enough. 4 weeks minimum; 6–8 weeks preferred for smaller campaigns near the spend threshold. If the study finishes inconclusive due to underpowering, you can't retroactively extend it — you have to restart.
Step 5: Read the creative breakdown first. If you ran multiple creatives, the per-creative lift data is the most valuable output. It tells you not just whether your campaign worked but which specific execution drove the effect. This feeds directly into creative testing prioritization for the next cycle.
Step 6: Triangulate with your MER trend. Brand lift is survey-based; your blended Marketing Efficiency Ratio captures actual revenue efficiency. If brand lift improves over two quarters and MER improves over the same period (while holding media mix roughly stable), you have convergent evidence that brand investment is compounding. Neither signal alone is sufficient.
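A minimal sketch of that triangulation with illustrative quarterly numbers:

```python
quarters = ["Q1", "Q2", "Q3", "Q4"]
recall_lift_pp = [2.1, 2.8, 3.5, 4.1]    # from quarterly studies
revenue = [1.8e6, 2.0e6, 2.3e6, 2.6e6]   # blended revenue ($)
ad_spend = [600e3, 620e3, 640e3, 650e3]  # total media spend ($)

for q, lift, rev, spend in zip(quarters, recall_lift_pp, revenue, ad_spend):
    print(f"{q}: recall lift {lift:.1f}pt, MER {rev / spend:.2f}")
# Lift and MER both trending up on a stable media mix is the
# convergent evidence described above.
```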
The brand lift ↔ ROAS tension: why teams get this wrong
The most common failure mode for brand lift programs: the ROAS team controls the budget. When they look at awareness campaigns through the lens of last-click attribution, the campaigns look terrible. No purchase events, high CPM, no ROAS. The brand lift study shows +5 points in purchase intent, but that's a survey result — it doesn't show up in the attribution window. The ROAS team cuts the awareness budget. Six months later, ad fatigue has set in on the performance campaigns, CAC is rising, and the brand has no awareness equity to lean on.
This cycle has been documented extensively. Binet and Field's IPA research names it directly: short-termism in budget allocation is the single largest drag on long-term marketing ROI in their dataset. The brands that outperform over a 3–5 year horizon are the ones that held brand investment constant even when the attribution models couldn't justify it.
The practical resolution is a measurement architecture that doesn't ask brand and performance to compete on the same scorecard. MER handles blended efficiency. CAC trends over time capture whether acquisition is getting harder. Brand lift studies provide the leading indicator that explains why CAC will get easier or harder in the future. Run these in parallel; report them separately; set separate goals.
Brand lift for B2B vs. DTC vs. CPG: different norms, different thresholds
DTC / eCommerce. The most common use case. Meta Brand Lift is the default tool. Recall and purchase intent are the primary metrics. Benchmark: 3–6 point recall lift for a well-executed campaign; 1–3 point purchase intent lift. Study cadence: quarterly for brands spending $100K+/month on awareness. Ehrenberg-Bass lens: focus on category buyers, not existing customers.
B2B / SaaS. Harder to run due to small audience sizes. Google Brand Lift on YouTube is often more practical than Meta for targeting by job title or company size. Benchmarks are weaker because studies are often underpowered. Consider third-party panel studies (Lucid, Dynata) with custom B2B panels — they're more flexible on audience size and allow custom question design. Typical study cost: $15K–$40K for a B2B panel study with 500 exposed / 500 control.
CPG / Retail. The category with the most mature brand lift measurement infrastructure. Nielsen Brand Effect is the gold standard here, with decades of category benchmarks. CPG brands at scale should be tracking brand health (awareness, consideration, favorability, loyalty) on a rolling monthly or quarterly basis via panel studies — not just when running a campaign. Brand lift studies are confirmatory; ongoing tracking is the baseline.
Common mistakes (and how to avoid them)
Running a study on an audience that's already brand-aware. If your control group already shows a 40% brand awareness baseline, there's limited headroom to lift. Run studies against cold category audiences where baseline awareness is below 20–25%.
Confusing ad recall with brand building. High ad recall ("I saw an ad from this brand") doesn't mean brand equity is building. If people recall the ad but can't recall the brand name, the creative is memorable but not branded. Watch the gap between prompted ad recall and unprompted brand recall — a large gap means your creative isn't carrying brand identifiers effectively.
Running too many concurrent campaigns. If your direct response campaigns are hitting the same audience as your brand lift study, you're polluting the holdout. The control group will be exposed to brand signals through other touchpoints, which inflates its brand metrics and suppresses apparent lift. Run brand lift studies on audience segments isolated from performance retargeting.
One study, one verdict. A single study that shows +2 points of purchase intent does not prove your brand strategy is working. The IPA data requires 12–18 months of consistent brand investment to show market share effects. One study showing underpowered lift should be read as "study design issue" or "campaign underpowered" — not as "brand building doesn't work."
Ignoring the creative breakdown. The highest-ROI output of any Meta Brand Lift study is the per-creative lift data. Teams that look at only the campaign-level number are throwing away half the value. Always pull the creative breakdown. Feed it into your creative testing workflow.
Frequently Asked Questions
What is a "good" brand lift score? Context-dependent, but rough benchmarks: 2–4 points of ad recall lift is moderate; 5–8 points is strong; 10+ is exceptional (usually reserved for high-novelty creative or first-to-market messaging). Purchase intent lift of 1–3 points is meaningful; above 4 points is strong. These benchmarks come from Meta and Google's own published case studies and the IPA databank — your category may vary significantly.
How often should we run brand lift studies? Quarterly for brands spending $50K+/month on awareness campaigns. For brands spending $20K–$50K/month, semi-annually is more realistic given the minimum study spend requirements. For brands below $20K/month in awareness, a DIY post-hoc survey (directional) plus MER tracking is likely more cost-effective than a formal study.
Can brand lift replace ROAS as a campaign KPI? No. They measure different things on different timescales. ROAS measures short-term revenue efficiency per dollar of ad spend. Brand lift measures incremental shifts in mental state. You need both — and you need a measurement architecture that doesn't force them to compete for the same budget line.
Why does my brand lift study come back "inconclusive"? Almost always an underpowering issue — the campaign didn't generate enough impressions or completed surveys to detect a lift above statistical noise. Check whether you hit the minimum impression threshold, whether the study period was long enough, and whether your audience was large enough. Meta and Google will often run a study they know is borderline — they don't turn it off if spend drops below threshold mid-flight. This is a design flaw you need to catch upfront.
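You can estimate this risk before booking the study by inverting the power calculation: given the surveys you expect to collect, what is the smallest lift the study could detect? A sketch with assumed inputs:

```python
from scipy.stats import norm

def min_detectable_lift(baseline: float, n_per_group: int,
                        alpha: float = 0.10, power: float = 0.80) -> float:
    """Smallest lift (in points) detectable at the given sample size."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Approximate both groups' variance at the baseline rate.
    se = (2 * baseline * (1 - baseline) / n_per_group) ** 0.5
    return z * se * 100

# 400 completed surveys per group at a 20% awareness baseline:
print(f"{min_detectable_lift(0.20, 400):.1f}pt")  # 7.0pt
```

If realistic lifts in your category run 2–4 points and your study can only detect 7, "inconclusive" was the predictable outcome.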
How does brand lift relate to incrementality testing? Incrementality testing (holdout tests for purchase behavior) and brand lift (holdout tests for survey responses) use the same core methodology but measure different outcomes. Incrementality tests whether your ads drove incremental purchases that wouldn't have happened otherwise. Brand lift tests whether your ads changed how people think and feel about your brand. The ideal measurement program runs both — incrementality on performance campaigns, brand lift on awareness campaigns — with separate holdouts.
Key citations and further reading
- Meta Brand Lift documentation — official setup guide, methodology, and question types
- Google Brand Lift (Ads Help Center) — Google's lift measurement for YouTube and DV360 campaigns
- Nielsen Brand Effect — cross-platform brand health measurement product
- Les Binet and Peter Field, The Long and the Short of It — IPA Effectiveness Databank research on brand vs. performance investment (available at ipa.co.uk)
- Byron Sharp, How Brands Grow — Ehrenberg-Bass Institute foundational research on mental and physical availability
- Google Think with Google: Brand Lift best practices — applied case studies and benchmarks for video brand lift on YouTube
Brand lift is a leading indicator. Your ROAS and CAC trends are lagging indicators. Run both. The teams that win over 3–5 year horizons are the ones who built measurement infrastructure for both timescales before they needed it — not after the attribution window stopped making sense.