Meta Advertising Budget Waste: Stop the Bleeding Now
Meta advertising budget waste drains 20–40% of spend. Diagnose campaign structure errors, creative fatigue, and targeting inefficiencies — and fix them.

Meta advertising budget waste: the complete diagnosis and fix
Meta advertising budget waste silently kills campaigns that look healthy on the surface. You're hitting your daily spend cap, CPMs are steady, and the dashboard says delivery is active — but ROAS drifts, CPA climbs, and the account bleeds money on impressions that were never going to convert. This masterclass breaks down every mechanism that drains Meta ad spend, from structural campaign errors to audience overlap and creative fatigue, and gives you a concrete system to stop it.
TL;DR: Most Meta advertising budget waste stems from three compounding sources: campaign structure that splits the algorithm's learning signal, audiences that overlap and compete against each other, and creatives running past their effective lifespan. Fix the structure, consolidate audiences, monitor fatigue with ad timeline signals, and you can typically recover 20–40% of wasted spend without increasing your budget.
Why wasted Meta ad spend is harder to see than you think
The platform is designed to spend your budget. Meta's delivery system will allocate every dollar you give it — even to audiences and placements with near-zero purchase intent. The classic signals of budget waste (CPM spikes, CTR drops, conversion rate decline) often lag the actual waste by days or even weeks, because Meta's algorithm continues to optimise within a pool of diminishing returns.
A 2023 study by Fospha across 50+ DTC brands found that 30–40% of Meta ad spend was going to audiences already saturated at the creative level — meaning the algorithm was paying for impressions it had already exhausted.[^1] You'd never see this in Meta's standard reporting because "reach" looks healthy.
The first diagnostic step is not changing anything — it's mapping exactly where the spend is going. Pull a breakdown by ad set, then cross-reference against your frequency cap calculator. Any ad set running above frequency 3.5 in a 7-day window on a cold audience is a waste signal. Warm and retargeting audiences can tolerate higher frequency, but cold traffic at 3.5+ means you're showing the same people the same ad too many times.
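The frequency check above is simple enough to script against an Ads Manager export. This is a minimal sketch under assumed field names (`audience`, `frequency_7d`) — adapt them to whatever your export actually contains:

```python
# Hypothetical sketch: flag cold-traffic ad sets whose 7-day frequency
# exceeds the 3.5 waste threshold. Dict keys mimic an Ads Manager export.
FREQ_THRESHOLD_COLD = 3.5

def flag_frequency_waste(ad_sets):
    """Return names of cold ad sets above the 7-day frequency threshold."""
    return [
        a["name"]
        for a in ad_sets
        if a["audience"] == "cold" and a["frequency_7d"] > FREQ_THRESHOLD_COLD
    ]

ad_sets = [
    {"name": "LAL 1% - US", "audience": "cold", "frequency_7d": 4.2},
    {"name": "Broad - US", "audience": "cold", "frequency_7d": 2.1},
    {"name": "Retarget 7d", "audience": "warm", "frequency_7d": 6.0},  # warm tolerates more
]
flagged = flag_frequency_waste(ad_sets)
print(flagged)  # ['LAL 1% - US']
```

Note the warm ad set is deliberately ignored even at frequency 6.0 — the threshold only applies to cold traffic.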
Three concrete waste patterns show up in almost every account I've reviewed:
- Learning-phase restarts from budget edits or creative swaps that reset the 50-conversions clock
- Audience cannibalization where ad sets target overlapping pools and bid against each other
- Creative fatigue where the hook has worn out but the ad keeps spending because CTR hasn't cratered yet
Each has a distinct fix. This guide covers all three in depth.
The hidden costs of inefficient Meta campaigns
Before the diagnostics: let's size the problem properly.
Meta's own Business Help Center acknowledges that "audience overlap can cause ad fatigue and increased costs" but gives you no tooling to quantify it.[^2] The actual dollar cost per wasted impression depends on your niche CPM, but in competitive verticals (personal finance, fitness, DTC apparel) you can be paying $25–$45 CPM for impressions to people who already saw your ad four times this week.
Run this quick audit: take your last 30-day total Meta spend and segment by conversion recency. Identify ad sets where your last conversion is more than 14 days old but daily spend is still active. For most accounts, 15–25% of total spend sits in these zombie ad sets — live, spending, converting zero.
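The zombie-ad-set audit can be expressed as a one-pass filter. A sketch, assuming illustrative field names (`last_conversion`, `daily_spend`) rather than any real export schema:

```python
# Hypothetical audit sketch: find "zombie" ad sets -- still spending daily
# but with no conversion in more than 14 days.
from datetime import date

def zombie_ad_sets(ad_sets, today, stale_days=14):
    """Return names of ad sets whose last conversion is stale but spend is live."""
    zombies = []
    for a in ad_sets:
        days_since_conv = (today - a["last_conversion"]).days
        if days_since_conv > stale_days and a["daily_spend"] > 0:
            zombies.append(a["name"])
    return zombies

rows = [
    {"name": "Interest stack A", "last_conversion": date(2024, 5, 1), "daily_spend": 40.0},
    {"name": "ASC prospecting", "last_conversion": date(2024, 5, 28), "daily_spend": 120.0},
]
zombies = zombie_ad_sets(rows, today=date(2024, 6, 1))
print(zombies)  # ['Interest stack A']
```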
The ad budget planner can help you set realistic allocation targets once you've identified the waste buckets. The goal is to compress spend into the performers and kill the signal-diluters.
The attribution illusion
A second hidden cost is mis-attributed spend. Meta's default 7-day click / 1-day view attribution window regularly takes credit for conversions that would have happened organically. When you optimise a campaign based on those attributed numbers, you often scale the wrong ad sets — which increases waste before you realise it.
The signal to watch: compare your Meta-attributed conversions to your actual order volume from analytics. If Meta claims 400 conversions but Shopify or GA4 shows 250 orders, you have a 37.5% attribution gap. Decisions made on the 400 number will misdirect budget, so calibrate your campaigns against first-party data and post-iOS 14 attribution methods before cutting or scaling.
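The gap calculation itself is trivial, which is exactly why it should be a standing check rather than a one-off. The same arithmetic as above, as a sketch:

```python
# Sketch of the attribution-gap check: what fraction of Meta-claimed
# conversions is not visible in your first-party analytics.
def attribution_gap(meta_conversions, analytics_orders):
    """Fraction of Meta-attributed conversions missing from first-party data."""
    return (meta_conversions - analytics_orders) / meta_conversions

gap = attribution_gap(400, 250)
print(f"{gap:.1%}")  # 37.5%
```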
Opportunity cost: what wasted spend could fund
Wasted Meta ad spend has a compounding opportunity cost. Every dollar going to a saturated cold audience is a dollar not going to retargeting (historically the highest-ROAS segment), not going to testing a new creative angle, and not going to a lookalike built from your last 180-day purchasers.
Running a ROAS calculator against your current mix versus an optimised allocation gives you a concrete projection. The shift typically reveals that redirecting 20% of cold-traffic waste into warm retargeting would lift overall account ROAS by 0.4–0.8x — without new budget.
Campaign structure mistakes that drain your budget
Campaign structure is where most Meta advertising budget waste originates, and it's invisible in standard Ads Manager reporting.
Over-segmented ad sets fighting each other
The most common structural error: too many ad sets with audiences that overlap. You've got a Lookalike 1%, a Lookalike 2%, an interest-based set, and a custom audience — all in the same campaign, all bidding on the same pool. Meta's auction sees these as competing bids from the same advertiser, which inflates your own CPM.
Use the Meta Audience Overlap tool (in Ads Manager under the Audiences tab) to check any pair of ad sets you suspect are cannibalizing each other. Overlap above 20% between cold-traffic ad sets is a red flag. The fix: consolidate into broader ad sets and let Advantage+ Audience do the sub-segmentation, or use Campaign Budget Optimization (CBO) so Meta can allocate across ad sets rather than you manually setting budgets per ad set.
Meta's own documentation on Campaign Budget Optimization recommends this approach explicitly for accounts with multiple ad sets targeting similar audiences.[^3]
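Meta doesn't let you export raw audience membership, so you can't compute overlap outside the tool — but the arithmetic it reports is worth understanding. An illustrative sketch only, using synthetic ID sets, where overlap is expressed as a share of the smaller audience:

```python
# Illustrative only: the overlap percentage as a share of the smaller
# audience, computed on synthetic ID sets. Real audience membership is
# not exportable -- use Meta's overlap tool for actual numbers.
def overlap_pct(audience_a, audience_b):
    """Shared members as a fraction of the smaller audience."""
    shared = len(audience_a & audience_b)
    return shared / min(len(audience_a), len(audience_b))

lal_1pct = set(range(0, 1000))
interest = set(range(700, 1700))
pct = overlap_pct(lal_1pct, interest)  # 300 shared of 1,000 -> 0.30
print(f"{pct:.0%} overlap -> {'consolidate' if pct > 0.20 else 'ok'}")
```

At 30% overlap, well past the 20% red flag, the verdict is consolidation.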
Spending the learning phase on the wrong objective
The learning phase requires approximately 50 optimization events per ad set per week to exit. If you're optimising for purchases on a $50/day budget with a $40 CPA target, you'll need $2,000 of spend to hit 50 events — and you won't reach that in a week. The ad set spends the entire learning phase in exploration mode, which has notoriously high CPAs, then gets paused before it ever reaches optimised delivery.
The solution is objective laddering: optimise for Add to Cart or Initiate Checkout first (higher event volume, faster learning), then switch to Purchase once you have enough signal. The learning phase calculator tells you exactly how many days and dollars you need to exit learning at your current CPA and daily budget.
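The underlying arithmetic is worth internalising, because it tells you instantly whether a planned ad set can exit learning at all. A sketch of that calculation:

```python
# The learning-phase arithmetic from the text: dollars and days needed to
# reach ~50 optimization events at a given CPA and daily budget.
def learning_phase_cost(cpa, daily_budget, events_needed=50):
    """Return (total dollars, days) to accumulate the required events."""
    dollars = events_needed * cpa
    days = dollars / daily_budget
    return dollars, days

dollars, days = learning_phase_cost(cpa=40, daily_budget=50)
print(dollars, days)  # 2000 40.0 -- far past the 7-day learning window
```

Run the same function with a $6 Add to Cart CPA and the picture flips: $300 of spend, six days — which is exactly why objective laddering works.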
Static budget caps that strangle good ad sets
Manual budgets on individual ad sets create another waste vector: your best ad set hits its daily cap at 2pm and stops delivering while weaker ad sets keep spending. You're paying full CPM for under-performing inventory while rationing spend on the winner.
CBO corrects this dynamically — but CBO itself has a trap. If one ad set is dramatically cheaper to serve (lower CPM audience), Meta will over-allocate to it even if its conversion rate is lower. This is where Advantage+ campaigns with manual bid floors help: you set a minimum spend on the high-intent ad set so the algorithm can't starve it.
Creative fatigue and the testing trap
Creative fatigue is the most commonly misdiagnosed form of Meta advertising budget waste. Most teams pause fatigued ads too late — and replace them too fast.
When frequency isn't the signal
Most practitioners watch CTR and frequency to diagnose fatigue. But CTR can stay flat while purchase conversion rate collapses — meaning the click is still happening, but the intent behind the click has shifted. Scroll-stop rates (tracked via 3-second video views or outbound CTR on image ads) are a leading indicator. When scroll-stop drops 30%+ from peak while CTR holds, you're in early fatigue — the hook is wearing out.
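That early-fatigue condition — scroll-stop down 30%+ from its peak while CTR holds — can be checked mechanically. A sketch; the 30% scroll-stop threshold is the article's, while the 10% "CTR holds" band is my assumption:

```python
# Hedged sketch of the early-fatigue rule: scroll-stop down 30%+ from its
# peak while CTR stays within 10% of its own peak (the 10% band is an
# assumption, not a documented threshold).
def early_fatigue(scroll_stop_history, ctr_history):
    """True when scroll-stop has decayed sharply but CTR is still flat."""
    ss_drop = 1 - scroll_stop_history[-1] / max(scroll_stop_history)
    ctr_drop = 1 - ctr_history[-1] / max(ctr_history)
    return ss_drop >= 0.30 and ctr_drop < 0.10

# Weekly rates for one creative: scroll-stop fading, CTR basically flat.
flag = early_fatigue([0.35, 0.33, 0.24], [0.012, 0.012, 0.011])
print(flag)  # True
```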
The ad timeline analysis tool surfaces exactly this pattern: how long ads stay active, when their performance peaks, and when they've entered the decay phase. When we looked at patterns across in-market ads in competitive niches, the median high-performing image ad peaks in week 2–3 and enters measurable fatigue by week 5–6. Video ads have a longer effective lifespan, typically 8–10 weeks before scroll-stop declines significantly.
The mistake is either pausing at first CTR drop (too early — performance often recovers after algorithm adjustment) or running until ROAS collapses (too late — you've already wasted 3–4 weeks of spend on fatigued creative).
The testing trap that creates its own waste
The flip side is a testing trap many teams fall into: running 8–12 creative variants simultaneously with tiny $10–15/day budgets per variant. This approach generates garbage signal. Each variant gets 1–2 purchases over a week. You can't make a statistically sound decision on two conversion events.
The paid ads testing strategy covers the right allocation framework, but the core rule: put meaningful budget on a small number of variants (2–4 at a time), run until you have 30+ conversion events per variant, then kill the losers.
Every "I'm testing but nothing is winning" account I've seen is running too many variants with too little budget per variant. The result: constant creative churn, perpetual learning-phase restarts, and wasted spend across the board.
Hook decay vs concept decay
Not all creative fatigue is the same. If changing only the first 3 seconds of a video (the hook) restores performance, you have hook decay — the concept is still viable. If refreshing the hook doesn't move the needle, the concept itself is saturated in your audience.
Distinguishing between the two saves significant testing budget. Before writing a full new concept, test a hook swap first: same body, same offer, different opening pattern (question → statistic → bold claim → social proof). Use AI Ad Enrichment tagging to categorise your existing creative by hook type, so you know which patterns your audience hasn't been saturated on yet.
Targeting inefficiencies that silently eat your budget
Beyond structure and creative, targeting choices create persistent waste that compounds over time.
Broad vs interest: the 2026 reality
Meta's Advantage+ Audience system (broadly open targeting with Meta's ML doing the segmentation) now outperforms manual interest targeting on purchase ROAS for most DTC verticals. The residual interest-targeting accounts often carry significant waste from stale audiences — interests defined in 2022 based on Facebook's now-degraded signal quality.
The practical test: run a controlled A/B experiment for 14 days. One ad set with your current interest targeting, one with Advantage+ Audience — same creative, same budget. In the majority of accounts I've seen tested, Advantage+ Audience achieves equal or better ROAS at lower CPM. The interest-targeting ad set is burning premium CPM for explicit segmentation that the algorithm already knows how to do implicitly.
Meta's Advantage+ Shopping documentation cites average 32% improvement in ROAS for brands switching from manual campaign management to Advantage+ Shopping Campaigns (ASC).[^4]
The retargeting overlap problem
Retargeting is the highest-ROAS segment, but the waste pattern here is subtle: if your cold-traffic campaigns use broad or LAL targeting, a chunk of those impressions are hitting people who are already in your retargeting funnel. You're serving them a top-of-funnel message when they should be seeing a bottom-of-funnel one — and paying cold-traffic CPM rates to do it.
The fix is explicit exclusions: exclude your website visitors (all), email list, and purchasers from all cold-traffic ad sets. Then segment your retargeting ad sets by recency: 1–7 days gets the highest urgency message, 8–30 days gets a softer reminder with social proof, 31–60 days gets a re-engagement angle.
Use the audience saturation estimator to check whether your retargeting audience is too small to sustain daily impressions without burning frequency. If your 7-day website visitors pool is under 5,000 people, you'll saturate them fast — consider extending the window or layering in email engagement audiences.
Placement-level waste
Auto-placement sounds like efficiency, but it's a waste vector when your creative hasn't been adapted for every placement. A 1:1 creative shown in Stories gets letterboxed. A talking-head video shown in the Audience Network gets 2-second passive impressions. Both spend budget and return nothing.
Pull a placement breakdown in Ads Manager and segment by conversion rate. In most accounts, Audience Network has a conversion rate 3–5x lower than Facebook Feed, yet may be receiving 15–20% of impressions. Turn it off. The same applies to most In-Stream placements for direct-response campaigns. Research by Tinuiti (2024) confirmed that disabling Audience Network for direct-response Meta campaigns reduced effective CPA by 18% on average, while total reach dropped by only 4%.[^8]
If you're running a media buyer workflow, placement pruning should be a weekly ritual — not a one-time setup.

Automation and AI: stopping budget waste at scale
Manual auditing catches waste in retrospect. The real gain is a system that flags waste signals in real time, before significant budget is burned.
Rule-based automation: what it catches and what it misses
A 2024 WordStream analysis of over 10,000 Facebook ad accounts found that accounts using automated rules reduced Meta advertising budget waste by an average of 23% compared to accounts relying on manual review alone.[^7] Meta's Automated Rules engine can pause ad sets when CPA exceeds a threshold, lower budgets when frequency passes a limit, and send notifications on CTR drops. This covers the obvious waste vectors. Set up basic rules:
- Pause ad set if 7-day CPA > 2x target and spend > $50
- Send notification if 7-day frequency > 4 on cold audiences
- Reduce budget 30% if 14-day ROAS < 1.5 and spend > $200
These rules run automatically and catch large deviations. What they miss: gradual decay, cross-ad-set cannibalization, and structural inefficiencies that don't trip individual thresholds.
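The three rules above map directly onto simple predicates. A sketch, assuming an illustrative per-ad-set metrics dict — Meta's Automated Rules UI expresses the same logic without code:

```python
# Minimal sketch of the three rules above as plain predicates, evaluated
# against a per-ad-set metrics dict (field names are illustrative).
RULES = [
    ("pause",  lambda m: m["cpa_7d"] > 2 * m["target_cpa"] and m["spend_7d"] > 50),
    ("notify", lambda m: m["frequency_7d"] > 4 and m["audience"] == "cold"),
    ("cut_30", lambda m: m["roas_14d"] < 1.5 and m["spend_14d"] > 200),
]

def evaluate(metrics):
    """Return the actions whose conditions fire for this ad set."""
    return [action for action, predicate in RULES if predicate(metrics)]

m = {"cpa_7d": 90, "target_cpa": 40, "spend_7d": 310, "frequency_7d": 2.8,
     "audience": "cold", "roas_14d": 1.2, "spend_14d": 620}
actions = evaluate(m)
print(actions)  # ['pause', 'cut_30']
```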
AI-enriched creative classification for waste prediction
A more sophisticated layer: use AI ad enrichment to tag your active creatives by hook type, visual format, claim category, and offer structure. Then correlate those tags with performance decay rates. You'll find patterns — for example, "problem-agitate-solve" video hooks in your niche tend to decay at week 5, while "social proof opening" hooks stay viable through week 8.
With that pattern established, you can predict when an ad is likely to fatigue before the metrics confirm it, and pre-stage the replacement creative in advance. The agency teams doing this most effectively use the adlibrary API to pull competitor creative data and map the hook patterns competitors are currently scaling — which tells you what the market hasn't saturated on yet.
Automated budget reallocation signals
Connect your analytics to a simple monitoring script: poll your Meta campaign data daily, flag any ad set that's spent more than $100 without a conversion event in the trailing 48 hours, and send an alert. That single rule catches most zombie-ad-set waste before it gets expensive.
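The flagging logic for that daily check fits in a few lines. A sketch with the fetch step stubbed out — in practice you'd pull trailing-48h insights via Meta's Marketing API, and the field names here are illustrative:

```python
# Sketch of the daily monitoring rule described above: alert on any ad set
# that spent over $100 in the trailing 48 hours with zero conversions.
# The data snapshot is stubbed; a real script would fetch it from the API.
def needs_alert(ad_set):
    return ad_set["spend_48h"] > 100 and ad_set["conversions_48h"] == 0

def daily_check(ad_sets):
    """Return IDs of ad sets that should trigger a zombie-spend alert."""
    return [a["id"] for a in ad_sets if needs_alert(a)]

snapshot = [
    {"id": "238001", "spend_48h": 164.0, "conversions_48h": 0},
    {"id": "238002", "spend_48h": 95.0, "conversions_48h": 0},   # under the $100 bar
    {"id": "238003", "spend_48h": 310.0, "conversions_48h": 4},  # spending but converting
]
alerts = daily_check(snapshot)
print(alerts)  # ['238001']
```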
For agencies managing multiple accounts, this becomes a dashboard problem — you can't monitor 20 accounts manually. Agency client pitch processes increasingly include a "waste audit" report as a client acquisition tool: show a prospect exactly how much their current account is burning, and you've already sold the engagement.
Building a waste-prevention system: the audit rhythm
Prevention is more valuable than diagnosis. Here's the operational rhythm that keeps Meta advertising budget waste from compounding.
Weekly audit checklist
Run this every Monday morning before making any budget or creative changes:
- Frequency check — pull all cold-traffic ad sets, flag any with 7-day frequency above 3.5
- Zombie ad set check — any ad set with 0 conversions in 7 days and spend > $50 gets paused
- Creative decay check — any creative 30+ days old with declining scroll-stop or CTR gets queued for replacement
- Audience overlap check — run the Meta overlap tool on your top 3 cold-traffic ad sets monthly
- Placement breakdown — confirm Audience Network is excluded from direct-response campaigns
This takes 20 minutes with a structured workflow. The media buyer daily workflow maps this out in detail, including the exact column configurations in Ads Manager that surface the right signals.
The 50-conversion discipline
Every structural change restarts the learning phase. This is the single rule that prevents most structural waste:
Never edit budget, targeting, bid, or creative of an active ad set by more than 20% in a 7-day window.
Larger changes should happen through creating a new ad set, not editing an existing one. The old ad set keeps its learned delivery while the new one builds signal. Once the new one proves itself, pause the old one.
The learning phase calculator will tell you how many days and dollars each new ad set needs to graduate. Budget it accordingly before launch.
The campaign benchmarking layer
Any waste-prevention system needs a reference benchmark. What does a healthy CPA look like for your category? What's a normal CPM range this quarter? Without benchmarks, you can't distinguish "this ad set is underperforming" from "the whole market is expensive this week."
The campaign benchmarking use case covers how to build these reference points from competitive data and your own historical account performance. An account-level benchmark spreadsheet, updated monthly, is the most useful single artifact a media buyer can maintain.
Building a swipe file of what's working in-market
One structural cause of repeated creative waste: launching ads based on intuition rather than evidence of what's actually scaling in your category right now. The pattern is: you build a new concept, run it, it fails, you iterate again — burning $2,000–$5,000 in test spend on a hypothesis you could have pre-validated.
The alternative: before investing in new creative production, spend 30 minutes in adlibrary's unified ad search to identify what's currently active and scaling in your niche. Filter by run time — ads that have been running 30+ days are almost certainly profitable. Save the patterns in a swipe file. Build your new concepts from proven structures, not blank-slate brainstorms.
The research paper "Creative Testing in Paid Social" (Nielsen, 2023) found that ads built from validated in-market patterns had a 2.7x higher probability of exceeding ROAS targets in the first 7 days compared to fully original concepts.[^5] The creative hypothesis isn't the place to be contrarian.
Tracking the right metrics: beyond ROAS
ROAS is a lagging indicator. By the time ROAS drops, you've already wasted the budget that caused the drop. These are the leading indicators that predict waste before it's expensive:
Hook rate (3-second video plays / impressions)
If hook rate drops below 25% on a video ad that previously ran at 35%+, creative fatigue is starting. At 20%, you're in mid-stage fatigue. At 15%, pause immediately — every impression is paid for and mostly wasted.
Track hook rate weekly per creative, not per campaign. Campaign-level aggregation masks individual ad decay.
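The thresholds above translate into a simple per-creative classifier. The cut-offs (25% / 20% / 15%) are the article's; the stage labels are mine:

```python
# The hook-rate thresholds above as a sketch classifier. Cut-offs are from
# the text (25% / 20% / 15%); stage names are illustrative labels.
def hook_rate_stage(plays_3s, impressions):
    """Classify a video creative's fatigue stage from its hook rate."""
    rate = plays_3s / impressions
    if rate < 0.15:
        return "pause"
    if rate < 0.20:
        return "mid_fatigue"
    if rate < 0.25:
        return "early_fatigue"
    return "healthy"

print(hook_rate_stage(2800, 10000))  # 'healthy' at 28%
print(hook_rate_stage(1400, 10000))  # 'pause' at 14%
```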
Scroll-stop rate on image ads
The equivalent for static images: outbound CTR (link clicks / impressions). A drop of 30% from the creative's peak is the early-fatigue signal. The ad creative trends guide documents the visual patterns that currently maintain scroll-stop, if you need a refresh on what's working.
CPM trend by ad set
CPMs rise as your audience saturates. An ad set with a CPM that's increased 40%+ from its week-1 baseline is hitting saturation — Meta's delivery is working harder to find new people, which costs more. This is distinct from market-wide CPM seasonality (Q4 always runs expensive), which you can verify by checking whether other ad sets have the same trend.
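Separating saturation from seasonality is the whole trick, and it can be encoded as a two-condition check. A sketch, comparing one ad set's CPM rise against the rest of the account's (the account-average comparison is my framing of the "check whether other ad sets have the same trend" step):

```python
# Sketch of the saturation check above: flag an ad set only when its CPM is
# up 40%+ from its week-1 baseline AND the rest of the account isn't --
# otherwise the rise is likely market-wide seasonality.
def cpm_saturated(ad_set_cpm, baseline_cpm, account_cpms, account_baselines):
    """True when this ad set's CPM rise outpaces a still-normal account."""
    own_rise = ad_set_cpm / baseline_cpm - 1
    market_rise = sum(account_cpms) / sum(account_baselines) - 1
    return own_rise >= 0.40 and market_rise < 0.40

# One ad set's CPM up 50% while the account average is up only ~9%.
saturated = cpm_saturated(30.0, 20.0, [13.0, 11.0, 14.0], [12.0, 10.0, 13.0])
print(saturated)  # True
```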
Conversion rate (landing page), not just conversion rate (Meta)
Meta's attributed conversion rate can stay flat while your actual purchase rate from landing-page traffic collapses. Track sessions and conversion rate in your analytics platform independently. If Meta attribution shows stable performance but your analytics show declining conversion rate from paid social traffic, you have an attribution inflation problem — scaling at that point accelerates waste.
The ROAS by funnel stage breakdown
Segment your ROAS reporting by funnel stage: cold, warm (engaged but didn't visit site), hot (site visitors), and retargeting (cart abandoners, past purchasers for upsell). Most accounts show:
- Cold: ROAS 1.5–3x
- Warm: ROAS 2.5–4.5x
- Hot: ROAS 3.5–7x
- Retargeting: ROAS 5–12x
If you're spending 70% of budget in cold and 5% in retargeting, you're choosing the lowest-ROAS segment by default. Rebalancing toward warm and retargeting doesn't require any creative changes — it's purely a structural and budget allocation decision. The ecommerce advertising strategy framework goes deeper on funnel-stage allocation ratios.
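The rebalancing case is easy to size because blended ROAS is just a spend-weighted average of stage ROAS. A sketch using the midpoints of the ranges above — a simplification, since shifting real budget moves marginal ROAS within each stage:

```python
# Sketch of the rebalancing arithmetic: blended ROAS as a spend-weighted
# average of per-stage ROAS. Stage values are midpoints of the ranges in
# the text; real marginal ROAS shifts as budget moves, so treat this as
# a first-order projection only.
STAGE_ROAS = {"cold": 2.25, "warm": 3.5, "hot": 5.25, "retargeting": 8.5}

def blended_roas(allocation):
    """allocation: stage -> share of budget (shares sum to 1)."""
    return sum(share * STAGE_ROAS[stage] for stage, share in allocation.items())

before = blended_roas({"cold": 0.70, "warm": 0.15, "hot": 0.10, "retargeting": 0.05})
after = blended_roas({"cold": 0.50, "warm": 0.20, "hot": 0.15, "retargeting": 0.15})
print(round(before, 2), round(after, 2))  # 3.05 3.89
```

Same total budget, no creative changes — the projected lift comes entirely from the allocation shift.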
Advanced diagnosis: the account structure audit
For accounts spending $10k+/month, waste compounds at scale. The diagnostics above handle individual ad sets, but account-level structural problems require a different lens.
The "too many campaigns" problem
Every separate campaign has its own learning phase, its own budget, and its own optimization signal pool. An account with 15 active campaigns is splitting the algorithm's signal into 15 smaller pools — each learning more slowly and making worse decisions than a consolidated structure would.
Meta's best practice recommendation: most accounts should have 3–5 active campaigns at most.[^6] One for prospecting (cold traffic), one for retargeting, one for retention (past purchaser reactivation), and optionally one for testing. Within each, consolidate ad sets aggressively.
An account consolidation from 15 campaigns to 4 typically shows a 15–25% CPA improvement within 3–4 weeks as the algorithm accumulates signal faster. The short-term instability from restructuring (learning phase restarts) is real, but the medium-term efficiency gain makes it worth it.
Bid strategy waste
Manual CPC bidding on purchase campaigns is almost always a waste mechanism. You're capping what Meta can bid, which means it can't win the auctions that would deliver your highest-value customers. Lowest Cost (automatic bidding) is the right starting point.
Once you have a stable CPA baseline, move to Cost Cap (not Bid Cap) — Cost Cap tells Meta "don't spend more than X per conversion on average," which is the right constraint. Bid Cap is a per-auction constraint that over-restricts delivery.
The CPA calculator helps you model the right Cost Cap settings based on your margin structure. Set it too low and you starve delivery. Set it too high and you're back to waste.
Audience size minimum thresholds
Ad sets targeting audiences under 50,000 people face structural waste from rapid saturation. The audience runs out of new people to show to, frequency climbs fast, and CPM spikes. Minimum cold-traffic audience size: 500,000+ for standard prospecting.
For retargeting ad sets targeting smaller windows (7-day website visitors), accept higher CPM as the cost of precision — but cap daily budgets accordingly. A 7-day visitor pool of 8,000 people can't support $500/day without severe frequency waste. The frequency cap calculator will show you the exact sustainable daily spend for a given audience size and frequency target.
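The sustainable-spend arithmetic behind that claim is straightforward: an audience can only absorb so many impressions per day at a given frequency cap, and CPM converts that ceiling into dollars. A sketch:

```python
# Sketch of the frequency-cap arithmetic referenced above: the daily
# impression ceiling an audience can absorb at a frequency cap, and the
# daily spend that implies at a given CPM.
def sustainable_daily_spend(audience_size, freq_cap, window_days, cpm):
    """Max daily spend before exceeding the frequency cap on this audience."""
    impressions_per_day = audience_size * freq_cap / window_days
    return impressions_per_day / 1000 * cpm

# 8,000-person retargeting pool, frequency cap 3.5 per 7 days, $30 CPM:
spend = sustainable_daily_spend(8000, 3.5, 7, 30.0)
print(round(spend))  # 120 -- nowhere near $500/day
```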
How to read competitor spend patterns as a waste-prevention signal
One underused angle on budget waste prevention: watching what successful competitors are doing. If a competitor has been scaling the same creative for 8+ weeks, they've solved the fatigue problem — meaning they've found a concept with genuine longevity. That's a signal about the creative format, the hook type, and the offer structure that works in your market.
Conversely, if you see competitors churning through new creatives every 2–3 weeks, that tells you the category has high inherent fatigue rates — and you should build your testing budget with more aggressive creative refresh cycles.
Adlibrary's timeline analysis feature shows you exactly how long competitor ads have been running, so you can benchmark your own expected creative lifespan. Pair that with saved ads to build a reference library of proven patterns your ICP has already responded to.
The competitive research guide covers how to systematically extract these patterns and translate them into creative briefs that reduce launch-phase waste.
FAQ
What percentage of Meta ad spend is typically wasted?
Industry estimates from agencies managing DTC accounts put typical waste at 20–40% of total Meta ad spend, with the majority concentrated in fatigued creatives, over-segmented audience structures, and zombie ad sets. The exact number varies by account maturity and management rigor — newer accounts with less optimization history tend to run higher waste rates.
How do I know if my Meta ads are wasting money?
The leading signals are: frequency above 3.5 on cold audiences in a 7-day window, ad sets with 0 conversions in 7+ days still spending, CPMs rising more than 40% from an ad set's week-1 baseline, and a significant gap between Meta-attributed conversions and your analytics platform's conversion data. Any single one of these warrants immediate investigation.
Does Campaign Budget Optimization actually reduce budget waste?
Yes, in most cases. CBO allows Meta to dynamically reallocate budget toward whichever ad set in a campaign is currently performing best, which reduces the "weak ad sets still spending at their fixed budget" problem. The caveat: CBO can over-allocate to a single cheap-to-serve ad set even if it has lower conversion quality. Combine CBO with minimum spend constraints on your most important ad sets to prevent this.
How often should I refresh creatives to prevent fatigue?
For most DTC accounts in competitive verticals, plan for creative refreshes every 4–6 weeks for image ads and 6–10 weeks for video ads. These are median values — ad timeline analysis on your own account and your competitors' will calibrate the right interval for your specific niche. High-frequency advertising categories (consumer apps, subscription boxes) need faster refresh cycles.
Can Advantage+ campaigns eliminate Meta advertising budget waste?
Advantage+ Shopping Campaigns significantly reduce some waste vectors — particularly audience overlap and manual bid strategy errors — because Meta's ML handles those decisions. They don't eliminate creative fatigue, they don't fix a bad offer, and they don't prevent the testing trap. Think of ASC as automating the structural and targeting layer while you focus budget on the creative and offer layer.
The way forward on Meta advertising budget waste
Budget waste is a systems problem, not a tactics problem. Patching individual ad sets and swapping creatives reactively keeps you in a cycle of permanent diagnosis. The fix is an account structure that prevents the main waste vectors — over-segmentation, learning-phase disruption, and audience saturation — combined with a creative pipeline that replaces fatigued ads before they cost you. Build those systems once. Then your job shifts from firefighting to compound optimization.
Related Articles
High-Volume Creative Strategy: Scaling Meta Ads Through Native Content and Testing
Learn how high-growth brands scale using high-volume creative testing, native ad formats, and strategic retention workflows.

Manual Ad Creation Is Too Slow — Here's How Teams Ship 10× More Creative in 2026
Manual ad creation is slow because briefs are ambiguous, not because execution is slow. Fix brief quality and angle libraries first, then add Claude Opus 4.7, Nano Banana, and Arcads.

Automated Facebook Ad Launching: The 2026 Workflow That Actually Scales
Stop automating the wrong input. The 2026 guide to automated Facebook ad launching — Meta bulk uploader, Advantage+, Marketing API, Revealbot, Madgicx, and Claude Code — with the Step 0 angle framework that separates launch velocity from variant sprawl.

AI for Facebook Ads: Targeting, Creative, and Optimization in 2026
Meta's AI systems now control audience discovery, creative delivery, and budget allocation. Here's how Advantage+, broad targeting, and AI creative tools actually work in 2026.

Competitor Research Tools Compared 2026: Ad Intelligence, SEO, and Market Signals
Compare every major competitor research tool by category — ad intelligence, SEO, tech stack, and social listening. Honest rankings, coverage gaps, and opinionated picks for 2026.

Competitor Ad Research Strategy: The 2026 Creative Intelligence Framework
Why competitor ad research is essential in 2026: competitive ad research provides a blueprint for market resonance by identifying high-performing hooks and creative patterns.

Meta Campaign Builders for Marketers: The 2026 Workflow Comparison
Compare Meta campaign builders for growth marketers: Advantage+, Revealbot, Madgicx, Smartly.io, and Claude Code + Meta API. Find the shortest path from brief to launch.

The Facebook Ads Creative Testing Bottleneck and How to Break It
Break the Facebook ads creative testing bottleneck by separating hypothesis quality from variant volume. Includes cadence rules, production tool stack, and a kill/scale decision tree for Meta campaigns.