
Attribution Window Settings: The 2026 Reality

What attribution windows actually measure post-iOS 14, and the triangulation stack that replaces blind faith in them.

[Figure: timeline diagram comparing view-through and click-through attribution windows, with certainty gradients across the measurement period]

The attribution window setting on your Meta, Google, or TikTok account is the single field most media buyers misunderstand in 2026. It looks like a measurement knob. It is actually a credit-assignment policy that platforms tune to flatter their own reporting. After iOS 14, the gap between what the window counts and what your bank account sees has widened to the point where attribution windows alone are no longer trustworthy as a steering signal. This guide unpacks how each platform's window works today, where the numbers go wrong, and the triangulated stack that experienced buyers actually use to decide what to scale.

TL;DR: An attribution window is the time-bound rule a platform uses to credit itself with a conversion after a click or view. Meta defaults to 7-day click + 1-day view, Google Ads runs a 30-day click on Search by default, and TikTok ships 7-day click + 1-day view as standard. Post-iOS 14 signal loss makes any single window unreliable, so 2026 buyers triangulate platform windows with CAPI, MMM, and incrementality tests rather than treating one number as truth.

What an attribution window actually is

An attribution window is a time-bound rule that decides whether a platform takes credit for a conversion. The clock starts when a user touches an ad — clicks, views, or engages — and stops at a fixed interval afterward. If the conversion lands inside that interval, the platform claims it. If it lands outside, the platform forgets the touchpoint ever happened.

Two parameters define every window. The lookback duration (1 day, 7 days, 30 days) sets how far back a conversion can be tied to an ad. The interaction type (click-through, view-through, engaged-view) sets what kind of touch counts as eligible. A conversion that arrives 8 days after a click on a 7-day-click setting is invisible to the system, even though the click clearly contributed.
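The two-parameter rule reduces to a few lines of code. Here is a sketch of the credit check under a Meta-style 7-day-click / 1-day-view setting (function and table names are illustrative, not any platform's actual API):

```python
from datetime import datetime, timedelta

# Illustrative credit rule: lookback days per eligible interaction type.
ELIGIBLE_TOUCHES = {"click": 7, "view": 1}

def platform_claims_credit(touch_type: str, touch_time: datetime,
                           conversion_time: datetime) -> bool:
    """Return True if the conversion lands inside the lookback window."""
    lookback = ELIGIBLE_TOUCHES.get(touch_type)
    if lookback is None:
        return False  # interaction type not eligible under this setting
    delta = conversion_time - touch_time
    return timedelta(0) <= delta <= timedelta(days=lookback)

click = datetime(2026, 3, 1, 12, 0)
print(platform_claims_credit("click", click, click + timedelta(days=6)))  # True
print(platform_claims_credit("click", click, click + timedelta(days=8)))  # False: day 8 is invisible
print(platform_claims_credit("view", click, click + timedelta(days=2)))   # False: 1-day view only
```

The day-8 conversion simply vanishes from the report, which is the whole point of the section above: the click contributed, but the policy forgot it.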

This sounds mechanical. It is not. Attribution windows are policy, not physics. Each platform sets its defaults to maximize the credit it can claim within signal limits, which is why two platforms touching the same customer journey will each report the conversion as 100% theirs, and your finance team will see totals that make no arithmetic sense. The window is where the overlap problem starts.

For a deeper walk through the underlying attribution tracking mechanics and how each touch type is logged, the prerequisite reading lives in our 2026 explainer. The shorthand: window = clock, interaction = trigger, conversion = stop event.

Why attribution windows broke after iOS 14

Before April 2021, the iOS Identifier for Advertisers (IDFA) was on by default and the entire ecosystem assumed deterministic match between a click and a conversion. A 28-day-click attribution window worked because the platform could literally watch the user click, leave, browse for three weeks, and convert.

Apple's App Tracking Transparency framework flipped that default. Now, on iOS, app conversions reach advertisers through Apple's privacy-preserving SKAdNetwork API, which delivers postbacks on a delay, with limited fidelity, and only in aggregate, while web conversions run through Meta's Aggregated Event Measurement. Meta documents the impact directly in its AEM docs: a maximum of 8 prioritized events per domain, with 24-72h of modeled latency on iOS.

The downstream effect on attribution windows is brutal. Meta retired the 28-day click window in early 2021. The platform's default collapsed to 7-day click + 1-day view, with even shorter windows for some iOS-heavy advertisers. The iOS 14 attribution dip erased somewhere between 15% and 40% of reported conversions overnight, depending on traffic mix.

What survived is a window system that is partially modeled, partially deterministic, and entirely dependent on whether the user signed into a logged-in surface like Facebook or accepted ATT. None of those conditions are stable. None of them are auditable from your side of the wall. This is the structural reason every serious team rebuilt their measurement stack — see the post-iOS 14 attribution rebuild playbook for the actual sequence.

Step 0: Why the window matters less when you spy with adlibrary

Before you tune a single window setting, change the question. Instead of "what does the platform say converted?" ask "what is in-market right now, and for how long has it been running?" That shift breaks the dependency on attribution math entirely for a large slice of decisions.

Across saved-ads cohorts on adlibrary, the strongest signal of a winning creative is not a reported ROAS — it is longevity. An ad that has been live continuously for 60+ days, across multiple geos, with iterations from the same advertiser, is a winner. Period. No platform window required. The market already voted with the advertiser's wallet, and that vote is visible to anyone watching the ad library directly.

This is the moat. Attribution windows tell you what the platform claims happened after you spent money. Saved-ad longevity tells you what is working before you spend a dollar. The first is a backward-looking estimate compromised by signal loss. The second is forward-looking ground truth, sourced from the public ad library and enriched with ad timeline analysis.

Practical sequence:

  1. Build a saved-ads watchlist of 20-40 in-market competitors using unified ad search. Filter for advertisers in your category running at scale.
  2. Pull the ad timeline view for any creative that has been live more than 30 days. The slope tells you whether the advertiser is leaning in or pulling back.
  3. Layer AI ad enrichment to surface the hook, format, and angle patterns the longevity ads share.
  4. Use the ad creative testing loop to ship your interpretation of the pattern before you trust any internal attribution number on the test.
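The longevity screen in steps 1-2 can be scripted against any export of saved-ad records. A minimal sketch, assuming hypothetical field names (not adlibrary's actual schema):

```python
from datetime import date

# Hypothetical watchlist records; real export fields may differ.
saved_ads = [
    {"advertiser": "BrandA", "first_seen": date(2026, 1, 2),
     "last_seen": date(2026, 3, 10), "geos": 4},
    {"advertiser": "BrandB", "first_seen": date(2026, 2, 28),
     "last_seen": date(2026, 3, 10), "geos": 1},
]

def longevity_days(ad) -> int:
    """Days a creative has been observed live."""
    return (ad["last_seen"] - ad["first_seen"]).days

# Flag creatives matching the "60+ days live, multiple geos" heuristic.
winners = [ad for ad in saved_ads
           if longevity_days(ad) >= 60 and ad["geos"] > 1]
print([ad["advertiser"] for ad in winners])  # ['BrandA']
```

BrandA's 67 days across four geos clears the bar; BrandB's 10-day run is still a test, not a vote.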

Your attribution window now has a job: confirm directionally, not decide. The decision was already made by the market. This sequence is what the creative strategist workflow post calls "find the angle first" — and it is the only step that is fully under your control before signal loss enters the picture.

Attribution windows by platform: the 2026 settings

Defaults vary, but the pattern is consistent: shorter windows on view-through, longer on click-through, with platforms holding view-through credit close to the chest because it is the most contested. Here is what each major platform actually ships in 2026.

Platform | Default click window | Default view window | Max click window | Notes
Meta (Facebook + Instagram) | 7 days | 1 day | 7 days | 28-day click retired April 2021. iOS conversions modeled via AEM. See Meta attribution settings docs.
Google Ads (Search) | 30 days | N/A | 90 days | Data-driven attribution is default for new accounts since 2023 per Google Ads help.
Google Ads (Display + YouTube) | 30 days click | 1 day engaged-view | 90 days | YouTube uses 10-second engaged-view; not the same as a Meta view.
TikTok Ads | 7 days | 1 day | 28 days | TikTok exposes 28-day click, but most accounts run the 7d/1d default per TikTok Events API docs.
LinkedIn Ads | 30 days | 1 day | 30 days | Account-level setting, applies retroactively.
Pinterest | 30 days | 1 day | 30 days | Engaged-view = 10 seconds on video.
Snap Ads | 7 days | 1 day | 28 days | Includes swipe-up + 2s view by default.

Two things to flag. First, default ≠ optimal. The window your account ships with is the one most flattering to the platform's reporting, not necessarily the one most accurate to your actual customer journey. Second, on Meta, the attribution settings field at the ad-set level governs both reporting and optimization — change it mid-flight and you reset learning. The learning phase calculator will tell you the cost of that reset before you click save.

The third quiet trap: cross-platform double counting. With seven platforms each claiming credit on overlapping windows, any single dashboard sum is inflated. The marketing efficiency ratio (MER) framework exists precisely because platform-reported ROAS does not survive contact with finance.
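The double-counting problem is plain arithmetic. With toy numbers (assumed for illustration, not benchmarks):

```python
# Each platform credits itself within its own window, so overlapping
# journeys get claimed more than once across dashboards.
platform_reported = {"meta": 410, "google": 295, "tiktok": 180}
actual_orders = 600  # what the order system / finance sees

claimed = sum(platform_reported.values())   # 885 claimed conversions
overclaim = claimed / actual_orders         # 1.475x over-counted
print(f"platforms claim {claimed} of {actual_orders} orders "
      f"({overclaim:.2f}x over-counted)")
```

Summing dashboards here reports 885 conversions against 600 real orders, which is exactly the inflation MER is built to catch.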

View-through vs click-through impact by funnel stage

The window is one half of the story. The interaction type is the other. View-through credit is the most-debated number in paid social because it is the easiest to inflate and the hardest to verify. Below is how each funnel stage actually responds.

Funnel stage | Click-through reliability | View-through reliability | Recommended window | Why
Cold prospecting (TOF) | Medium | Low | 7d click + 1d view | View-through here mostly captures users already in-market. Treat with skepticism.
Mid-funnel retargeting | High | Medium | 7d click + 1d view | Click intent is genuine; view-through often double-counts organic recall.
Bottom-funnel retargeting | Very high | High | 1d click + 1d view | Tighten the window; longer windows hide cannibalization of organic + branded search.
Branded search defense | High | N/A | 1d click | Anything beyond 1 day is platform credit-grabbing on already-converted users.
Awareness / brand campaigns | Low | Medium | 7d click + 1d view | Use MMM instead. The window is decorative here.

The pattern: the closer to the bottom of the funnel, the shorter the window should be. View-through credit at TOF is mostly noise — the user was going to convert anyway, and the impression coincided with the journey. View-through credit at BOF is suspicious for the mirror-image reason: a user who saw your ad and converted that day was already deeply in your funnel.

This is why bottom-funnel ROAS measured with a 7-day click + 1-day view window can look 3-4x healthier than incrementality testing reveals it to be. The death of attribution write-up walks through specific cases where 4.5x reported ROAS dropped to 1.2x incremental ROAS once tested. The window did not lie — it just answered a question that was not the one you needed answered.

A practical heuristic from the trenches: if your ad set runs both prospecting and retargeting in one structure (Advantage+ Shopping does this by default), the platform-reported window credits retargeting touches as if they were prospecting wins. The Andromeda campaign structure post walks through how to read that report without being fooled.

The triangulation stack that replaces window-only thinking

No single window survives serious scrutiny. The stack that does — used by every operator running >$500k/mo on Meta in 2026 — has four layers, each correcting for the limits of the others.

Layer 1: Platform attribution window. Keep it. It is the fastest signal, refreshes hourly, and is fine for in-the-moment optimization decisions like turning off losing creatives. Use Meta's 7d/1d, Google's data-driven default, TikTok's 7d/1d. Do not retune unless you have an attribution rebuild reason.

Layer 2: Conversions API + server-side tracking. Restore the deterministic match the iOS 14 changes broke. A properly implemented pixel + CAPI integration lifts EMQ scores into the 8-10 range and recovers 15-30% of attributable conversions on average. Meta's own CAPI docs call out the EMQ benchmarks explicitly. Score yours with the EMQ scorer before you trust your platform numbers.
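Deduplication is the detail that makes Layer 2 work: the server event must carry the same event_id as the browser pixel fires, or Meta counts the purchase twice. A minimal sketch of the event body, following the field names in Meta's CAPI docs (the API version and order ID below are assumptions):

```python
import hashlib
import json
import time

def hash_pii(value: str) -> str:
    """Meta expects SHA-256 of the normalized (trimmed, lowercased) value."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Minimal server-side event. `event_id` must match the browser pixel's
# eventID so Meta can deduplicate the two copies of the same purchase.
event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "event_id": "order-84721",          # hypothetical; shared with the pixel call
    "action_source": "website",
    "user_data": {"em": [hash_pii("Jane.Doe@example.com ")]},
    "custom_data": {"currency": "USD", "value": 129.00},
}
payload = json.dumps({"data": [event]})
# POST payload to https://graph.facebook.com/v21.0/{PIXEL_ID}/events
# with your access token; check the current API version before shipping.
print(event["user_data"]["em"][0][:12])
```

Unhashed or badly normalized user_data fields are the most common reason EMQ scores stall below 6, so the hash_pii step is where most implementations lose their match rate.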

Layer 3: Marketing Mix Modeling (MMM) revival. MMM is back. The clean reason: regression-on-spend models do not depend on user-level signal, so they are immune to iOS 14, cookie deprecation, and any future privacy shift. Tools like Recast and Meta's open-source Robyn make Bayesian MMM accessible at sub-$50k/mo spend levels. Read the MMM resurrection thesis for why this is structural, not a fad.
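A toy version of the regression-on-spend idea, assuming a simple geometric adstock and noise-free synthetic data (real MMMs like Robyn add saturation curves, priors, seasonality, and uncertainty intervals):

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Carryover effect: today's effective spend includes decayed past spend."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(0)
weeks = 52
meta = rng.uniform(10, 50, weeks)    # weekly spend, $k
search = rng.uniform(5, 30, weeks)

# Synthetic ground truth: baseline 100 + channel effects on adstocked spend.
revenue = 100 + 2.0 * adstock(meta) + 3.5 * adstock(search)

# Regress revenue on adstocked spend; no user-level signal involved.
X = np.column_stack([np.ones(weeks), adstock(meta), adstock(search)])
coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(np.round(coef, 2))  # ≈ [100.0, 2.0, 3.5]
```

The key property on display: every input is aggregate weekly spend and revenue, so nothing Apple or Chrome does to identifiers can break it.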

Layer 4: Incrementality testing. Geo-holdout, ghost-bidding, conversion-lift studies. The only layer that produces causal evidence rather than correlational credit. Northbeam's public methodology notes and Triple Whale's pixel + MMM bundle both ship incrementality as standard now because attribution-window numbers alone fail finance review.
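The causal arithmetic behind a geo-holdout is short. The toy numbers below are chosen to mirror the 4.5x-reported vs 1.2x-incremental gap cited earlier, and are assumptions, not results:

```python
# Geo-holdout: test geos get the ads, matched control geos do not.
test_revenue    = 540_000   # revenue in exposed geos during the test
control_revenue = 396_000   # revenue in matched holdout geos, scaled to size
spend           = 120_000

incremental_revenue = test_revenue - control_revenue   # 144,000
incremental_roas = incremental_revenue / spend         # 1.2x causal
reported_roas = test_revenue / spend                   # 4.5x per platform window
print(round(reported_roas, 2), round(incremental_roas, 2))
```

The window-based number credits all exposed-geo revenue to ads; the holdout subtracts what would have happened anyway, which is the only subtraction finance will accept.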

The synthesis: platform window for hour-by-hour decisions, CAPI for signal recovery, MMM for budget allocation, incrementality for periodic ground-truth checks. No single layer is sufficient. The marketing efficiency ratio sits on top as the finance reconciliation number that all four feed into.

For an end-to-end sequence on rebuilding this stack from the ground up, the post-iOS 14 attribution rebuild playbook covers the order of operations and the dependencies between layers.

How to choose the right attribution window for your account

Defaults are a starting point, not an answer. Window selection should be a deliberate decision tied to your funnel structure, signal strength, and reporting cadence. Here is the working framework.

1. Audit your average time-to-conversion. Pull your conversion path data from GA4 or your warehouse. If 80% of conversions happen within 24 hours of first ad touch, a 7-day window is over-claiming on the long tail. If your AOV is $400+ and time-to-conversion stretches past 14 days, a 7-day window is under-claiming. Both miscalibrations distort optimization.
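The audit in step 1 is a one-liner once the lag data is out of the warehouse. A sketch with hypothetical hours-to-conversion values:

```python
import numpy as np

# Hypothetical lags (hours from first ad touch to purchase),
# pulled from GA4 or warehouse conversion-path data.
lag_hours = np.array([2, 5, 8, 12, 18, 20, 22, 23, 30, 200])

share_within_24h = np.mean(lag_hours <= 24)   # 0.8 -> 7-day window over-claims
p80 = np.percentile(lag_hours, 80)
print(f"{share_within_24h:.0%} convert within 24h; p80 lag = {p80:.1f}h")
```

Here 80% of conversions land within a day, so a 7-day click window is mostly crediting the long tail; the same script on a $400+ AOV account would show the opposite problem.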

2. Match the window to the campaign objective. Bottom-funnel retargeting → 1-day click + 1-day view. Cold prospecting → 7-day click + 1-day view. Brand awareness → keep 7d/1d for reporting consistency, but do not optimize against it. Use MMM for budget calls on awareness spend instead.

3. Hold windows constant during tests. Changing the attribution window mid-test is the fastest way to corrupt results. The window change resets the learning phase on the affected ad sets and shifts reported ROAS by 10-30% even when nothing about the actual campaign changed. The learning phase calculator shows the disruption cost.

4. Reconcile to MER weekly, MMM quarterly. No matter what your window says, your true paid-media efficiency is the revenue ÷ spend ratio your finance team sees. If platform-reported ROAS is consistently 1.5-2x your MER, your attribution windows are over-claiming and you need to compress them or invest in better deduplication.
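The weekly reconciliation in step 4 is a three-line check. Toy numbers below are assumptions; a real version would weight platform ROAS by spend rather than averaging:

```python
revenue = 900_000          # total revenue, from finance
total_ad_spend = 300_000   # all paid channels combined
mer = revenue / total_ad_spend                 # 3.0

platform_roas = {"meta": 5.2, "google": 4.8}   # platform-reported
# Simple average for illustration; weight by spend in practice.
blended_reported = sum(platform_roas.values()) / len(platform_roas)

gap = blended_reported / mer                   # ~1.67x -> windows over-claiming
print(f"MER {mer:.1f}, reported {blended_reported:.1f}, gap {gap:.2f}x")
```

A gap of 1.67x falls squarely in the 1.5-2x over-claiming band described above, which would trigger a window compression or a deduplication audit.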

5. Document the rationale. Whatever window you pick, write down why. Six months from now when reported ROAS shifts, you need to know whether the cause was a window-policy change, an iOS update, a creative refresh, or something else. The ad decision rationale tracking post covers this discipline.

The framework is not glamorous. It is procedural. That is the point — attribution windows are too important to be set on vibes, and the brands consistently scaling past $1M/mo treat window selection as a quarterly governance decision, not a default.

Frequently asked questions

What is the default attribution window in Meta Ads Manager?

The default attribution window in Meta Ads Manager is 7-day click + 1-day view as of 2026. The 28-day click window was retired in April 2021 following the iOS 14 changes. Meta's attribution settings documentation confirms 7d/1d is the standard for both reporting and ad-set optimization.

What is Google Ads' default attribution window?

Google Ads defaults to a 30-day click window on Search campaigns and 30-day click + 1-day engaged-view on Display and YouTube. Since 2023, new accounts also default to data-driven attribution rather than last-click, per Google Ads help. The maximum click window is 90 days.

Why did Meta drop the 28-day click attribution window?

Meta retired the 28-day click window in early 2021 because Apple's App Tracking Transparency (ATT) framework eliminated the deterministic user identifier that made long lookback windows reliable. Without IDFA, the platform could not credibly tie a click 25 days ago to a conversion today. The iOS 14 attribution dip post documents the resulting reporting collapse.

Should I change my attribution window from the default?

Most accounts should keep platform defaults for optimization and instead invest in CAPI, MMM, and incrementality testing as triangulating layers. Changing the window mid-flight resets learning phase and distorts week-over-week comparisons. Only retune if you have a documented funnel reason — for example, tightening to 1d/1d on bottom-funnel retargeting to reduce cannibalization credit.

How do I know if my attribution window is over-claiming conversions?

Compare platform-reported ROAS against your marketing efficiency ratio (MER) weekly. If platform ROAS is consistently 1.5-2x your MER, your windows are double-counting across platforms or crediting view-through touches that did not drive incremental conversions. The MER framework post walks through the reconciliation math.

Bottom line

Attribution windows are policy, not truth. Treat the platform setting as one input into a triangulation stack — CAPI for signal recovery, MMM for budget calls, incrementality for ground truth — and treat saved-ad longevity as your forward-looking signal that does not depend on any window at all. The brands that scaled past $1M/mo in 2026 did it by demoting attribution windows from oracle to thermometer.
