
Modeled Conversions

Modeled Conversions are conversions that platforms statistically estimate when deterministic attribution is unavailable, typically iOS traffic after iOS 14 or aggregated SKAN postbacks where individual-level matching fails.


Definition

When a platform cannot match an ad click to a purchase through direct, deterministic tracking, it fills the gap with a statistical estimate. That estimate is a modeled conversion: a purchase (or lead, install, or other event) the platform is confident enough happened, based on aggregate signal patterns, to count in your reported totals.

The modeled conversions mechanism works like this. Platforms like Meta and Google train prediction models on accounts where deterministic matching is intact — users who consented to tracking, server-side events arriving via Conversion API (CAPI), hashed email matches from Enhanced Conversions. They then use those models to infer conversion probability for the cohorts where direct matching is blocked. On iOS 14.5+ traffic blocked by App Tracking Transparency (ATT), conversions flow through SKAdNetwork (SKAN) postbacks that report aggregate counts, not individual events. The modeled layer stitches those counts back to campaign-level attribution without identifying individual users.
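The mechanism described above can be sketched in a few lines. This is a deliberately simplified illustration, not any platform's actual model: real systems use far richer features and ML models, but the core move is the same — learn conversion rates where matching is intact, then apply them to the cohort where it is blocked. All segment names and numbers below are hypothetical.

```python
# Minimal sketch of how a modeled layer fills the deterministic gap.
# All segment names and numbers are hypothetical illustrations.

# Deterministic cohort: consented users, so clicks AND conversions are observed.
observed = {
    # segment: (clicks, conversions)
    ("ios", "prospecting"): (10_000, 180),
    ("ios", "retargeting"): (4_000, 220),
}

# ATT-blocked cohort: clicks are visible, individual conversions are not.
blocked_clicks = {
    ("ios", "prospecting"): 25_000,
    ("ios", "retargeting"): 9_000,
}

def modeled_conversions(observed, blocked_clicks):
    """Apply per-segment conversion rates learned on the observed
    cohort to the blocked cohort's click volume."""
    total = 0.0
    for segment, clicks in blocked_clicks.items():
        obs_clicks, obs_convs = observed[segment]
        rate = obs_convs / obs_clicks  # learned on consented traffic
        total += rate * clicks         # inferred for blocked traffic
    return total

print(round(modeled_conversions(observed, blocked_clicks)))  # 945
```

The inferred total is then added to the deterministic count to produce the single reported number you see in the interface, which is why the split is invisible unless you break it out.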

In practice, you rarely see modeled and deterministic conversions broken out in the same column. Meta surfaces the split under the "Conversion attribution setting" breakdown; Google exposes it in the "Modeled conversions" segment. Most practitioners look at total reported conversions without realizing that, for accounts with significant iOS or ITP-affected browser traffic, 20–40% of that number is modeled output, not observed signal. A DTC brand running 60% iOS traffic can see 38% of reported purchases coming from the modeled segment, and that segment may track 22% less precisely against post-purchase survey ground truth.
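One practical use of the split: treat the deterministic portion at face value and discount the modeled portion by its known imprecision to get a plausible range rather than a point estimate. A rough sketch, using the hypothetical DTC-brand figures from the paragraph above:

```python
# Rough sketch: bound the "true" purchases implied by a report where 38%
# of conversions are modeled and the modeled segment tracks ~22% less
# precisely against survey ground truth. Illustrative numbers only.

reported = 1_000
modeled_share = 0.38
modeled_error = 0.22  # imprecision of the modeled segment

deterministic = reported * (1 - modeled_share)
modeled = reported * modeled_share

low = deterministic + modeled * (1 - modeled_error)
high = deterministic + modeled * (1 + modeled_error)
print(f"plausible range: {low:.0f}-{high:.0f} purchases")
```

Reporting the range instead of the point total makes the uncertainty visible to whoever reads the dashboard.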

The right frame in 2025–2026 is neither trust nor distrust: it is calibration. Meta's modeled layer typically lands within 10–20% of ground truth on well-instrumented accounts, and the modeled conversions feeding Google's data-driven attribution close a similar gap. What breaks calibration is a weak first-party signal layer: no CAPI, no hashed-email match, no post-purchase survey. When the model lacks enough high-quality anchor data, it drifts. For a deeper look at the full signal-degradation picture, this post on the death of attribution covers how modeled layers interact with MMM and incrementality approaches. If you are evaluating analytics tooling to diagnose this in your own accounts, this overview of AI analytics tools covers which platforms surface the modeled vs deterministic split.
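A calibration check against a ground-truth anchor is simple to operationalize. The sketch below compares platform-reported conversions to a post-purchase-survey estimate and flags drift outside the 10–20% band mentioned above; the threshold and all numbers are illustrative, not platform guidance.

```python
# Hedged sketch: calibrate reported (modeled + deterministic) conversions
# against a post-purchase survey ground-truth estimate.

def calibration_gap(reported: float, survey_truth: float) -> float:
    """Signed gap of reported conversions vs ground truth, as a fraction."""
    return (reported - survey_truth) / survey_truth

reported_purchases = 1_150  # platform-reported total (modeled + observed)
survey_estimate = 1_000     # survey-weighted ground truth

gap = calibration_gap(reported_purchases, survey_estimate)
if abs(gap) <= 0.20:  # illustrative tolerance band
    print(f"within tolerance: {gap:+.0%}")
else:
    print(f"model drift: {gap:+.0%} - check CAPI / hashed-match coverage")
```

Run this monthly per platform and per major cohort (iOS vs non-iOS); a gap that widens over time is the clearest sign the first-party signal layer is weakening.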

Treat modeled conversions as a managed estimate, not a black box — calibrate against ground truth and the layer becomes a feature, not a liability.

Why It Matters

A growing share of every Meta and Google account is now modeled, not observed. Treating modeled and deterministic conversions as the same number hides where signal quality is breaking; treating them as different numbers exposes which campaigns are running blind. We see this split matter most when practitioners compare 2025 ROAS to pre-2021 baselines — most of that apparent decline is methodology shift, not real performance loss.

Examples

  • A DTC brand running 60% iOS traffic saw 38% of reported purchases coming from modeled conversions; the modeled segment had 22% lower precision against post-purchase survey ground truth.
  • After enabling Aggregated Event Measurement, a Meta account showed an 11% lift in reported conversions on the iOS 14.5+ cohort, almost entirely from modeling closing a previously unmeasured gap.
  • Google's Enhanced Conversions feeds first-party email/phone hashes into the modeled layer, lifting deterministic match rates and shrinking the modeled share.

Common Mistakes

  • Comparing modeled-heavy 2025 numbers to deterministic-heavy pre-2021 numbers without flagging the methodology change — most "ROAS decline" reports are partly methodology shifts, not real performance loss.
  • Treating the modeled layer as suspicious noise; deterministic-only attribution post-iOS 14 systematically under-counts conversions, which corrupts every downstream optimization decision.
  • Skipping a post-purchase survey or MMM check — modeled conversions need a ground-truth anchor or you have no way to know whether the model is calibrated.