
Conversion Modeling

Conversion Modeling is Meta and Google's machine-learning layer that estimates conversions on traffic where deterministic attribution is unavailable, using observed behavior from consenting users to extrapolate to unattributed cohorts.


Definition

When a user sees your ad but doesn't click, or clicks from an iOS device that opted out of tracking, the conversion platform has a gap. Conversion modeling is how Meta and Google fill it: a machine-learning layer that estimates conversions on traffic where deterministic, user-level attribution is unavailable.

The mechanism works by observing consenting users whose pixel and CAPI data flows cleanly. From that observable cohort, the platform builds a statistical model of conversion probability given ad exposure, device type, time of day, audience segment, and a dozen other signals. That model is then applied to the unattributed cohort — the iOS users, the cookieless browsers, the consent-declined sessions — to produce an estimated conversion count.
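The extrapolation step can be sketched in miniature. This is a deliberate simplification, not Meta's or Google's actual pipeline: the real models use many more signals and a trained ML layer, but the shape of the estimate is the same, observed rates from the consenting cohort applied to unattributed session counts. All segment names and numbers below are illustrative.

```python
# Illustrative sketch: per-segment conversion rates from the consenting
# cohort, extrapolated to the unattributed cohort's session counts.

consenting = {
    # segment: (sessions, observed conversions)
    "ios_retargeting":  (12_000, 480),
    "android_prospect": (30_000, 600),
    "desktop_brand":    (8_000,  400),
}

unattributed_sessions = {
    "ios_retargeting":  9_000,   # ATT opt-outs, SKAdNetwork-only traffic
    "android_prospect": 2_000,   # consent-declined sessions
    "desktop_brand":    1_500,   # cookieless browsers
}

modeled = 0.0
for segment, sessions in unattributed_sessions.items():
    obs_sessions, obs_conversions = consenting[segment]
    rate = obs_conversions / obs_sessions   # observed conversion rate
    modeled += rate * sessions              # extrapolate across the gap

print(f"Modeled conversions: {modeled:.0f}")
```

The key property to notice: the modeled total is only as good as the consenting cohort it was computed from, which is why CAPI coverage matters so much in the paragraphs that follow.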

On Meta, this process became structurally mandatory after iOS 14.5 and the ATT prompt. SKAdNetwork postbacks provide aggregate counts but strip user-level detail; the platform falls back to its consenting-user model to estimate the missing conversions. The quality of that extrapolation depends directly on your Conversion API (CAPI) coverage: the more server-side signal you send, the better the consenting cohort your model trains on. Google's data-driven attribution model uses the same pattern — observed multi-touch paths train an ML layer that estimates contribution for the cookie-less remainder.

By mid-2025, conversion modeling had expanded further. Meta's Andromeda delivery architecture, Advantage+ campaigns, and Google's Meridian MMM framework all assume a baseline of modeled data. We see accounts where 35–50% of reported conversions are estimated, not observed. That share rises as consent rates erode and iOS penetration grows. For a full picture of what this means for your reporting stack, the post on the death of attribution in 2026 covers the structural shift in detail. And if you're cross-referencing platform estimates with an external ground truth, AI analytics tools for marketing includes a breakdown of which tools handle modeled-conversion uncertainty well.

Treat modeled ROAS as a directional signal, not a financial figure — always anchor it to a lift test or MMM before acting on it.

Why It Matters

A growing share of ad-attributed conversions are now modeled, not observed. Conversion modeling is the engineering reason your iOS-reported ROAS no longer collapses to zero after an ATT opt-out wave. It is also why small calibration drifts — a CAPI coverage drop, a consent-rate change — can push reported numbers far from reality with no creative or audience change at all. If your ground-truth anchor (post-purchase survey, MMM, conversion lift test) diverges from platform numbers by more than 20%, the model has drifted.
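That 20% rule of thumb is easy to operationalize as a recurring check. The function name and threshold below are illustrative; the anchor figure would come from whatever ground truth you maintain (survey, MMM, or lift test).

```python
# Minimal drift check: relative divergence between platform-reported
# conversions and an independent ground-truth anchor.

def model_drift(platform_conversions: float, anchor_conversions: float,
                threshold: float = 0.20) -> tuple[float, bool]:
    """Return relative divergence and whether it breaches the threshold."""
    divergence = abs(platform_conversions - anchor_conversions) / anchor_conversions
    return divergence, divergence > threshold

div, drifted = model_drift(platform_conversions=1_300, anchor_conversions=1_000)
print(f"Divergence: {div:.0%}, drifted: {drifted}")  # prints "Divergence: 30%, drifted: True"
```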

Examples

  • Meta's conversion modeling layer fills in iOS 14.5+ traffic where SKAdNetwork postbacks lack user-level detail; it is calibrated against the consenting iOS cohort whose pixel data is observable.
  • Google's data-driven attribution model uses a similar mechanism — observed multi-touch paths train an ML layer that estimates contribution for the cookie-less remainder.
  • A subscription brand saw modeled-conversion share rise from 18% in 2022 to 41% by mid-2025 as iOS share grew and consent rates eroded; the underlying model needed re-calibration twice.

Common Mistakes

  • Believing modeled conversions can be turned off everywhere. On Meta they cannot; on Google they can, but disabling them aggressively under-counts iOS conversions.
  • Trusting modeled-heavy ROAS reports without any ground-truth check (post-purchase survey, MMM, conversion lift). Without an anchor, the model drifts unobserved.
  • Using modeled-attribution numbers in financial forecasts without uncertainty bands; modeled estimates carry confidence intervals that are usually wider than the reported point value.
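To make the last point concrete, here is one simple way to attach a band to a modeled count: treat it as a binomial outcome over the unattributed sessions and use a normal approximation. The platforms compute their intervals internally with richer models; this sketch, with made-up inputs, just shows why the point value understates the uncertainty.

```python
# Illustrative 95% band around a modeled conversion count, using a
# normal approximation to the binomial. Inputs are hypothetical.
import math

sessions = 9_000        # unattributed sessions
modeled_rate = 0.04     # modeled conversion probability per session
point = sessions * modeled_rate
se = math.sqrt(sessions * modeled_rate * (1 - modeled_rate))  # binomial std. error
low, high = point - 1.96 * se, point + 1.96 * se
print(f"Modeled conversions: {point:.0f} (95% band {low:.0f} to {high:.0f})")
```

Reporting the band alongside the point value is what keeps a finance team from treating a modeled 360 as an observed 360.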