Dynamic Creative in 2026: How DCO Picks the Winner Among Variants You Already Have
Dynamic creative optimization (DCO) on Meta, Google, and TikTok finds the winning combination among acceptable variants. It will not save bad creative — but feed it a pool seeded with winning competitor variants surfaced through Adlibrary, and the model converges on a winner faster than any hand-built test.

Dynamic creative will not save a bad ad. It will not invent a hook your strategist did not write or rescue an offer that has been broken since Q4. What it does, fed well, is find the winner faster than any human can. You upload a pool of acceptable variants, the platform runs the combinatorics, the model converges on what performs in the auction this week. That is the entire pitch. Everything else is noise.
TL;DR: Dynamic creative optimization (DCO) on Meta, Google, and TikTok is combinatorial testing run by the platform. Meta DCO accepts up to 10 images, 2 videos, 5 headlines, 5 primary texts, 5 descriptions, and 5 CTAs per ad set. Google RSA takes 15 headlines and 4 descriptions. TikTok ACO generates up to 30 combinations from your video and copy pool. DCO finds winners only when every input is at least acceptable — feed it the wrong assets and it converges on the least bad version of a bad ad. The moat is the asset pool you feed it. Build that pool from competitor winning variants you surface through Adlibrary's unified ad search, tag with AI ad enrichment, and you start every test from a stronger asset library than the brand next to you.
What dynamic creative actually is (and is not)
Dynamic creative is combinatorial ad serving. You hand the platform a structured pool of asset slots — images, videos, headlines, primary text, descriptions, CTAs — and the delivery system mixes them into permutations at impression time. Each user sees the combination the model predicts will perform best for them. The model handles routing, you handle inputs.
That is different from campaign-level automation like Advantage+, which decides who to spend on. DCO sits one layer below — it decides what to show a person Advantage+ already chose. The two stack, but Advantage+ is a budget allocator and DCO is a creative selector.
The thing most teams misunderstand: DCO does not test creative. It selects creative. A test asks "which is better?" and gives you a learning. DCO asks "which combination wins per user?" and gives you a delivery. You do not get a clean "headline A beat B by 18%" readout. If you want learnings, you still need proper creative testing. DCO is the production layer, not the lab.
Meta DCO: what fits in the box
The asset slot caps for a single dynamic creative ad inside a Meta ad set, per Meta's Advantage+ creative documentation, are up to 10 images, 2 videos, 5 headlines, 5 primary texts, 5 descriptions, and 5 CTAs.
Multiply those out and you get a theoretical 1,250 image-led permutations per ad. In practice the model never explores the full grid — it prunes in the first 24-48 hours, biases toward 2-3 winning combinations, and treats the rest as exploration tax.
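A quick sketch of the slot arithmetic, for the record: the full five-slot grid is 6,250 combinations, and the commonly quoted 1,250 figure works out to 10 × 5 × 5 × 5, holding one text field out of the count, presumably descriptions, since they render only on some placements.

```python
from math import prod

# Meta DCO slot caps for an image-led ad, per the figures above
caps = {"images": 10, "headlines": 5, "primary_texts": 5,
        "descriptions": 5, "ctas": 5}

full_grid = prod(caps.values())
print(full_grid)       # 6250: every slot permuted

# The quoted 1,250 counts four of the five slots (10 x 5 x 5 x 5),
# leaving descriptions out since they render only on some placements.
print(full_grid // 5)  # 1250
```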
What that means for the brief: you do not need 5×5×5 of everything. You need variance the model can differentiate. Two videos with radically different hooks out-resolve five videos that all open with the same B-roll. Three headlines arguing different angles out-resolve five tone-shifted versions of the same line. The constraint is not "fill the slots," it is "fill the slots with variants that disagree."
Meta also folds DCO into Advantage+ Creative enhancements — text overlays, music, cropping, image expansion. Those layer on top of combinatorial selection. Leave them on for cold prospecting; turn them off for retargeting where brand consistency matters.
Asset slot caps by platform
| Platform | Format | Images | Videos | Headlines | Primary text / desc | CTAs | Notes |
|---|---|---|---|---|---|---|---|
| Meta | DCO (single image) | 10 | — | 5 | 5 primary + 5 desc | 5 | Asset feeds also support up to 30 combinations via Asset Customisation |
| Meta | DCO (video) | — | 2 | 5 | 5 primary + 5 desc | 5 | Video DCO penalises mismatched aspect ratios — supply 1:1, 4:5, 9:16 |
| Meta | Advantage+ Creative enhancements | layer on top | layer on top | — | — | — | Text overlays, music, brightness, cropping — toggleable |
| Google | RSA | — | — | 15 | 4 descriptions | — | Pinning slots is allowed but suppresses ad strength |
| Google | Performance Max asset groups | 20 | 5 | 5 short + 5 long | 5 descriptions | — | Plus 5 logos |
| Google | Demand Gen | 20 | 5 | 5 | 5 | — | Closest Google equivalent to Meta DCO for awareness |
| TikTok | ACO (Smart Creative) | — | up to 10 | up to 5 captions | — | up to 1 | Generates up to 30 combinations |
| TikTok | Smart+ campaigns | — | layer on ACO | — | — | — | TikTok's Advantage+ analogue, launched in 2024 |
| LinkedIn | Dynamic ads (follower/spotlight) | 1 | — | 1 | 1 description | 1 | Personalisation by member name/photo, not creative permutation |
Sources: Meta Advantage+ Creative help center, Google RSA documentation, Google Performance Max asset specs, TikTok Smart Creative.
The headline number people quote — "Meta DCO does 1,250 combinations" — is technically correct and operationally useless. The number that matters is how many meaningfully different assets you can supply per slot. Two of those beat five clones.
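Before upload, a pool can be sanity-checked against those caps in a few lines; a minimal sketch in plain Python (no platform API; the asset names are made up):

```python
# Cap values mirror the Meta single-image row in the table above.
META_DCO_CAPS = {"images": 10, "headlines": 5,
                 "primary_texts": 5, "descriptions": 5, "ctas": 5}

def check_pool(pool, caps):
    """Warn on overfilled slots and on slots without enough variance."""
    warnings = []
    for slot, cap in caps.items():
        n = len(pool.get(slot, []))
        if n > cap:
            warnings.append(f"{slot}: {n} assets over the cap of {cap}")
        elif n < 2 and slot != "ctas":  # one CTA is often a deliberate pin
            warnings.append(f"{slot}: {n} asset(s) gives the model nothing to differentiate")
    return warnings

pool = {
    "images": ["ugc_face.jpg", "studio_flatlay.jpg", "before_after.jpg"],
    "headlines": ["Stop overpaying for serums", "Built by dermatologists",
                  "30-day empty-bottle guarantee"],
    "primary_texts": ["Long-form problem-agitate copy..."],
    "ctas": ["Shop now"],
}
for warning in check_pool(pool, META_DCO_CAPS):
    print(warning)
# primary_texts: 1 asset(s) gives the model nothing to differentiate
# descriptions: 0 asset(s) gives the model nothing to differentiate
```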
Step 0: Feed your DCO with Adlibrary intel
This is the work most teams skip. They open Ads Manager, upload whatever the agency shipped that week, and let DCO sort it out. The model converges on a winner inside that pool — but the pool was assembled in a vacuum. The winner is the best of what you happened to make, not the best of what is winning in your category.
Fix: seed the asset library from outside your own account. Three plays before you ever touch a DCO ad set:
1. Pull winning variants competitors are scaling. Use unified ad search to query your category and ad timeline analysis to see which variants have been live longest. Long-running ads are spend-validated — brands do not keep losers in rotation for 30+ days. The find-winning-ad-creatives workflow walks the filter sequence: geo, platform, media type, date range.
2. Save patterns, not ads. Saved Ads is where your strategist builds a swipe file tagged by hook, angle, offer, and format. The point is not to copy a competitor frame-for-frame — that is a copyright problem and a positioning miss. The point is to extract the angle that is working and brief your team to argue your version of it. The save-and-share-winning-ad-creatives flow covers team handoff.
3. Tag the hooks so DCO inputs disagree on purpose. AI ad enrichment auto-classifies each saved ad by hook type (curiosity, problem-agitate, social proof, demo, status, contrarian), angle, and format. Now you can deliberately load 5 headlines that argue 5 different angles instead of 5 tone-shifted versions of one. Variant disagreement is what lets the model resolve faster (a sketch of plays 1 and 3 follows this list).
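A minimal sketch of plays 1 and 3 combined, assuming a hypothetical saved-ads export; the field names (first_seen, last_seen, hook_type) are illustrative, not Adlibrary's actual schema:

```python
from collections import Counter
from datetime import date

# Hypothetical rows from a saved-ads export; field names are illustrative.
ads = [
    {"brand": "rival-a", "first_seen": "2025-11-02", "last_seen": "2026-01-15",
     "hook_type": "problem-agitate"},
    {"brand": "rival-a", "first_seen": "2026-01-10", "last_seen": "2026-01-14",
     "hook_type": "demo"},
    {"brand": "rival-b", "first_seen": "2025-12-01", "last_seen": "2026-01-20",
     "hook_type": "social proof"},
]

def days_live(ad):
    return (date.fromisoformat(ad["last_seen"])
            - date.fromisoformat(ad["first_seen"])).days

# Play 1: 30+ days in rotation is spend-validated.
validated = [ad for ad in ads if days_live(ad) >= 30]

# Play 3: check that the surviving hooks disagree before loading DCO.
hook_mix = Counter(ad["hook_type"] for ad in validated)
print(hook_mix)
if len(hook_mix) < 3:
    print("Pool argues too few angles; brief more variance before filling the slots.")
```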
Run this loop weekly and the pool you feed Meta DCO is no longer one writer's afternoon. It is a category-validated, format-tagged, hook-tagged library ready to slot into a 5×5×5 grid the model can resolve quickly. That is the moat. Anyone can run DCO; almost nobody seeds it from competitor signal at this resolution. The ai-creative-iteration-loop use case formalises this as a weekly cadence.
Google's version: RSA, Performance Max, Demand Gen
Google's "dynamic creative" is fragmented across three surfaces, and conflating them costs money.
Responsive Search Ads (RSA) are search-only. You supply up to 15 headlines and 4 descriptions, Google permutes them at query time. Pinning headlines to specific positions tanks your ad strength score, which throttles impression share. The 2026 best practice is to write 15 headlines that all parse standalone and let Google pin nothing.
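Headline length is the quiet killer here: anything over Google's 30-character headline or 90-character description limit will not upload. A minimal pre-flight check (the limits are Google's published RSA caps; the sample copy is made up):

```python
# Google's published RSA limits: 15 headlines of 30 chars, 4 descriptions of 90.
MAX_HEADLINES, HEADLINE_CHARS = 15, 30
MAX_DESCRIPTIONS, DESCRIPTION_CHARS = 4, 90

def validate_rsa(headlines, descriptions):
    problems = []
    if len(headlines) > MAX_HEADLINES:
        problems.append(f"{len(headlines)} headlines; cap is {MAX_HEADLINES}")
    if len(descriptions) > MAX_DESCRIPTIONS:
        problems.append(f"{len(descriptions)} descriptions; cap is {MAX_DESCRIPTIONS}")
    problems += [f"headline over {HEADLINE_CHARS} chars: {h!r}"
                 for h in headlines if len(h) > HEADLINE_CHARS]
    problems += [f"description over {DESCRIPTION_CHARS} chars: {d!r}"
                 for d in descriptions if len(d) > DESCRIPTION_CHARS]
    return problems

print(validate_rsa(
    ["Free shipping on every order", "The serum dermatologists actually recommend"],
    ["30-day money-back guarantee."],
))  # flags the 43-char second headline
```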
Performance Max is Google's closest analogue to Meta DCO. You feed up to 20 images, 5 videos, 5 short headlines, 5 long headlines, 5 descriptions, and 5 logos. Performance Max then routes those across Search, Display, YouTube, Discover, Gmail, and Maps. The cost is opacity — no per-placement reporting at asset-group level without scripting workarounds. The benefit is cross-surface intent training.
Demand Gen (formerly Discovery) is the closest Google equivalent to Meta DCO for video and image. Slots match Performance Max; placements are limited to YouTube Shorts, in-feed, Discover, and Gmail. For DTC brands moving budget after the iOS 14 attribution rebuild, Demand Gen is the surface that behaves most like the Meta playbook.
The mistake is treating these three as interchangeable. They are not. Pick the surface that matches the funnel stage and the assets you have.
TikTok ACO and Smart+
TikTok's Automated Creative Optimization (ACO) is the platform's DCO. Upload up to 10 videos and 5 captions; ACO mixes up to 30 combinations and routes delivery to whichever the algorithm rates highest. The CTA slot is capped at one: TikTok decided that fragmenting the CTA on a creator-native feed confuses the brand message.
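The combination arithmetic, for clarity: the raw grid from those caps is larger than what ACO will generate, so the platform samples a subset rather than exploring everything.

```python
videos, captions, ctas = 10, 5, 1    # TikTok ACO slot caps per the section above
raw_grid = videos * captions * ctas  # 50 possible pairings
generated = min(raw_grid, 30)        # ACO caps generation at 30 combinations
print(raw_grid, generated)           # 50 30
```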
ACO became the input layer for Smart+, TikTok's Advantage+ analogue launched in 2024. Smart+ automates budget, audience, and creative, but the creative arm is still ACO underneath.
The TikTok-specific gotcha: ACO heavily weights the first 1.5 seconds of every video, because that is where the hook rate signal resolves. Ten videos with similar opens — same logo, same B-roll, same product reveal — give ACO no signal to differentiate, and delivery flattens. Winning variants on TikTok almost always have radically different first-second compositions: a face direct-to-camera, a problem stated in text, a counterintuitive claim, a creator stitch.
When DCO wins vs hand-built variants
| Funnel stage | Spend / mo | Variants / wk | DCO? | Why |
|---|---|---|---|---|
| Cold prospecting | <$10k | <5 | No — hand-built | Not enough volume for DCO to resolve; ABO gives cleaner learnings (creative-testing) |
| Cold prospecting | $10k-$50k | 5-15 | Yes — Meta DCO + Advantage+ | Volume right, model has signal, media buying iterates weekly |
| Cold prospecting | $50k+ | 15+ | Yes — DCO + parallel hand-built | DCO for production, hand-built ABO for new-angle learnings |
| Retargeting (warm) | any | <5 | Hand-built | Combinatorial routing burns frequency before resolving |
| Retargeting (warm) | $20k+ | 5-15 | Yes — DCO with pinned CTAs | Pin CTA to "Shop now" / "Get yours" to keep intent clean |
| New product launch | any | any | No — hand-built first | Need per-variant learning to validate angle before scaling |
| Brand campaign | any | any | No — hand-built | Permutation can violate brand guidelines |
| Catalog / DPA | any | any | DCO at template level | Catalog permutes products; DCO permutes the wrap |
| Always-on evergreen | $20k+ | 10+ | Yes — DCO + weekly refresh | Pairs with refresh cadence to prevent ad fatigue |
The rule under the table: DCO wins when variants disagree and the audience is large enough for the model to get signal in the first 48 hours. It loses when you are trying to learn something specific. Use DCO for production. Use structured ABO testing for hypotheses. Teams that conflate the two get neither learnings nor scale.
Five DCO failure modes that look like the algorithm's fault
1. Asset clones in disguise. Five headlines that all argue the same angle in different sentences. DCO converges on "the longest one" because that is the only variable that actually differs. Fix: brief variants by creative angle, not by tone.
2. Format mismatch. 1:1 video loaded into a placement set that includes Reels. The 9:16 placement crops or letterboxes, kills hook rate, drags down the whole ad set. Fix: supply native 1:1, 4:5, and 9:16. The meta-ad-sizes-2026 guide has the spec sheet.
3. CTA-driven thrash. Permuting CTAs across "Shop now," "Learn more," "Sign up." Different CTAs imply different funnel intents; the model picks the cheapest-CPC CTA and tanks downstream conversion. Fix: pin CTA to one funnel intent per ad set.
4. Frequency burn on tight retargeting audiences. DCO needs impression volume to resolve. On a 50k retargeting audience it burns frequency before converging. Fix: hand-build below $20k/month per the retargeting-segmentation-playbook.
5. No refresh cadence. Without fresh assets, the winning combination fatigues and the model has nothing to route to. Fix: pull at least 2 fresh assets into the pool every 2 weeks, sourced from your organize-proven-ad-winners library and competitor signal (a cadence check is sketched after this list). The meta-ads-creative-testing-automation post drills into the 100-ads/week pipeline.
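The cadence check in fix 5 is a one-function job if you log when each asset entered the pool; a minimal sketch (the records and the 2-per-2-weeks threshold mirror the fix above):

```python
from datetime import date, timedelta

# Illustrative pool records: (asset_id, date added). Not a platform API.
pool = [("vid_ugc_01", date(2026, 1, 5)),
        ("vid_demo_02", date(2026, 1, 5)),
        ("img_proof_03", date(2026, 2, 10))]

def needs_refresh(pool, today, window_days=14, min_fresh=2):
    """Flag the ad set when fewer than min_fresh assets entered the pool in the window."""
    cutoff = today - timedelta(days=window_days)
    fresh = [asset for asset, added in pool if added >= cutoff]
    return len(fresh) < min_fresh

print(needs_refresh(pool, today=date(2026, 2, 20)))
# True: only 1 asset added in the last 14 days
```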
How to read DCO results without lying to yourself
You do not get clean per-variant readouts. Meta's asset-level breakdowns — best image, best headline — are correlational, not causal. The "winning" headline may have won because it paired with the winning image. Treat asset-level reporting in DCO as a hypothesis generator for your next ABO test, not a verdict.
What resolves cleanly: ad-set-level CPA and ROAS over a 7-day window. That tells you whether the DCO ad set as an aggregate beat its peers. The honest comparison is DCO vs a hand-built ABO ad set running the same audience and offer.
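In code, the aggregate comparison is trivial; the only judgment call is the minimum conversion volume before you trust it. A minimal sketch with illustrative numbers (the 50-conversion floor mirrors Meta's learning-phase guidance of roughly 50 conversions per week):

```python
def cpa(spend, conversions):
    return spend / conversions if conversions else float("inf")

def compare(dco, abo, min_conversions=50):
    # Below the learning-phase threshold, call the comparison
    # unresolved rather than picking a winner on noise.
    if min(dco["conversions"], abo["conversions"]) < min_conversions:
        return "unresolved: not enough conversions in the 7-day window"
    d, a = cpa(**dco), cpa(**abo)
    return f"DCO CPA {d:.2f} vs ABO CPA {a:.2f}: {'DCO' if d < a else 'ABO'} wins"

# Illustrative 7-day aggregates for the same audience and offer
print(compare({"spend": 4200.0, "conversions": 120},
              {"spend": 4100.0, "conversions": 98}))
# DCO CPA 35.00 vs ABO CPA 41.84: DCO wins
```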
For the per-variant question — "did the testimonial angle beat the demo angle?" — pull your historical winners from Saved Ads, build a structured 4-variant ABO test, and run it next sprint. The ad-creative-testing use case is calibrated for that. DCO is the production layer; ABO tests are how you learn.
What the case studies show (and what they hide)
Most published DCO case studies are vendor-authored, so calibrate accordingly:
- Smartly.io's e-commerce DCO benchmarks consistently show 20-40% CPA improvements over hand-built variants — but the comparison is usually against agencies that did not refresh creative weekly.
- Motion App's creative analytics reports document hook-rate decay curves on Meta that match what you see in DCO ad sets — first 48 hours is where the model resolves, days 7-14 is where fatigue starts.
- Meta's own Advantage+ Creative case studies are the cleanest source on stacking DCO with Advantage+ Shopping.
- TikTok ACO case studies show 1.3x to 2x CTR uplift over manually built variants — usually against single-creative ad groups, so absolute numbers are inflated but directional findings hold.
What they converge on: DCO consistently beats hand-built variants when volume is high enough and the asset pool stays fresh. It loses when teams skip refresh cadence or use it as a way to ship lazy creative.
The strategic position
DCO is a commodity. Every brand running Meta has the same slot caps, the same Advantage+ enhancements, the same RSA counts on Google, the same ACO combinatorics on TikTok. The platforms are converging on a model where optimisation lives in the algorithm and differentiation lives in the inputs.
That makes the asset pipeline the only durable advantage. If your team ships briefs based on internal vibes, your DCO converges on the best of internal vibes. If your team ships briefs grounded in competitor ad research and a tagged library of category winners, your DCO converges on an externally-validated pool. Same algorithm. Different ceiling. That is what Adlibrary is for.
FAQ
Is dynamic creative the same as Advantage+? No. DCO selects which asset combination to show a user inside one ad. Advantage+ is a higher-level layer that decides budget, audience, and placement. They stack but solve different problems.
How many assets should I upload to a Meta DCO ad? Less than the cap, more than two per slot. Target 3-4 meaningfully different assets per slot — different angles, hooks, first-frames. The creative-brief template forces variance at brief stage.
Does DCO work for retargeting? Above ~$20k/month, yes. Below that, frequency burns before the model resolves; hand-build 2-4 variants instead per the retargeting-segmentation-playbook.
Can DCO replace creative testing? No. DCO is production delivery. Creative testing is how you learn which angle won. Different surfaces, different questions.
What is the fastest way to seed a DCO asset pool? Pull 10-20 long-running competitor ads from unified ad search, tag with AI ad enrichment, brief your team to argue your version of the top 3 angles, and ship 5 headlines × 5 primary texts × 2 videos that disagree on hook.
Related Articles

Creative Testing in 2026: A Framework That Actually Resolves (Post-Andromeda)
Creative testing in 2026 demands variable isolation post-Andromeda. Use the 60-30-10 budget split, ABO setups, and angle-first hierarchy that resolve.

Ad Creative in 2026: What It Is and What Wins
Ad creative is every visual and written element of an ad. Learn 2026 anatomy, the Andromeda shift, best practices, and the pipeline that compounds.

Hook Rate in 2026: The 3-Second Metric That Decides Meta Ads
Hook rate is the share of impressions that reach 3 seconds. See 2026 benchmarks, the formula, custom column setup, diagnostic flow, and 11 boost tactics.

Ad Fatigue in 2026: Why Your Best Creative Burns Out in Days
Ad fatigue compresses to 2-3 weeks under Andromeda. Spot the 5 signals, set the right frequency cap by platform, and refresh angles before ROAS slips.

Creative Angle: The Decision That Decides Every Ad (2026)
A creative angle is the underlying reason an ad resonates. Definition, hypothesis template, 5 DTC examples, and how Andromeda reads angle as signal.

Meta ads creative testing automation: 100 ads/week pipeline
Build a hypothesis-driven Meta ads creative testing pipeline that generates 100 ads per week using MCP, adlibrary angle clusters, and disciplined kill rules.

Organize Proven Ad Winners: Build a Reusable Creative Library
Step-by-step system to organize proven ad winners and build a creative library your whole team uses: define thresholds, audit campaigns, categorize by hook and format, and build a redeployment workflow.

Meta Advantage+ in 2026: When AI Buying Earns Budget
Meta Advantage+ in 2026: how the five surfaces (ASC, Audience, Placements, Creative, Leads) actually work, and when manual buying still wins.