
adlibrary MCP server: build your own in 60 lines of Python

Wrap adlibrary's REST API as a custom MCP server so Claude calls it natively—no more copy-pasting data into chat.


Building an adlibrary MCP server takes roughly 60 lines of Python. The hard part is not the code—it is deciding which endpoints to expose, how to write tool descriptions Claude can actually parse, and where to draw the line between the model's reasoning and your tool's enforcement. If you have already hit the ceiling of pasting ad data into chat and running scripts by hand, a custom adlibrary MCP server is the direct path out.

You will walk away with a running fastmcp server that wraps adlibrary's REST API via /features/api-access, a config block that registers it alongside the Meta Ads MCP in Claude Desktop, and a clear mental model for what Claude reads vs. what your code enforces.

TL;DR: An adlibrary MCP server is a fastmcp process that exposes four adlibrary endpoints as Claude tools. Set it up alongside Meta's official MCP server and Claude can reason across both your campaign data and competitor ad intelligence in a single session—without any manual data retrieval.

Why wrap an API as MCP instead of just calling it

When you call an API from a script, you write the call. When you wrap it as an MCP server, Claude writes the call—once it decides it needs the data. That distinction matters more than it sounds.

The Model Context Protocol defines a transport layer between a host application (Claude Desktop, Claude Code) and external tools. An adlibrary MCP server exposes three primitives: tools (executable functions), resources (readable context), and prompts (reusable instructions). For ad intelligence work, tools are the primary primitive—you want Claude to call /api/search when it needs to find competitor ads, not because you pasted a prompt asking it to.

The practical difference shows up in agentic workflows. A script runs linearly. An adlibrary MCP session runs adaptively: it searches adlibrary, reads a specific ad's timeline data via /features/ad-timeline-analysis, requests enrichment, then cross-references against Meta campaign data—all in one chain of reasoning, without you scripting the sequence. This is the core value of ad data for AI agents.

There is also a composability argument. Once your adlibrary MCP server is running, you can register it alongside any other MCP server. The combined session pattern covered later in this post is the direct payoff.

Step 0: pick the four endpoints worth exposing first

Before writing a single line of fastmcp, open adlibrary's unified ad search and think about what Claude will actually need through your adlibrary MCP server. There are dozens of endpoints, but for a first build, four do the heavy lifting:

  1. /api/search — full-text + filter search across the ad corpus. This is where every swipe file session starts.
  2. /api/ads/[id] — fetch a single ad's full detail view, including platform, format, copy, and targeting signals.
  3. /api/ads/[id]/timeline — pull run-history data showing when an ad started, paused, and resumed. Long-running ads are usually profitable ads—this endpoint is your performance signal.
  4. /api/ads/[id]/enrichment — the AI ad enrichment layer: hook analysis, emotion tags, persuasion patterns. This is what turns raw creative into a structured ad creative brief.

Start with these four. Everything else—geo filters, platform filters, media type slicing—can layer on once you have confirmed the core loop works. The automated competitor ad monitoring use case maps almost directly onto this adlibrary MCP server endpoint set.

One more constraint before you code: all four above are GET-only. Your adlibrary MCP server should never modify production data. No POST or DELETE wrappers in v1.
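Before touching fastmcp, it can help to pin the tool-to-endpoint mapping down in one place. A minimal sketch under this post's assumptions (the base URL and paths come from the list above; the tool names and the `url_for` helper are illustrative, not part of adlibrary's API):

```python
# Hypothetical helper mapping each planned tool to its GET endpoint path.
ADL_BASE = "https://adlibrary.com/api"

ENDPOINTS = {
    "search_ads": "/search",
    "get_ad_detail": "/ads/{ad_id}",
    "get_ad_timeline": "/ads/{ad_id}/timeline",
    "get_ad_enrichment": "/ads/{ad_id}/enrichment",
}

def url_for(tool: str, **path_params: str) -> str:
    """Build the full URL for a tool; raises KeyError for unknown tools."""
    return ADL_BASE + ENDPOINTS[tool].format(**path_params)

print(url_for("get_ad_timeline", ad_id="abc123"))
# → https://adlibrary.com/api/ads/abc123/timeline
```

Keeping the map in one dict makes the GET-only constraint easy to audit: if a path ever needs a verb other than GET, it does not belong in v1.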

fastmcp in 60 lines: a runnable starter

fastmcp is a Python framework that converts decorated functions into MCP tools. Install it alongside httpx and python-dotenv:

pip install fastmcp httpx python-dotenv

Then the server skeleton:

python
# adlibrary_mcp/server.py
import os
import httpx
from dotenv import load_dotenv
from fastmcp import FastMCP

load_dotenv()

ADL_BASE = "https://adlibrary.com/api"
ADL_TOKEN = os.environ["ADL_API_TOKEN"]  # never hard-code

mcp = FastMCP(
    name="adlibrary",
    instructions=(
        "Use these tools to search competitor ads, inspect ad details, "
        "retrieve run-history timelines, and get AI enrichment signals. "
        "Always call search first to get ad IDs before calling detail or timeline tools."
    ),
)


def _headers() -> dict:
    return {"Authorization": f"JWT {ADL_TOKEN}"}


@mcp.tool()
async def search_ads(
    query: str,
    platform: str = "",
    media_type: str = "",
    limit: int = 20,
) -> dict:
    """Search adlibrary for in-market ads matching a keyword, brand, or concept.

    Args:
        query: Search keywords or brand name.
        platform: Optional filter — 'facebook', 'instagram', 'tiktok', 'youtube'.
        media_type: Optional filter — 'video', 'image', 'carousel'.
        limit: Max results to return (default 20, max 50).

    Returns a list of ad summaries with id, title, platform, and first-seen date.
    Use the returned ad id with get_ad_detail or get_ad_timeline for deeper data.
    """
    params = {"q": query, "limit": min(limit, 50)}
    if platform:
        params["platform"] = platform
    if media_type:
        params["mediaType"] = media_type

    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"{ADL_BASE}/search",
            headers=_headers(),
            params=params,
            timeout=15,
        )
        resp.raise_for_status()
        return resp.json()


@mcp.tool()
async def get_ad_detail(ad_id: str) -> dict:
    """Fetch full creative detail for a single ad by its adlibrary ID.

    Returns platform, format, copy text, CTA, landing-page URL,
    and targeting metadata. Use this when you need the full creative
    content of a specific ad found via search_ads.
    """
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"{ADL_BASE}/ads/{ad_id}",
            headers=_headers(),
            timeout=15,
        )
        resp.raise_for_status()
        return resp.json()


@mcp.tool()
async def get_ad_timeline(ad_id: str) -> dict:
    """Get the run-history timeline for an ad: start dates, pauses, and restarts.

    Long consecutive runs signal profitable creative. Use this alongside
    get_ad_detail to assess creative durability before building a swipe file.
    """
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"{ADL_BASE}/ads/{ad_id}/timeline",
            headers=_headers(),
            timeout=15,
        )
        resp.raise_for_status()
        return resp.json()


@mcp.tool()
async def get_ad_enrichment(ad_id: str) -> dict:
    """Retrieve AI-generated enrichment for an ad: hook type, emotion tags,
    persuasion pattern, offer structure, and dynamic creative signals.

    Use this to build structured creative briefs from competitor ads without
    manual analysis. Returns structured JSON suitable for direct prompt injection.
    """
    async with httpx.AsyncClient() as client:
        resp = await client.get(
            f"{ADL_BASE}/ads/{ad_id}/enrichment",
            headers=_headers(),
            timeout=15,
        )
        resp.raise_for_status()
        return resp.json()


if __name__ == "__main__":
    mcp.run(transport="stdio")

That is the full skeleton. Under 90 lines including docstrings. Run it with python server.py and it will accept MCP messages over stdin/stdout via stdio transport.

For production use, swap stdio for streamable-http and put it behind a process manager. For local Claude Desktop use, stdio is what the config expects.

Tool descriptions: the part the model actually reads

Here is the most important concept when building an adlibrary MCP server: Claude reads your docstring, not your code. The model never inspects the function body. It reads the tool name, the parameter schema, and the description string—then decides whether and how to call your tool.

This means a bad description is a broken tool, even if the code is perfect. A few patterns that cause real failures in practice:

  • Too vague: "Search for ads." — Claude will call this for any ad-related thought, including ones where get_ad_detail or a timeline query would be better.
  • No return hint: If the description does not tell Claude what shape comes back, it cannot chain tools correctly. Always describe what the return value contains and what to do with it next.
  • Missing "when to use": The search_ads docstring above explicitly says "Always call search first to get ad IDs before calling detail or timeline tools." That one sentence prevents a class of tool-call ordering errors.

Here is a second tool definition that shows how to write a description optimized for the model's tool-selection logic:

python
@mcp.tool()
async def get_ad_enrichment(ad_id: str) -> dict:
    """Retrieve AI-generated enrichment for an ad: hook type, emotion tags,
    persuasion pattern, offer structure, and dynamic creative signals.

    Use this to build structured creative briefs from competitor ads without
    manual analysis. Returns structured JSON suitable for direct prompt injection.

    When to use: call this after get_ad_detail when the user asks to analyze
    a competitor's hook, identify the persuasion mechanism, or create a brief
    based on a specific ad's structure.

    When NOT to use: skip this for bulk searches where you only need surface
    data (title, platform, run date). get_ad_detail is sufficient for that.
    """
    ...

The When NOT to use clause is underrated. It actively prevents the model from calling your most expensive endpoint on every ad in a bulk result set. In paid API contexts, that kind of description discipline directly controls your cost.

Pydantic models add a third layer: parameter validation before the HTTP call even fires. For search_ads, a Pydantic model can enforce that platform is one of a known enum and limit is between 1 and 50. Claude will occasionally try to pass values outside your expected range; Pydantic catches that at the boundary rather than letting it produce a silent 400 from adlibrary's API.
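Here is a minimal sketch of that validation layer, assuming pydantic v2 (already a fastmcp dependency). The model and field names are illustrative, mirroring search_ads above:

```python
from typing import Literal
from pydantic import BaseModel, Field, ValidationError

class SearchParams(BaseModel):
    # Illustrative model: fastmcp derives the same JSON schema from the
    # function signature, but an explicit model documents the contract.
    query: str
    platform: Literal["facebook", "instagram", "tiktok", "youtube", ""] = ""
    limit: int = Field(default=20, ge=1, le=50)

valid = SearchParams(query="protein powder", platform="tiktok")

try:
    SearchParams(query="x", limit=500)  # out of range: rejected at the boundary
    errors = 0
except ValidationError as exc:
    errors = exc.error_count()

print(valid.limit, errors)  # 20 1
```

The out-of-range call never reaches httpx; the model raises before any request is built, which is exactly where you want Claude's occasional bad guess to die.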

This is the same principle that makes or breaks any adlibrary MCP server (see Claude Code's MCP config): the more precisely you constrain the interface, the more reliably the model uses it. Underconstrained tools produce unpredictable behavior.

If you want to see a more detailed breakdown of how to structure Claude Code + adlibrary API workflows, that post covers the scripting patterns this server replaces.

Auth, validation, and the rate-limit shim

Three operational concerns every production adlibrary MCP server needs to handle:

Auth: token from env, never returned

The skeleton above loads ADL_API_TOKEN from the environment. Never hard-code it, never echo it back in a tool return value. If you return the token (even accidentally in an error object), it will appear in Claude's context window and potentially in logs.

For local use, put the token in a .env file that your python-dotenv call loads. For server deployment, use your platform's secret manager.

One more thing: the token should never appear in tool output. If your API returns a full request-echo object that includes auth headers, strip them before returning. Claude does not need to see them and the agentic AI literature is clear that secrets in context are a confidentiality risk.

Parameter validation shim

For the platform and media_type parameters on search_ads, add explicit validation before the HTTP call:

python
from typing import Literal

@mcp.tool()
async def search_ads(
    query: str,
    platform: Literal["facebook", "instagram", "tiktok", "youtube", ""] = "",
    media_type: Literal["video", "image", "carousel", ""] = "",
    limit: int = 20,
) -> dict:
    """Search adlibrary for in-market ads matching a keyword, brand, or concept.

    ...(same docstring as above)...
    """
    if not 1 <= limit <= 50:
        raise ValueError("limit must be between 1 and 50")
    # ... rest of implementation

Using Literal types tells fastmcp to generate a constrained JSON schema for the parameter. The MCP client sends that schema to Claude, so the model knows the valid values before it tries to call the tool. This eliminates an entire class of "I tried to pass 'FB' and got an error" failure patterns.

Rate-limit shim

Adlibrary's API enforces rate limits. If Claude is running a bulk analysis—say, pulling dynamic creative patterns across 40 competitor ads—it will call get_ad_enrichment 40 times in quick succession. Add a simple retry shim:

python
import asyncio

async def _get_with_retry(url: str, **kwargs) -> dict:
    for attempt in range(3):
        async with httpx.AsyncClient() as client:
            resp = await client.get(url, **kwargs)
            if resp.status_code == 429:
                retry_after = int(resp.headers.get("Retry-After", 2 ** attempt))
                await asyncio.sleep(retry_after)
                continue
            resp.raise_for_status()
            return resp.json()
    raise RuntimeError("Rate limit exceeded after 3 retries")

This keeps your adlibrary MCP server functional under agent-driven bulk workloads—the kind of workload you build toward with Claude Code agentic marketing patterns. Budget your API credits accordingly; the ad budget planner is a useful reference for thinking about cost-per-analysis at scale.
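A complementary guard is to cap how many requests can be in flight at once, so bulk enrichment runs trigger fewer 429s in the first place. A sketch with a stub in place of the real HTTP call (the `bounded_gather` helper and the cap of 3 are illustrative, not adlibrary-documented limits):

```python
import asyncio

MAX_CONCURRENT = 3  # hypothetical cap; tune to adlibrary's documented limits

async def bounded_gather(fn, items, limit=MAX_CONCURRENT):
    # Gate calls through a semaphore so a 40-ad bulk analysis never has
    # more than `limit` requests in flight at once.
    sem = asyncio.Semaphore(limit)

    async def one(item):
        async with sem:
            return await fn(item)

    return await asyncio.gather(*(one(i) for i in items))

# Demo with a stub "fetch" that records peak concurrency.
active = peak = 0

async def stub_fetch(i):
    global active, peak
    active += 1
    peak = max(peak, active)
    await asyncio.sleep(0.01)
    active -= 1
    return i * 2

results = asyncio.run(bounded_gather(stub_fetch, range(10)))
print(peak <= MAX_CONCURRENT, results[:3])  # True [0, 2, 4]
```

In the real server you would pass `_get_with_retry` (partially applied with headers and timeout) as `fn`; the retry shim then only has to absorb the rate limits the semaphore could not prevent.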

Connect it to Claude Code and Claude Desktop

Once your adlibrary MCP server runs cleanly via python server.py, registering it takes one config edit per host.

Claude Code reads project-scoped MCP config from .mcp.json in your project root (user-scoped servers live in ~/.claude.json). Add an entry:

json
{
  "mcpServers": {
    "adlibrary": {
      "command": "python",
      "args": ["/absolute/path/to/adlibrary_mcp/server.py"],
      "env": {
        "ADL_API_TOKEN": "your-token-here"
      }
    }
  }
}

Restart Claude Code after saving. The adlibrary MCP server will appear in your available tools list. See the Claude Code MCP documentation for the full config reference.

For a deeper treatment of setting up Claude Code for adlibrary workflows, the agentic marketing with Claude Code post covers project-level configuration patterns.

Now verify the connection with a quick tool call test:

/mcp

If adlibrary shows up with its four tools, you are live. If it does not, check that the Python path is absolute and ADL_API_TOKEN is set in the env block, not just your shell environment—Claude Code spawns the server as a subprocess and does not inherit your shell.

The combined pattern: adlibrary MCP + Meta Ads MCP in one session

This is where the architecture pays off. The Meta Ads MCP server exposes your live campaign data—spend, delivery, performance metrics, ad set structure. Your adlibrary MCP exposes competitor creative intelligence. Register both in Claude Desktop's claude_desktop_config.json and Claude can reason across both surfaces without you shuttling data between them.

Here is the combined config block for Claude Desktop:

python
# This is the content you put in Claude Desktop's claude_desktop_config.json
# under the "mcpServers" key.
# Shown as a Python dict for readability — the actual file is JSON.

CLAUDE_DESKTOP_MCP_CONFIG = {
    "mcpServers": {
        "adlibrary": {
            "command": "python",
            "args": ["/Users/you/projects/adlibrary_mcp/server.py"],
            "env": {
                "ADL_API_TOKEN": "YOUR_ADLIBRARY_TOKEN"
            }
        },
        "meta-ads": {
            "command": "npx",
            "args": [
                "-y",
                "@modelcontextprotocol/server-facebook-ads"
            ],
            "env": {
                "META_ACCESS_TOKEN": "YOUR_META_ACCESS_TOKEN",
                "META_AD_ACCOUNT_ID": "act_YOUR_ACCOUNT_ID"
            }
        }
    }
}

With both servers registered, a single Claude Desktop session can run a prompt like: "Find the top three video hooks in the DTC supplement space on adlibrary, then compare their offer structure against our current ad set copy. Flag any angles we're missing."

Claude will call search_ads to find the competitor set, get_ad_enrichment for hook analysis, and Meta Ads MCP to pull your current creatives—all in one chain. No CSV exports. No manual cross-referencing.

This is the workflow described in detail in the Meta Ads MCP + adlibrary workflows post, and it is the foundation for the always-on monitoring agent pattern. The automated social media advertising guide covers how to operationalize these sessions into repeatable workflows.

If you are managing this for multiple clients, the Meta Ads MCP for agencies post covers multi-account config isolation—each client gets a separate server entry with its own token.

What breaks and how to spot it

Four failure modes that will bite your adlibrary MCP server in practice:

Schema drift. adlibrary's API response shape evolves. If a field your tool description promises ("Returns platform, format, copy text...") disappears from the response, Claude will either hallucinate the field or produce a confused downstream call. Fix: version your tool descriptions alongside your API contract. When the underlying API changes, update the docstring before you update the code.
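A cheap guard against drift is to check each response against the keys your docstring promises before returning it. A sketch with a hypothetical expected-key set (the field names are illustrative, not adlibrary's actual contract):

```python
EXPECTED_DETAIL_KEYS = {"platform", "format", "copy"}  # hypothetical contract

def check_contract(payload: dict) -> dict:
    # Flag drift explicitly so Claude sees a warning field instead of
    # silently missing data it was promised by the docstring.
    missing = EXPECTED_DETAIL_KEYS - payload.keys()
    if missing:
        payload["_contract_warning"] = f"missing fields: {sorted(missing)}"
    return payload

print(check_contract({"platform": "tiktok", "copy": "..."}))
# → {'platform': 'tiktok', 'copy': '...', '_contract_warning': "missing fields: ['format']"}
```

An explicit `_contract_warning` field turns silent drift into something the model can report back to you, which is the failure mode you actually want.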

Tool description that confuses the model. If search_ads and get_ad_detail have overlapping descriptions, Claude will sometimes call the wrong one. A useful diagnostic: paste your tool's name and description into a fresh Claude session and ask "when would you call this tool?" If the answer does not match your intent, rewrite the description. The Meta Ads MCP debugging post covers this pattern for the Meta side—the same technique applies here.

Rate-limit cascades. Agentic sessions can generate 20–40 API calls in seconds. Without the retry shim, a single 429 breaks the entire chain and Claude will often report a confusing error. The shim from the previous section handles this, but you also want to log the 429s so you know when your usage patterns are hitting the ceiling.

Auth token in context. If you return a raw error object from httpx that includes request headers, your token lands in Claude's context window. Strip all headers from error returns. A clean error return looks like: {"error": "rate_limit", "retry_after": 2} — nothing more.
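One way to enforce that is a whitelist filter over anything you return from an error path. A minimal sketch (the key names are illustrative):

```python
SAFE_ERROR_KEYS = {"error", "status", "retry_after"}  # hypothetical whitelist

def sanitize_error(raw: dict) -> dict:
    # Whitelist rather than blacklist: only known-safe keys pass through,
    # so auth headers or request echoes can never leak into context.
    return {k: v for k, v in raw.items() if k in SAFE_ERROR_KEYS}

leaky = {
    "error": "rate_limit",
    "retry_after": 2,
    "request_headers": {"Authorization": "JWT secret-token"},
}
print(sanitize_error(leaky))  # → {'error': 'rate_limit', 'retry_after': 2}
```

Whitelisting beats stripping known-bad keys because a future API change cannot introduce a new leaky field you forgot to block.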

For a broader treatment of debugging MCP tool chains, the Meta Ads MCP setup guide covers connection verification steps that apply equally to custom servers. The competitor ad to Meta campaign post shows what a healthy multi-tool session looks like end-to-end, which is useful as a reference baseline when diagnosing what broke.

Frequently asked questions

Do I need to run the adlibrary MCP server on a remote host or is local enough?

Local is enough for personal Claude Desktop use. fastmcp with stdio transport runs as a subprocess of Claude Desktop—no network port required. For team use or server deployments, switch to streamable-http transport, add a process manager (supervisord, systemd), and put it behind TLS. The MCP server quickstart covers both transport options with config examples.

How is this different from calling adlibrary's API directly in a script?

In a script, you decide when the API gets called. In an MCP session, Claude decides—based on the task and the tool descriptions you provide. The practical difference is that MCP-equipped Claude can chain calls adaptively (search, then detail, then enrichment, then cross-reference with Meta) without you scripting the sequence. Scripts are linear; MCP sessions are reactive. Both use the same adlibrary API access under the hood.

Can I expose more than four endpoints?

Yes, but start with four. Every tool you add increases the surface area Claude has to reason over during tool selection. More tools means more description-writing work and more potential for the model to pick the wrong one. The right expansion sequence: verify the core four work reliably, then layer on filter-specific tools (geo, platform, media type) only when you have a concrete workflow that needs them. The Claude API for marketing automation post covers how to think about tool surface area for marketing use cases.

What API token do I use?

Your adlibrary API token from the dashboard. Treat it like a password: env var only, never in source control, never in tool output. If you are building a shared team server, issue a dedicated token with read-only scope and rotate it on a schedule.

Does this work with Claude Code as well as Claude Desktop?

Yes. The config format differs slightly (.mcp.json for Claude Code vs. claude_desktop_config.json for Claude Desktop) but the server code is identical. The Meta Ads MCP setup guide shows both config formats side by side. For Claude Code specifically, project-level MCP config is the cleaner pattern because it keeps the tool set scoped to the relevant codebase.

Bottom line

An adlibrary MCP server is a tool description with code attached. Write the description for the model, write the code for the API, and the hard parts mostly disappear. Register your adlibrary MCP server alongside Meta's official server and you have a session where Claude reasons across both your live campaigns and the full competitor ad landscape—without any manual data movement.

Originally inspired by mcp.facebook.com. Independently researched and rewritten.
