The TMP Router

The TMP Router is infrastructure that sits between publishers and buyer agents. It handles request fan-out, response merging, and privacy enforcement. It does not make decisions — it routes requests and aggregates responses. The publisher configures which providers the router calls.

What the Router Does

  1. Fans out requests: Sends Context Match requests to all configured providers with context_match capability. Sends Identity Match requests to all configured providers with identity_match capability.
  2. Merges responses: Combines offers, enrichment signals, and eligibility results from multiple providers into unified responses.
  3. Enforces separation: Context and identity code paths are structurally separate — the context path never accesses identity data and vice versa.
  4. Manages latency: Applies adaptive timeouts and deprioritizes providers that consistently exceed the latency budget.

Single Binary, Separate Code Paths

The router is a single Go binary with two structurally separate code paths: one for context match, one for identity match.
┌────────────────────────────────────────────────────────────────┐
│                           TMP Router                           │
│                                                                │
│  ┌───────────────────────────┐  ┌───────────────────────────┐  │
│  │    Context Match Path     │  │    Identity Match Path    │  │
│  │                           │  │                           │  │
│  │  Inputs:                  │  │  Inputs:                  │  │
│  │  • Artifact IDs / artifact│  │  • Opaque user token      │  │
│  │  • Context signals        │  │  • ALL active package IDs │  │
│  │  • Geo, URL hash          │  │                           │  │
│  │  • Available packages     │  │                           │  │
│  │                           │  │  Outputs:                 │  │
│  │  Outputs:                 │  │  • Eligible package IDs   │  │
│  │  • Offers                 │  │  • TTL (seconds)          │  │
│  │  • Enrichment signals     │  │                           │  │
│  │                           │  │                           │  │
│  │  Never touches:           │  │  Never touches:           │  │
│  │  • User tokens            │  │  • URLs                   │  │
│  │  • Any identity data      │  │  • Content signals        │  │
│  └───────────────────────────┘  └───────────────────────────┘  │
│                                                                │
│  No shared state between code paths.                           │
│  One binary, one audit surface, one Docker image.              │
└────────────────────────────────────────────────────────────────┘
The separation is in the code and auditable. The context path cannot read identity data because it is not passed to it, not stored in any reachable location, and not referenced in any data structure the context path processes. The same applies in reverse for the identity path. The router is open-source — anyone can verify this by reading the source. TEE attestation is an upgrade path. Without TEE, you trust that the operator deployed the published binary. With TEE, attestation proves the deployed binary matches the audited source, removing that trust requirement.
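The compile-time side of this separation can be sketched in Go. The field and type names below are illustrative, not the normative TMP schema: the point is that the context-path request type has no field that could carry a user token, so identity data cannot reach the context handler even by programmer error.

```go
package main

import "fmt"

// ContextMatchRequest carries only content-side inputs.
// There is no field for a user token, so the context path
// cannot receive identity data even by accident.
type ContextMatchRequest struct {
	ArtifactIDs       []string `json:"artifact_ids"`
	ContextSignals    []string `json:"context_signals"`
	Geo               string   `json:"geo"`
	URLHash           string   `json:"url_hash"`
	AvailablePackages []string `json:"available_packages"`
}

// IdentityMatchRequest carries only identity-side inputs:
// no URL, no content signals.
type IdentityMatchRequest struct {
	UserToken        string   `json:"user_token"`
	ActivePackageIDs []string `json:"active_package_ids"`
}

// handleContextMatch can only see content inputs.
func handleContextMatch(req ContextMatchRequest) []string {
	// Offers are derived from content inputs alone.
	return req.AvailablePackages
}

// handleIdentityMatch can only see identity inputs.
func handleIdentityMatch(req IdentityMatchRequest) []string {
	return req.ActivePackageIDs
}

func main() {
	offers := handleContextMatch(ContextMatchRequest{AvailablePackages: []string{"pkg-1"}})
	fmt.Println(offers)
}
```

Because the boundary is in the type system, an auditor can verify it by reading the function signatures rather than tracing data flow.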

Provider Registration

Publishers configure which providers the router calls. This is an operational relationship — the publisher trusts the provider to participate in their ad decisioning.
| Setting | Type | Description |
| --- | --- | --- |
| endpoint | URL | Provider’s HTTP/2 endpoint |
| context_match | bool | Provider handles Context Match requests |
| identity_match | bool | Provider handles Identity Match requests |
Additional configuration per provider:
  • Property scoping: Which properties or property groups this provider serves
  • Latency budget: Per-provider timeout threshold
  • Priority: Provider ordering for tie-breaking in merge conflicts
Providers MAY support any combination of context_match and identity_match. A context-only provider handles enrichment or contextual targeting. An identity-only provider handles frequency capping — the publisher evaluates context locally from the media buy’s targeting rules and calls the buyer only for identity checks.

All communication uses JSON over HTTP/2. TMP messages are small (200-600 bytes) — at these sizes, serialization format accounts for less than 1% of total latency.
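Capability-based provider selection can be sketched as follows. The Provider struct and function names are illustrative, mirroring the settings table above rather than the router's actual source:

```go
package main

import "fmt"

// Provider mirrors the per-provider settings described above
// (names are illustrative, not the normative config schema).
type Provider struct {
	ID            string
	Endpoint      string
	ContextMatch  bool
	IdentityMatch bool
	Properties    []string // property scoping; empty means all properties
}

// servesProperty reports whether the provider is scoped to the property.
func (p Provider) servesProperty(propertyRID string) bool {
	if len(p.Properties) == 0 {
		return true
	}
	for _, rid := range p.Properties {
		if rid == propertyRID {
			return true
		}
	}
	return false
}

// contextProviders selects the providers the router would fan out to
// for a Context Match request on the given property.
func contextProviders(all []Provider, propertyRID string) []Provider {
	var out []Provider
	for _, p := range all {
		if p.ContextMatch && p.servesProperty(propertyRID) {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	providers := []Provider{
		{ID: "acme-outdoor", ContextMatch: true, IdentityMatch: true, Properties: []string{"prop-1"}},
		{ID: "enrichment-co", ContextMatch: true},
		{ID: "freq-cap-only", IdentityMatch: true},
	}
	for _, p := range contextProviders(providers, "prop-1") {
		fmt.Println(p.ID)
	}
}
```

An identity-only provider such as "freq-cap-only" above is simply never selected for Context Match fan-out.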

Integration

Prebid integration

Publishers with Prebid Server or Prebid.js add a TMP module that replaces vendor-specific RTD modules. The TMP module sends Context Match and Identity Match requests to the router and returns the merged response as targeting signals and package activation data. The publisher’s ad server (GAM, etc.) receives targeting key-values and activates the corresponding line items.

Non-Prebid surfaces

For AI assistants, mobile apps, CTV, and retail media, the router provides a direct HTTP/2 API. Any platform that can make HTTP/2 POST requests can integrate. The request and response schemas are the same regardless of surface.

SSP and DSP integration

SSPs and DSPs integrate as TMP providers — they expose an endpoint that the router calls during fan-out. This is the same pattern as existing RTD integrations.

Identity tokens

Identity tokens come from existing providers (ID5, LiveRamp, UID2, etc.) that are already present on the page or in the app. TMP does not specify token lifecycle — it consumes tokens that the publisher’s identity stack already produces.

Fan-Out and Response Merging

Context Match fan-out

When the publisher sends a Context Match request:
  1. The router identifies all providers configured for the request’s property_rid with context_match capability.
  2. It sends the request to all matching providers in parallel over HTTP/2.
  3. It waits for responses up to the latency budget (default: 50ms).
  4. It merges responses:
    • Offers are collected from all providers. If two providers return offers for the same package_id (uncommon — packages are typically provider-specific), the router keeps the first response received. Duplicate package_id across providers is a configuration error; the router SHOULD log a warning.
    • Enrichment signals are concatenated. Segments from all providers are combined into a single list. Targeting key-values from different providers are namespaced to prevent collisions.
  5. It returns the merged response to the publisher.

Identity Match fan-out

The same pattern applies. The router fans out to all providers with identity_match capability for the relevant properties, merges eligibility results, and returns a unified response. Duplicate package_id across providers is a configuration error — packages come from media buys and are provider-specific. If it occurs, the router applies conservative merging: the package is only eligible if it appears in eligible_package_ids from both providers. The router uses the minimum ttl_sec across providers and SHOULD log a warning.
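A sketch of the conservative merge, under the added assumption that each provider response also reports which packages it evaluated (without that, duplicate ownership is not detectable from eligible lists alone); all names are illustrative:

```go
package main

import (
	"fmt"
	"sort"
)

// IdentityResponse is an illustrative shape: which packages the
// provider evaluated, which it found eligible, and a TTL in seconds.
type IdentityResponse struct {
	EvaluatedPackageIDs []string
	EligiblePackageIDs  []string
	TTLSec              int
}

// mergeIdentity unions eligibility across providers, applying the
// conservative rule to duplicated packages and the minimum TTL.
func mergeIdentity(responses []IdentityResponse) ([]string, int) {
	evaluatedBy := map[string]int{}
	eligibleBy := map[string]int{}
	minTTL := 0
	for i, r := range responses {
		if i == 0 || r.TTLSec < minTTL {
			minTTL = r.TTLSec
		}
		for _, id := range r.EvaluatedPackageIDs {
			evaluatedBy[id]++
		}
		for _, id := range r.EligiblePackageIDs {
			eligibleBy[id]++
		}
	}
	var eligible []string
	for id, n := range eligibleBy {
		if evaluatedBy[id] > 1 {
			// Duplicate ownership is a configuration error; log it.
			fmt.Printf("warn: duplicate package_id %s across providers\n", id)
		}
		// Conservative: every provider that evaluated the package
		// must also have marked it eligible.
		if n == evaluatedBy[id] {
			eligible = append(eligible, id)
		}
	}
	sort.Strings(eligible)
	return eligible, minTTL
}

func main() {
	eligible, ttl := mergeIdentity([]IdentityResponse{
		{EvaluatedPackageIDs: []string{"p1", "p2"}, EligiblePackageIDs: []string{"p1", "p2"}, TTLSec: 300},
		{EvaluatedPackageIDs: []string{"p2", "p3"}, EligiblePackageIDs: []string{"p3"}, TTLSec: 60},
	})
	fmt.Println(eligible, ttl)
}
```

In the example, p2 is claimed by both providers but eligible from only one, so it is dropped, and the merged TTL is the minimum of 300 and 60.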

Timeout handling

The default latency budget is 50ms per operation. The router allocates this proportionally across providers based on provider count and historical performance.
  • Single provider timeout: Skip that provider, log its latency percentile, proceed with responses from remaining providers. The skipped provider’s packages are treated as “not activated” for this request.
  • All providers timeout: Return an empty response — no offers for Context Match, no eligibility for Identity Match. The publisher falls back to existing demand sources (Prebid open auction, direct-sold, etc.).
  • Adaptive timeout: The router tracks per-provider latency percentiles (p50, p95, p99) and adjusts allocation over time. Consistently slow providers receive smaller timeout allocations or are preemptively skipped. This is an operational decision, not a protocol requirement.
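One way to implement the percentile tracking and allocation described above, as a sketch (the window size, nearest-rank percentile, and p95-plus-slack policy are all illustrative choices, not protocol requirements):

```go
package main

import (
	"fmt"
	"sort"
)

// latencyWindow keeps a rolling window of recent per-provider
// response times so the router can shrink the timeout allocation
// of consistently slow providers.
type latencyWindow struct {
	samples []float64 // milliseconds, most recent last
	max     int
}

func (w *latencyWindow) observe(ms float64) {
	w.samples = append(w.samples, ms)
	if len(w.samples) > w.max {
		w.samples = w.samples[len(w.samples)-w.max:]
	}
}

// percentile returns the p-th percentile (0-100) using nearest-rank.
func (w *latencyWindow) percentile(p float64) float64 {
	if len(w.samples) == 0 {
		return 0
	}
	sorted := append([]float64(nil), w.samples...)
	sort.Float64s(sorted)
	rank := int(p/100*float64(len(sorted)) + 0.5)
	if rank < 1 {
		rank = 1
	}
	if rank > len(sorted) {
		rank = len(sorted)
	}
	return sorted[rank-1]
}

// allocation caps a provider's timeout at its p95 plus 20% slack,
// never exceeding the overall latency budget.
func allocation(w *latencyWindow, budgetMs float64) float64 {
	p95 := w.percentile(95)
	if p95 == 0 {
		return budgetMs // no history yet: full budget
	}
	alloc := p95 * 1.2
	if alloc > budgetMs {
		alloc = budgetMs
	}
	return alloc
}

func main() {
	w := &latencyWindow{max: 100}
	for _, ms := range []float64{10, 12, 11, 13, 12, 40} {
		w.observe(ms)
	}
	fmt.Printf("p95=%.0fms alloc=%.0fms\n", w.percentile(95), allocation(w, 50))
}
```

A provider whose allocation collapses toward zero is, in effect, preemptively skipped.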

Latency Budget

TMP targets sub-50ms end-to-end latency: publisher sends request, router fans out, providers respond, router merges, publisher receives response. This is achievable because:
  • Small messages: TMP requests are 200-600 bytes of JSON — roughly 10-20x smaller than a typical OpenRTB bid request. Serialization is sub-microsecond.
  • No price computation: Packages are pre-negotiated. The provider evaluates targeting criteria, not auction dynamics.
  • Parallel fan-out: All providers are called simultaneously. The total latency is the slowest provider’s response time, not the sum.
  • Stateless router: No database lookups in the hot path. The router’s only job is forwarding and merging.
  • Connection reuse: HTTP/2 multiplexing allows concurrent requests to each provider over a single connection.

Comparison to Vendor RTD Modules

The TMP Router generalizes what vendor-specific RTD modules do today. A single-vendor RTD module evaluates packages against content in real time, but it is locked to one provider, one surface (Prebid), and sends the full OpenRTB BidRequest. The TMP Router replaces this with a multi-provider, multi-surface, protocol-standard alternative:
| | Vendor RTD Module (today) | TMP Router |
| --- | --- | --- |
| Providers | Single vendor | Any provider declaring TMP capabilities |
| Discovery | Publisher configuration | Publisher configuration |
| Surfaces | Web (Prebid Server) | Web, AI, mobile, CTV, retail media |
| Request format | Full OpenRTB BidRequest (~2-10 KB JSON) | TMP ContextMatchRequest (~200-600 bytes JSON) |
| Privacy | Data masking before sending | Structural separation (TEE-ready) |
| Identity handling | User ID in bid request | Separate Identity Match operation |
For existing Prebid Server deployments, the TMP module replaces vendor-specific RTD modules with a generic TMP client. For surfaces without Prebid, the router’s HTTP/2 API provides the same functionality.

Relationship to TEE Auction Infrastructure

TEE-based auction infrastructure (encrypted bids, attestation proofs, verifiable winner selection) is complementary to TMP. When a publisher wants competitive selection among activated packages from multiple buyers:
  1. TMP Router collects Context Match responses (which packages each buyer wants to activate).
  2. Publisher submits the activated packages (with their pre-negotiated prices) to a TEE auction.
  3. The TEE enclave selects the winner and produces an attestation proof.
  4. Publisher activates the winning package.
TMP handles matching. TEE auctions handle competition. Publishers choose whether they need competition at all — many surfaces (editorial AI content, CTV pod composition, retail carousels) are better served by publisher-side relevance ranking than by price-based auctions. TEE auction infrastructure (AWS Nitro Enclaves, attestation, key management) is directly applicable when upgrading the TMP Router to TEE-attested operation, making it a natural infrastructure partner for the protocol.

Deployment

The TMP Router is a single Go binary built on adcp-go. It reads a configuration file listing providers and their capabilities, then exposes a single HTTP/2 endpoint. The request type field distinguishes Context Match from Identity Match — the router uses type-based dispatch, not URL paths.
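Type-based dispatch can be sketched as a peek at the request's type field before full decoding. The field name type and the values context_match / identity_match are assumptions for illustration, not confirmed wire values:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope decodes only the type field, leaving the rest of the
// body untouched until the correct path is chosen.
type envelope struct {
	Type string `json:"type"`
}

// dispatch routes a raw request body by its type field rather than
// by URL path, so the router needs only one endpoint.
func dispatch(body []byte) (string, error) {
	var env envelope
	if err := json.Unmarshal(body, &env); err != nil {
		return "", err
	}
	switch env.Type {
	case "context_match":
		return "context path", nil
	case "identity_match":
		return "identity path", nil
	default:
		return "", fmt.Errorf("unknown request type %q", env.Type)
	}
}

func main() {
	path, _ := dispatch([]byte(`{"type":"context_match"}`))
	fmt.Println(path)
}
```

Dispatching before full decoding also keeps the structural separation intact: the context decoder never sees an identity payload and vice versa.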

Configuration

# tmp-router.yaml
listen: ":8443"
tls:
  cert: /etc/tmp/tls.crt
  key: /etc/tmp/tls.key

providers:
  - id: acme-outdoor
    endpoint: https://tmp.acmeoutdoor.example/v1
    context_match: true
    identity_match: true
    timeout_ms: 40
    properties: ["01916f3a-9c4e-7000-8000-000000000010"]

  - id: enrichment-co
    endpoint: https://enrichment.example/v1
    context_match: true
    identity_match: false
    timeout_ms: 30

latency_budget_ms: 50
adaptive_timeout: true

Container deployment

FROM ghcr.io/adcontextprotocol/tmp-router:latest
COPY tmp-router.yaml /etc/tmp/config.yaml
EXPOSE 8443
The router is stateless — no database, no persistent storage. It can be horizontally scaled behind any load balancer. Health checks are available at /healthz.

Capacity planning

Each router instance handles approximately 10,000 requests per second on a 2-vCPU container. Memory usage scales linearly with the number of concurrent connections to providers, not with request volume. For web publishers, one router instance per point of presence (PoP) is typical. For AI platforms, a centralized deployment with regional failover is sufficient since the router adds < 5ms to end-to-end latency.

Monitoring

The router exposes Prometheus metrics at /metrics:
| Metric | Description |
| --- | --- |
| tmp_context_match_duration_ms | Context Match end-to-end latency histogram |
| tmp_identity_match_duration_ms | Identity Match end-to-end latency histogram |
| tmp_provider_duration_ms | Per-provider response time histogram |
| tmp_provider_timeout_total | Per-provider timeout counter |
| tmp_provider_error_total | Per-provider error counter |
| tmp_offers_total | Total offers returned across all providers |
Alert on tmp_provider_timeout_total increasing — a provider consistently exceeding its timeout budget degrades match quality for all requests that include it.