The TMP Router
The TMP Router is infrastructure that sits between publishers and buyer agents. It handles request fan-out, response merging, and privacy enforcement. It does not make decisions — it routes requests and aggregates responses. The publisher configures which providers the router calls.

What the Router Does
- Fans out requests: Sends Context Match requests to all configured providers with `context_match` capability, and Identity Match requests to all configured providers with `identity_match` capability.
- Merges responses: Combines offers, enrichment signals, and eligibility results from multiple providers into unified responses.
- Enforces separation: Context and identity code paths are structurally separate — the context path never accesses identity data and vice versa.
- Manages latency: Applies adaptive timeouts and deprioritizes providers that consistently exceed the latency budget.
Single Binary, Separate Code Paths
The router is a single Go binary with two structurally separate code paths: one for context match, one for identity match.

Provider Registration
Publishers configure which providers the router calls. This is an operational relationship — the publisher trusts the provider to participate in their ad decisioning.

| Setting | Type | Description |
|---|---|---|
| `endpoint` | URL | Provider’s HTTP/2 endpoint |
| `context_match` | bool | Provider handles Context Match requests |
| `identity_match` | bool | Provider handles Identity Match requests |

Additional configuration per provider:
- Property scoping: Which properties or property groups this provider serves
- Latency budget: Per-provider timeout threshold
- Priority: Provider ordering for tie-breaking in merge conflicts
A provider can declare either or both capabilities, `context_match` and `identity_match`. A context-only provider handles enrichment or contextual targeting. An identity-only provider handles frequency capping — the publisher evaluates context locally from the media buy’s targeting rules and calls the buyer only for identity checks.
All communication uses JSON over HTTP/2. TMP messages are small (200-600 bytes) — at these sizes, serialization format is less than 1% of total latency.
Integration
Prebid integration
Publishers with Prebid Server or Prebid.js add a TMP module that replaces vendor-specific RTD modules. The TMP module sends Context Match and Identity Match requests to the router and returns the merged response as targeting signals and package activation data. The publisher’s ad server (GAM, etc.) receives targeting key-values and activates the corresponding line items.

Non-Prebid surfaces
For AI assistants, mobile apps, CTV, and retail media, the router provides a direct HTTP/2 API. Any platform that can make HTTP/2 POST requests can integrate. The request and response schemas are the same regardless of surface.

SSP and DSP integration
SSPs and DSPs integrate as TMP providers — they expose an endpoint that the router calls during fan-out. This is the same pattern as existing RTD integrations.

Identity tokens
Identity tokens come from existing providers (ID5, LiveRamp, UID2, etc.) that are already present on the page or in the app. TMP does not specify token lifecycle — it consumes tokens that the publisher’s identity stack already produces.

Fan-Out and Response Merging
Context Match fan-out
When the publisher sends a Context Match request:

- The router identifies all providers configured for the request’s `property_rid` with `context_match` capability.
- It sends the request to all matching providers in parallel over HTTP/2.
- It waits for responses up to the latency budget (default: 50ms).
- It merges responses:
  - Offers are collected from all providers. If two providers return offers for the same `package_id` (uncommon — packages are typically provider-specific), the router keeps the first response received. Duplicate `package_id` across providers is a configuration error; the router SHOULD log a warning.
  - Enrichment signals are concatenated. Segments from all providers are combined into a single list. Targeting key-values from different providers are namespaced to prevent collisions.
- It returns the merged response to the publisher.
Identity Match fan-out
The same pattern applies. The router fans out to all providers with `identity_match` capability for the relevant properties, merges eligibility results, and returns a unified response.
Duplicate `package_id` across providers is a configuration error — packages come from media buys and are provider-specific. If it occurs, the router applies conservative merging: the package is only eligible if it appears in `eligible_package_ids` from both providers. The router uses the minimum `ttl_sec` across providers and SHOULD log a warning.
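The conservative merge rule can be sketched as follows. The types are illustrative, not the normative TMP schema: each entry pairs the package_ids a provider is configured to answer for with the `eligible_package_ids` it actually returned, so a duplicated package is eligible only if every provider serving it agrees.

```go
package main

import "fmt"

// ProviderEligibility is an illustrative stand-in for one provider's
// Identity Match response plus its configured package scope.
type ProviderEligibility struct {
	Serves   []string // package_ids this provider answers for
	Eligible []string // eligible_package_ids in its response
	TTLSec   int
}

// mergeEligibility applies the conservative rule: a package duplicated
// across providers is eligible only if every serving provider marked it
// eligible; packages unique to one provider pass through. The minimum
// ttl_sec across providers wins.
func mergeEligibility(rs []ProviderEligibility) ([]string, int) {
	servedBy := map[string]int{}
	eligibleBy := map[string]int{}
	minTTL := 0
	for i, r := range rs {
		for _, id := range r.Serves {
			servedBy[id]++
		}
		for _, id := range r.Eligible {
			eligibleBy[id]++
		}
		if i == 0 || r.TTLSec < minTTL {
			minTTL = r.TTLSec
		}
	}
	var eligible []string
	for id, n := range servedBy {
		if eligibleBy[id] == n { // every serving provider agrees
			eligible = append(eligible, id)
		}
	}
	return eligible, minTTL
}

func main() {
	ids, ttl := mergeEligibility([]ProviderEligibility{
		{Serves: []string{"pkg_a", "pkg_dup"}, Eligible: []string{"pkg_a", "pkg_dup"}, TTLSec: 60},
		{Serves: []string{"pkg_b", "pkg_dup"}, Eligible: []string{"pkg_b"}, TTLSec: 30},
	})
	fmt.Println(ids, ttl) // pkg_dup is dropped; ttl is the minimum, 30
}
```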
Timeout handling
The default latency budget is 50ms per operation. The router allocates this proportionally across providers based on provider count and historical performance.

- Single provider timeout: Skip that provider, log its latency percentile, proceed with responses from remaining providers. The skipped provider’s packages are treated as “not activated” for this request.
- All providers timeout: Return an empty response — no offers for Context Match, no eligibility for Identity Match. The publisher falls back to existing demand sources (Prebid open auction, direct-sold, etc.).
- Adaptive timeout: The router tracks per-provider latency percentiles (p50, p95, p99) and adjusts allocation over time. Consistently slow providers receive smaller timeout allocations or are preemptively skipped. This is an operational decision, not a protocol requirement.
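The adaptive bookkeeping can be illustrated with a short sketch: track recent latency samples per provider and preemptively skip providers whose p95 sits above the budget. A production router would use a streaming histogram; a sorted sample window is enough to show the idea, and all names here are assumptions.

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the nearest-rank percentile of a sample window,
// which is sufficient for this illustration.
func percentile(samplesMS []float64, p float64) float64 {
	s := append([]float64(nil), samplesMS...) // copy before sorting
	sort.Float64s(s)
	return s[int(p*float64(len(s)-1))]
}

// shouldSkip reports whether a provider's p95 latency exceeds its budget,
// in which case the router would preemptively skip it.
func shouldSkip(samplesMS []float64, budgetMS float64) bool {
	return percentile(samplesMS, 0.95) > budgetMS
}

func main() {
	slow := []float64{10, 20, 30, 40, 50, 60, 70, 80, 90, 100}
	fmt.Println(shouldSkip(slow, 50)) // p95 is 90ms, over a 50ms budget: true
}
```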
Latency Budget
TMP targets sub-50ms end-to-end latency: publisher sends request, router fans out, providers respond, router merges, publisher receives response. This is achievable because:

- Small messages: TMP requests are 200-600 bytes of JSON — roughly 10-20x smaller than a typical OpenRTB bid request. Serialization is sub-microsecond.
- No price computation: Packages are pre-negotiated. The provider evaluates targeting criteria, not auction dynamics.
- Parallel fan-out: All providers are called simultaneously. The total latency is the slowest provider’s response time, not the sum.
- Stateless router: No database lookups in the hot path. The router’s only job is forwarding and merging.
- Connection reuse: HTTP/2 multiplexing allows concurrent requests to each provider over a single connection.
Comparison to Vendor RTD Modules
The TMP Router generalizes what vendor-specific RTD modules do today. A single-vendor RTD module evaluates packages against content in real time, but it is locked to one provider, one surface (Prebid), and sends the full OpenRTB BidRequest. The TMP Router replaces this with a multi-provider, multi-surface, protocol-standard alternative:

| | Vendor RTD Module (today) | TMP Router |
|---|---|---|
| Providers | Single vendor | Any provider declaring TMP capabilities |
| Discovery | Publisher configuration | Publisher configuration |
| Surfaces | Web (Prebid Server) | Web, AI, mobile, CTV, retail media |
| Request format | Full OpenRTB BidRequest (~2-10KB JSON) | TMP ContextMatchRequest (~200-600 bytes JSON) |
| Privacy | Data masking before sending | Structural separation (TEE-ready) |
| Identity handling | User ID in bid request | Separate Identity Match operation |
Relationship to TEE Auction Infrastructure
TEE-based auction infrastructure (encrypted bids, attestation proofs, verifiable winner selection) is complementary to TMP. When a publisher wants competitive selection among activated packages from multiple buyers:

- The TMP Router collects Context Match responses (which packages each buyer wants to activate).
- The publisher submits the activated packages (with their pre-negotiated prices) to a TEE auction.
- The TEE enclave selects the winner and produces an attestation proof.
- The publisher activates the winning package.
Deployment
The TMP Router is a single Go binary built on adcp-go. It reads a configuration file listing providers and their capabilities, then exposes a single HTTP/2 endpoint. The request `type` field distinguishes Context Match from Identity Match — the router uses type-based dispatch, not URL paths.
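Type-based dispatch on the single endpoint can be sketched as follows. Only the dispatch shape comes from the text; the `type` values and handler bodies are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope decodes only the type field, which selects the code path.
type envelope struct {
	Type string `json:"type"`
}

// dispatch routes a request body to the context or identity path based
// on its type field — no URL paths involved.
func dispatch(body []byte) (string, error) {
	var env envelope
	if err := json.Unmarshal(body, &env); err != nil {
		return "", err
	}
	switch env.Type {
	case "context_match":
		return "context path", nil // structurally separate from identity
	case "identity_match":
		return "identity path", nil
	default:
		return "", fmt.Errorf("unknown request type %q", env.Type)
	}
}

func main() {
	path, _ := dispatch([]byte(`{"type":"context_match"}`))
	fmt.Println(path) // context path
}
```

Keeping the two cases in separate handlers mirrors the structural separation described earlier: the context branch never touches identity data.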
Configuration
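A configuration file might look like the sketch below. The file layout and top-level keys are assumptions; only the per-provider settings (`endpoint`, `context_match`, `identity_match`, property scoping, latency budget, priority) come from the provider registration section above.

```json
{
  "listen": ":8080",
  "default_latency_budget_ms": 50,
  "providers": [
    {
      "endpoint": "https://ssp.example/tmp",
      "context_match": true,
      "identity_match": false,
      "properties": ["news_site"],
      "latency_budget_ms": 40,
      "priority": 1
    },
    {
      "endpoint": "https://buyer.example/tmp",
      "context_match": false,
      "identity_match": true,
      "priority": 2
    }
  ]
}
```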
Container deployment
For container deployments, the router serves health checks at `/healthz`.
Capacity planning
Each router instance handles approximately 10,000 requests per second on a 2-vCPU container. Memory usage scales linearly with the number of concurrent connections to providers, not with request volume. For web publishers, one router instance per point of presence (PoP) is typical. For AI platforms, a centralized deployment with regional failover is sufficient since the router adds < 5ms to end-to-end latency.

Monitoring
The router exposes Prometheus metrics at `/metrics`:
| Metric | Description |
|---|---|
| `tmp_context_match_duration_ms` | Context Match end-to-end latency histogram |
| `tmp_identity_match_duration_ms` | Identity Match end-to-end latency histogram |
| `tmp_provider_duration_ms` | Per-provider response time histogram |
| `tmp_provider_timeout_total` | Per-provider timeout counter |
| `tmp_provider_error_total` | Per-provider error counter |
| `tmp_offers_total` | Total offers returned across all providers |
Alert on `tmp_provider_timeout_total` increasing — a provider consistently exceeding its timeout budget degrades match quality for all requests that include it.