Provenance metadata declares how creative content was produced — whether AI was involved, which tools were used, and what disclosure obligations apply. As regulations like the EU AI Act and California SB 942 require machine-readable AI disclosure in advertising, AdCP carries this metadata at the protocol level so every party in the supply chain can declare, transmit, and verify it using the same structure. Provenance attaches to creative manifests, individual assets, or content-standards artifacts. It is a claim by the declaring party — receiving parties verify claims independently using their own detection tools.
EU AI Act Article 50 enforcement begins August 2026. California SB 942 is already in effect. Major platforms mandate AI content labeling today. AdCP’s provenance metadata provides the structured, machine-readable disclosure that these regulations require — carried through the programmatic supply chain where no standard for it previously existed.

The provenance object

Provenance is an optional object that can attach to creative assets, creative manifests, individual typed assets, and content artifacts. No fields are required at the provenance level — each section is independently useful. Schema: provenance.json
| Field | Type | Description |
| --- | --- | --- |
| `digital_source_type` | enum | IPTC-aligned classification of AI involvement |
| `ai_tool` | object | AI system used (`name` required, plus optional `version` and `provider`) |
| `human_oversight` | enum | Level of human involvement in the creation process |
| `declared_by` | object | Party attaching this provenance claim (`role` required, plus optional `agent_url`) |
| `declared_at` | string (date-time) | When this provenance claim was made (ISO 8601), distinct from `created_time` |
| `created_time` | string (date-time) | When the content was created (ISO 8601) |
| `c2pa` | object | C2PA Content Credentials reference (`manifest_url` required) |
| `disclosure` | object | Regulatory disclosure requirements and jurisdiction details |
| `verification` | array | Third-party verification or detection results |
| `ext` | object | Standard extension point |

Minimal example

Most provenance declarations answer one question: was this AI-generated, and does it need a disclosure label?
{
  "$schema": "/schemas/core/provenance.json",
  "digital_source_type": "trained_algorithmic_media",
  "disclosure": {
    "required": true
  }
}
That’s it. digital_source_type says how the content was produced. disclosure.required says whether it needs a label. Everything else — tool details, C2PA references, jurisdiction-specific render guidance, verification results — is optional context that supply-chain participants can add when they need it. For content with no AI involvement, provenance is even simpler:
{
  "$schema": "/schemas/core/provenance.json",
  "digital_source_type": "digital_capture"
}

Full example

{
  "$schema": "/schemas/core/provenance.json",
  "digital_source_type": "trained_algorithmic_media",
  "ai_tool": {
    "name": "DALL-E 3",
    "version": "3.0",
    "provider": "OpenAI"
  },
  "human_oversight": "selected",
  "declared_by": {
    "agent_url": "https://creative.pinnaclemedia.example.com",
    "role": "agency"
  },
  "declared_at": "2026-02-15T14:35:00Z",
  "created_time": "2026-02-15T14:30:00Z",
  "c2pa": {
    "manifest_url": "https://cdn.pinnaclemedia.example.com/c2pa/manifests/hero_img_abc123.c2pa"
  },
  "disclosure": {
    "required": true,
    "jurisdictions": [
      {
        "country": "US",
        "region": "CA",
        "regulation": "ca_sb_942",
        "label_text": "Created with AI",
        "render_guidance": {
          "persistence": "flexible",
          "positions": ["prominent", "footer"]
        }
      },
      {
        "country": "DE",
        "regulation": "eu_ai_act_article_50",
        "label_text": "KI-generiert",
        "render_guidance": {
          "persistence": "continuous",
          "positions": ["overlay", "subtitle"]
        }
      }
    ]
  },
  "verification": [
    {
      "verified_by": "Reality Defender",
      "verified_time": "2026-02-15T15:00:00Z",
      "result": "ai_generated",
      "confidence": 0.97,
      "details_url": "https://realitydefender.example.com/reports/abc123"
    }
  ]
}

Digital source type

The digital_source_type enum classifies AI involvement in content production, aligned with the IPTC digitalsourcetype vocabulary. Schema: digital-source-type.json
| Value | Description | When to use |
| --- | --- | --- |
| `digital_capture` | Captured by a digital device (camera, scanner, screen recording) with no AI involvement | Photos from a product shoot, screen recordings of app demos |
| `digital_creation` | Created by a human using digital tools (Photoshop, Illustrator, After Effects) without AI generation | Hand-designed banner ads, manually composed layouts |
| `trained_algorithmic_media` | Generated entirely by a trained AI model (DALL-E, Midjourney, Stable Diffusion, Sora) | AI-generated hero images, AI-produced video spots |
| `composite_with_trained_algorithmic_media` | Human-created content combined with AI-generated elements | Product photo with AI-generated background, human-shot video with AI visual effects |
| `algorithmic_media` | Produced by deterministic algorithms without machine learning (procedural generation, rule-based systems) | Programmatic visualizations, procedural pattern generation |
| `composite_capture` | Multiple digital captures composited together without AI | Panoramic stitching, multi-exposure HDR composites |
| `composite_synthetic` | Composite of multiple elements where at least one is AI-generated | Stock photo composited with AI-generated background, AI text overlay on captured video |
| `human_edits` | Content augmented, corrected, or enhanced by humans using non-generative tools | Color-corrected product photography, manually retouched portraits, human copy editing |
| `data_driven_media` | Assembled from structured data feeds (DCO templates, product catalogs, weather-triggered variants) | Dynamic creative optimization, catalog-driven product carousels, weather-responsive ads |

Choosing the right value

For mixed-production creatives, choose the value that best describes the overall creative at the level where provenance is attached. If you need to distinguish AI involvement per-asset, attach provenance at the individual asset level instead (see Inheritance below). Common patterns:
  • AI image + human copy: Attach trained_algorithmic_media to the image asset, digital_creation to the text asset, and composite_with_trained_algorithmic_media at the manifest level
  • DCO with AI-generated headlines: data_driven_media at the manifest level, trained_algorithmic_media on the AI-generated text assets
  • Human photographer + AI background removal: composite_with_trained_algorithmic_media at the manifest level

Human oversight

The human_oversight enum describes the level of human involvement in an AI-assisted creation process.
| Value | Description |
| --- | --- |
| `none` | Fully automated with no human involvement in generation |
| `prompt_only` | Human provided the prompt or instructions but did not review outputs |
| `selected` | Human selected from multiple AI-generated candidates |
| `edited` | Human edited or refined AI-generated output |
| `directed` | Human directed the creative process with AI as an assistive tool |
This field is relevant when digital_source_type indicates AI involvement. For non-AI content, omit it.

Inheritance

Provenance attaches at three levels in the creative hierarchy. The most specific provenance wins, and replacement is full-object — there is no field-level merging.
creative-asset.provenance        (1) default for the creative in the library
  creative-manifest.provenance   (2) default for this manifest
    individual asset .provenance (3) override for a specific asset

Resolution rules

  1. If an individual asset has provenance, use it
  2. Otherwise, if the manifest has provenance, use it
  3. Otherwise, if the creative asset has provenance, use it
  4. Otherwise, no provenance is declared for that asset
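
The four resolution rules above can be sketched in Python. This helper is illustrative, not part of the protocol; the dict shapes mirror the JSON examples in this document, and `resolve_provenance` is a hypothetical name.

```python
def resolve_provenance(creative_asset, manifest, asset):
    """Return the effective provenance for one asset, most specific first.

    Each argument is a dict (or None) that may carry a "provenance" key.
    Replacement is full-object: the first match wins, with no field-level merging.
    """
    for level in (asset, manifest, creative_asset):
        provenance = (level or {}).get("provenance")
        if provenance is not None:
            return provenance
    return None  # rule 4: no provenance declared for this asset


manifest = {"provenance": {"digital_source_type": "composite_with_trained_algorithmic_media"}}
image = {"provenance": {"digital_source_type": "trained_algorithmic_media"}}
headline = {"content": "Nutrition dogs love"}

resolve_provenance(None, manifest, image)["digital_source_type"]
# -> "trained_algorithmic_media" (asset-level override wins)
resolve_provenance(None, manifest, headline)["digital_source_type"]
# -> "composite_with_trained_algorithmic_media" (inherited from the manifest)
```

Because the first match returns immediately, a manifest-level default never leaks individual fields into an asset-level override.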

Example: mixed creative

A creative where the image is AI-generated but the copy is human-written. The manifest-level provenance covers the overall creative. The image asset overrides with its own, more specific provenance.
{
  "$schema": "/schemas/core/creative-manifest.json",
  "format_id": {
    "agent_url": "https://creative.adcontextprotocol.org",
    "id": "display_300x250"
  },
  "provenance": {
    "digital_source_type": "composite_with_trained_algorithmic_media",
    "declared_by": { "role": "agency" }
  },
  "assets": {
    "banner_image": {
      "url": "https://cdn.novabrands.example.com/hero_ai.jpg",
      "width": 300,
      "height": 250,
      "provenance": {
        "digital_source_type": "trained_algorithmic_media",
        "ai_tool": {
          "name": "DALL-E 3",
          "version": "3.0",
          "provider": "OpenAI"
        },
        "human_oversight": "selected",
        "declared_by": { "role": "agency" },
        "c2pa": {
          "manifest_url": "https://cdn.novabrands.example.com/c2pa/hero_ai.c2pa"
        }
      }
    },
    "headline": {
      "content": "Nutrition dogs love"
    },
    "clickthrough_url": {
      "url": "https://novabrands.example.com/products"
    }
  }
}
In this example:
  • banner_image uses its own provenance: trained_algorithmic_media with full AI tool details
  • headline inherits the manifest-level provenance: composite_with_trained_algorithmic_media
  • clickthrough_url also inherits the manifest-level provenance
Note that the image’s provenance is a complete replacement. Even though the manifest-level provenance has declared_by, the image asset must re-declare it in its own provenance object if that information should carry through.

Artifact inheritance

For content artifacts (publisher content), the same pattern applies:
artifact.provenance                  (1) default for the artifact
  artifact.assets[].provenance       (2) override for a specific inline asset
{
  "$schema": "/schemas/content-standards/artifact.json",
  "property_rid": "01916f3a-a1d3-7000-8000-000000000030",
  "artifact_id": "article_ai_trends_2026",
  "provenance": {
    "digital_source_type": "digital_creation",
    "declared_by": { "role": "platform" }
  },
  "assets": [
    {
      "type": "text",
      "role": "title",
      "content": "AI trends reshaping the industry in 2026"
    },
    {
      "type": "image",
      "url": "https://cdn.aimagazine.example.com/illustration.jpg",
      "alt_text": "Conceptual illustration of neural networks",
      "provenance": {
        "digital_source_type": "trained_algorithmic_media",
        "ai_tool": { "name": "Midjourney", "version": "v7" },
        "human_oversight": "directed",
        "declared_by": { "role": "platform" }
      }
    }
  ]
}
The article text inherits digital_creation from the artifact. The illustration overrides with its own trained_algorithmic_media provenance.

Trust model

Provenance is a claim by the declaring party. It is not proof. The enforcing party should verify independently.
In advertising, the party declaring provenance and the party enforcing it have competing incentives. A buyer submitting a creative has reason to claim the content is human-made — AI-generated creatives may face placement restrictions, mandatory disclosure labels, or outright rejection on certain inventory. A seller accepting that creative has the opposite incentive: publishing AI-generated content without proper disclosure creates regulatory liability for the publisher, not the advertiser.

AdCP handles this tension by treating provenance as a claim, not a fact. The buyer declares; the seller verifies. Verification happens at each enforcement point independently, using AI detection services (via get_creative_features), C2PA manifest validation, or both. No party needs to trust any other party’s assertion. The protocol provides the structure for claims and the integration points for verification — the supply chain provides the adversarial pressure that keeps both sides honest.

The declared_by field identifies who attached the provenance claim. The verification array carries any detection results the declaring party wants to disclose. But the party enforcing a provenance requirement runs its own verification through existing governance infrastructure.

Declaring party roles

| Role | Description |
| --- | --- |
| `creator` | The party that created or generated the content |
| `advertiser` | The brand or advertiser that owns the content |
| `agency` | Agency acting on behalf of the advertiser |
| `platform` | Ad platform or publisher that processed the content |
| `tool` | Automated tool or service that attached provenance metadata |

Buyer-attached verification

The verification array on the provenance object lets the declaring party share detection results for transparency. Multiple services can independently evaluate the same content:
{
  "verification": [
    {
      "verified_by": "Hive Moderation",
      "verified_time": "2026-02-15T15:00:00Z",
      "result": "ai_generated",
      "confidence": 0.96,
      "details_url": "https://hive.example.com/reports/abc123"
    },
    {
      "verified_by": "Reality Defender",
      "verified_time": "2026-02-15T15:05:00Z",
      "result": "ai_generated",
      "confidence": 0.93
    }
  ]
}
These results are supplementary. A seller that requires provenance verification runs its own detection through get_creative_features rather than trusting the buyer’s attached results. Verification results use one of four outcomes:
| Result | Description |
| --- | --- |
| `authentic` | Content verified as non-AI-generated |
| `ai_generated` | Content detected as AI-generated |
| `ai_modified` | Content detected as AI-modified (original non-AI content with AI alterations) |
| `inconclusive` | Detection was unable to reach a confident determination |

Example: provenance through a campaign

Acme Brands is running a spring campaign. Their agency, Meridian Media, uses an AI image generator to produce a set of display banners — photorealistic product shots with AI-generated backgrounds. Meridian attaches provenance to the creative manifest: digital_source_type is composite_with_trained_algorithmic_media, ai_tool identifies the generator, and disclosure.required is true with eu_ai_act_article_50 and ca_sb_942 listed as applicable regulations. For the EU jurisdiction, Meridian sets render_guidance.persistence to continuous with positions preferring overlay — expressing the EU AI Act’s requirement for persistent labeling.

The campaign is submitted to Pinnacle Publishing through AdCP. Pinnacle’s ad operations platform checks the provenance claim, then runs the creative through its verification pipeline via get_creative_features. The AI detection service returns ai_modified with 0.94 confidence — consistent with the declared source type. Pinnacle’s system confirms the claim, reads the render guidance for the serving jurisdiction, applies the required disclosure label with the specified persistence, and clears the creative for serving. The provenance metadata, the detection result, the render guidance, and the disclosure decision are all recorded and auditable.

If Meridian had declared digital_capture instead — claiming no AI involvement — Pinnacle’s detection service would have flagged the inconsistency. The creative would be held for review, not served.
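
The consistency check in this walkthrough can be sketched as a lookup from declared source type to the detection results that do not contradict it. The mapping and the helper name are illustrative assumptions, not part of the protocol; a real enforcement pipeline would tune this to its own detection vendors.

```python
# Which detection results are consistent with each declared source type.
# Illustrative mapping only — not defined by AdCP.
CONSISTENT_RESULTS = {
    "digital_capture": {"authentic", "inconclusive"},
    "digital_creation": {"authentic", "inconclusive"},
    "trained_algorithmic_media": {"ai_generated", "inconclusive"},
    "composite_with_trained_algorithmic_media": {"ai_generated", "ai_modified", "inconclusive"},
}

def claim_is_consistent(declared_source_type, detection_result):
    """True if the detector's result does not contradict the declared claim."""
    allowed = CONSISTENT_RESULTS.get(declared_source_type)
    # Source types without a mapping are not contradicted by any result.
    return allowed is None or detection_result in allowed

claim_is_consistent("composite_with_trained_algorithmic_media", "ai_modified")
# -> True: clear for serving, as in the Meridian example
claim_is_consistent("digital_capture", "ai_modified")
# -> False: hold the creative for review
```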

C2PA integration

The c2pa field provides a soft reference to C2PA Content Credentials — the cryptographic provenance standard developed by the Coalition for Content Provenance and Authenticity.
{
  "c2pa": {
    "manifest_url": "https://cdn.acmecorp.example.com/c2pa/manifests/hero_abc123.c2pa"
  }
}

Why a URL reference

C2PA bindings are typically embedded in the media file itself. But ad tech pipelines routinely transcode, resize, and reformat creative assets — breaking file-level C2PA bindings in the process. A URL reference to the original C2PA manifest store survives this transcoding, preserving the chain of provenance through the supply chain. The reference is a pointer, not a replacement for C2PA. Any party in the chain can fetch the manifest from the URL and verify the original content credentials, even after the media file has been transcoded.

Usage pattern

  1. Creator generates content and produces a C2PA manifest
  2. Creator uploads the manifest store to a stable URL
  3. Creator attaches the manifest_url in AdCP provenance
  4. Downstream parties (agencies, platforms, sellers) can verify the original credentials at any time by fetching the manifest

Disclosure requirements

The disclosure object declares regulatory obligations for AI-generated content.
{
  "disclosure": {
    "required": true,
    "jurisdictions": [
      {
        "country": "US",
        "region": "CA",
        "regulation": "ca_sb_942",
        "label_text": "Created with AI",
        "render_guidance": {
          "persistence": "flexible",
          "positions": ["prominent", "footer"]
        }
      },
      {
        "country": "DE",
        "regulation": "eu_ai_act_article_50",
        "label_text": "KI-generiert",
        "render_guidance": {
          "persistence": "continuous",
          "positions": ["overlay", "subtitle"]
        }
      },
      {
        "country": "CN",
        "regulation": "cn_deep_synthesis",
        "label_text": "AI-generated content",
        "render_guidance": {
          "persistence": "initial",
          "min_duration_ms": 3000,
          "positions": ["overlay", "pre_roll"]
        }
      }
    ]
  }
}
| Field | Required | Description |
| --- | --- | --- |
| `required` | Yes | Whether AI disclosure is required based on applicable regulations |
| `jurisdictions` | No | Array of jurisdictions where disclosure obligations apply |
| `jurisdictions[].country` | Yes | ISO 3166-1 alpha-2 country code |
| `jurisdictions[].region` | No | Sub-national region code (e.g., CA for California) |
| `jurisdictions[].regulation` | Yes | Regulation identifier |
| `jurisdictions[].label_text` | No | Required disclosure label text in the local language |
| `jurisdictions[].render_guidance` | No | How the disclosure should be rendered for this jurisdiction |
| `jurisdictions[].render_guidance.persistence` | No | How long the disclosure must persist: `continuous`, `initial`, or `flexible` |
| `jurisdictions[].render_guidance.min_duration_ms` | No | Minimum display duration in milliseconds (required context for `initial` persistence) |
| `jurisdictions[].render_guidance.positions` | No | Preferred disclosure positions in priority order (first supported wins) |
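
A minimal structural check over one jurisdictions[] entry might look like the sketch below. The validator and its messages are illustrative only; the authoritative constraints live in the provenance.json schema.

```python
VALID_PERSISTENCE = {"continuous", "initial", "flexible"}

def validate_jurisdiction(j):
    """Collect structural problems in one jurisdictions[] entry (illustrative)."""
    errors = []
    if len(j.get("country", "")) != 2:
        errors.append("country must be an ISO 3166-1 alpha-2 code")
    if "regulation" not in j:
        errors.append("regulation identifier is required")
    rg = j.get("render_guidance", {})
    persistence = rg.get("persistence")
    if persistence is not None and persistence not in VALID_PERSISTENCE:
        errors.append("persistence must be continuous, initial, or flexible")
    # min_duration_ms is the required context for initial persistence.
    if persistence == "initial" and "min_duration_ms" not in rg:
        errors.append("initial persistence should specify min_duration_ms")
    return errors

validate_jurisdiction({"country": "US", "region": "CA", "regulation": "ca_sb_942"})
# -> [] (well-formed entry)
```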

Render guidance

The render_guidance object on each jurisdiction expresses the declaring party’s intent for how the disclosure should be rendered based on the regulation’s requirements. Different regulations have different persistence requirements:
  • continuous — The disclosure must remain visible or audible throughout the content display duration. For video/audio, the full playback. For static formats (display, DOOH), the full display slot. For DOOH, “content duration” means the ad’s display slot within the rotation, not the screen’s full rotation cycle.
  • initial — The disclosure must appear at the start for a minimum duration before it may be removed. Pair with min_duration_ms to specify how long — without it, the duration is at the publisher’s discretion.
  • flexible — Disclosure presence is sufficient; the publisher controls timing and duration.
When multiple sources specify persistence for the same jurisdiction (e.g., brief required_disclosures[].persistence and provenance render_guidance.persistence), the most restrictive mode applies: continuous > initial > flexible.

The positions array is an ordered preference list. The first position that the serving format supports should be used. For example, ["overlay", "subtitle"] means “prefer overlay, fall back to subtitle if overlay is not available.”

Not all position-persistence combinations are meaningful. Positions with inherently bounded duration — end_card, pre_roll — cannot satisfy continuous persistence because they appear only for part of the content. Creative agents should not request continuous on these positions, and formats should not claim continuous support for them.

For audio-only environments (podcast, streaming audio, smart speakers), only audio, pre_roll, and companion positions are applicable. Visual positions (overlay, footer, subtitle) are undefined without a screen. Creative agents building for audio formats should restrict render_guidance.positions to audio-compatible values.

Render guidance travels with the creative through the supply chain. At serve time, the publisher reads the guidance from provenance and renders accordingly. Governance agents can audit whether the publisher followed the declared guidance.
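
The position-preference rule (“first supported wins”) can be sketched as follows. The helper name is hypothetical; `supported` stands in for whatever position set the serving format advertises.

```python
def choose_position(preferred, supported):
    """Return the first position in render_guidance.positions that the
    serving format supports. The preference list is ordered: first wins."""
    for position in preferred:
        if position in supported:
            return position
    return None  # no overlap: the publisher falls back to its own default

choose_position(["overlay", "subtitle"], {"subtitle", "footer"})
# -> "subtitle": overlay is preferred but unsupported, so fall back
```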

Multi-asset aggregation

When a creative is assembled from multiple assets with different render_guidance for the same jurisdiction (common in DCO), the most restrictive persistence applies across the assembled creative: if any asset requires continuous, the assembled creative requires continuous. This follows the same precedence as conflict resolution: continuous > initial > flexible.
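
The aggregation rule can be sketched as a minimum over a restrictiveness ranking. This is an illustrative helper, not a protocol function; the ranking encodes continuous > initial > flexible.

```python
# Rank persistence modes from most restrictive (lowest) to least.
PERSISTENCE_RANK = {"continuous": 0, "initial": 1, "flexible": 2}

def assembled_persistence(asset_modes):
    """Persistence for a multi-asset creative: the most restrictive
    declared mode across assets wins; assets with no declaration are ignored."""
    declared = [m for m in asset_modes if m is not None]
    if not declared:
        return None
    return min(declared, key=PERSISTENCE_RANK.__getitem__)

assembled_persistence(["flexible", None, "continuous"])
# -> "continuous": one asset requiring continuous binds the whole creative
```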

Enforcement vs self-reported compliance

For formats where the publisher controls the rendering surface (hosted video, display banners, SSAI), the publisher can enforce render guidance directly — rendering an overlay, controlling its duration, and verifying compliance. For opaque, self-rendering creatives (MRAID, JavaScript tags, VPAID), the creative controls its own viewport. The publisher cannot inject or enforce disclosure rendering inside the creative’s sandbox. In these cases, disclosure compliance depends on the creative agent embedding the disclosure during build.

The format’s disclosure_capabilities should reflect this: only claim persistence modes the format’s rendering layer can verify or enforce, not modes that rely on creative self-compliance. Governance agents can verify self-rendered disclosures post-hoc via get_creative_features by rendering the creative in a headless environment and inspecting for disclosure presence.

Known regulation identifiers

| Identifier | Regulation | Status |
| --- | --- | --- |
| `eu_ai_act_article_50` | EU AI Act Article 50 | Enforcement August 2026 |
| `ca_sb_942` | California SB 942 | Live since January 2026 |
| `cn_deep_synthesis` | China Deep Synthesis Provisions | In effect |
Regulation identifiers are conventions, not a closed enum. New regulations can be referenced without protocol changes.

Creative policy enforcement

Sellers can require provenance on submitted creatives through the provenance_required field in creative-policy:
{
  "$schema": "/schemas/core/creative-policy.json",
  "co_branding": "optional",
  "landing_page": "any",
  "templates_available": false,
  "provenance_required": true
}
When provenance_required is true:
  1. Buyers must attach provenance to creative submissions
  2. The seller may independently verify claims via get_creative_features
  3. Creatives without provenance are rejected
This field is surfaced in product discovery through get_products, so buyers know the requirement before submitting creatives.
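
A seller-side gate for this policy might look like the sketch below. `check_submission` is a hypothetical helper; independent verification via get_creative_features would follow this structural check and is not shown.

```python
def check_submission(policy, creative_manifest):
    """Gate a creative submission against the seller's creative policy.

    Returns (accepted, reason). Only the structural provenance_required
    check is modeled here; detection-based verification happens later.
    """
    if policy.get("provenance_required") and "provenance" not in creative_manifest:
        return False, "rejected: policy requires provenance on submissions"
    return True, "accepted pending independent verification"

policy = {"provenance_required": True}
check_submission(policy, {"assets": {}})
# -> (False, "rejected: policy requires provenance on submissions")
```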

Where provenance attaches

| Schema | Field | Description |
| --- | --- | --- |
| creative-asset | `provenance` | Default for the creative in the library |
| creative-manifest | `provenance` | Default for all assets in this manifest |
| image-asset | `provenance` | Override for a specific image |
| video-asset | `provenance` | Override for a specific video |
| audio-asset | `provenance` | Override for a specific audio file |
| text-asset | `provenance` | Override for specific text content |
| html-asset | `provenance` | Override for HTML content |
| css-asset | `provenance` | Override for CSS content |
| javascript-asset | `provenance` | Override for JavaScript content |
| vast-asset | `provenance` | Override for a VAST tag |
| daast-asset | `provenance` | Override for a DAAST tag |
| url-asset | `provenance` | Override for a URL asset |
| artifact | `provenance` | Default for the content artifact |
| artifact.assets[] (text, image, video, audio) | `provenance` | Override for a specific inline asset |

Schema reference

| Schema | Location |
| --- | --- |
| Provenance object | /schemas/core/provenance.json |
| Digital source type enum | /schemas/enums/digital-source-type.json |
| Creative asset (with provenance) | /schemas/core/creative-asset.json |
| Creative manifest (with provenance) | /schemas/core/creative-manifest.json |
| Creative policy (provenance_required) | /schemas/core/creative-policy.json |
| Artifact (with provenance) | /schemas/content-standards/artifact.json |