Attribution Reporting

Draft Community Group Report

This version:
https://wicg.github.io/attribution-reporting-api
Issue Tracking:
GitHub
Inline In Spec
Editors:
(Google Inc.)
(Google Inc.)
(Google Inc.)

Abstract

An API to report that an event may have been caused by another cross-site event. These reports are designed to transfer little enough data between sites that the sites can’t use them to track individual users.

Status of this document

This specification was published by the Web Platform Incubator Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.

1. Introduction

This section is non-normative.

This specification describes how web browsers can provide a mechanism to the web that supports measuring and attributing conversions (e.g. purchases) to ads a user interacted with on another site. This mechanism should remove one need for cross-site identifiers like third-party cookies.

1.1. Overview

Pages/embedded sites are given the ability to register attribution sources and attribution triggers, which can be linked by the User Agent to generate and send attribution reports containing information from both of those events.

A reporter https://reporter.example embedded on https://source.example is able to measure whether an interaction on the page led to an action on https://destination.example by registering an attribution source with attribution destinations of « https://destination.example ». Reporters are able to register sources through a variety of surfaces, but ultimately the reporter is required to provide the User Agent with an HTTP response header which allows the source to be eligible for attribution.

At a later point in time, the reporter, now embedded on https://destination.example, may register an attribution trigger. Reporters can register triggers by sending an HTTP response header containing information about the action/event that occurred. Internally, the User Agent attempts to match the trigger to previously registered source events based on where the sources/triggers were registered and configurations provided by the reporter.

If the User Agent is able to attribute the trigger to a source, it will generate and send an attribution report to the reporter via an HTTP POST request at a later point in time.

2. HTML monkeypatches

2.1. API for elements

interface mixin HTMLAttributionSrcElementUtils {
    [CEReactions, SecureContext] attribute USVString attributionSrc;
};

HTMLAnchorElement includes HTMLAttributionSrcElementUtils;
HTMLAreaElement includes HTMLAttributionSrcElementUtils;
HTMLImageElement includes HTMLAttributionSrcElementUtils;
HTMLScriptElement includes HTMLAttributionSrcElementUtils;

Add the following content attributes:

a

attributionsrc - A string containing zero or more URLs to which a background attributionsrc request will be made when the a is navigated.

area

attributionsrc - A string containing zero or more URLs to which a background attributionsrc request will be made when the area is navigated.

img

attributionsrc - A string containing zero or more URLs to which a background attributionsrc request will be made when set.

script

attributionsrc - A string containing zero or more URLs to which a background attributionsrc request will be made when set.

The IDL attribute attributionSrc must reflect the respective content attribute of the same name.

Whenever an img or a script element is created, or the element’s attributionSrc attribute is set or changed, run make background attributionsrc requests with the element, "event-source-or-trigger", and the current state of the element’s referrerpolicy content attribute.

More precisely specify which mutations are relevant for the attributionsrc attribute.

Modify update the image data as follows:

After the step

Set request’s priority to the current state...

add the step

  1. If the element has an attributionsrc attribute, set request’s Attribution Reporting Eligibility to "event-source-or-trigger".

A script fetch options has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".

Modify prepare the script element as follows:

After the step

Let fetch priority be the current state of el’s fetchpriority content attribute.

add the step

  1. Let Attribution Reporting eligibility be "event-source-or-trigger" if el has an attributionsrc content attribute and "unset" otherwise.

Add "and Attribution Reporting eligibility is Attribution Reporting eligibility." to the step

Let options be a script fetch options whose...

Modify set up the classic script request and set up the module script request as follows:

Add "and its Attribution Reporting eligibility is options’s Attribution Reporting eligibility."

Modify follow the hyperlink as follows:

After the step

If subject’s link types includes...

add the steps

  1. Let navigationSourceEligible be false.

  2. If subject has an attributionsrc attribute:

    1. Set navigationSourceEligible to true.

    2. Make background attributionsrc requests with subject and "navigation-source", and referrerPolicy.

Add "and navigationSourceEligible set to navigationSourceEligible" to the step

Navigate targetNavigable...

2.2. Window open steps

Modify tokenize the features argument as follows:

Replace the step

Collect a sequence of code points that are not feature separators code points from features given position. Set value to the collected code points, converted to ASCII lowercase.

with

Collect a sequence of code points that are not feature separators code points from features given position. Set value to the collected code points, converted to ASCII lowercase. Set originalCaseValue to the collected code points.

Replace the step

If name is not the empty string, then set tokenizedFeatures[name] to value.

with the steps

  1. If name is not the empty string:

    1. Switch on name:

      "attributionsrc"

      Run the following steps:

      1. If tokenizedFeatures[name] does not exist, set tokenizedFeatures[name] to a new list.

      2. Append originalCaseValue to tokenizedFeatures[name].

      Anything else

      Set tokenizedFeatures[name] to value.

Modify the window open steps as follows:

After the step

Let tokenizedFeatures be the result of tokenizing features.

add the steps

  1. Let navigationSourceEligible be false.

  2. If tokenizedFeatures["attributionsrc"] exists:

    1. Assert: tokenizedFeatures["attributionsrc"] is a list.

    2. Set navigationSourceEligible to true.

    3. Set attributionSrcUrls to a new list.

    4. For each value of tokenizedFeatures["attributionsrc"]:

      1. If value is the empty string, continue.

      2. Let decodedSrcBytes be the result of percent-decoding value.

      3. Let decodedSrc be the UTF-8 decode without BOM of decodedSrcBytes.

      4. Parse decodedSrc relative to the entry settings object, and set urlRecord to the resulting URL record, if any. If parsing failed, continue.

      5. Append urlRecord to attributionSrcUrls.

Use attributionSrcUrls and referrerPolicy with make a background attributionsrc request.

In each step that calls navigate, set navigationSourceEligible to navigationSourceEligible.
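The URL-parsing steps above can be sketched as follows. parseAttributionSrcFeatureValues is a hypothetical helper, and JavaScript’s decodeURIComponent stands in for percent-decoding followed by UTF-8 decode without BOM (unlike the spec’s percent-decode, it rejects malformed percent-encoding, so such values are skipped here):

```typescript
// Percent-decodes each collected feature value and parses it as a URL,
// skipping empty and unparseable values, as in the steps above. The base
// URL stands in for the entry settings object's API base URL.
function parseAttributionSrcFeatureValues(values: string[], baseUrl: string): URL[] {
  const urls: URL[] = [];
  for (const value of values) {
    if (value === "") continue;
    try {
      urls.push(new URL(decodeURIComponent(value), baseUrl));
    } catch {
      continue; // decoding or parsing failed
    }
  }
  return urls;
}
```

For example, a window.open features string of attributionsrc=https%3A%2F%2Freporter.example%2Fregister yields a single URL record after decoding and parsing, while empty or unparseable values are dropped.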

Add the following item to navigation params:

navigationSourceEligible

A boolean indicating whether the navigation can register a navigation source in its response. Defaults to false.

Modify navigate as follows:

Add an optional boolean parameter called navigationSourceEligible (default false).

In the step

Set navigationParams to a new navigation params with...

add the property

navigationSourceEligible

navigationSourceEligible

Use/propagate navigationSourceEligible to the navigation request's Attribution Reporting eligibility.

Enforce attribution-scope privacy limits.

3. Network monkeypatches

dictionary AttributionReportingRequestOptions {
  required boolean eventSourceEligible;
  required boolean triggerEligible;
};

partial dictionary RequestInit {
  AttributionReportingRequestOptions attributionReporting;
};

partial interface XMLHttpRequest {
  [SecureContext]
  undefined setAttributionReporting(AttributionReportingRequestOptions options);
};

A request has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".

To get an eligibility from AttributionReportingRequestOptions given an AttributionReportingRequestOptions options:

  1. Let eventSourceEligible be options’s eventSourceEligible.

  2. Let triggerEligible be options’s triggerEligible.

  3. If (eventSourceEligible, triggerEligible) is:

    (false, false)

    Return "empty".

    (false, true)

    Return "trigger".

    (true, false)

    Return "event-source".

    (true, true)

    Return "event-source-or-trigger".
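The four-way case analysis above can be sketched as follows; the option names mirror the IDL, but the helper getEligibility itself is illustrative:

```typescript
type Eligibility =
  | "empty"
  | "trigger"
  | "event-source"
  | "event-source-or-trigger";

interface AttributionReportingRequestOptions {
  eventSourceEligible: boolean;
  triggerEligible: boolean;
}

// Maps the two booleans to the four eligibility values, mirroring the
// case analysis in the steps above.
function getEligibility(options: AttributionReportingRequestOptions): Eligibility {
  if (options.eventSourceEligible) {
    return options.triggerEligible ? "event-source-or-trigger" : "event-source";
  }
  return options.triggerEligible ? "trigger" : "empty";
}
```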

Check permissions policy.

"Attribution-Reporting-Eligible" is a Dictionary Structured Header set on a request that indicates which registrations, if any, are allowed on the corresponding response. Its values are not specified and its allowed keys are:

"event-source"

An event source may be registered.

"navigation-source"

A navigation source may be registered.

"trigger"

A trigger may be registered.

"Attribution-Reporting-Support" is a Dictionary Structured Header set on a request that indicates which registrars, if any, the corresponding response can use. Its values are not specified and its allowed keys are the registrars.

To obtain a dictionary structured header value given a list of strings keys and a set of strings allowedKeys:

  1. For each key of allowedKeys, optionally append the concatenation of « "not-", key » to keys.

  2. Optionally, shuffle keys.

  3. Let entries be a new list.

  4. For each key of keys:

    1. Let value be true.

    2. Optionally, set value to a token corresponding to one of the strings in allowedKeys.

    3. Let params be a new map.

    4. For each key of allowedKeys, optionally set params[key] to an arbitrary bare item.

    5. Append a structured dictionary member with the key key, the value value, and the parameters params to entries.

  5. Return a dictionary containing entries.

Note: The user agent MAY "grease" the dictionary structured headers according to the preceding algorithm to help ensure that recipients use a proper structured header parser, rather than naive string equality or contains operations, which makes it easier to introduce backwards-compatible changes to the header definition in the future. Including the allowed keys as dictionary values or parameters helps ensure that only the dictionary’s keys are interpreted by the recipient. Likewise, shuffling the dictionary members helps ensure that, e.g., "key1, key2" is treated equivalently to "key2, key1".

In the following example, only the "trigger" key should be interpreted by the recipient after the header has been parsed as a structured dictionary:

EXAMPLE: Greased Attribution-Reporting-Eligible header
Attribution-Reporting-Eligible: not-event-source, trigger=event-source;navigation-source=3

In the following example, only the "os" key should be interpreted by the recipient after the header has been parsed as a structured dictionary:

EXAMPLE: Greased Attribution-Reporting-Support header
Attribution-Reporting-Support: os=web
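The greasing algorithm can be sketched as follows. greaseDictionaryHeader is a hypothetical helper; the random choices are injected as parameters so the sketch is deterministic, and the optional value and parameter greasing (steps 4.2 and 4.4) is omitted:

```typescript
// Serializes a greased dictionary structured header. A dictionary member
// whose value is boolean true serializes as the bare key, so the result is
// a comma-separated key list.
function greaseDictionaryHeader(
  keys: string[],
  allowedKeys: string[],
  addNegatedKeys: boolean, // a user agent would decide this at random
  shuffle: boolean,        // likewise
): string {
  const members = [...keys];
  if (addNegatedKeys) {
    for (const key of allowedKeys) members.push("not-" + key);
  }
  if (shuffle) members.reverse(); // stand-in for a random shuffle
  return members.join(", ");
}
```

A recipient that parses the result as a structured dictionary and considers only the allowed keys recovers exactly the original keys, regardless of greasing.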

To set Attribution Reporting headers given a request request:

  1. Let headers be request’s header list.

  2. Let eligibility be request’s Attribution Reporting eligibility.

  3. Delete "Attribution-Reporting-Eligible" from headers.

  4. Delete "Attribution-Reporting-Support" from headers.

  5. If eligibility is "unset", return.

  6. Let keys be a new list.

  7. If eligibility is:

    "empty"

    Do nothing.

    "event-source"

    Append "event-source" to keys.

    "navigation-source"

    Append "navigation-source" to keys.

    "trigger"

    Append "trigger" to keys.

    "event-source-or-trigger"

    Append "event-source" and "trigger" to keys.

  8. Let supportedRegistrars be the result of getting supported registrars.

  9. If supportedRegistrars is empty, clear keys.

  10. Let eligibleDict be the result of obtaining a dictionary structured header value with keys and the set containing all the eligible keys.

  11. Set a structured field value given ("Attribution-Reporting-Eligible", eligibleDict) in headers.

  12. Let supportDict be the result of obtaining a dictionary structured header value with supportedRegistrars and the set containing all the registrars.

  13. Set a structured field value given ("Attribution-Reporting-Support", supportDict) in headers.
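Ignoring the optional greasing, the header computation above can be sketched as follows. attributionReportingHeaders is an illustrative helper; since every dictionary member’s value is true, each member serializes as the bare key:

```typescript
type Eligibility =
  | "unset"
  | "empty"
  | "event-source"
  | "navigation-source"
  | "trigger"
  | "event-source-or-trigger";

// Computes the two request headers without greasing.
function attributionReportingHeaders(
  eligibility: Eligibility,
  supportedRegistrars: string[],
): Map<string, string> {
  const headers = new Map<string, string>();
  if (eligibility === "unset") return headers; // step 5: no headers are set
  let keys: string[];
  switch (eligibility) {
    case "empty":
      keys = [];
      break;
    case "event-source-or-trigger":
      keys = ["event-source", "trigger"];
      break;
    default:
      keys = [eligibility];
  }
  if (supportedRegistrars.length === 0) keys = []; // step 9
  headers.set("Attribution-Reporting-Eligible", keys.join(", "));
  headers.set("Attribution-Reporting-Support", supportedRegistrars.join(", "));
  return headers;
}
```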

3.1. Fetch monkeypatches

Modify fetch as follows:

After the step

If request’s header list does not contain Accept...

add the step

  1. Set Attribution Reporting headers with request.

Modify Request(input, init) as follows:

In the step

Set request to a new request with the following properties:

add the property

Attribution Reporting eligibility

request’s Attribution Reporting eligibility.

After the step

If init["priority"] exists, then:

add the step

  1. If init["attributionReporting"] exists, then set request’s Attribution Reporting eligibility to the result of get an eligibility from AttributionReportingRequestOptions with it.

3.2. XMLHttpRequest monkeypatches

An XMLHttpRequest object has an associated Attribution Reporting eligibility (an eligibility). Unless otherwise stated it is "unset".

The setAttributionReporting(options) method must run these steps:

  1. If this’s state is not opened, then throw an "InvalidStateError" DOMException.

  2. If this’s send() flag is set, then throw an "InvalidStateError" DOMException.

  3. Set this’s Attribution Reporting eligibility to the result of get an eligibility from AttributionReportingRequestOptions with options.

Modify send(body) as follows:

Add a Document parameter called document.

After the step:

Let req be a new request, initialized as follows...

Add the step:

  1. Set req’s Attribution Reporting eligibility to this’s Attribution Reporting eligibility.

  2. Set Attribution Reporting headers with req and document’s context origin.

4. Permissions Policy integration

This specification defines a policy-controlled feature identified by the string "attribution-reporting". Its default allowlist is *.

5. Clear Site Data integration

In clear DOM-accessible storage for origin, add the following step:

  1. Run clear site data with origin.

To clear site data given an origin origin:

  1. For each attribution source source of the attribution source cache:

    1. If source’s reporting origin and origin are same origin, remove source from the attribution source cache.

  2. For each event-level report report of the event-level report cache:

    1. If report’s reporting origin and origin are same origin, remove report from the event-level report cache.

  3. For each aggregatable attribution report report of the aggregatable attribution report cache:

    1. If report’s reporting origin and origin are same origin, remove report from the aggregatable attribution report cache.

Note: We deliberately do not remove matching entries from the attribution rate-limit cache and aggregatable debug rate-limit cache, as doing so would allow a site to reset and therefore exceed the intended rate limits at will.
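The clearing algorithm can be sketched as follows. Origins are modeled as serialized strings, so "same origin" becomes string equality; clearSiteData and the entry shape are illustrative:

```typescript
// Each attribution cache entry carries the one field this algorithm
// inspects: its reporting origin (serialized).
interface HasReportingOrigin {
  reportingOrigin: string;
}

// Removes every entry whose reporting origin matches the cleared origin.
// The rate-limit caches are deliberately not passed in, per the note above:
// their entries must survive so rate limits cannot be reset at will.
function clearSiteData(origin: string, caches: Set<HasReportingOrigin>[]): void {
  for (const cache of caches) {
    for (const entry of [...cache]) {
      if (entry.reportingOrigin === origin) cache.delete(entry);
    }
  }
}
```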

6. Structures

6.1. Registration info

A registration info is a struct with the following items:

preferred platform (default null)

Null or a registrar.

report header errors (default false)

A boolean.

6.2. Trigger state

A trigger state is a struct with the following items:

trigger data

A non-negative 64-bit integer.

report window

A report window.

6.3. Randomized response output configuration

A randomized response output configuration is a struct with the following items:

max attributions per source

A positive integer.

trigger specs

A trigger spec map.

6.4. Randomized source response

A randomized source response is null or a list of trigger states.

6.5. Attribution filtering

A filter value is a set of strings.

A filter map is a map whose keys are strings and whose values are filter values.

A filter config is a struct with the following items:

map

A filter map.

lookback window

Null or a positive duration.

6.6. Suitable origin

A suitable origin is an origin that is suitable.

6.7. Source type

A source type is one of the following:

"navigation"

The source was associated with a top-level navigation.

"event"

The source was not associated with a top-level navigation.

6.8. Report window

A report window is a struct with the following items:

start

A moment.

end

A moment, strictly greater than start.

A report window list is a list of report windows. It has the following constraints:

A report window list list’s total window is a report window struct with the following fields:

start

The start of list[0].

end

The end of list[list’s size - 1].

Note: The total window is conceptually a union of report windows, because there are no gaps in time between any of the windows.
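The total window can be sketched as follows, with moments modeled as numbers (e.g. seconds since some epoch). The helper also checks the two properties the definition relies on: each window’s end strictly exceeds its start, and per the note above, consecutive windows leave no gaps:

```typescript
interface ReportWindow {
  start: number;
  end: number; // strictly greater than start
}

// Computes a report window list's total window.
function totalWindow(list: ReportWindow[]): ReportWindow {
  if (list.length === 0) throw new Error("list must be non-empty");
  for (let i = 0; i < list.length; i++) {
    if (!(list[i].end > list[i].start)) throw new Error("end must exceed start");
    if (i > 0 && list[i].start !== list[i - 1].end) throw new Error("windows must be contiguous");
  }
  return { start: list[0].start, end: list[list.length - 1].end };
}
```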

6.9. Summary operator

A summary operator summarizes the triggers attributed to an attribution source. Its value is one of the following:

"count"

Number of triggers attributed.

"value_sum"

Sum of the value of triggers.

6.10. Summary bucket

A summary bucket is a struct with the following items:

start

An unsigned 32-bit integer.

end

An unsigned 32-bit integer.

A summary bucket list is a list of summary buckets. It has the following constraints:

6.11. Trigger-data matching mode

A trigger-data matching mode is one of the following:

"exact"

Trigger data must be less than the default trigger data cardinality. Otherwise, no event-level attribution takes place.

"modulus"

Trigger data is taken modulo the default trigger data cardinality.

6.12. Trigger specs

A trigger spec is a struct with the following items:

event-level report windows

A report window list.

A trigger spec map is a map whose keys are unsigned 32-bit integers and values are trigger specs.

To find a matching trigger spec given an attribution source source and an unsigned 64-bit integer triggerData:

  1. Let specs be source’s trigger specs.

  2. If source’s trigger-data matching mode is:

    "exact"

    Run the following steps:

    1. If specs[triggerData] exists, return its entry.

    2. Return an error.

    "modulus"

    Run the following steps:

    1. If specs is empty, return an error.

    2. Let keys be specs’s keys.

    3. Let index be the remainder when dividing triggerData by keys’s size.

    4. Return the entry for specs[keys[index]].
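The matching algorithm above can be sketched as follows. Trigger data is a 64-bit integer in the spec; plain numbers suffice for this sketch, and the error case is modeled as null:

```typescript
interface TriggerSpec {
  eventLevelReportWindows: { start: number; end: number }[];
}

// Returns the matched [key, spec] entry, or null on error.
function findMatchingTriggerSpec(
  specs: Map<number, TriggerSpec>,
  matchingMode: "exact" | "modulus",
  triggerData: number,
): [number, TriggerSpec] | null {
  if (matchingMode === "exact") {
    const spec = specs.get(triggerData);
    return spec === undefined ? null : [triggerData, spec];
  }
  // "modulus": index into the map's keys by triggerData mod size.
  if (specs.size === 0) return null;
  const keys = [...specs.keys()];
  const key = keys[triggerData % keys.length];
  return [key, specs.get(key)!];
}
```

Note that under "modulus" the trigger data selects among the map’s keys by position, so the matched key need not equal the trigger data itself.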

6.13. Aggregatable debug reporting config

An aggregatable debug reporting config is a struct with the following items:

key piece (default 0)

A non-negative 128-bit integer.

debug data (default an empty map)

A map whose keys are debug data types and whose values are aggregatable contributions.

aggregation coordinator (default default aggregation coordinator)

An aggregation coordinator.

6.14. Attribution scopes

An attribution scopes is a struct with the following items:

limit

A positive 32-bit integer representing the number of distinct values allowed per attribution destination for the source’s reporting origin.

values

A set of strings.

max event states

A positive integer representing the maximum number of trigger states for event sources per attribution destination for the source’s reporting origin.

6.15. Attribution source

An attribution source is a struct with the following items:

internal ID

An internal ID.

source origin

A suitable origin.

event ID

A non-negative 64-bit integer.

attribution destinations

A set of sites.

reporting origin

A suitable origin.

source type

A source type.

expiry

A duration.

trigger specs

A trigger spec map.

aggregatable report window

A report window.

priority

A 64-bit integer.

source time

A moment.

number of event-level reports

Number of event-level reports created for this attribution source.

max number of event-level reports

The maximum number of event-level reports that can be created for this attribution source.

event-level attributable (default true)

A boolean.

dedup keys

A set of dedup keys associated with this attribution source.

randomized response (default null)

A randomized source response.

randomized trigger rate (default 0)

A number between 0 and 1 (both inclusive).

event-level epsilon

A double.

filter data

A filter map.

debug key

Null or a non-negative 64-bit integer.

aggregation keys

A map whose keys are strings and whose values are non-negative 128-bit integers.

remaining aggregatable attribution budget

A non-negative integer.

named budgets

A map whose keys are strings and whose values are non-negative integers.

remaining named budgets

A map whose keys are strings and whose values are non-negative integers.

aggregatable dedup keys

A set of aggregatable dedup key values associated with this attribution source.

debug reporting enabled

A boolean.

number of aggregatable attribution reports

Number of aggregatable attribution reports created for this attribution source.

trigger-data matching mode

A trigger-data matching mode.

cookie-based debug allowed (default false)

A boolean.

fenced

A boolean.

remaining aggregatable debug budget

A non-negative integer.

number of aggregatable debug reports

Number of aggregatable debug reports created for this attribution source.

aggregatable debug reporting config

An aggregatable debug reporting config.

destination limit priority

A 64-bit integer.

attribution scopes (default null)

Null or an attribution scopes.

An attribution source source’s expiry time is source’s source time + source’s expiry.

An attribution source source’s source site is the result of obtaining a site from source’s source origin.

6.16. Aggregatable trigger data

An aggregatable trigger data is a struct with the following items:

key piece

A non-negative 128-bit integer.

source keys

A set of strings.

filters

A list of filter configs.

negated filters

A list of filter configs.

6.17. Aggregatable values configuration

An aggregatable key value is a struct with the following items:

value

A non-negative 32-bit integer.

filtering ID

A non-negative integer.

An aggregatable values configuration is a struct with the following items:

values

A map whose keys are strings and whose values are aggregatable key values.

filters

A list of filter configs.

negated filters

A list of filter configs.

6.18. Aggregatable dedup key

An aggregatable dedup key is a struct with the following items:

dedup key

Null or a non-negative 64-bit integer.

filters

A list of filter configs.

negated filters

A list of filter configs.

6.19. Named budget

A named budget is a struct with the following items:

name

Null or a string.

filters

A list of filter configs.

negated filters

A list of filter configs.

6.20. Event-level trigger configuration

An event-level trigger configuration is a struct with the following items:

trigger data

A non-negative 64-bit integer.

dedup key

Null or a non-negative 64-bit integer.

priority

A 64-bit integer.

filters

A list of filter configs.

negated filters

A list of filter configs.

value

A positive unsigned 32-bit integer.

6.21. Aggregation coordinator

An aggregation coordinator is one of a user-agent-determined set of suitable origins that specifies which aggregation service deployment to use.

6.22. Aggregatable source registration time configuration

An aggregatable source registration time configuration is one of the following:

"exclude"

"source_registration_time" is excluded from an aggregatable attribution report's shared info.

"include"

"source_registration_time" is included in an aggregatable attribution report's shared info.

6.23. Attribution trigger

An attribution trigger is a struct with the following items:

attribution destination

A site.

trigger time

A moment.

reporting origin

A suitable origin.

filters

A list of filter configs.

negated filters

A list of filter configs.

debug key

Null or a non-negative 64-bit integer.

event-level trigger configurations

A set of event-level trigger configurations.

aggregatable trigger data

A list of aggregatable trigger data.

aggregatable values configurations

A list of aggregatable values configurations.

aggregatable dedup keys

A list of aggregatable dedup keys.

debug reporting enabled

A boolean.

aggregation coordinator

An aggregation coordinator.

aggregatable source registration time configuration

An aggregatable source registration time configuration.

trigger context ID

Null or a string.

fenced

A boolean.

aggregatable filtering ID max bytes

A positive integer.

aggregatable debug reporting config

An aggregatable debug reporting config.

attribution scopes

A set of strings.

named budgets

A list of named budgets.

6.24. Attribution report

An attribution report is a struct with the following items:

reporting origin

A suitable origin.

report time

A moment.

external ID

A UUID formatted as a string.

internal ID

An internal ID.

An attribution debug info is a tuple with the following items:

source debug key

Null or a non-negative 64-bit integer.

trigger debug key

Null or a non-negative 64-bit integer.

6.25. Event-level report

An event-level report is an attribution report with the following additional items:

event ID

A non-negative 64-bit integer.

source type

A source type.

trigger data

A non-negative 64-bit integer.

randomized trigger rate

A number between 0 and 1 (both inclusive).

trigger priority

A 64-bit integer.

trigger time

A moment.

source ID

A string.

attribution destinations

A set of sites.

attribution debug info

An attribution debug info.

6.26. Aggregatable report

An aggregatable contribution is a struct with the following items:

key

A non-negative 128-bit integer.

value

A non-negative 32-bit integer.

filtering ID

A non-negative integer.

An aggregatable report is an attribution report with the following additional items:

contributions

A list of aggregatable contributions.

effective attribution destination

A site.

aggregation coordinator

An aggregation coordinator.

An aggregatable attribution report is an aggregatable report with the following additional items:

source time

A moment.

source registration time configuration

An aggregatable source registration time configuration.

is null report (default false)

A boolean.

trigger context ID

Null or a string.

filtering ID max bytes

A positive integer.

attribution debug info

An attribution debug info.

source ID

Null or a string.

An aggregatable debug report is an aggregatable report.

6.27. Attribution rate-limits

A rate-limit scope is one of the following:

An attribution rate-limit record is a struct with the following items:

scope

A rate-limit scope.

source site

A site.

attribution destination

A site.

reporting origin

A suitable origin.

time

A moment.

expiry time

Null or a moment.

entity ID

Null for fake reports or an internal ID for an event-level report, aggregatable attribution report, or attribution source.

deactivated for unexpired destination limit (default false)

A boolean.

destination limit priority (default null)

Null or a 64-bit integer.

6.28. Aggregatable debug rate-limits

An aggregatable debug rate-limit record is a struct with the following items:

context site

A site.

reporting site

A site.

time

A moment.

consumed budget

A positive integer.

6.29. Attribution debug data

A debug data type is a non-empty string that specifies the set of data that is contained in a verbose debug report or in an aggregatable debug report.

A source debug data type is a debug data type for source registrations. Possible values are:

A trigger debug data type is a debug data type for trigger registrations. Possible values are:

An OS debug data type is a debug data type for OS registrations. Possible values are:

A header errors debug data type is a debug data type for registration header errors. Possible values are:

6.30. Verbose debug report

A verbose debug data is a struct with the following items:

data type

A debug data type.

body

A map whose fields are determined by the data type.

A verbose debug report is a struct with the following items:

data

A list of verbose debug data.

reporting origin

A suitable origin.

6.31. Triggering result

A triggering status is one of the following:

Note: "noised" only applies for triggering event-level attribution when it is attributed successfully but dropped as the noise was applied to the source.

A trigger debug data is a tuple with the following items:

data type

A trigger debug data type.

report

Null or an attribution report.

A triggering result is a tuple with the following items:

status

A triggering status.

debug data

Null or a trigger debug data.

6.32. Destination rate-limit result

A destination rate-limit result is one of the following:

7. Storage

A user agent holds an attribution source cache, which is a set of attribution sources.

A user agent holds an event-level report cache, which is a set of event-level reports.

A user agent holds an aggregatable attribution report cache, which is a set of aggregatable attribution reports.

A user agent holds an attribution rate-limit cache, which is a set of attribution rate-limit records.

A user agent holds an aggregatable debug rate-limit cache, which is a set of aggregatable debug rate-limit records.

The above caches are collectively known as the attribution caches. The attribution caches are shared among all environment settings objects.

Note: This would ideally use storage bottles to provide access to the attribution caches. However attribution data is inherently cross-site, and operations on storage would need to span across all storage bottle maps.

An internal ID is an integer.

To get the next internal ID, return an internal ID strictly greater than any previously returned by this algorithm. The user agent MAY reset this sequence when no attribution cache entry contains an internal ID.

8. Constants

Valid source expiry range is a 2-tuple of positive durations that controls the minimum and maximum value that can be used as an expiry, respectively. Its value is (1 day, 30 days).

Min report window is a positive duration that controls the minimum duration between an attribution source’s source time and any end in aggregatable report window or event-level report windows. Its value is 1 hour.

Max entries per filter data is a positive integer that controls the maximum size of an attribution source's filter data. Its value is 50.

Max values per filter data entry is a positive integer that controls the maximum size of each value of an attribution source's filter data. Its value is 50.

Max length per filter string is a positive integer that controls the maximum length of an attribution source’s filter data’s keys and its values’ items. Its value is 25.

Attribution rate-limit window is a positive duration that controls the rate-limiting window for attribution. Its value is 30 days.

Max destinations per source is a positive integer that controls the maximum size of an attribution source's attribution destinations. Its value is 3.

Max settable event-level attributions per source is a positive integer that controls the maximum value of max number of event-level reports. Its value is 20.

Max settable event-level report windows is a positive integer that controls the maximum size of event-level report windows. Its value is 5.

Default event-level attributions per source is a map that controls how many times a single attribution source can create an event-level report by default. Its value is «[ navigation → 3, event → 1 ]».

Allowed aggregatable budget per source is a positive integer that controls the total required aggregatable budget of all aggregatable reports created for an attribution source. Its value is 65536.

Max aggregation keys per source registration is a positive integer that controls the maximum size of an attribution source's aggregation keys. Its value is 20.

Max length per aggregation key identifier is a positive integer that controls the maximum length of an attribution source's aggregation keys’ keys. Its value is 25.

Default trigger data cardinality is a map that controls the valid range of trigger data. Its value is «[ navigation → 8, event → 2 ]».

Max distinct trigger data per source is a positive integer that controls the maximum size of a trigger spec map for an attribution source. Its value is 32.

Max length per trigger context ID is a positive integer that controls the maximum length of an attribution trigger's trigger context ID. Its value is 64.

Default filtering ID value is a non-negative integer. Its value is 0. It is the default value for flexible contribution filtering of aggregatable reports.

Default filtering ID max bytes is a positive integer that controls the max bytes used if none is explicitly chosen. Its value is 1. The max bytes value limits the size of filtering IDs within an aggregatable attribution report.

Valid filtering ID max bytes range is a set of positive integers that controls the allowable values of max bytes. Its value is the range 1 to 8, inclusive.

Max contributions per aggregatable debug report is a positive integer that controls the maximum size of an aggregatable debug report's contributions. Its value is 2.

Aggregatable debug rate-limit window is a positive duration that controls the rate-limiting window for aggregatable debug reporting. Its value is 1 day.

Max aggregatable debug budget per rate-limit window is a tuple consisting of two positive integers. The first controls the total required aggregatable budget of all aggregatable debug reports with a given context site per aggregatable debug rate-limit window. The second controls the total required aggregatable budget of all aggregatable debug reports with a given (context site, reporting site) per aggregatable debug rate-limit window. Its value is (2^20, 65536).

Default max event states is a positive integer that controls the default max event states. Its value is 3.

Max length of attribution scope for source is a positive integer that controls the maximum length of an attribution scope from an attribution source's values. Its value is 50.

Max attribution scopes per source is a positive integer that controls the maximum size of an attribution source's values. Its value is 20.

Max length per budget name for source is a positive integer that controls the maximum length of keys of an attribution source's named budgets and remaining named budgets. Its value is 25.

Max named budgets per source registration is a positive integer that controls the maximum size of an attribution source's remaining named budgets and named budgets. Its value is 25.

9. Vendor-Specific Values

Max pending sources per source origin is a positive integer that controls how many attribution sources can be in the attribution source cache per source origin.

Max settable event-level epsilon is a non-negative double that controls the default and maximum values that a source registration can specify for the epsilon parameter used to compute the channel capacity of a source and to obtain a randomized source response.

Max trigger-state cardinality is a positive integer that controls the maximum size of the set of possible trigger states for any one attribution source.

Randomized null attribution report rate excluding source registration time is a double between 0 and 1 (both inclusive) that controls the rate at which null attribution reports are generated for an attribution trigger whose aggregatable source registration time configuration is "exclude". If automation local testing mode is true, this is 0.

Randomized null attribution report rate including source registration time is a double between 0 and 1 (both inclusive) that controls the rate at which null attribution reports are generated for an attribution trigger whose aggregatable source registration time configuration is "include". If automation local testing mode is true, this is 0.

Max event-level reports per attribution destination is a positive integer that controls how many event-level reports can be in the event-level report cache per site in attribution destinations.

Max aggregatable attribution reports per attribution destination is a positive integer that controls how many aggregatable attribution reports can be in the aggregatable attribution report cache per effective attribution destination.

Max event-level channel capacity per source is a map that controls how many bits of information can be exposed in association with a single attribution source. The keys are «navigation, event». The values are non-negative doubles.

Max event-level attribution scopes channel capacity per source is a map that controls how many bits of information can be exposed due to attribution scopes for a single attribution source. The keys are «navigation, event». The values are non-negative doubles.

Max aggregatable reports per source is a tuple consisting of two positive integers. The first controls how many aggregatable attribution reports can be created by attribution triggers attributed to a single attribution source. The second controls how many aggregatable debug reports can be created for an attribution source.

Max destinations covered by unexpired sources is a positive integer that controls the maximum number of distinct sites across all attribution destinations for unexpired attribution sources with a given (source site, reporting origin site).

Destination rate-limit window is a positive duration that controls the rate-limiting window for destinations.

Max destinations per rate-limit window is a tuple consisting of two integers. The first controls the maximum number of distinct sites across all attribution destinations for attribution sources with a given source site per destination rate-limit window. The second controls the maximum number of distinct sites across all attribution destinations for attribution sources with a given (source site, reporting origin site) per destination rate-limit window.

Max destinations per source reporting site per day is an integer that controls the maximum number of distinct sites across all attribution destinations for attribution sources with a given (source site, reporting origin site) per day.

Max source reporting origins per rate-limit window is a positive integer that controls the maximum number of distinct reporting origins for a (source site, attribution destination) that can create attribution sources per attribution rate-limit window.

Max source reporting origins per source reporting site is a positive integer that controls the maximum number of distinct reporting origins for a (source site, reporting origin site) that can create attribution sources per origin rate-limit window.

Origin rate-limit window is a positive duration that controls the rate-limiting window for max source reporting origins per source reporting site.

Max attribution reporting origins per rate-limit window is a positive integer that controls the maximum number of distinct reporting origins for a (source site, attribution destination) that can create event-level reports per attribution rate-limit window.

Max attributions per rate-limit window is a positive integer that controls the maximum number of attributions for a (source site, attribution destination, reporting origin site) per attribution rate-limit window. This attribution limit is separate for event-level and aggregate reporting.

Randomized aggregatable attribution report delay is a positive duration that controls the random delay to deliver an aggregatable attribution report. If automation local testing mode is true, this is 0.

Default aggregation coordinator is the aggregation coordinator that controls how to obtain the public key for encrypting an aggregatable report by default.

10. General Algorithms

10.1. Serialize an integer

To serialize an integer, represent it as a string of the shortest possible decimal number.

This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201

10.2. Parsing JSON fields

Note: The "Attribution-Reporting-Register-Source" and "Attribution-Reporting-Register-Trigger" response headers contain JSON-encoded data, rather than structured values, because of limitations on nesting in the latter. The recursive nature of JSON makes it more amenable to future extensions.

To parse an optional 64-bit signed integer given a map map, a string key, and a possibly null 64-bit signed integer default:

  1. If map[key] does not exist, return default.

  2. If map[key] is not a string, return an error.

  3. Let value be the result of applying the rules for parsing integers to map[key].

  4. If value is an error, return an error.

  5. If value cannot be represented by a 64-bit signed integer, return an error.

  6. Return value.

To parse an optional 64-bit unsigned integer given a map map, a string key, and a possibly null 64-bit unsigned integer default:

  1. If map[key] does not exist, return default.

  2. If map[key] is not a string, return an error.

  3. Let value be the result of applying the rules for parsing non-negative integers to map[key].

  4. If value is an error, return an error.

  5. If value cannot be represented by a 64-bit unsigned integer, return an error.

  6. Return value.
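The two integer parsers above differ only in the parsing rules applied and the representable range. A minimal, non-normative Python sketch (the spec’s HTML "rules for parsing integers" are approximated here with `int()`; `ParseError` is a hypothetical error type):

```python
# Non-normative sketch of "parse an optional 64-bit signed/unsigned integer".
I64_MIN, I64_MAX = -2**63, 2**63 - 1
U64_MAX = 2**64 - 1

class ParseError(Exception):
    pass

def _parse(m: dict, key: str, default, lo: int, hi: int):
    # Step 1: missing entry returns the caller-supplied default.
    if key not in m:
        return default
    v = m[key]
    # Step 2: the value must be a string, not a JSON number.
    if not isinstance(v, str):
        raise ParseError("value is not a string")
    # Step 3: approximate the rules for parsing (non-negative) integers.
    try:
        value = int(v, 10)
    except ValueError:
        raise ParseError("not an integer")
    # Steps 4-5: reject values outside the representable range.
    if not lo <= value <= hi:
        raise ParseError("out of range")
    return value

def parse_optional_int64(m, key, default=None):
    return _parse(m, key, default, I64_MIN, I64_MAX)

def parse_optional_uint64(m, key, default=None):
    return _parse(m, key, default, 0, U64_MAX)
```

Note the string requirement: 64-bit values cannot be represented exactly as JSON numbers in all parsers, so the headers carry them as strings.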

10.3. Serialize attribution destinations

To serialize attribution destinations destinations, run the following steps:

  1. Assert: destinations is not empty.

  2. Assert: destinations is sorted in ascending order, with a being less than b if a, serialized, is less than b, serialized.

  3. Let destinationStrings be a list.

  4. For each destination in destinations:

    1. Assert: destination is not an opaque origin.

    2. Append destination serialized to destinationStrings.

  5. If destinationStrings’s size is equal to 1, return destinationStrings[0].

  6. Return destinationStrings.

Note: destinations is required to be sorted to avoid revealing extra information about the original source registration, namely the order of the "destination" field in the original JSON registration, which can be used to distinguish semantically equivalent registrations.
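The serialization collapses a single destination to a bare string but keeps a list otherwise. A non-normative sketch, representing destinations as their already-serialized site strings:

```python
# Non-normative sketch of "serialize attribution destinations".
# Destinations are pre-serialized site strings, e.g. "https://destination.example".
def serialize_destinations(destinations: list[str]):
    # Steps 1-2: non-empty and sorted by serialized value.
    assert destinations, "destinations must not be empty"
    assert destinations == sorted(destinations), "destinations must be sorted"
    # Step 5: a single destination serializes to a bare string.
    if len(destinations) == 1:
        return destinations[0]
    # Step 6: otherwise, the list of serialized destinations.
    return list(destinations)
```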

To check if a scheme is suitable given a string scheme:

  1. If scheme is "http" or "https", return true.

  2. Return false.

To check if an origin is suitable given an origin origin:

  1. If origin is not a potentially trustworthy origin, return false.

  2. If origin’s scheme is not suitable, return false.

  3. Return true.

10.4. Parsing filter data

To parse filter values given a value:

  1. If value is not a map, return an error.

  2. Let result be a new filter map.

  3. For each filter → data of value:

    1. If filter starts with "_", return an error.

    2. If data is not a list, return an error.

    3. Let set be a new set.

    4. For each d of data:

      1. If d is not a string, return an error.

      2. Append d to set.

    5. Set result[filter] to set.

  4. Return result.

To parse filter data given a value:

  1. Let map be the result of running parse filter values with value.

  2. If map is an error, return it.

  3. If map’s size is greater than the max entries per filter data, return an error.

  4. For each filter → set of map:

    1. If filter’s length is greater than the max length per filter string, return an error.

    2. If set’s size is greater than the max values per filter data entry, return an error.

    3. For each s of set:

      1. If s’s length is greater than the max length per filter string, return an error.

  5. Return map.
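The two algorithms above can be sketched as follows: parse filter values does the structural validation, and parse filter data layers the §8 size limits on top. A non-normative Python sketch (errors modeled as `ValueError`):

```python
# Non-normative sketch of "parse filter values" and "parse filter data",
# with the constants from section 8.
MAX_ENTRIES = 50            # max entries per filter data
MAX_VALUES_PER_ENTRY = 50   # max values per filter data entry
MAX_STRING_LEN = 25         # max length per filter string

def parse_filter_values(value):
    if not isinstance(value, dict):
        raise ValueError("not a map")
    result = {}
    for filt, data in value.items():
        if filt.startswith("_"):
            raise ValueError("keys starting with '_' are reserved")
        if not isinstance(data, list):
            raise ValueError("filter values must be a list")
        s = set()
        for d in data:
            if not isinstance(d, str):
                raise ValueError("filter value must be a string")
            s.add(d)
        result[filt] = s
    return result

def parse_filter_data(value):
    m = parse_filter_values(value)
    if len(m) > MAX_ENTRIES:
        raise ValueError("too many filter entries")
    for filt, s in m.items():
        if len(filt) > MAX_STRING_LEN:
            raise ValueError("filter key too long")
        if len(s) > MAX_VALUES_PER_ENTRY:
            raise ValueError("too many values in entry")
        if any(len(x) > MAX_STRING_LEN for x in s):
            raise ValueError("filter value too long")
    return m
```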

To parse filter config given a value:

  1. If value is not a map, return an error.

  2. Let lookbackWindow be null.

  3. If value["_lookback_window"] exists:

    1. If value["_lookback_window"] is not a positive integer, return an error.

    2. Set lookbackWindow to the duration of value["_lookback_window"] seconds.

    3. Remove value["_lookback_window"].

  4. Let map be the result of running parse filter values with value.

  5. If map is an error, return it.

  6. Let filter be a filter config with the items:

    map: map

    lookback window: lookbackWindow

  7. Return filter.

10.5. Parsing filters

To parse filters given a value:

  1. Let filtersList be a new list.

  2. If value is a map, then:

    1. Let filterConfig be the result of running parse filter config with value.

    2. If filterConfig is an error, return it.

    3. Append filterConfig to filtersList.

    4. Return filtersList.

  3. If value is not a list, return an error.

  4. For each data of value:

    1. Let filterConfig be the result of running parse filter config with data.

    2. If filterConfig is an error, return it.

    3. Append filterConfig to filtersList.

  5. Return filtersList.

To parse a filter pair given a map map:

  1. Let positive be a list of filter configs, initially empty.

  2. If map["filters"] exists, set positive to the result of running parse filters with map["filters"].

  3. If positive is an error, return it.

  4. Let negative be a list of filter configs, initially empty.

  5. If map["not_filters"] exists, set negative to the result of running parse filters with map["not_filters"].

  6. If negative is an error, return it.

  7. Return the tuple (positive, negative).
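Taken together, parse filter config strips the reserved `_lookback_window` key before parsing the remaining filter values, parse filters accepts either a single map or a list of maps, and parse a filter pair reads the `filters`/`not_filters` keys. A non-normative sketch (lookback windows represented as seconds, list items assumed to be strings for brevity):

```python
# Non-normative sketch of "parse filter config", "parse filters",
# and "parse a filter pair".
def parse_filter_config(value):
    if not isinstance(value, dict):
        raise ValueError("not a map")
    value = dict(value)  # avoid mutating the caller's map
    lookback = None
    if "_lookback_window" in value:
        lw = value.pop("_lookback_window")
        if not (isinstance(lw, int) and lw > 0):
            raise ValueError("_lookback_window must be a positive integer")
        lookback = lw  # duration, in seconds
    # Remaining entries follow the "parse filter values" rules.
    fmap = {k: set(v) for k, v in value.items()
            if not k.startswith("_") and isinstance(v, list)}
    if len(fmap) != len(value):
        raise ValueError("invalid filter values")
    return {"map": fmap, "lookback_window": lookback}

def parse_filters(value):
    # A bare map is treated as a single-element list of configs.
    configs = [value] if isinstance(value, dict) else value
    if not isinstance(configs, list):
        raise ValueError("not a map or a list")
    return [parse_filter_config(c) for c in configs]

def parse_filter_pair(m):
    positive = parse_filters(m["filters"]) if "filters" in m else []
    negative = parse_filters(m["not_filters"]) if "not_filters" in m else []
    return positive, negative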

10.6. Parsing aggregation coordinator

To parse an aggregation coordinator given value:

  1. If value is not a string, return an error.

  2. Let url be the result of running the URL parser on value.

  3. If url is failure or null, return an error.

  4. If url’s origin is not an aggregation coordinator, return an error.

  5. Return url’s origin.

10.7. Parsing aggregatable debug reporting config

An aggregatable-debug-reporting JSON key is one of the following:

To parse aggregatable debug reporting data given a dataList, a positive integer maxValue, and a set of debug data types supportedTypes:

  1. Assert: maxValue is less than or equal to allowed aggregatable budget per source.

  2. If dataList is not a list, return an error.

  3. Let debugDataMap be a new map.

  4. Let unknownTypes be a new set.

  5. Let unspecifiedContribution be null.

  6. For each data of dataList:

    1. If data is not a map, return an error.

    2. If data["key_piece"] does not exist, return an error.

    3. If data["key_piece"] is not a string, return an error.

    4. Let dataKeyPiece be the result of running parse an aggregation key piece with data["key_piece"].

    5. If dataKeyPiece is an error, return an error.

    6. If data["value"] does not exist, return an error.

    7. If data["value"] is not an integer or is less than or equal to 0 or is greater than maxValue, return an error.

    8. Let contribution be a new aggregatable contribution with the items:

      key: dataKeyPiece

      value: data["value"]

      filtering ID: default filtering ID value

    9. If data["types"] does not exist, return an error.

    10. Let dataTypes be data["types"].

    11. If dataTypes is not a list or is empty, return an error.

    12. For each type of dataTypes:

      1. If type is not a string, return an error.

      2. If type is "unspecified":

        1. If unspecifiedContribution is not null, return an error.

        2. Set unspecifiedContribution to contribution.

        3. Continue.

      3. If supportedTypes contains type:

        1. If debugDataMap[type] exists, return an error.

        2. Set debugDataMap[type] to contribution.

      4. Otherwise:

        1. If unknownTypes contains type, return an error.

        2. Append type to unknownTypes.

  7. If unspecifiedContribution is null, return debugDataMap.

  8. For each type of supportedTypes:

    1. If debugDataMap[type] does not exist, set debugDataMap[type] to unspecifiedContribution.

  9. Return debugDataMap.

To parse an aggregatable debug reporting config given a map map, a positive integer maxValue, a set of debug data types supportedTypes, and an aggregatable debug reporting config default:

  1. If map["key_piece"] does not exist, return default.

  2. If map["key_piece"] is not a string, return default.

  3. Let keyPiece be the result of running parse an aggregation key piece with map["key_piece"].

  4. If keyPiece is an error, return default.

  5. Let debugDataMap be a new map.

  6. If map["debug_data"] exists:

    1. Set debugDataMap to the result of running parse aggregatable debug reporting data with map["debug_data"], maxValue, and supportedTypes.

    2. If debugDataMap is an error, return default.

  7. Let aggregationCoordinator be default aggregation coordinator.

  8. If map["aggregation_coordinator_origin"] exists:

    1. Set aggregationCoordinator to the result of parsing an aggregation coordinator with map["aggregation_coordinator_origin"].

    2. If aggregationCoordinator is an error, return default.

  9. Let aggregatableDebugReportingConfig be a new aggregatable debug reporting config with the items:

    key piece: keyPiece

    debug data: debugDataMap

    aggregation coordinator: aggregationCoordinator

  10. Return aggregatableDebugReportingConfig.

Note: The parsing errors are intentionally ignored in this algorithm with default returned to avoid data loss from the optional debug reporting feature.

10.8. Getting registration info

To get registration info from a header list given a header list headers:

  1. If headers does not contain "Attribution-Reporting-Info", return a new registration info.

  2. Let map be the result of getting "Attribution-Reporting-Info" from headers with a type of "dictionary".

  3. If map is not a map, return an error.

  4. Let preferredPlatform be null.

  5. If map["preferred-platform"] exists:

    1. Let preferredPlatformValue be map["preferred-platform"][0].

    2. If preferredPlatformValue is not a registrar, return an error.

    3. Set preferredPlatform to preferredPlatformValue.

  6. Let reportHeaderErrors be false.

  7. If map["report-header-errors"] exists:

    1. Let reportHeaderErrorsValue be map["report-header-errors"][0].

    2. If reportHeaderErrorsValue is not a boolean, return an error.

    3. Set reportHeaderErrors to reportHeaderErrorsValue.

  8. Let registrationInfo be a new registration info struct whose items are:

    preferred platform: preferredPlatform

    report header errors: reportHeaderErrors

  9. Return registrationInfo.

Require preferredPlatformValue to be a token.

10.9. Cookie-based debugging

To check if cookie-based debugging is allowed given a suitable origin reportingOrigin and a site contextSite:

  1. Assert: contextSite is not an opaque origin.

  2. Let domain be the canonicalized domain name of reportingOrigin’s host.

  3. Let contextDomain be the canonicalized domain name of contextSite’s host.

  4. If the User Agent’s cookie policy or user controls do not allow cookie access for domain on contextDomain within a third-party context, return blocked.

  5. Return allowed.

10.10. Obtaining context origin

To obtain the context origin of a node node, return node’s node navigable's top-level traversable's active document's origin.

10.11. Obtaining a randomized response

To obtain a randomized response given trueValue, a set possibleValues, and a double randomPickRate:

  1. Assert: randomPickRate is between 0 and 1 (both inclusive).

  2. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  3. If r is less than randomPickRate, return a random item from possibleValues with uniform probability.

  4. Otherwise, return trueValue.
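This is the classic randomized-response construction: with probability randomPickRate the true value is replaced by a uniform draw from the full set. A non-normative sketch (the `rng`/`choice` parameters are added here for testability and are not part of the spec algorithm):

```python
import random

# Non-normative sketch of "obtain a randomized response".
def obtain_randomized_response(true_value, possible_values, random_pick_rate,
                               rng=random.random, choice=random.choice):
    # Step 1: the pick rate is a probability.
    assert 0.0 <= random_pick_rate <= 1.0
    # Step 2: uniform double in [0, 1).
    r = rng()
    # Step 3: with probability random_pick_rate, pick uniformly at random.
    if r < random_pick_rate:
        return choice(list(possible_values))
    # Step 4: otherwise report truthfully.
    return true_value
```

Because r is drawn from [0, 1), a rate of 1.0 always randomizes and a rate of 0.0 never does.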

10.12. Parsing aggregation key piece

To parse an aggregation key piece given a string input, perform the following steps. This algorithm will return either a non-negative 128-bit integer or an error.

  1. If input’s code point length is not between 3 and 34 (both inclusive), return an error.

  2. If the first character is not a U+0030 DIGIT ZERO (0), return an error.

  3. If the second character is not a U+0058 LATIN CAPITAL LETTER X character (X) and not a U+0078 LATIN SMALL LETTER X character (x), return an error.

  4. Let value be the code point substring from 2 to the end of input.

  5. If the characters within value are not all ASCII hex digits, return an error.

  6. Interpret value as a hexadecimal number and return as a non-negative 128-bit integer.
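The accepted grammar is a "0x"- or "0X"-prefixed string of 1 to 32 hex digits, i.e. at most 128 bits. A non-normative sketch:

```python
# Non-normative sketch of "parse an aggregation key piece".
def parse_aggregation_key_piece(s: str) -> int:
    # Step 1: total length between 3 ("0x" + one digit) and 34 ("0x" + 32 digits).
    if not (3 <= len(s) <= 34):
        raise ValueError("bad length")
    # Steps 2-3: mandatory "0x" / "0X" prefix.
    if s[0] != "0" or s[1] not in ("x", "X"):
        raise ValueError("missing 0x prefix")
    # Steps 4-5: the remainder must be ASCII hex digits only.
    digits = s[2:]
    if not all(c in "0123456789abcdefABCDEF" for c in digits):
        raise ValueError("non-hex digit")
    # Step 6: interpret as a non-negative 128-bit integer.
    return int(digits, 16)
```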

10.13. Should processing be blocked by reporting-origin limit

Given an attribution rate-limit record newRecord:

  1. Let max be max source reporting origins per rate-limit window.

  2. Let scopeSet be « "source" ».

  3. If newRecord’s scope is "event-attribution" or "aggregatable-attribution":

    1. Set max to max attribution reporting origins per rate-limit window.

    2. Set scopeSet to « "event-attribution", "aggregatable-attribution" ».

  4. Let matchingRateLimitRecords be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  5. Let distinctReportingOrigins be the set of all reporting origin in matchingRateLimitRecords, unioned with «newRecord’s reporting origin».

  6. If distinctReportingOrigins’s size is greater than max, return blocked.

  7. Return allowed.

10.14. Can attribution rate-limit record be removed

Given an attribution rate-limit record record and a moment now:

  1. If the duration from record’s time to now is less than or equal to the attribution rate-limit window, return false.

  2. If record’s scope is "event-attribution" or "aggregatable-attribution", return true.

  3. If record’s expiry time is after now, return false.

  4. Return true.

10.15. Obtaining and delivering a verbose debug report

To obtain and deliver a verbose debug report given a list of verbose debug data data, a suitable origin reportingOrigin, and a boolean fenced:

  1. If fenced is true, return.

  2. Let debugReport be a verbose debug report with the items:

    data: data

    reporting origin: reportingOrigin

  3. Queue a task to attempt to deliver a verbose debug report with debugReport.

10.16. Making a background attributionsrc request

An eligibility is one of the following:

"unset"

Depending on context, a trigger may or may not be registered.

"empty"

Neither a source nor a trigger may be registered.

"event-source"

An event source may be registered.

"navigation-source"

A navigation source may be registered.

"trigger"

A trigger may be registered.

"event-source-or-trigger"

An event source or a trigger may be registered.

To validate a background attributionsrc eligibility given an eligibility eligibility:

  1. Assert: eligibility is "navigation-source" or "event-source-or-trigger".

To make a background attributionsrc request given a URL url, an origin contextOrigin, an eligibility eligibility, a boolean fenced, a Document document, and a referrer policy referrerPolicy:

  1. Validate eligibility.

  2. If url’s origin is not suitable, return.

  3. If contextOrigin is not suitable, return.

  4. Let context be document’s relevant settings object.

  5. If context is not a secure context, return.

  6. If document is not allowed to use the "attribution-reporting" feature, return.

  7. Let supportedRegistrars be the result of getting supported registrars.

  8. If supportedRegistrars is empty, return.

  9. Let request be a new request with the following properties:

    method: "GET"

    URL: url

    keepalive: true

    Attribution Reporting eligibility: eligibility

    referrer policy: referrerPolicy

  10. Fetch request with processResponse being process an attributionsrc response with contextOrigin, eligibility, and fenced.

Audit other properties on request and set them properly.

Support header-processing on redirects. Due to atomic HTTP redirect handling, we cannot process registrations through integration with fetch. [Issue #839]

Check for transient activation with "navigation-source".

To make background attributionsrc requests given an HTMLAttributionSrcElementUtils element, an eligibility eligibility, and a referrer policy referrerPolicy:

  1. Let attributionSrc be element’s attributionSrc.

  2. Let tokens be the result of splitting attributionSrc on ASCII whitespace.

  3. For each token of tokens:

    1. Parse token, relative to element’s node document. If that is not successful, continue. Otherwise, let url be the resulting URL record.

    2. Let fenced be true if element’s node navigable is a fenced navigable, false otherwise.

    3. Run make a background attributionsrc request with url, element’s context origin, eligibility, fenced, element’s node document, and referrerPolicy.

Consider allowing the user agent to limit the size of tokens.

To process an attributionsrc response given a suitable origin contextOrigin, an eligibility eligibility, a boolean fenced, and a response response:

  1. Validate eligibility.

  2. Run process an attribution eligible response with contextOrigin, eligibility, fenced, and response.

To get the registration platform given a header value or null webHeader, a header value or null osHeader, and a registrar or null preferredPlatform:

  1. If webHeader and osHeader are both null, return null.

  2. If preferredPlatform is null:

    1. If webHeader and osHeader are both not null, return null.

    2. If webHeader is not null and the user agent supports web registrations, return "web".

    3. If osHeader is not null and the user agent supports OS registrations, return "os".

    4. Return null.

  3. If preferredPlatform is:

    "web"
    1. If webHeader is null, return null.

    2. If the user agent supports web registrations, return "web".

    3. If osHeader is not null and the user agent supports OS registrations, return "os".

    4. Return null.

    "os"
    1. If osHeader is null, return null.

    2. If the user agent supports OS registrations, return "os".

    3. If webHeader is not null and the user agent supports web registrations, return "web".

    4. Return null.
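The platform-selection logic above can be sketched directly: headers are `None` or present, and boolean flags stand in for "the user agent supports web/OS registrations". A non-normative sketch:

```python
# Non-normative sketch of "get the registration platform".
def get_registration_platform(web_header, os_header, preferred_platform,
                              supports_web=True, supports_os=True):
    # Step 1: nothing to register.
    if web_header is None and os_header is None:
        return None
    # Step 2: no preference stated.
    if preferred_platform is None:
        if web_header is not None and os_header is not None:
            return None  # ambiguous without a preference
        if web_header is not None and supports_web:
            return "web"
        if os_header is not None and supports_os:
            return "os"
        return None
    # Step 3: honor the preference, falling back to the other platform.
    if preferred_platform == "web":
        if web_header is None:
            return None
        if supports_web:
            return "web"
        if os_header is not None and supports_os:
            return "os"
        return None
    if preferred_platform == "os":
        if os_header is None:
            return None
        if supports_os:
            return "os"
        if web_header is not None and supports_web:
            return "web"
        return None
```

Note that a stated preference requires the preferred header to be present at all; the fallback only applies when the preferred platform is unsupported.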

To process an attribution source response given a suitable origin contextOrigin, a suitable origin reportingOrigin, a source type sourceType, a header value or null webSourceHeader, a header value or null osSourceHeader, a registration info registrationInfo, and a boolean fenced:

  1. Let platform be the result of get the registration platform with webSourceHeader, osSourceHeader, and registrationInfo’s preferred platform.

  2. If platform is null, return.

  3. Let reportHeaderErrors be registrationInfo’s report header errors.

  4. If platform is:

    "web"
    1. Let source be the result of running parse source-registration JSON with webSourceHeader, contextOrigin, reportingOrigin, sourceType, current wall time, and fenced.

    2. If source is an error:

      1. If reportHeaderErrors is true, run obtain and deliver debug reports on registration header errors with "Attribution-Reporting-Register-Source", webSourceHeader, reportingOrigin, contextOrigin, and fenced.

      2. Return.

    3. If sourceType is "navigation", enforce the attribution-scope privacy limit.

    4. Process source.

    "os"
    1. Let osSourceRegistrations be the result of running get OS registrations from a header value with osSourceHeader.

    2. If osSourceRegistrations is an error:

      1. If reportHeaderErrors is true, run obtain and deliver debug reports on registration header errors with "Attribution-Reporting-Register-OS-Source", osSourceHeader, reportingOrigin, contextOrigin, and fenced.

      2. Return.

    3. Process osSourceRegistrations according to an implementation-defined algorithm.

    4. Run obtain and deliver debug reports on OS registrations with "os-source-delegated", osSourceRegistrations, contextOrigin, and fenced.

To process an attribution trigger response given a suitable origin contextOrigin, a suitable origin reportingOrigin, a response response, a header value or null webTriggerHeader, a header value or null osTriggerHeader, a registration info registrationInfo, and a boolean fenced:

  1. Let platform be the result of get the registration platform with webTriggerHeader, osTriggerHeader, and registrationInfo’s preferred platform.

  2. If platform is null, return.

  3. Let reportHeaderErrors be registrationInfo’s report header errors.

  4. If platform is:

    "web"
    1. Let destinationSite be the result of obtaining a site from contextOrigin.

    2. Let trigger be the result of running create an attribution trigger with webTriggerHeader, destinationSite, reportingOrigin, current wall time, and fenced.

    3. If trigger is an error:

      1. If reportHeaderErrors is true, run obtain and deliver debug reports on registration header errors with "Attribution-Reporting-Register-Trigger", webTriggerHeader, reportingOrigin, contextOrigin, and fenced.

      2. Return.

    4. Maybe defer and then complete trigger attribution with trigger.

    "os"
    1. Let osTriggerRegistrations be the result of running get OS registrations from a header value with osTriggerHeader.

    2. If osTriggerRegistrations is an error:

      1. If reportHeaderErrors is true, run obtain and deliver debug reports on registration header errors with "Attribution-Reporting-Register-OS-Trigger", osTriggerHeader, reportingOrigin, contextOrigin, and fenced.

      2. Return.

    3. Process osTriggerRegistrations according to an implementation-defined algorithm.

    4. Run obtain and deliver debug reports on OS registrations with "os-trigger-delegated", osTriggerRegistrations, contextOrigin, and fenced.

To process an attribution eligible response given a suitable origin contextOrigin, an eligibility eligibility, a boolean fenced, and a response response:

  1. The user agent MAY ignore the response; if so, return.

    Note: The user agent may prevent attribution for a number of reasons, such as user opt-out. In these cases, it is preferred to abort the API flow at response time rather than at request time so this state is not immediately detectable. Attribution may also be blocked if the reporting origin is not enrolled.

  2. Queue a task on the networking task source to proceed with the following steps.

    Note: This algorithm can be invoked while running in parallel.

  3. Assert: eligibility is "navigation-source" or "event-source" or "event-source-or-trigger".

  4. Let reportingOrigin be response’s URL's origin.

  5. If reportingOrigin is not suitable, return.

  6. Let sourceHeader be the result of getting "Attribution-Reporting-Register-Source" from response’s header list.

  7. Let triggerHeader be the result of getting "Attribution-Reporting-Register-Trigger" from response’s header list.

  8. Let osSourceHeader be the result of getting "Attribution-Reporting-Register-OS-Source" from response’s header list.

  9. Let osTriggerHeader be the result of getting "Attribution-Reporting-Register-OS-Trigger" from response’s header list.

  10. Let registrationInfo be the result of getting registration info from response’s header list.

  11. If registrationInfo is an error, return.

  12. If eligibility is:

    "navigation-source"
    "event-source"

    Run the following steps:

    1. Let sourceType be "navigation".

    2. If eligibility is "event-source", set sourceType to "event".

    3. Run process an attribution source response with contextOrigin, reportingOrigin, sourceType, sourceHeader, osSourceHeader, registrationInfo, and fenced.

    "event-source-or-trigger"

    Run the following steps:

    1. Let hasSourceRegistration be false.

    2. If sourceHeader or osSourceHeader is not null, set hasSourceRegistration to true.

    3. Let hasTriggerRegistration be false.

    4. If triggerHeader or osTriggerHeader is not null, set hasTriggerRegistration to true.

    5. If both hasSourceRegistration and hasTriggerRegistration are true, return.

    6. If hasSourceRegistration is true:

      1. Run process an attribution source response with contextOrigin, reportingOrigin, "event", sourceHeader, osSourceHeader, registrationInfo, and fenced.

    7. If hasTriggerRegistration is true:

      1. Run process an attribution trigger response with contextOrigin, reportingOrigin, response, triggerHeader, osTriggerHeader, registrationInfo, and fenced.
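
As a non-normative illustration, the eligibility dispatch above can be sketched as follows. The function name and return values are hypothetical, not part of this specification; the key behavior is that a response carrying both source and trigger registration headers is dropped entirely.

```python
# Non-normative sketch of the eligibility dispatch above.
# The function name and return values are illustrative only.
def dispatch_registration(eligibility, source_header, os_source_header,
                          trigger_header, os_trigger_header):
    """Return the processing path for a response, or None to drop it."""
    if eligibility in ("navigation-source", "event-source"):
        source_type = "navigation" if eligibility == "navigation-source" else "event"
        return ("source", source_type)
    assert eligibility == "event-source-or-trigger"
    has_source = source_header is not None or os_source_header is not None
    has_trigger = trigger_header is not None or os_trigger_header is not None
    if has_source and has_trigger:
        return None  # ambiguous: both source and trigger headers present
    if has_source:
        return ("source", "event")
    if has_trigger:
        return ("trigger", None)
    return None  # no registration headers at all
```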

To obtain and deliver debug reports on registration header errors given a header name headerName, a header value headerValue, a suitable origin reportingOrigin, a suitable origin contextOrigin, and a boolean fenced:

  1. Let contextSite be the result of obtaining a site from contextOrigin.

  2. Let body be a new map with the following key/value pairs:

    "context_site"

    contextSite, serialized.

    "header"

    headerName.

    "value"

    headerValue.

  3. Let data be a new verbose debug data with the items:

    data type

    "header-parsing-error"

    body

    body

  4. Run obtain and deliver a verbose debug report with « data », reportingOrigin, and fenced.

Note: The user agent may include error details of any type in body["error"].

10.17. Attribution debugging

To check if attribution debugging can be enabled given an attribution debug info debugInfo:

  1. If debugInfo’s source debug key is null, return false.

  2. If debugInfo’s trigger debug key is null, return false.

  3. Return true.

To serialize an attribution debug info given a map data and an attribution debug info debugInfo:

  1. If the result of checking if attribution debugging can be enabled with debugInfo is false, return.

  2. Set data["source_debug_key"] to debugInfo’s source debug key, serialized.

  3. Set data["trigger_debug_key"] to debugInfo’s trigger debug key, serialized.

Note: We require both source and trigger debug keys to be present to avoid a privacy leak from one-sided third-party cookie access.

10.18. Obtaining and delivering an aggregatable debug report

To check if aggregatable debug reporting should be blocked by rate-limit given an aggregatable debug rate-limit record newRecord:

  1. Let matchingRecords be all aggregatable debug rate-limit records record of the aggregatable debug rate-limit cache where all of the following are true:

  2. Let totalBudget be newRecord’s consumed budget.

  3. Let totalSameReportingBudget be totalBudget.

  4. For each record of matchingRecords:

    1. Increment totalBudget by record’s consumed budget.

    2. If record’s reporting site and newRecord’s reporting site are equal, increment totalSameReportingBudget by record’s consumed budget.

  5. If totalBudget is greater than max aggregatable debug budget per rate-limit window[0], return blocked.

  6. If totalSameReportingBudget is greater than max aggregatable debug budget per rate-limit window[1], return blocked.

  7. Return allowed.
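
The check above sums the consumed budget across matching records twice: once globally and once restricted to records from the same reporting site. A minimal sketch, assuming records are plain dicts and using placeholder limit values (the real limits are user-agent defined):

```python
# Placeholder for "max aggregatable debug budget per rate-limit window":
# [0] is the global limit, [1] the per-reporting-site limit. Values hypothetical.
MAX_BUDGET = (65536, 65536)

def debug_rate_limit_check(new_record, matching_records):
    """Each record is a dict with 'reporting_site' and 'consumed_budget'."""
    total = new_record["consumed_budget"]
    same_reporting = total
    for rec in matching_records:
        total += rec["consumed_budget"]
        if rec["reporting_site"] == new_record["reporting_site"]:
            same_reporting += rec["consumed_budget"]
    if total > MAX_BUDGET[0]:
        return "blocked"
    if same_reporting > MAX_BUDGET[1]:
        return "blocked"
    return "allowed"
```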

To obtain and deliver an aggregatable debug report given a list of aggregatable contributions contributions, a suitable origin reportingOrigin, a site effectiveDestination, an aggregation coordinator aggregationCoordinator, and a moment now:

  1. Assert: effectiveDestination is not an opaque origin.

  2. Let report be a new aggregatable debug report with the items:

    reporting origin

    reportingOrigin

    effective attribution destination

    effectiveDestination

    report time

    now

    external ID

    The result of generating a random UUID

    internal ID

    The result of getting the next internal ID

    contributions

    contributions

    aggregation coordinator

    aggregationCoordinator

  3. Queue a task to attempt to deliver an aggregatable debug report with report.

To obtain and deliver an aggregatable debug report on registration given a list contributions, a site contextSite, an origin reportingOrigin, a possibly null attribution source source, a site effectiveDestination, an aggregation coordinator aggregationCoordinator, and a moment now:

  1. If contributions is empty:

    1. Run obtain and deliver an aggregatable debug report with «», reportingOrigin, effectiveDestination, aggregationCoordinator, and now.

    2. Return.

  2. Let remainingBudget be allowed aggregatable budget per source.

  3. Let numReports be 0.

  4. If source is not null:

    1. Set remainingBudget to source’s remaining aggregatable debug budget.

    2. Set numReports to source’s number of aggregatable debug reports.

  5. Let requiredBudget be the total value of contributions.

  6. If requiredBudget is greater than remainingBudget:

    1. Run obtain and deliver an aggregatable debug report with «», reportingOrigin, effectiveDestination, aggregationCoordinator, and now.

    2. Return.

  7. If numReports is equal to max aggregatable reports per source[1]:

    1. Run obtain and deliver an aggregatable debug report with «», reportingOrigin, effectiveDestination, aggregationCoordinator, and now.

    2. Return.

  8. Let rateLimitRecord be a new aggregatable debug rate-limit record with the items:

    context site

    contextSite

    reporting site

    The result of obtaining a site from reportingOrigin

    time

    now

    consumed budget

    requiredBudget

  9. If the result of running check if aggregatable debug reporting should be blocked by rate-limit with rateLimitRecord is blocked:

    1. Run obtain and deliver an aggregatable debug report with «», reportingOrigin, effectiveDestination, aggregationCoordinator, and now.

    2. Return.

  10. Run obtain and deliver an aggregatable debug report with contributions, reportingOrigin, effectiveDestination, aggregationCoordinator, and now.

  11. If source is not null:

    1. Decrement source’s remaining aggregatable debug budget value by requiredBudget.

    2. Increment source’s number of aggregatable debug reports value by 1.

  12. Append rateLimitRecord to the aggregatable debug rate-limit cache.

  13. Remove each aggregatable debug rate-limit record entry from the aggregatable debug rate-limit cache if the duration from entry’s time to now is greater than the aggregatable debug rate-limit window.

11. Source Algorithms

11.1. Obtaining a randomized source response

To obtain a set of possible trigger states given a randomized response output configuration config:

  1. Let possibleTriggerStates be a new set.

  2. For each triggerData → spec of config’s trigger specs:

    1. For each reportWindow of spec’s event-level report windows:

      1. Let state be a new trigger state with the items:

        trigger data

        triggerData

        report window

        reportWindow

      2. Append state to possibleTriggerStates.

  3. Let possibleValues be a new set.

  4. For each integer attributions of the range 0 to config’s max attributions per source, inclusive:

    1. Append to possibleValues all distinct attributions-size combinations of possibleTriggerStates with repetition.

  5. Return possibleValues.
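
The enumeration above produces every multiset of at most maxAttributions trigger states, i.e. all distinct combinations with repetition. A non-normative sketch using itertools; the count shown for a typical "navigation" source (8 trigger data values × 3 report windows, up to 3 attributions) follows from the stars-and-bars formula.

```python
from itertools import combinations_with_replacement

def possible_trigger_states(trigger_specs):
    """trigger_specs maps trigger data -> number of report windows.
    Each state is a (trigger data, report window index) pair."""
    return [(d, w) for d, windows in trigger_specs.items() for w in range(windows)]

def possible_output_states(states, max_attributions):
    """All multisets of at most max_attributions states (with repetition)."""
    out = set()
    for k in range(max_attributions + 1):
        out.update(combinations_with_replacement(sorted(states), k))
    return out
```

For 24 trigger states and up to 3 attributions this yields C(27, 3) = 2925 output states, the quantity fed into the channel-capacity computation below.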

To obtain a randomized source response pick rate given a positive integer states and a double epsilon:

  1. Return states / (states - 1 + e^epsilon).

To obtain a randomized source response given a set of possible trigger states possibleValues and a double epsilon:

  1. Let pickRate be the result of obtaining a randomized source response pick rate with possibleValues’s size and epsilon.

  2. Return the result of obtaining a randomized response with null, possibleValues, and pickRate.
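
A sketch of the two algorithms above: pick_rate is the probability k / (k − 1 + e^ε) that the true output is replaced by a uniformly random member of the state set, and None stands for the truthful (non-randomized) response. Function names are illustrative.

```python
import math
import random

def pick_rate(states, epsilon):
    """k / (k - 1 + e^epsilon): probability of emitting a random state."""
    return states / (states - 1 + math.exp(epsilon))

def randomized_source_response(possible_values, epsilon):
    """Return a uniformly random state with probability pick_rate,
    otherwise None, meaning the source reports truthfully."""
    if random.random() < pick_rate(len(possible_values), epsilon):
        return random.choice(sorted(possible_values))
    return None
```

Note that ε = 0 gives a pick rate of 1 for two states (fully random output), while large ε drives the pick rate toward 0 (almost always truthful).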

11.2. Computing channel capacity

To compute the channel capacity of a source given a positive integer states and a double epsilon:

  1. If states is 1, return 0.

  2. If states is greater than the user agent’s max trigger-state cardinality, return an error.

  3. Let pickRate be the result of obtaining a randomized source response pick rate with states and epsilon.

  4. Let p be pickRate * (states - 1) / states.

  5. Return log2(states) - h(p) - p * log2(states - 1) where h is the binary entropy function [BIN-ENT].

Note: This algorithm computes the channel capacity [CHAN] of a q-ary symmetric channel [Q-SC].

To compute the scopes channel capacity of a source given a positive integer numTriggerStates, a positive integer attributionScopeLimit, and a positive integer maxEventStates:

  1. Let totalStates be numTriggerStates + maxEventStates * (attributionScopeLimit - 1).

  2. Return log2(totalStates).
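
The two capacity computations can be checked numerically with the sketch below; h is the binary entropy function from the formula above. A sanity check: ε = 0 (fully random responses) gives zero channel capacity, and an attribution scope limit of 1 reduces the scopes capacity to log2 of the trigger-state count.

```python
import math

def binary_entropy(p):
    """h(p) = -p*log2(p) - (1-p)*log2(1-p), with h(0) = h(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def channel_capacity(states, epsilon):
    if states == 1:
        return 0.0
    pick = states / (states - 1 + math.exp(epsilon))
    p = pick * (states - 1) / states
    return math.log2(states) - binary_entropy(p) - p * math.log2(states - 1)

def scopes_channel_capacity(num_trigger_states, scope_limit, max_event_states):
    total = num_trigger_states + max_event_states * (scope_limit - 1)
    return math.log2(total)
```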

11.3. Parsing source-registration JSON

A source-registration JSON key is one of the following:

To parse an attribution destination from a string str:

  1. Let url be the result of running the URL parser on str.

  2. If url is failure or null, return an error.

  3. If url’s origin is not suitable, return an error.

  4. Return the result of obtaining a site from url’s origin.

To parse attribution destinations from a map map:

  1. If map["destination"] does not exist, return an error.

  2. Let val be map["destination"].

  3. If val is a string, set val to « val ».

  4. If val is not a list, return an error.

  5. Let result be a set.

  6. For each value of val:

    1. If value is not a string, return an error.

    2. Let destination be the result of running parse an attribution destination with value.

    3. If destination is an error, return it.

    4. Append destination to result.

  7. If result is empty or its size is greater than the max destinations per source, return an error.

  8. Sort result in ascending order, with a being less than b if a, serialized, is less than b, serialized.

  9. Return result.

Note: Sorting result helps ensure that registrations with the same set of destinations are equivalent, regardless of the order of sites in the registration JSON.
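
A minimal sketch of the string-or-list normalization and sorting above, treating destinations as opaque site strings (real parsing reduces each URL to its site) and using a placeholder MAX_DESTINATIONS for max destinations per source:

```python
MAX_DESTINATIONS = 3  # placeholder for "max destinations per source"

def parse_destinations(registration):
    """Return a sorted, deduplicated destination list, or None on error."""
    val = registration.get("destination")
    if val is None:
        return None  # error: key missing
    if isinstance(val, str):
        val = [val]  # a single string is treated as a one-element list
    if not isinstance(val, list) or not all(isinstance(v, str) for v in val):
        return None  # error
    sites = set(val)  # real parsing would obtain a site from each URL's origin
    if not sites or len(sites) > MAX_DESTINATIONS:
        return None  # error
    return sorted(sites)  # canonical order: equal sets compare equal
```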

To parse a duration given a map map, a string key, and a tuple of durations (clampStart, clampEnd):

  1. Assert: clampStart < clampEnd.

  2. Let seconds be null.

  3. If map[key] exists and is a non-negative integer, set seconds to map[key].

  4. Otherwise, set seconds to the result of running parse an optional 64-bit unsigned integer with map, key, and null.

  5. If seconds is an error, return an error.

  6. If seconds is null, return clampEnd.

  7. Let duration be the duration of seconds seconds.

  8. If duration is less than clampStart, return clampStart.

  9. If duration is greater than clampEnd, return clampEnd.

  10. Return duration.

Consider rejecting out-of-bounds values instead of silently clamping.
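
A sketch of the clamping behavior above, with durations as plain integer seconds and omitting the string-encoded integer path; an absent key falls back to clampEnd, and out-of-range values are silently clamped into [clampStart, clampEnd]:

```python
def parse_duration(registration, key, clamp_start, clamp_end):
    """Return a duration in seconds, clamped into [clamp_start, clamp_end]."""
    seconds = registration.get(key)
    if seconds is None:
        return clamp_end  # absent key defaults to the upper bound
    if not isinstance(seconds, int) or seconds < 0:
        return None  # error
    return min(max(seconds, clamp_start), clamp_end)
```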

To parse aggregation keys given a map map:

  1. Let aggregationKeys be a new map.

  2. If map["aggregation_keys"] does not exist, return aggregationKeys.

  3. Let values be map["aggregation_keys"].

  4. If values is not a map, return an error.

  5. If values’s size is greater than the max aggregation keys per source registration, return an error.

  6. For each key → value of values:

    1. If key’s length is greater than the max length per aggregation key identifier, return an error.

    2. If value is not a string, return an error.

    3. Let keyPiece be the result of running parse an aggregation key piece with value.

    4. If keyPiece is an error, return it.

    5. Set aggregationKeys[key] to keyPiece.

  7. Return aggregationKeys.

To parse named budgets for source given a map map:

  1. Let namedBudgets be a new map.

  2. If map["named_budgets"] does not exist, return namedBudgets.

  3. Let values be map["named_budgets"].

  4. If values is not a map, return an error.

  5. If values’s size is greater than the max named budgets per source registration, return an error.

  6. For each key → value of values:

    1. If key’s length is greater than the max length per budget name for source, return an error.

    2. If value is not an integer or is less than 0 or is greater than allowed aggregatable budget per source, return an error.

    3. Set namedBudgets[key] to value.

  7. Return namedBudgets.

To obtain default effective windows given a source type sourceType, a moment sourceTime, and a duration eventReportWindow:

  1. Let deadlines be «» if sourceType is "event", else « 2 days, 7 days ».

  2. Remove all elements in deadlines that are greater than or equal to eventReportWindow.

  3. Append eventReportWindow to deadlines.

  4. Let lastEnd be sourceTime.

  5. Let windows be «».

  6. For each deadline of deadlines:

    1. Let window be a new report window whose items are

      start

      lastEnd

      end

      sourceTime + deadline

    2. Append window to windows.

    3. Set lastEnd to sourceTime + deadline.

  7. Return windows.
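
A sketch of how the default windows are derived: each deadline is an offset from sourceTime, each window ends at sourceTime plus its deadline, and consecutive windows share a boundary. For a "navigation" source with a 30-day event report window this yields windows ending 2 days, 7 days, and 30 days after source time.

```python
DAY = 86400  # seconds

def default_effective_windows(source_type, source_time, event_report_window):
    """Times are integer seconds; returns (start, end) pairs."""
    deadlines = [] if source_type == "event" else [2 * DAY, 7 * DAY]
    # Drop deadlines at or beyond the report window, then append it.
    deadlines = [d for d in deadlines if d < event_report_window]
    deadlines.append(event_report_window)
    windows, last_end = [], source_time
    for deadline in deadlines:
        windows.append((last_end, source_time + deadline))
        last_end = source_time + deadline
    return windows
```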

To parse top-level report windows given a map map, a moment sourceTime, a source type sourceType, and a duration expiry:

  1. If map["event_report_window"] exists and map["event_report_windows"] exists, return an error.

  2. If map["event_report_window"] exists:

    1. Let eventReportWindow be the result of running parse a duration with map, "event_report_window", and (min report window, expiry).

    2. If eventReportWindow is an error, return eventReportWindow.

    3. Return the result of obtaining default effective windows given sourceType, sourceTime, and eventReportWindow.

  3. If map["event_report_windows"] does not exist, return the result of obtaining default effective windows given sourceType, sourceTime, and expiry.

  4. Return the result of parsing report windows with map["event_report_windows"], sourceTime, and expiry.

To parse report windows given a value, a moment sourceTime, and a duration expiry:

  1. If value is not a map, return an error.

  2. Let startDuration be 0 seconds.

  3. If value["start_time"] exists:

    1. Let start be value["start_time"].

    2. If start is not a non-negative integer, return an error.

    3. Set startDuration to start seconds.

    4. If startDuration is greater than expiry, return an error.

  4. If value["end_times"] does not exist or is not a list, return an error.

  5. Let endDurations be value["end_times"].

  6. If the size of endDurations is greater than max settable event-level report windows, return an error.

  7. If endDurations is empty, return an error.

  8. Let windows be a new list.

  9. For each end of endDurations:

    1. If end is not a positive integer, return an error.

    2. Let endDuration be end seconds.

    3. If endDuration is greater than expiry, set endDuration to expiry.

    4. If endDuration is less than min report window, set endDuration to min report window.

    5. If endDuration is less than or equal to startDuration, return an error.

    6. Let window be a new report window whose items are

      start

      sourceTime + startDuration

      end

      sourceTime + endDuration

    7. Append window to windows.

    8. Set startDuration to endDuration.

  10. Return windows.
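
A sketch of the explicit-windows path above, with placeholder constants for min report window and max settable event-level report windows (the real values are user-agent defined). Each end time is clamped into [min report window, expiry] and must remain strictly increasing after clamping:

```python
MIN_REPORT_WINDOW = 3600  # placeholder for "min report window" (seconds)
MAX_WINDOWS = 5           # placeholder for "max settable event-level report windows"

def parse_report_windows(value, source_time, expiry):
    """Return (start, end) pairs in absolute seconds, or None on error."""
    if not isinstance(value, dict):
        return None  # error
    start = value.get("start_time", 0)
    if not isinstance(start, int) or start < 0 or start > expiry:
        return None  # error
    ends = value.get("end_times")
    if not isinstance(ends, list) or not ends or len(ends) > MAX_WINDOWS:
        return None  # error
    windows, prev = [], start
    for end in ends:
        if not isinstance(end, int) or end <= 0:
            return None  # error
        end = max(min(end, expiry), MIN_REPORT_WINDOW)  # clamp into range
        if end <= prev:
            return None  # error: windows must be strictly increasing
        windows.append((source_time + prev, source_time + end))
        prev = end
    return windows
```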

The user agent has an associated boolean experimental Flexible Event support (default false) that exposes non-normative behavior described in the Flexible event-level configurations proposal.

To parse summary operator given a map map:

  1. Let value be "count".

  2. If map["summary_operator"] exists:

    1. If map["summary_operator"] is not a string, return an error.

    2. If map["summary_operator"] is not a summary operator, return an error.

    3. Set value to map["summary_operator"].

  3. Return value.

To parse summary buckets given a map map and an integer maxEventLevelReports:

  1. Let values be the range 1 to maxEventLevelReports, inclusive.

  2. If map["summary_buckets"] exists:

    1. If map["summary_buckets"] is not a list, is empty, or its size is greater than maxEventLevelReports, return an error.

    2. Set values to map["summary_buckets"].

  3. Let prev be 0.

  4. Let summaryBuckets be a new list.

  5. For each item of values:

    1. If item is not an integer or cannot be represented by an unsigned 32-bit integer, or is less than or equal to prev, return an error.

    2. Let summaryBucket be a new summary bucket whose items are

      start

      prev

      end

      item - 1

    3. Append summaryBucket to summaryBuckets.

    4. Set prev to item.

  6. Return summaryBuckets.
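
A sketch of the bucket construction above: thresholds must be strictly increasing 32-bit values, and each produces an inclusive [start, end] bucket ending just below the next threshold. When "summary_buckets" is absent, the default is the range 1 to maxEventLevelReports.

```python
U32_MAX = 2**32 - 1

def parse_summary_buckets(registration, max_event_level_reports):
    """Return a list of inclusive (start, end) buckets, or None on error."""
    values = registration.get("summary_buckets",
                              list(range(1, max_event_level_reports + 1)))
    if (not isinstance(values, list) or not values
            or len(values) > max_event_level_reports):
        return None  # error
    buckets, prev = [], 0
    for item in values:
        if not isinstance(item, int) or not (0 <= item <= U32_MAX) or item <= prev:
            return None  # error: thresholds must strictly increase
        buckets.append((prev, item - 1))
        prev = item
    return buckets
```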

To parse trigger data into a trigger spec map given a value triggerDataList, a trigger spec spec, a trigger spec map specs, and a boolean allowEmpty:

  1. If triggerDataList is not a list or its size is greater than max distinct trigger data per source, return false.

  2. If allowEmpty is false and triggerDataList is empty, return false.

  3. For each triggerData of triggerDataList:

    1. If triggerData is not an integer or cannot be represented by an unsigned 32-bit integer, or specs[triggerData] exists, return false.

    2. Set specs[triggerData] to spec.

    3. If specs’s size is greater than max distinct trigger data per source, return false.

  4. Return true.

To parse trigger specs given a map map, a moment sourceTime, a source type sourceType, a duration expiry, and a trigger-data matching mode matchingMode:

  1. Let defaultReportWindows be the result of parsing top-level report windows with map, sourceTime, sourceType, and expiry.

  2. If defaultReportWindows is an error, return an error.

  3. Let specs be a new trigger spec map.

  4. If experimental Flexible Event support is true and map["trigger_specs"] exists:

    1. If map["trigger_data"] exists, return an error.

    2. If map["trigger_specs"] is not a list or its size is greater than max distinct trigger data per source, return an error.

    3. For each item of map["trigger_specs"]:

      1. If item is not a map, return an error.

      2. Let spec be a new trigger spec with the following items:

        event-level report windows

        defaultReportWindows

      3. If item["event_report_windows"] exists:

        1. Let reportWindows be the result of parsing report windows with item["event_report_windows"], sourceTime, and expiry.

        2. If reportWindows is an error, return it.

        3. Set spec’s event-level report windows to reportWindows.

      4. If item["trigger_data"] does not exist, return an error.

      5. Let allowEmpty be false.

      6. If the result of running parse trigger data into a trigger spec map with item["trigger_data"], spec, specs, and allowEmpty is false, return an error.

  5. Otherwise:

    1. Let spec be a new trigger spec with the following items:

      event-level report windows

      defaultReportWindows

    2. If map["trigger_data"] exists:

      1. Let allowEmpty be true.

      2. If the result of running parse trigger data into a trigger spec map with map["trigger_data"], spec, specs, and allowEmpty is false, return an error.

    3. Otherwise:

      1. For each integer triggerData of the range 0 to default trigger data cardinality[sourceType], exclusive:

        1. Set specs[triggerData] to spec.

  6. If matchingMode is "modulus":

    1. Let i be 0.

    2. For each triggerData of specs’s keys:

      1. If triggerData does not equal i, return an error.

      2. Set i to i + 1.

  7. Return specs.

Invoke parse summary buckets and parse summary operator from this algorithm.

To parse a source aggregatable debug reporting config given value, a non-negative integer defaultBudget, and an aggregatable debug reporting config defaultConfig:

  1. If value is not a map, return the tuple (defaultBudget, defaultConfig).

  2. If value["budget"] does not exist, return the tuple (defaultBudget, defaultConfig).

  3. Let budget be value["budget"].

  4. If budget is not an integer or is less than or equal to 0 or is greater than allowed aggregatable budget per source, return the tuple (defaultBudget, defaultConfig).

  5. Let supportedTypes be the set of all source debug data types.

  6. Let config be the result of running parse an aggregatable debug reporting config with value, budget, supportedTypes, and defaultConfig.

  7. Return the tuple (budget, config).

To parse attribution scopes from a map map:

  1. If map["attribution_scopes"] does not exist, return null.

  2. Let value be map["attribution_scopes"].

  3. If value is not a map, return an error.

  4. If value["limit"] does not exist, return an error.

  5. Let limit be value["limit"].

  6. If limit is not an integer, cannot be represented by an unsigned 32-bit integer, or is less than or equal to zero, return an error.

  7. Let maxEventStates be the result of running parse max event states with value.

  8. If maxEventStates is an error, return an error.

  9. Let values be the result of running parse attribution scope values for source with value and limit.

  10. If values is an error, return an error.

  11. Let attributionScopes be a new attribution scopes with the items:

    limit

    limit

    values

    values

    max event states

    maxEventStates

  12. Return attributionScopes.

To parse max event states from a map map:

  1. If map["max_event_states"] does not exist, return default max event states.

  2. Let maxEventStates be map["max_event_states"].

  3. If maxEventStates is not an integer or is less than or equal to 0, return an error.

  4. If maxEventStates is greater than the user agent’s max trigger-state cardinality, return an error.

  5. Return maxEventStates.

To parse attribution scope values for source from a map map and a 32-bit positive integer limit:

  1. If map["values"] does not exist, return an error.

  2. Let result be a new set.

  3. Let values be map["values"].

  4. If values is not a list, return an error.

  5. If values is empty, return an error.

  6. For each value of values:

    1. If value is not a string, return an error.

    2. If value’s length is greater than max length of attribution scope for source, return an error.

    3. Append value to result.

  7. If result’s size is greater than limit or max attribution scopes per source, return an error.

  8. Return result.

Note: Empty attribution scopes are not allowed if limit is set, to prevent the selection of both sources with and without scopes, which would effectively result in limit + 1 scopes.

To parse source-registration JSON given a byte sequence json, a suitable origin sourceOrigin, a suitable origin reportingOrigin, a source type sourceType, a moment sourceTime, and a boolean fenced:

  1. Let value be the result of running parse JSON bytes to an Infra value with json.

  2. If value is not a map, return an error.

  3. Let attributionDestinations be the result of running parse attribution destinations with value.

  4. If attributionDestinations is an error, return it.

  5. Let sourceEventId be the result of running parse an optional 64-bit unsigned integer with value, "source_event_id", and 0.

  6. If sourceEventId is an error, return it.

  7. Let expiry be the result of running parse a duration with value, "expiry", and valid source expiry range.

  8. If expiry is an error, return it.

  9. If sourceType is "event", round expiry away from zero to the nearest day (86400 seconds).

  10. Let priority be the result of running parse an optional 64-bit signed integer with value, "priority", and 0.

  11. If priority is an error, return it.

  12. Let destinationLimitPriority be the result of running parse an optional 64-bit signed integer with value, "destination_limit_priority", and 0.

  13. If destinationLimitPriority is an error, return it.

  14. Let filterData be a new filter map.

  15. If value["filter_data"] exists:

    1. Set filterData to the result of running parse filter data with value["filter_data"].

    2. If filterData is an error, return it.

    3. If filterData["source_type"] exists, return an error.

  16. Set filterData["source_type"] to « sourceType ».

  17. Let debugKey be the result of running parse an optional 64-bit unsigned integer with value, "debug_key", and null.

  18. If debugKey is an error, set debugKey to null.

  19. Let cookieBasedDebugAllowed be false.

  20. Let sourceSite be the result of obtaining a site from sourceOrigin.

  21. If the result of running check if cookie-based debugging is allowed with reportingOrigin and sourceSite is allowed, set cookieBasedDebugAllowed to true.

  22. If cookieBasedDebugAllowed is false, set debugKey to null.

  23. Let aggregationKeys be the result of running parse aggregation keys with value.

  24. If aggregationKeys is an error, return it.

  25. Let maxAttributionsPerSource be default event-level attributions per source[sourceType].

  26. Set maxAttributionsPerSource to value["max_event_level_reports"] if it exists.

  27. If maxAttributionsPerSource is not a non-negative integer, or is greater than max settable event-level attributions per source, return an error.

  28. Let aggregatableReportWindowEnd be the result of running parse a duration with value, "aggregatable_report_window", and (min report window, expiry).

  29. If aggregatableReportWindowEnd is an error, return it.

  30. Let debugReportingEnabled be false.

  31. If value["debug_reporting"] exists and is a boolean, set debugReportingEnabled to value["debug_reporting"].

  32. Let aggregatableReportWindow be a new report window with the following items:

    start

    sourceTime

    end

    sourceTime + aggregatableReportWindowEnd

  33. Let triggerDataMatchingMode be "modulus".

  34. If value["trigger_data_matching"] exists:

    1. If value["trigger_data_matching"] is not a string, return an error.

    2. If value["trigger_data_matching"] is not a trigger-data matching mode, return an error.

    3. Set triggerDataMatchingMode to value["trigger_data_matching"].

  35. Let triggerSpecs be the result of parsing trigger specs with value, sourceTime, sourceType, expiry, and triggerDataMatchingMode.

  36. If triggerSpecs is an error, return it.

  37. Let attributionScopes be the result of running parse attribution scopes with value.

  38. If attributionScopes is an error, return it.

  39. Let epsilon be the user agent’s max settable event-level epsilon.

  40. Set epsilon to value["event_level_epsilon"] if it exists.

  41. If epsilon is not a double, is less than 0, or is greater than the user agent’s max settable event-level epsilon, return an error.

  42. Let aggregatableDebugBudget be 0.

  43. Let aggregatableDebugReportingConfig be a new aggregatable debug reporting config.

  44. If value["aggregatable_debug_reporting"] exists:

    1. Set (aggregatableDebugBudget, aggregatableDebugReportingConfig) to the result of running parse a source aggregatable debug reporting config with value["aggregatable_debug_reporting"], aggregatableDebugBudget, and aggregatableDebugReportingConfig.

  45. Let aggregatableAttributionBudget be allowed aggregatable budget per source - aggregatableDebugBudget.

  46. Let namedBudgets be the result of parsing named budgets for source with value.

  47. If namedBudgets is an error, return it.

  48. If automation local testing mode is true, set epsilon to +∞.

  49. Let source be a new attribution source struct whose items are:

    internal ID

    The result of getting the next internal ID

    source origin

    sourceOrigin

    event ID

    sourceEventId

    attribution destinations

    attributionDestinations

    reporting origin

    reportingOrigin

    expiry

    expiry

    trigger specs

    triggerSpecs

    aggregatable report window

    aggregatableReportWindow

    priority

    priority

    source time

    sourceTime

    source type

    sourceType

    number of event-level reports

    0

    max number of event-level reports

    maxAttributionsPerSource

    event-level epsilon

    epsilon

    filter data

    filterData

    debug key

    debugKey

    aggregation keys

    aggregationKeys

    remaining aggregatable attribution budget

    aggregatableAttributionBudget

    named budgets

    namedBudgets

    remaining named budgets

    namedBudgets

    debug reporting enabled

    debugReportingEnabled

    trigger-data matching mode

    triggerDataMatchingMode

    cookie-based debug allowed

    cookieBasedDebugAllowed

    fenced

    fenced

    remaining aggregatable debug budget

    aggregatableDebugBudget

    aggregatable debug reporting config

    aggregatableDebugReportingConfig

    destination limit priority

    destinationLimitPriority

    attribution scopes

    attributionScopes

  50. Return source.

Determine proper charset-handling for the JSON header value.

11.4. Processing an attribution source

To check if an attribution source exceeds the time-based destination limits given an attribution source source, run the following steps:

  1. Let matchingSources be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  2. Let matchingSameReportingSources be all the records in matchingSources whose associated reporting origin is same site with source’s reporting origin.

  3. Let destinations be the set of every attribution destination in matchingSources, unioned with source’s attribution destinations.

  4. Let sameReportingDestinations be the set of every attribution destination in matchingSameReportingSources, unioned with source’s attribution destinations.

  5. Let hitRateLimit be whether destinations’s size is greater than max destinations per rate-limit window[0].

  6. Let hitSameReportingRateLimit be whether sameReportingDestinations’s size is greater than max destinations per rate-limit window[1].

  7. If (hitRateLimit, hitSameReportingRateLimit) is

    (false, false)

    Return "allowed".

    (false, true)

    Return "hit reporting limit".

    (true, false)

    Return "hit global limit".

    (true, true)

    Return "hit reporting limit".

Note: When both limits are hit, we interpret it as "hit reporting limit" for debug reporting.
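
The four-way table above collapses to a simple precedence rule: the same-reporting-site limit wins whenever it is hit. A non-normative sketch:

```python
def destination_limit_result(hit_global, hit_same_reporting):
    """Map the (hitRateLimit, hitSameReportingRateLimit) pair to a result.
    The same-reporting limit takes precedence when both are hit, so the
    debug report does not reveal cross-reporting-site information."""
    if hit_same_reporting:
        return "hit reporting limit"
    if hit_global:
        return "hit global limit"
    return "allowed"
```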

To check if an attribution source exceeds the per day destination limits given an attribution source source, run the following steps:

  1. Let matchingSources be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  2. Let destinations be the set of every attribution destination in matchingSources, unioned with source’s attribution destinations.

  3. Return whether destinations’s size is greater than max destinations per source reporting site per day.

To delete sources for unexpired destination limit given a set of internal IDs sourcesToDelete and a moment now:

  1. If sourcesToDelete is empty, return.

  2. For each attribution source source of the attribution source cache:

    1. Remove source from the attribution source cache if sourcesToDelete contains source’s internal ID.

  3. Let deletedEventLevelReports be a new set.

  4. For each event-level report report of the event-level report cache:

    1. If sourcesToDelete contains report’s source ID and report’s trigger time is greater than or equal to now:

      1. Append report’s internal ID to deletedEventLevelReports.

      2. Remove report from the event-level report cache.

    Note: Leaking browsing history of destinations deactivated for unexpired destination limit from event-level reports whose trigger time is earlier than now is mitigated by the presence of fake reports. Event-level reports whose trigger time is greater than or equal to now must be deleted to avoid exposing whether an attribution source has a randomized response.

  5. Let deletedAggregatableReports be a new set.

  6. For each aggregatable attribution report report of the aggregatable attribution report cache:

    1. If report’s source ID is not null and sourcesToDelete contains report’s source ID:

      1. Append report’s internal ID to deletedAggregatableReports.

      2. Remove report from the aggregatable attribution report cache.

  7. For each attribution rate-limit record record of the attribution rate-limit cache:

    1. If record’s scope is:

      "source"

      Set record’s deactivated for unexpired destination limit to true if sourcesToDelete contains record’s entity ID.

      "event-attribution"

      Remove record from the attribution rate-limit cache if deletedEventLevelReports contains record’s entity ID.

      "aggregatable-attribution"

      Remove record from the attribution rate-limit cache if deletedAggregatableReports contains record’s entity ID.
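The deletion steps above can be sketched non-normatively in Python. The cache shapes, field names, and the use of plain dicts are illustrative, not part of this specification:

```python
from dataclasses import dataclass

@dataclass
class Caches:
    sources: dict        # internal ID -> attribution source
    event_reports: list  # each: {"id", "source_id", "trigger_time"}
    agg_reports: list    # each: {"id", "source_id"}
    rate_limits: list    # each: {"scope", "entity_id", "deactivated"}

def delete_sources_for_destination_limit(caches, sources_to_delete, now):
    if not sources_to_delete:
        return
    caches.sources = {sid: s for sid, s in caches.sources.items()
                      if sid not in sources_to_delete}
    # Only reports with trigger time >= now are deleted, so that the
    # presence or absence of earlier reports does not reveal whether a
    # source has a randomized response.
    deleted_event = {r["id"] for r in caches.event_reports
                     if r["source_id"] in sources_to_delete
                     and r["trigger_time"] >= now}
    caches.event_reports = [r for r in caches.event_reports
                            if r["id"] not in deleted_event]
    deleted_agg = {r["id"] for r in caches.agg_reports
                   if r["source_id"] is not None
                   and r["source_id"] in sources_to_delete}
    caches.agg_reports = [r for r in caches.agg_reports
                          if r["id"] not in deleted_agg]
    kept = []
    for rec in caches.rate_limits:
        if rec["scope"] == "source":
            if rec["entity_id"] in sources_to_delete:
                rec["deactivated"] = True  # deactivated, not removed
            kept.append(rec)
        elif rec["scope"] == "event-attribution":
            if rec["entity_id"] not in deleted_event:
                kept.append(rec)
        elif rec["scope"] == "aggregatable-attribution":
            if rec["entity_id"] not in deleted_agg:
                kept.append(rec)
        else:
            kept.append(rec)
    caches.rate_limits = kept
```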

A destination limit record is a struct with the following items:

attribution destination

A site.

priority

A 64-bit integer.

time

A moment.

source ID

An internal ID.

To get sources to delete for the unexpired destination limit given an attribution source source, run the following steps:

  1. Let destinationRecords be a new list.

  2. For each attribution rate-limit record record of the attribution rate-limit cache:

    1. If record’s scope is not "source", continue.

    2. If record’s deactivated for unexpired destination limit is true, continue.

    3. If record’s source site and source’s source site are not equal, continue.

    4. If record’s reporting origin and source’s reporting origin are not same site, continue.

    5. If record’s expiry time is less than or equal to source’s source time, continue.

    6. Assert: record’s destination limit priority is not null.

    7. Assert: record’s entity ID is not null.

    8. Let destinationRecord be a new destination limit record struct whose items are:

      attribution destination

      record’s attribution destination

      priority

      record’s destination limit priority

      time

      record’s time

      source ID

      record’s entity ID

    9. Append destinationRecord to destinationRecords.

  3. For each site destination of source’s attribution destinations:

    1. Let destinationRecord be a new destination limit record struct whose items are:

      attribution destination

      destination

      priority

      source’s destination limit priority

      time

      source’s source time

      source ID

      source’s internal ID

    2. Append destinationRecord to destinationRecords.

  4. Sort destinationRecords in descending order, with a less than b if the following steps return true:

    1. If a’s priority is less than b’s priority, return true.

    2. If a’s priority is greater than b’s priority, return false.

    3. If a’s time is less than b’s time, return true.

    4. If a’s time is greater than b’s time, return false.

    5. If a’s serialized attribution destination is less than b’s serialized attribution destination, return true.

    6. Return false.

  5. Let sourcesToDelete be a new set.

  6. Let newDestinations be a new set.

  7. For each destination limit record record of destinationRecords:

    1. Let destination be record’s attribution destination.

    2. If newDestinations’s size is less than the user agent’s max destinations covered by unexpired sources, append destination to newDestinations.

    3. Otherwise, if newDestinations does not contain destination:

      1. Append record’s source ID to sourcesToDelete.

  8. Return sourcesToDelete.
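The sort and selection in the algorithm above can be sketched non-normatively: records are ordered by priority then time, descending, the first distinct destinations up to the limit survive, and the sources behind any other destination are marked for deletion. Field names are assumptions:

```python
def select_sources_to_delete(records, max_destinations):
    # Descending by (priority, time), ties broken by serialized destination,
    # matching the comparator defined in the spec steps.
    records = sorted(records,
                     key=lambda r: (r["priority"], r["time"], r["destination"]),
                     reverse=True)
    new_destinations = set()
    to_delete = set()
    for r in records:
        d = r["destination"]
        if len(new_destinations) < max_destinations:
            new_destinations.add(d)  # no-op if already selected
        elif d not in new_destinations:
            to_delete.add(r["source_id"])
    return to_delete
```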

To check if an attribution source should be blocked by reporting-origin per site limit given an attribution source source:

  1. Let matchingRateLimitRecords be all attribution rate-limit records record in the attribution rate-limit cache where all of the following are true:

  2. Let distinctReportingOrigins be the set of all reporting origins in matchingRateLimitRecords, unioned with «source’s reporting origin».

  3. If distinctReportingOrigins’s size is greater than max source reporting origins per source reporting site, return blocked.

  4. Return allowed.
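A non-normative sketch of this check: the source is blocked if the number of distinct reporting origins, including the new source’s own origin, exceeds the per-site limit. Here `matching_origins` stands in for the reporting origins of the matching rate-limit records:

```python
def check_reporting_origin_limit(matching_origins, source_origin, max_origins):
    # Union with the registering source's own reporting origin before counting.
    distinct = set(matching_origins) | {source_origin}
    return "blocked" if len(distinct) > max_origins else "allowed"
```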

To obtain a fake report given an attribution source source and a trigger state triggerState:

  1. Let specEntry be the entry for source’s trigger specs[triggerState’s trigger data].

  2. Let triggerTime be triggerState’s report window's start.

  3. Let priority be 0.

  4. Let fakeReport be the result of running obtain an event-level report with source, triggerTime, triggerDebugKey set to null, priority, and specEntry.

  5. Assert: fakeReport’s report time is equal to triggerState’s report window's end.

  6. Return fakeReport.

To obtain and deliver a verbose debug report on source registration given a source debug data type dataType, an attribution source source, a boolean isNoised, and a boolean destinationLimitReplaced:

  1. If source’s debug reporting enabled is false, return.

  2. If source’s cookie-based debug allowed is false, return.

  3. Let body be a new map with the following key/value pairs:

    "attribution_destination"

    source’s attribution destinations, serialized.

    "source_event_id"

    source’s event ID, serialized.

    "source_site"

    source’s source site, serialized.

  4. If source’s debug key is not null, set body["source_debug_key"] to source’s debug key, serialized.

  5. Let dataTypeToReport be dataType.

  6. If dataType is:

    "source-destination-global-rate-limit"
    "source-reporting-origin-limit"

    Set dataTypeToReport to "source-success".

  7. If dataTypeToReport is "source-success" and isNoised is true, set dataTypeToReport to "source-noised".

  8. If dataTypeToReport is:

    "source-destination-limit"

    Set body["limit"] to the user agent’s max destinations covered by unexpired sources, serialized.

    "source-destination-rate-limit"

    Set body["limit"] to the user agent’s max destinations per rate-limit window[1], serialized.

    "source-destination-per-day-rate-limit"

    Set body["limit"] to the user agent’s max destinations per source reporting site per day, serialized.

    "source-storage-limit"

    Set body["limit"] to the user agent’s max pending sources per source origin, serialized.

    "source-channel-capacity-limit"
    1. Let sourceType be source’s source type.

    2. Set body["limit"] to the user agent’s max event-level channel capacity per source[sourceType].

    "source-scopes-channel-capacity-limit"
    1. Assert: source’s attribution scopes is not null.

    2. Let sourceType be source’s source type.

    3. Set body["limit"] to the user agent’s max event-level attribution scopes channel capacity per source[sourceType].

    "source-trigger-state-cardinality-limit"

    Set body["limit"] to the user agent’s max trigger-state cardinality, serialized.

    "source-reporting-origin-per-site-limit"

    Set body["limit"] to the user agent’s max source reporting origins per source reporting site, serialized.

    "source-max-event-states-limit"
    1. Assert: source’s attribution scopes is not null.

    2. Set body["limit"] to source’s attribution scopes's max event states.

  9. If destinationLimitReplaced is true, set body["source_destination_limit"] to the user agent’s max destinations covered by unexpired sources, serialized.

Note: The "source_destination_limit" field may be included to indicate that max destinations covered by unexpired sources was hit, which is not reported as "source-destination-limit" to prevent side-channel leakage of cross-origin data.

  10. Let data be a new verbose debug data with the items:

    data type

    dataTypeToReport

    body

    body

  11. Run obtain and deliver a verbose debug report with « data », source’s reporting origin, and source’s fenced.

To obtain and deliver an aggregatable debug report on source registration given a source debug data type dataType, an attribution source source, a boolean isNoised, and a boolean destinationLimitReplaced:

  1. If source’s fenced is true, return.

  2. Let config be source’s aggregatable debug reporting config.

  3. Let debugDataMap be config’s debug data.

  4. If debugDataMap is empty, return.

  5. Let dataTypesToReport be a new set.

  6. If dataType is "source-success" and isNoised is true, append "source-noised" to dataTypesToReport.

  7. Otherwise, append dataType to dataTypesToReport.

  8. If destinationLimitReplaced is true, append "source-destination-limit-replaced" to dataTypesToReport.

  9. Let contributions be a new list.

  10. For each dataTypeToReport of dataTypesToReport:

    1. If debugDataMap[dataTypeToReport] exists:

      1. Let contribution be a new aggregatable contribution with items:

        key

        debugDataMap[dataTypeToReport]'s key bitwise-OR config’s key piece

        value

        debugDataMap[dataTypeToReport]'s value

        filtering ID

        default filtering ID value

      2. Append contribution to contributions.

  11. Run obtain and deliver an aggregatable debug report on registration with contributions, source’s source site, source’s reporting origin, source, source’s attribution destinations[0], config’s aggregation coordinator, and source’s source time.

To obtain and deliver debug reports on source registration given a source debug data type dataType, an attribution source source, an optional boolean isNoised (default false), and an optional boolean destinationLimitReplaced (default false):

  1. Run obtain and deliver a verbose debug report on source registration with dataType, source, isNoised, and destinationLimitReplaced.

  2. Run obtain and deliver an aggregatable debug report on source registration with dataType, source, isNoised, and destinationLimitReplaced.

To delete expired sources given a moment now:

  1. For each source of the attribution source cache:

    1. If source’s expiry time is less than now, remove source from the attribution source cache.

To find sources with common destinations and reporting origin given an attribution source pendingSource:

  1. Let matchingSources be a new list.

  2. For each source of the user agent’s attribution source cache:

    1. Let commonDestinations be the intersection of source’s attribution destinations and pendingSource’s attribution destinations.

    2. If commonDestinations is empty, continue.

    3. If source’s reporting origin and pendingSource’s reporting origin are not same origin, continue.

    4. Append source to matchingSources.

  3. Return matchingSources.

To remove associated event-level reports and rate-limit records given an internal ID sourceId and a moment minTriggerTime:

  1. For each event-level report report of the event-level report cache:

    1. If report’s source ID is not equal to sourceId, continue.

    2. If report’s trigger time is less than minTriggerTime, continue.

    3. Remove report from the event-level report cache.

    4. Remove all attribution rate-limit records entry from the attribution rate-limit cache where entry’s entity ID is equal to report’s internal ID.

To remove sources with unselected attribution scopes for destination given a site destination and an attribution source pendingSource:

  1. Let scopeRecords be a new list.

  2. Let scopes be pendingSource’s attribution scopes's values.

  3. For each source of the attribution source cache:

    1. If source’s reporting origin and pendingSource’s reporting origin are not same origin, continue.

    2. If source’s attribution destinations does not contain destination, continue.

    3. If source’s attribution scopes is null, continue.

    4. For each scope in source’s attribution scopes's values:

      1. Append the tuple (scope, source) to scopeRecords.

  4. Sort scopeRecords in ascending order with a being less than b if any of the following are true:

  5. Let selectedScopes be scopes, cloned.

  6. Let sourcesToRemove be a new set.

  7. For each record of scopeRecords:

    1. If selectedScopes’s size is less than pendingSource’s attribution scopes's limit, append record[0] to selectedScopes.

    2. Otherwise, if selectedScopes does not contain record[0], append record[1] to sourcesToRemove.

  8. For each source of sourcesToRemove:

    1. Remove associated event-level reports and rate-limit records with source’s internal ID and pendingSource’s source time.

    2. Remove source from the attribution source cache.
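The scope-selection loop above can be sketched non-normatively. The pending source’s own scopes are pre-selected, further scopes are admitted in sorted order until the limit is reached, and any source carrying an unselected scope is marked for removal. The records are assumed to be pre-sorted per the spec’s ordering; names are illustrative:

```python
def sources_with_unselected_scopes(scope_records, pending_scopes, limit):
    # scope_records: sorted list of (scope, source_id) tuples.
    selected = set(pending_scopes)
    to_remove = set()
    for scope, source_id in scope_records:
        if len(selected) < limit:
            selected.add(scope)  # no-op if already selected
        elif scope not in selected:
            to_remove.add(source_id)
    return to_remove
```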

To remove sources with unselected attribution scopes given an attribution source pendingSource:

  1. If pendingSource’s attribution scopes is null, return.

  2. Assert: pendingSource’s attribution destinations is sorted in ascending order, with a being less than b if a, serialized, is less than b, serialized.

  3. For each destination in pendingSource’s attribution destinations:

    1. Remove sources with unselected attribution scopes for destination with destination and pendingSource.

To remove or update sources for attribution scopes given an attribution source pendingSource:

  1. Let pendingScopes be pendingSource’s attribution scopes.

  2. Let matchingSources be the result of running find sources with common destinations and reporting origin with pendingSource.

  3. For each source of matchingSources:

    1. Let existingScopes be source’s attribution scopes.

    2. If pendingScopes is null:

      1. Set source’s attribution scopes to null if existingScopes is not null.

    3. Otherwise:

      1. If existingScopes is null or existingScopes’s max event states is not equal to pendingScopes’s max event states or existingScopes’s limit is less than pendingScopes’s limit:

        1. Remove associated event-level reports and rate-limit records with source’s internal ID and pendingSource’s source time.

        2. Remove source from the attribution source cache.

  4. Remove sources with unselected attribution scopes with pendingSource.

To process an attribution source given an attribution source source:

  1. Delete expired sources with source’s source time.

  2. Let randomizedResponseConfig be a new randomized response output configuration whose items are:

    max attributions per source

    source’s max number of event-level reports

    trigger specs

    source’s trigger specs

  3. Let epsilon be source’s event-level epsilon.

  4. Let possibleTriggerStates be the result of obtaining a set of possible trigger states with randomizedResponseConfig.

  5. Let numPossibleTriggerStates be possibleTriggerStates’s size.

  6. Let channelCapacity be the result of computing the channel capacity of a source with numPossibleTriggerStates and epsilon.

  7. If channelCapacity is an error:

    1. Run obtain and deliver debug reports on source registration with "source-trigger-state-cardinality-limit" and source.

    2. Return.

  8. Let sourceType be source’s source type.

  9. If channelCapacity is greater than max event-level channel capacity per source[sourceType]:

    1. Run obtain and deliver debug reports on source registration with "source-channel-capacity-limit" and source.

    2. Return.

  10. If source’s attribution scopes is not null:

    1. Let attributionScopes be source’s attribution scopes.

    2. If sourceType is "event" and numPossibleTriggerStates is greater than attributionScopes’s max event states:

      1. Run obtain and deliver debug reports on source registration with "source-max-event-states-limit" and source.

      2. Return.

    3. Let scopesChannelCapacity be the result of computing the scopes channel capacity of a source with numPossibleTriggerStates, attributionScopes’s limit, and attributionScopes’s max event states.

    4. If scopesChannelCapacity is greater than max event-level attribution scopes channel capacity per source[sourceType]:

      1. Run obtain and deliver debug reports on source registration with "source-scopes-channel-capacity-limit" and source.

      2. Return.

  11. Set source’s randomized response to the result of obtaining a randomized source response with possibleTriggerStates and epsilon.

  12. Set source’s randomized trigger rate to the result of obtaining a randomized source response pick rate with numPossibleTriggerStates and epsilon.

  13. Set source’s number of event-level reports to 0 if source’s randomized response is null, and to source’s randomized response's size otherwise.

  14. Let pendingSourcesForSourceOrigin be the set of all attribution sources pendingSource of the attribution source cache where pendingSource’s source origin and source’s source origin are same origin.

  15. If pendingSourcesForSourceOrigin’s size is greater than or equal to the user agent’s max pending sources per source origin:

    1. Run obtain and deliver debug reports on source registration with "source-storage-limit" and source.

    2. Return.

  16. Let destinationRateLimitResult be the result of running check if an attribution source exceeds the time-based destination limit with source.

  17. If destinationRateLimitResult is "hit reporting limit":

    1. Run obtain and deliver debug reports on source registration with "source-destination-rate-limit" and source.

    2. Return.

  18. If the result of running check if an attribution source exceeds the per day destination limit with source is true:

    1. Run obtain and deliver debug reports on source registration with "source-destination-per-day-rate-limit" and source.

    2. Return.

  19. Let sourcesToDeleteForDestinationLimit be the result of running get sources to delete for the unexpired destination limit with source.

  20. If sourcesToDeleteForDestinationLimit contains source’s internal ID:

    1. Run obtain and deliver debug reports on source registration with "source-destination-limit" and source.

    2. Return.

  21. Let destinationLimitReplaced be true if sourcesToDeleteForDestinationLimit is not empty, otherwise false.

  22. Run delete sources for unexpired destination limit with sourcesToDeleteForDestinationLimit and source’s source time.

  23. Remove or update sources for attribution scopes with source.

  24. Let isNoised be true if source’s randomized response is not null, otherwise false.

  25. If destinationRateLimitResult is "hit global limit":

    1. Run obtain and deliver debug reports on source registration with "source-destination-global-rate-limit", source, isNoised, and destinationLimitReplaced.

    2. Return.

  26. Let newRateLimitRecords be a new set.

  27. For each destination in source’s attribution destinations:

    1. Let rateLimitRecord be a new attribution rate-limit record with the items:

      scope

      "source"

      source site

      source’s source site

      attribution destination

      destination

      reporting origin

      source’s reporting origin

      time

      source’s source time

      expiry time

      source’s expiry time

      entity ID

      source’s internal ID

      destination limit priority

      source’s destination limit priority

    2. If the result of running should processing be blocked by reporting-origin limit with rateLimitRecord is blocked:

      1. Run obtain and deliver debug reports on source registration with "source-reporting-origin-limit", source, isNoised, and destinationLimitReplaced.

      2. Return.

    3. Append rateLimitRecord to newRateLimitRecords.

  28. For each record of newRateLimitRecords, append record to the attribution rate-limit cache.

  29. Remove all attribution rate-limit records entry from the attribution rate-limit cache if the result of running can attribution rate-limit record be removed with entry and source’s source time is true.

  30. If source’s randomized response is not null and is a list:

    1. For each trigger state triggerState of source’s randomized response:

      1. Let fakeReport be the result of running obtain a fake report with source and triggerState.

      2. Append fakeReport to the event-level report cache.

    2. If source’s randomized response is not empty, then set source’s event-level attributable value to false.

    3. For each destination in source’s attribution destinations:

      1. Let rateLimitRecord be a new attribution rate-limit record with the items:

        scope

        "event-attribution"

        source site

        source’s source site

        attribution destination

        destination

        reporting origin

        source’s reporting origin

        time

        source’s source time

        expiry time

        null

        entity ID

        null

      2. Append rateLimitRecord to the attribution rate-limit cache.

  31. Run obtain and deliver debug reports on source registration with "source-success", source, isNoised, and destinationLimitReplaced.

  32. Append source to the attribution source cache.

Note: Because a fake report does not have a "real" effective destination, we need to subtract from the privacy budget of all possible destinations.

Note: The limits that are not reported as source-success in verbose debug reports should be checked before any limits that are reported implicitly as source-success ( source-destination-global-rate-limit and source-reporting-origin-limit) to prevent side-channel leakage of cross-origin data. Furthermore, the verbose debug data should be fully determined regardless of the result of checks on implicitly reported limits.
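The ordering constraint described in the note above can be sketched non-normatively. The list mirrors the order in which process an attribution source runs its checks; the implicitly reported limits come last. The predicate-set interface is illustrative:

```python
# Limits in the order they are checked; the last two are reported back to the
# reporter only as "source-success" (or "source-noised") to avoid leaking
# cross-origin data.
CHECK_ORDER = [
    "source-trigger-state-cardinality-limit",
    "source-channel-capacity-limit",
    "source-max-event-states-limit",
    "source-scopes-channel-capacity-limit",
    "source-storage-limit",
    "source-destination-rate-limit",
    "source-destination-per-day-rate-limit",
    "source-destination-limit",
    "source-destination-global-rate-limit",  # implicitly reported
    "source-reporting-origin-limit",         # implicitly reported
]

def first_failing_check(limits_hit):
    """Return the debug data type of the first limit hit, if any."""
    for debug_type in CHECK_ORDER:
        if debug_type in limits_hit:
            return debug_type
    return "source-success"
```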

12. Triggering Algorithms

A trigger-registration JSON key is one of the following: "aggregatable_debug_reporting", "aggregatable_deduplication_keys", "aggregatable_filtering_id_max_bytes", "aggregatable_source_registration_time", "aggregatable_trigger_data", "aggregatable_values", "aggregation_coordinator_origin", "attribution_scopes", "debug_key", "debug_reporting", "event_trigger_data", "filters", "named_budgets", "not_filters", or "trigger_context_id".

12.1. Creating an attribution trigger

To parse an event-trigger value given a map map:

  1. If experimental Flexible Event support is false or map["value"] does not exist, return 1.

  2. Let value be map["value"].

  3. If value is not an integer, cannot be represented by an unsigned 32-bit integer, or is less than or equal to zero, return an error.

  4. Return value.
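A non-normative sketch of this parser, assuming the input is a parsed-JSON dict:

```python
def parse_event_trigger_value(m, flexible_event_enabled):
    if not flexible_event_enabled or "value" not in m:
        return 1
    v = m["value"]
    # Must be a positive integer representable as an unsigned 32-bit value.
    if not isinstance(v, int) or isinstance(v, bool) or v <= 0 or v > 2**32 - 1:
        raise ValueError("invalid event-trigger value")
    return v
```

An error return in the spec is modeled here as a raised exception.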

To parse event triggers given a map map:

  1. Let eventTriggers be a new set.

  2. If map["event_trigger_data"] does not exist, return eventTriggers.

  3. Let values be map["event_trigger_data"].

  4. If values is not a list, return an error.

  5. For each value of values:

    1. If value is not a map, return an error.

    2. Let triggerData be the result of running parse an optional 64-bit unsigned integer with value, "trigger_data", and 0.

    3. If triggerData is an error, return it.

    4. Let dedupKey be the result of running parse an optional 64-bit unsigned integer with value, "deduplication_key", and null.

    5. If dedupKey is an error, return it.

    6. Let priority be the result of running parse an optional 64-bit signed integer with value, "priority", and 0.

    7. If priority is an error, return it.

    8. Let filterPair be the result of running parse a filter pair with value.

    9. If filterPair is an error, return it.

    10. Let triggerValue be the result of running parse an event-trigger value with value.

    11. If triggerValue is an error, return it.

    12. Let eventTrigger be a new event-level trigger configuration with the items:

      trigger data

      triggerData

      dedup key

      dedupKey

      priority

      priority

      filters

      filterPair[0]

      negated filters

      filterPair[1]

      value

      triggerValue

    13. Append eventTrigger to eventTriggers.

  6. Return eventTriggers.
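A partial, non-normative sketch of the defaulting behavior above. It assumes the string encoding this specification uses for 64-bit fields, and omits the filter-pair and event-trigger-value handling:

```python
U64 = (0, 2**64 - 1)
I64 = (-2**63, 2**63 - 1)

def parse_int(m, key, default, lo, hi):
    # 64-bit fields in registration JSON are string-encoded decimal integers.
    if key not in m:
        return default
    v = m[key]
    try:
        n = int(v) if isinstance(v, str) else None
    except ValueError:
        n = None
    if n is None or not (lo <= n <= hi):
        raise ValueError(key)
    return n

def parse_event_triggers(m):
    triggers = []
    values = m.get("event_trigger_data", [])
    if not isinstance(values, list):
        raise ValueError("event_trigger_data must be a list")
    for value in values:
        if not isinstance(value, dict):
            raise ValueError("event trigger must be a map")
        triggers.append({
            "trigger_data": parse_int(value, "trigger_data", 0, *U64),
            "dedup_key": parse_int(value, "deduplication_key", None, *U64),
            "priority": parse_int(value, "priority", 0, *I64),
        })
    return triggers
```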

To parse aggregatable trigger data given a map map:

  1. Let aggregatableTriggerData be a new list.

  2. If map["aggregatable_trigger_data"] does not exist, return aggregatableTriggerData.

  3. Let values be map["aggregatable_trigger_data"].

  4. If values is not a list, return an error.

  5. For each value of values:

    1. If value is not a map, return an error.

    2. If value["key_piece"] does not exist or is not a string, return an error.

    3. Let keyPiece be the result of running parse an aggregation key piece with value["key_piece"].

    4. If keyPiece is an error, return it.

    5. Let sourceKeys be a new set.

    6. If value["source_keys"] exists:

      1. If value["source_keys"] is not a list, return an error.

      2. For each sourceKey of value["source_keys"]:

        1. If sourceKey is not a string, return an error.

        2. Append sourceKey to sourceKeys.

    7. Let filterPair be the result of running parse a filter pair with value.

    8. If filterPair is an error, return it.

    9. Let aggregatableTrigger be a new aggregatable trigger data with the items:

      key piece

      keyPiece

      source keys

      sourceKeys

      filters

      filterPair[0]

      negated filters

      filterPair[1]

    10. Append aggregatableTrigger to aggregatableTriggerData.

  6. Return aggregatableTriggerData.

To parse aggregatable filtering ID max bytes given a map map:

  1. Let maxBytes be default filtering ID max bytes.

  2. If map["aggregatable_filtering_id_max_bytes"] exists:

    1. Set maxBytes to map["aggregatable_filtering_id_max_bytes"].

    2. If maxBytes is a positive integer and is contained in the valid filtering ID max bytes range, return maxBytes.

    3. Otherwise, return an error.

  3. Return maxBytes.
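A non-normative sketch of this parser. The default (1) and the valid range (1 through 8) are assumptions standing in for the default filtering ID max bytes and valid filtering ID max bytes range constants defined elsewhere:

```python
DEFAULT_FILTERING_ID_MAX_BYTES = 1        # assumed default
VALID_MAX_BYTES_RANGE = range(1, 9)       # assumed valid range: 1..8

def parse_filtering_id_max_bytes(m):
    if "aggregatable_filtering_id_max_bytes" not in m:
        return DEFAULT_FILTERING_ID_MAX_BYTES
    v = m["aggregatable_filtering_id_max_bytes"]
    # Must be a positive integer inside the valid range; booleans are not
    # integers here even though bool subclasses int in Python.
    if isinstance(v, int) and not isinstance(v, bool) and v in VALID_MAX_BYTES_RANGE:
        return v
    raise ValueError("invalid aggregatable_filtering_id_max_bytes")
```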

To validate aggregatable key-values value given a value:

  1. If value is not an integer, return false.

  2. If value is less than or equal to 0, return false.

  3. If value is greater than allowed aggregatable budget per source, return false.

  4. Return true.
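A non-normative sketch of this validation; the budget constant (65536) stands in for the allowed aggregatable budget per source and is an assumption here:

```python
ALLOWED_AGGREGATABLE_BUDGET = 65536  # assumed allowed aggregatable budget per source

def validate_key_value(value):
    # Valid iff an integer in (0, budget]; reject bools despite bool
    # subclassing int in Python.
    return (isinstance(value, int) and not isinstance(value, bool)
            and 0 < value <= ALLOWED_AGGREGATABLE_BUDGET)
```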

To parse aggregatable key-values given a map map and a positive integer maxBytes:

  1. Let out be a new map.

  2. For each key → value of map:

    1. If value is not a map or an integer, return an error.

    2. If value is an integer:

      1. If the result of running validate aggregatable key-values value with value is false, return an error.

      2. Set out[key] to a new aggregatable key value whose items are

        value

        value

        filtering ID

        default filtering ID value

      3. Continue.

    3. If value["value"] does not exist, return an error.

    4. If the result of running validate aggregatable key-values value with value["value"] is false, return an error.

    5. Let filteringId be default filtering ID value.

    6. If value["filtering_id"] exists:

      1. Set filteringId to the result of applying the rules for parsing non-negative integers to value["filtering_id"].

      2. If filteringId is an error, return it.

      3. If filteringId is not in the range 0 to 256^maxBytes, exclusive, return an error.

    7. Set out[key] to a new aggregatable key value whose items are

      value

      value["value"]

      filtering ID

      filteringId

  3. Return out.
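A non-normative sketch of this parser: each entry is either a bare integer value or a map carrying "value" and an optional string-encoded "filtering_id" that must fit in maxBytes bytes, i.e. be less than 256^maxBytes. The budget constant is an assumption:

```python
DEFAULT_FILTERING_ID = 0
ALLOWED_BUDGET = 65536  # assumed allowed aggregatable budget per source

def _valid_value(v):
    return (isinstance(v, int) and not isinstance(v, bool)
            and 0 < v <= ALLOWED_BUDGET)

def parse_key_values(m, max_bytes):
    out = {}
    for key, value in m.items():
        if isinstance(value, int) and not isinstance(value, bool):
            # Shorthand form: a bare integer value.
            if not _valid_value(value):
                raise ValueError(key)
            out[key] = {"value": value, "filtering_id": DEFAULT_FILTERING_ID}
            continue
        if (not isinstance(value, dict) or "value" not in value
                or not _valid_value(value["value"])):
            raise ValueError(key)
        fid = DEFAULT_FILTERING_ID
        if "filtering_id" in value:
            s = value["filtering_id"]
            if not isinstance(s, str) or not s.isdigit():
                raise ValueError(key)
            fid = int(s)
            if fid >= 256 ** max_bytes:  # must fit in max_bytes bytes
                raise ValueError(key)
        out[key] = {"value": value["value"], "filtering_id": fid}
    return out
```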

To parse aggregatable values given a map map and a positive integer maxBytes:

  1. If map["aggregatable_values"] does not exist, return a new list.

  2. Let values be map["aggregatable_values"].

  3. If values is not a map or a list, return an error.

  4. Let aggregatableValuesConfigurations be a list of aggregatable values configurations, initially empty.

  5. If values is a map:

    1. Let aggregatableKeyValues be the result of running parse aggregatable key-values with values and maxBytes.

    2. If aggregatableKeyValues is an error, return it.

    3. Let aggregatableValuesConfiguration be a new aggregatable values configuration with the items:

      values

      aggregatableKeyValues

      filters

      «»

      negated filters

      «»

    4. Append aggregatableValuesConfiguration to aggregatableValuesConfigurations.

    5. Return aggregatableValuesConfigurations.

  6. For each value of values:

    1. If value is not a map, return an error.

    2. If value["values"] does not exist, return an error.

    3. Let aggregatableKeyValues be the result of running parse aggregatable key-values with value["values"] and maxBytes.

    4. If aggregatableKeyValues is an error, return it.

    5. Let filterPair be the result of running parse a filter pair with value.

    6. If filterPair is an error, return it.

    7. Let aggregatableValuesConfiguration be a new aggregatable values configuration with the items:

      values

      aggregatableKeyValues

      filters

      filterPair[0]

      negated filters

      filterPair[1]

    8. Append aggregatableValuesConfiguration to aggregatableValuesConfigurations.

  7. Return aggregatableValuesConfigurations.

To parse aggregatable dedup keys given a map map:

  1. Let aggregatableDedupKeys be a new list.

  2. If map["aggregatable_deduplication_keys"] does not exist, return aggregatableDedupKeys.

  3. Let values be map["aggregatable_deduplication_keys"].

  4. If values is not a list, return an error.

  5. For each value of values:

    1. If value is not a map, return an error.

    2. Let dedupKey be the result of running parse an optional 64-bit unsigned integer with value, "deduplication_key", and null.

    3. If dedupKey is an error, return it.

    4. Let filterPair be the result of running parse a filter pair with value.

    5. If filterPair is an error, return it.

    6. Let aggregatableDedupKey be a new aggregatable dedup key with the items:

      dedup key

      dedupKey

      filters

      filterPair[0]

      negated filters

      filterPair[1]

    7. Append aggregatableDedupKey to aggregatableDedupKeys.

  6. Return aggregatableDedupKeys.

To parse named budgets for trigger given a map map:

  1. Let namedBudgets be a new list.

  2. If map["named_budgets"] does not exist, return namedBudgets.

  3. Let values be map["named_budgets"].

  4. If values is not a list, return an error.

  5. For each value of values:

    1. If value is not a map, return an error.

    2. Let name be null.

    3. If value["name"] exists:

      1. Set name to value["name"].

      2. If name is not a string, return an error.

    4. Let filterPair be the result of running parse a filter pair with value.

    5. If filterPair is an error, return it.

    6. Let namedBudget be a new named budget with the items:

      name

      name

      filters

      filterPair[0]

      negated filters

      filterPair[1]

    7. Append namedBudget to namedBudgets.

  6. Return namedBudgets.

To parse attribution scopes for trigger from a map map:

  1. Let result be a new set.

  2. If map["attribution_scopes"] does not exist, return result.

  3. Let values be map["attribution_scopes"].

  4. If values is not a list, return an error.

  5. For each value of values:

    1. If value is not a string, return an error.

    2. Append value to result.

  6. Return result.

To create an attribution trigger given a byte sequence json, a site destination, a suitable origin reportingOrigin, a moment triggerTime, and a boolean fenced:

  1. Let value be the result of running parse JSON bytes to an Infra value with json.

  2. If value is not a map, return an error.

  3. Let eventTriggers be the result of running parse event triggers with value.

  4. If eventTriggers is an error, return it.

  5. Let aggregatableTriggerData be the result of running parse aggregatable trigger data with value.

  6. If aggregatableTriggerData is an error, return it.

  7. Let filteringIdsMaxBytes be the result of parsing aggregatable filtering ID max bytes with value.

  8. If filteringIdsMaxBytes is an error, return it.

  9. Let aggregatableValuesConfigurations be the result of running parse aggregatable values with value and filteringIdsMaxBytes.

  10. If aggregatableValuesConfigurations is an error, return it.

  11. Let aggregatableDedupKeys be the result of running parse aggregatable dedup keys with value.

  12. If aggregatableDedupKeys is an error, return it.

  13. Let namedBudgets be the result of running parse named budgets for trigger with value.

  14. If namedBudgets is an error, return it.

  15. Let debugKey be the result of running parse an optional 64-bit unsigned integer with value, "debug_key", and null.

  16. If debugKey is an error, set debugKey to null.

  17. If the result of running check if cookie-based debugging is allowed with reportingOrigin and destination is blocked, set debugKey to null.

  18. Let filterPair be the result of running parse a filter pair with value.

  19. If filterPair is an error, return it.

  20. Let debugReportingEnabled be false.

  21. If value["debug_reporting"] exists and is a boolean, set debugReportingEnabled to value["debug_reporting"].

  22. Let aggregationCoordinator be default aggregation coordinator.

  23. If value["aggregation_coordinator_origin"] exists:

    1. Set aggregationCoordinator to the result of running parse an aggregation coordinator with value["aggregation_coordinator_origin"].

    2. If aggregationCoordinator is an error, return it.

  24. Let aggregatableSourceRegTimeConfig be "exclude".

  25. If value["aggregatable_source_registration_time"] exists:

    1. If value["aggregatable_source_registration_time"] is not a string, return an error.

    2. If value["aggregatable_source_registration_time"] is not an aggregatable source registration time configuration, return an error.

    3. Set aggregatableSourceRegTimeConfig to value["aggregatable_source_registration_time"].

  26. Let triggerContextID be null.

  27. If value["trigger_context_id"] exists:

    1. If value["trigger_context_id"] is not a string, return an error.

    2. If value["trigger_context_id"]'s length is greater than the max length per trigger context ID, return an error.

    3. Set triggerContextID to value["trigger_context_id"].

  28. Let aggregatableDebugReportingConfig be a new aggregatable debug reporting config.

  29. If value["aggregatable_debug_reporting"] exists:

    1. Let supportedTypes be the set of all trigger debug data types.

    2. Set aggregatableDebugReportingConfig to the result of running parse an aggregatable debug reporting config with value["aggregatable_debug_reporting"], allowed aggregatable budget per source, supportedTypes, and aggregatableDebugReportingConfig.

  30. Let attributionScopes be the result of running parse attribution scopes for trigger with value.

  31. If attributionScopes is an error, return it.

  32. Let trigger be a new attribution trigger with the items:

    attribution destination: destination
    trigger time: triggerTime
    reporting origin: reportingOrigin
    filters: filterPair[0]
    negated filters: filterPair[1]
    debug key: debugKey
    event-level trigger configurations: eventTriggers
    aggregatable trigger data: aggregatableTriggerData
    aggregatable values configurations: aggregatableValuesConfigurations
    aggregatable dedup keys: aggregatableDedupKeys
    debug reporting enabled: debugReportingEnabled
    aggregation coordinator: aggregationCoordinator
    aggregatable source registration time configuration: aggregatableSourceRegTimeConfig
    trigger context ID: triggerContextID
    fenced: fenced
    aggregatable filtering ID max bytes: filteringIdsMaxBytes
    aggregatable debug reporting config: aggregatableDebugReportingConfig
    attribution scopes: attributionScopes

  33. If aggregatableSourceRegTimeConfig is not "exclude" and the result of running check if an aggregatable attribution report should be unconditionally sent with trigger is true, return an error.

  34. Return trigger.

Determine proper charset-handling for the JSON header value.

12.2. Does filter data match

To match filter values given a filter value a and a filter value b:

  1. If b is empty, then:

    1. If a is empty, then return true.

    2. Otherwise, return false.

  2. Let i be the intersection of a and b.

  3. If i is empty, then return false.

  4. Return true.

To match filter values with negation given a filter value a and a filter value b:

  1. If b is empty, then:

    1. If a is not empty, then return true.

    2. Otherwise, return false.

  2. Let i be the intersection of a and b.

  3. If i is not empty, then return false.

  4. Return true.
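Both predicates reduce to set operations on the two value lists. A minimal Python sketch (list inputs and function names are illustrative, not part of the spec):

```python
def match_filter_values(a, b):
    # Non-negated: an empty trigger-side list b matches only an empty
    # source-side list a; otherwise the two lists must intersect.
    if not b:
        return not a
    return bool(set(a) & set(b))

def match_filter_values_with_negation(a, b):
    # Negated: an empty b matches only a non-empty a; otherwise the
    # two lists must be disjoint.
    if not b:
        return bool(a)
    return not (set(a) & set(b))
```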

To match an attribution source against a filter config given an attribution source source, a filter config filter, a moment moment, and a boolean isNegated:

  1. Let lookbackWindow be filter’s lookback window.

  2. If lookbackWindow is not null:

    1. If the duration between moment and the source’s source time is greater than lookbackWindow:

      1. If isNegated is false, return false.

    2. Else if isNegated is true, return false.

    Note: If non-negated, the source must have been registered inside of the lookback window. If negated, it must be outside of the lookback window.

  3. Let filterMap be filter’s map.

  4. Let sourceData be source’s filter data.

  5. For each key → filterValues of filterMap:

    1. If sourceData[key] does not exist, continue.

    2. Let sourceValues be sourceData[key].

    3. If isNegated is:

      false
      If the result of running match filter values with sourceValues and filterValues is false, return false.
      true
      If the result of running match filter values with negation with sourceValues and filterValues is false, return false.
  6. Return true.

To match an attribution source against filters given an attribution source source, a list of filter configs filters, a moment moment, and a boolean isNegated:

  1. If filters is empty, return true.

  2. For each filter of filters:

    1. If the result of running match an attribution source against a filter config with source, filter, moment, and isNegated is true, return true.

  3. Return false.

To match an attribution source against filters and negated filters given an attribution source source, a list of filter configs filters, a list of filter configs notFilters, and a moment moment:

  1. If the result of running match an attribution source against filters with source, filters, moment, and isNegated set to false is false, return false.

  2. If the result of running match an attribution source against filters with source, notFilters, moment, and isNegated set to true is false, return false.

  3. Return true.
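Taken together (and leaving out the lookback-window check for brevity), the matching logic is AND across keys within one filter config, OR across configs within a list, and AND across the positive and negated lists. A self-contained Python sketch with illustrative names and plain dicts in place of the spec's structs:

```python
# Minimal helpers mirroring "match filter values" (non-negated) and
# "match filter values with negation".
def _match(a, b):
    return (not a) if not b else bool(set(a) & set(b))

def _match_neg(a, b):
    return bool(a) if not b else not (set(a) & set(b))

def match_config(source_data, filter_map, negated):
    # AND across keys: every key the source also carries must match.
    match = _match_neg if negated else _match
    for key, filter_values in filter_map.items():
        if key not in source_data:
            continue  # keys absent from the source's filter data are ignored
        if not match(source_data[key], filter_values):
            return False
    return True

def match_filters(source_data, configs, negated):
    # OR across configs: an empty config list always matches.
    if not configs:
        return True
    return any(match_config(source_data, c, negated) for c in configs)

def match_filters_and_negated(source_data, filters, not_filters):
    # AND across the positive and negated lists.
    return (match_filters(source_data, filters, negated=False)
            and match_filters(source_data, not_filters, negated=True))
```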

12.3. Should send a report unconditionally

To check if an aggregatable attribution report should be unconditionally sent given an attribution trigger trigger:

  1. If trigger’s trigger context ID is not null, return true.

  2. If trigger’s aggregatable filtering ID max bytes is not equal to default filtering ID max bytes, return true.

  3. Return false.
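As a sketch of the two-condition check (the default filtering ID size of 1 byte is an assumption here, not taken from this section):

```python
DEFAULT_FILTERING_ID_MAX_BYTES = 1  # assumed default

def should_send_unconditionally(trigger_context_id, filtering_id_max_bytes):
    # A report is sent without the usual random delay when a trigger
    # context ID is set or a non-default filtering ID size is used.
    return (trigger_context_id is not None
            or filtering_id_max_bytes != DEFAULT_FILTERING_ID_MAX_BYTES)
```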

12.4. Should attribution be blocked by rate limits

To check if attribution should be blocked by attribution rate limit given an attribution trigger trigger, an attribution source sourceToAttribute, and a scope rateLimitScope:

  1. Let matchingRateLimitRecords be all attribution rate-limit records record of attribution rate-limit cache where all of the following are true:

  2. If matchingRateLimitRecords’s size is greater than or equal to max attributions per rate-limit window, return blocked.

  3. Return allowed.

To check if attribution should be blocked by rate limits given an attribution trigger trigger, an attribution source sourceToAttribute, and an attribution rate-limit record newRecord:

  1. If the result of running check if attribution should be blocked by attribution rate limit with trigger, sourceToAttribute, and newRecord’s scope is blocked:

    1. Let debugDataType be "trigger-event-attributions-per-source-destination-limit".

    2. If newRecord’s scope is "aggregatable-attribution", set debugDataType to "trigger-aggregate-attributions-per-source-destination-limit".

    3. Return the triggering result ("dropped", (debugDataType, null)).

  2. If the result of running should processing be blocked by reporting-origin limit with newRecord is blocked:

    1. Return the triggering result ("dropped", ("trigger-reporting-origin-limit", null)).

  3. Return null.

Consider performing should processing be blocked by reporting-origin limit from triggering attribution to avoid duplicate invocation from triggering event-level attribution and triggering aggregatable attribution. [Issue #1287]

12.5. Creating aggregatable contributions

To create aggregatable contributions from aggregation keys and aggregatable values given a map aggregationKeys and a map aggregatableValues, run the following steps:

  1. Let contributions be an empty list.

  2. For each id → key of aggregationKeys:

    1. If aggregatableValues[id] does not exist, continue.

    2. Let contribution be a new aggregatable contribution with the items:

      key: key
      value: aggregatableValues[id]'s value
      filtering ID: aggregatableValues[id]'s filtering ID

    3. Append contribution to contributions.

  3. Return contributions.
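The pairing step above can be sketched in Python (plain dicts stand in for the spec's structs; names are illustrative):

```python
def create_contributions(aggregation_keys, aggregatable_values):
    # For every aggregation key that also has an entry in the trigger's
    # aggregatable values, emit one histogram contribution pairing the
    # key with that entry's value and filtering ID.
    contributions = []
    for source_id, key in aggregation_keys.items():
        if source_id not in aggregatable_values:
            continue  # keys without a declared value contribute nothing
        entry = aggregatable_values[source_id]
        contributions.append({
            "key": key,
            "value": entry["value"],
            "filtering_id": entry["filtering_id"],
        })
    return contributions
```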

To create aggregatable contributions given an attribution source source and an attribution trigger trigger, run the following steps:

  1. Let aggregationKeys be the result of cloning source’s aggregation keys.

  2. For each triggerData of trigger’s aggregatable trigger data:

    1. If the result of running match an attribution source against filters and negated filters with source, triggerData’s filters, triggerData’s negated filters, and trigger’s trigger time is false, continue.

    2. For each sourceKey of triggerData’s source keys:

      1. If aggregationKeys[sourceKey] does not exist, continue.

      2. Set aggregationKeys[sourceKey] to aggregationKeys[sourceKey] bitwise-OR triggerData’s key piece.

  3. Let aggregatableValuesConfigurations be trigger’s aggregatable values configurations.

  4. For each aggregatableValuesConfiguration of aggregatableValuesConfigurations:

    1. If the result of running match an attribution source against filters and negated filters with source, aggregatableValuesConfiguration’s filters, aggregatableValuesConfiguration’s negated filters, and trigger’s trigger time is true:

      1. Return the result of running create aggregatable contributions from aggregation keys and aggregatable values with aggregationKeys and aggregatableValuesConfiguration’s values.

  5. Return a new list.
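The full flow, sketched in Python: each matching aggregatable trigger datum ORs its key piece into the named source keys, then the first aggregatable values configuration whose filters match supplies the values. Filter matching is elided here (assumed to succeed for every datum and for the first configuration), and plain dicts stand in for the spec's structs:

```python
def create_aggregatable_contributions(source_keys, trigger_data, values_configs):
    keys = dict(source_keys)  # clone; the source's own keys are not mutated
    for datum in trigger_data:
        for source_key in datum["source_keys"]:
            if source_key in keys:
                # Combine the trigger-side key piece into the source key.
                keys[source_key] |= datum["key_piece"]
    for config in values_configs:
        # First matching configuration wins (filters assumed matched).
        return [
            {"key": keys[k], "value": v}
            for k, v in config["values"].items()
            if k in keys
        ]
    return []  # no matching values configuration: no contributions
```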

12.6. Can source create aggregatable contributions

To check if an attribution source can create aggregatable contributions given an aggregatable attribution report report and an attribution source sourceToAttribute, run the following steps:

  1. Let remainingAggregatableBudget be sourceToAttribute’s remaining aggregatable attribution budget.

  2. Assert: remainingAggregatableBudget is greater than or equal to 0.

  3. If report’s required aggregatable budget is greater than remainingAggregatableBudget, return false.

  4. Return true.

To find matching budget name given an attribution trigger trigger and an attribution source sourceToAttribute:

  1. For each named budget namedBudget of trigger’s named budgets:

    1. If the result of running match an attribution source against filters and negated filters with sourceToAttribute, namedBudget’s filters, namedBudget’s negated filters, and trigger’s trigger time is true:

      1. Return namedBudget’s name.

  2. Return null.

To check if an attribution source can create aggregatable contributions for matched budget name given an aggregatable attribution report report, an attribution source sourceToAttribute, and a string matchedBudgetName, run the following steps:

  1. If sourceToAttribute’s remaining named budgets[matchedBudgetName] does not exist, return true.

  2. If report’s required aggregatable budget is greater than sourceToAttribute’s remaining named budgets[matchedBudgetName], return false.

  3. Return true.
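The two budget gates above can be sketched as follows (integer budgets; names are illustrative):

```python
def can_create(required_budget, remaining_budget):
    # A report is storable only if its total contribution value fits in
    # the source's remaining overall aggregatable budget.
    assert remaining_budget >= 0
    return required_budget <= remaining_budget

def can_create_for_named_budget(required_budget, remaining_named, name):
    # A source without the matched named budget is not constrained by it.
    if name not in remaining_named:
        return True
    return required_budget <= remaining_named[name]
```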

12.7. Obtaining verbose debug data on trigger registration

To obtain verbose debug data body on trigger registration given a trigger debug data type dataType, an attribution trigger trigger, a possibly null attribution source sourceToAttribute, and a possibly null attribution report report:

  1. Let body be a new map.

  2. If dataType is:

    "trigger-event-attributions-per-source-destination-limit"
    "trigger-aggregate-attributions-per-source-destination-limit"

    Set body["limit"] to the user agent’s max attributions per rate-limit window, serialized.

    "trigger-reporting-origin-limit"

    Set body["limit"] to the user agent’s max attribution reporting origins per rate-limit window, serialized.

    "trigger-event-storage-limit"

    Set body["limit"] to max event-level reports per attribution destination, serialized.

    "trigger-aggregate-storage-limit"

    Set body["limit"] to max aggregatable attribution reports per attribution destination, serialized.

    "trigger-aggregate-insufficient-budget"

    Set body["limit"] to allowed aggregatable budget per source, serialized.

    "trigger-aggregate-insufficient-named-budget"
    1. Assert: sourceToAttribute is not null.

    2. Let matchedBudgetName be the result of running find matching budget name with trigger and sourceToAttribute.

    3. Assert: matchedBudgetName is not null and sourceToAttribute’s named budgets[matchedBudgetName] exists.

    4. Set body["name"] to matchedBudgetName.

    5. Set body["limit"] to sourceToAttribute’s named budgets[matchedBudgetName], serialized.

    "trigger-aggregate-excessive-reports"

    Set body["limit"] to max aggregatable reports per source[0], serialized.

    "trigger-event-low-priority"
    "trigger-event-excessive-reports"
    1. Assert: report is not null and is an event-level report.

    2. Return the result of running obtain an event-level report body with report.

  3. Set body["attribution_destination"] to trigger’s attribution destination, serialized.

  4. If trigger’s debug key is not null, set body["trigger_debug_key"] to trigger’s debug key, serialized.

  5. If sourceToAttribute is not null:

    1. Set body["source_event_id"] to sourceToAttribute’s event ID, serialized.

    2. Set body["source_site"] to sourceToAttribute’s source site, serialized.

    3. If sourceToAttribute’s debug key is not null, set body["source_debug_key"] to sourceToAttribute’s debug key, serialized.

  6. Return body.

To obtain verbose debug data on trigger registration given a trigger debug data type dataType, an attribution trigger trigger, a possibly null attribution source sourceToAttribute, and a possibly null attribution report report:

  1. If trigger’s debug reporting enabled is false, return null.

  2. If the result of running check if cookie-based debugging is allowed with trigger’s reporting origin and trigger’s attribution destination is blocked, return null.

  3. If sourceToAttribute is not null and sourceToAttribute’s cookie-based debug allowed is false, return null.

  4. Let data be a new verbose debug data with the items:

    data type: dataType
    body: The result of running obtain verbose debug data body on trigger registration with dataType, trigger, sourceToAttribute, and report.

  5. Return data.

12.8. Triggering event-level attribution

An event-level report a is lower-priority than an event-level report b if any of the following are true:

An event-level-report-replacement result is one of the following:

"add-new-report"

The new report should be added.

"drop-new-report-none-to-replace"

The new report should be dropped because the attributed source has reached its report limit and there is no pending report to consider for replacement.

"drop-new-report-low-priority"

The new report should be dropped because the attributed source has reached its report limit and the new report is lower-priority than all pending reports.

To maybe replace event-level report given an attribution source sourceToAttribute and an event-level report report:

  1. Assert: sourceToAttribute’s number of event-level reports is less than or equal to sourceToAttribute’s max number of event-level reports.

  2. If sourceToAttribute’s number of event-level reports is less than sourceToAttribute’s max number of event-level reports, return "add-new-report".

  3. Let matchingReports be a new list whose elements are all the elements in the event-level report cache whose report time and source ID are equal to report’s, sorted in ascending order using is lower-priority than.

  4. If matchingReports is empty:

    1. Set sourceToAttribute’s event-level attributable value to false.

    2. Return "drop-new-report-none-to-replace".

  5. Assert: sourceToAttribute’s number of event-level reports is greater than or equal to matchingReports’s size.

  6. Let lowestPriorityReport be matchingReports[0].

  7. If report is lower-priority than lowestPriorityReport, return "drop-new-report-low-priority".

  8. Remove lowestPriorityReport from the event-level report cache.

  9. Decrement sourceToAttribute’s number of event-level reports value by 1.

  10. Let rateLimitRecord be the element from attribution rate-limit cache whose entity ID is equal to lowestPriorityReport’s internal ID and scope is equal to "event-attribution".

  11. Assert: rateLimitRecord is not null.

    Note: We are making an implicit assumption that attribution rate-limit window is greater than or equal to sourceToAttribute’s expiry. If this assumption does not hold then rateLimitRecord might be null.

  12. Remove rateLimitRecord from the attribution rate-limit cache.

  13. Return "add-new-report".

This algorithm is not compatible with the behavior proposed for experimental Flexible Event support with differing event-level report windows for a given source.
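The replacement logic can be sketched in Python (plain dicts, with an assumed priority_key ordering standing in for "is lower-priority than"; tie-breaking in favour of the existing report is an assumption, not taken from this section):

```python
def maybe_replace_report(new_report, num_reports, max_reports,
                         matching_reports, priority_key):
    # Below the cap, the new report is simply added.
    if num_reports < max_reports:
        return "add-new-report", matching_reports
    if not matching_reports:
        # Cap reached, and every pending report is in another window.
        return "drop-new-report-none-to-replace", matching_reports
    lowest = min(matching_reports, key=priority_key)
    if priority_key(new_report) <= priority_key(lowest):
        # Ties favour the existing report (an assumption).
        return "drop-new-report-low-priority", matching_reports
    # Evict the lowest-priority pending report in favour of the newcomer.
    return "add-new-report", [r for r in matching_reports if r is not lowest]
```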

To trigger event-level attribution given an attribution trigger trigger and an attribution source sourceToAttribute, run the following steps:

  1. If trigger’s event-level trigger configurations is empty, return the triggering result ("dropped", null).

  2. If sourceToAttribute’s randomized response is not null and is not empty:

    1. Assert: sourceToAttribute’s event-level attributable is false.

    2. Return the triggering result ("dropped", ("trigger-event-noise", null)).

  3. Let matchedConfig be null.

  4. For each event-level trigger configuration config of trigger’s event-level trigger configurations:

    1. If the result of running match an attribution source against filters and negated filters with sourceToAttribute, config’s filters, config’s negated filters, and trigger’s trigger time is true:

      1. Set matchedConfig to config.

      2. Break.

  5. If matchedConfig is null:

    1. Return the triggering result ("dropped", ("trigger-event-no-matching-configurations", null)).

  6. If matchedConfig’s dedup key is not null and sourceToAttribute’s dedup keys contains it:

    1. Return the triggering result ("dropped", ("trigger-event-deduplicated", null)).

  7. Let specEntry be the result of finding a matching trigger spec with sourceToAttribute and matchedConfig’s trigger data.

  8. If specEntry is an error:

    1. Return the triggering result ("dropped", ("trigger-event-no-matching-trigger-data", null)).

  9. Let windowResult be the result of check whether a moment falls within a window with trigger’s trigger time and specEntry’s value's event-level report windows's total window.

  10. If windowResult is falls before:

    1. Return the triggering result ("dropped", ("trigger-event-report-window-not-started", null)).

  11. If windowResult is falls after:

    1. Return the triggering result ("dropped", ("trigger-event-report-window-passed", null)).

  12. Assert: windowResult is falls within.

  13. Let report be the result of running obtain an event-level report with sourceToAttribute, trigger’s trigger time, trigger’s debug key, matchedConfig’s priority, and specEntry.

  14. If sourceToAttribute’s event-level attributable value is false:

    1. Return the triggering result ("dropped", ("trigger-event-excessive-reports", report)).

  15. If the result of running maybe replace event-level report with sourceToAttribute and report is:

    "add-new-report"
    1. Do nothing.

    "drop-new-report-none-to-replace"
    1. Return the triggering result ("dropped", ("trigger-event-excessive-reports", report)).

    "drop-new-report-low-priority"
    1. Return the triggering result ("dropped", ("trigger-event-low-priority", report)).

  16. Let rateLimitRecord be a new attribution rate-limit record with the items:

    scope: "event-attribution"
    source site: sourceToAttribute’s source site
    attribution destination: trigger’s attribution destination
    reporting origin: sourceToAttribute’s reporting origin
    time: sourceToAttribute’s source time
    expiry time: null
    entity ID: report’s internal ID

  17. If the result of running check if attribution should be blocked by rate limits with trigger, sourceToAttribute, and rateLimitRecord is not null, return it.

  18. Let numMatchingReports be the number of entries in the event-level report cache whose attribution destinations contains trigger’s attribution destination.

  19. If numMatchingReports is greater than or equal to the user agent’s max event-level reports per attribution destination:

    1. Return the triggering result ("dropped", ("trigger-event-storage-limit", null)).

  20. Let triggeringStatus be "attributed".

  21. Let debugData be null.

  22. If sourceToAttribute’s randomized response is:

    null
    1. Append report to the event-level report cache.

    2. Append rateLimitRecord to the attribution rate-limit cache.

    not null
    1. Set triggeringStatus to "noised".

    2. Set debugData to ("trigger-event-noise", null).

  23. Increment sourceToAttribute’s number of event-level reports value by 1.

  24. If matchedConfig’s dedup key is not null, append it to sourceToAttribute’s dedup keys.

  25. If triggeringStatus is "attributed" and the result of checking if attribution debugging can be enabled with report’s attribution debug info is true, queue a task to attempt to deliver a debug report with report.

  26. Return the triggering result (triggeringStatus, debugData).

12.9. Triggering aggregatable attribution

To trigger aggregatable attribution given an attribution trigger trigger and an attribution source sourceToAttribute, run the following steps:

  1. If the result of running check if an attribution trigger contains aggregatable data is false, return the triggering result ("dropped", null).

  2. Let windowResult be the result of check whether a moment falls within a window with trigger’s trigger time and sourceToAttribute’s aggregatable report window.

  3. If windowResult is falls after:

    1. Return the triggering result ("dropped", ("trigger-aggregate-report-window-passed", null)).

  4. Assert: windowResult is falls within.

  5. Let matchedDedupKey be null.

  6. For each aggregatable dedup key aggregatableDedupKey of trigger’s aggregatable dedup keys:

    1. If the result of running match an attribution source against filters and negated filters with sourceToAttribute, aggregatableDedupKey’s filters, aggregatableDedupKey’s negated filters, and trigger’s trigger time is true:

      1. Set matchedDedupKey to aggregatableDedupKey’s dedup key.

      2. Break.

  7. If matchedDedupKey is not null and sourceToAttribute’s aggregatable dedup keys contains it:

    1. Return the triggering result ("dropped", ("trigger-aggregate-deduplicated", null)).

  8. Let report be the result of running obtain an aggregatable attribution report with sourceToAttribute and trigger.

  9. If report’s contributions is empty:

    1. Return the triggering result ("dropped", ("trigger-aggregate-no-contributions", null)).

  10. Let numMatchingReports be the number of entries in the aggregatable attribution report cache whose effective attribution destination equals trigger’s attribution destination and whose is null report is false.

  11. If numMatchingReports is greater than or equal to the user agent’s max aggregatable attribution reports per attribution destination:

    1. Return the triggering result ("dropped", ("trigger-aggregate-storage-limit", null)).

  12. Let rateLimitRecord be a new attribution rate-limit record with the items:

    scope: "aggregatable-attribution"
    source site: sourceToAttribute’s source site
    attribution destination: trigger’s attribution destination
    reporting origin: sourceToAttribute’s reporting origin
    time: sourceToAttribute’s source time
    expiry time: null
    entity ID: null

  13. If the result of running check if attribution should be blocked by rate limits with trigger, sourceToAttribute, and rateLimitRecord is not null, return it.

  14. If sourceToAttribute’s number of aggregatable attribution reports value is equal to max aggregatable reports per source[0], then:

    1. Return the triggering result ("dropped", ("trigger-aggregate-excessive-reports", null)).

  15. If the result of running check if an attribution source can create aggregatable contributions with report and sourceToAttribute is false:

    1. Return the triggering result ("dropped", ("trigger-aggregate-insufficient-budget", null)).

  16. Let matchedBudgetName be the result of running find matching budget name with trigger and sourceToAttribute.

  17. If matchedBudgetName is not null and the result of running check if an attribution source can create aggregatable contributions for matched budget name with report, sourceToAttribute, and matchedBudgetName is false:

    1. Return the triggering result ("dropped", ("trigger-aggregate-insufficient-named-budget", null)).

  18. Append report to the aggregatable attribution report cache.

  19. Increment sourceToAttribute’s number of aggregatable attribution reports value by 1.

  20. Decrement sourceToAttribute’s remaining aggregatable attribution budget value by report’s required aggregatable budget.

  21. If matchedBudgetName is not null and sourceToAttribute’s remaining named budgets[matchedBudgetName] exists:

    1. Decrement sourceToAttribute’s remaining named budgets[matchedBudgetName] value by report’s required aggregatable budget.

  22. If matchedDedupKey is not null, append it to sourceToAttribute’s aggregatable dedup keys.

  23. Append rateLimitRecord to the attribution rate-limit cache.

  24. Run generate null attribution reports with trigger and report.

  25. If the result of checking if attribution debugging can be enabled with report’s attribution debug info is true, queue a task to attempt to deliver a debug report with report.

  26. Return the triggering result ("attributed", null).

12.10. Triggering attribution

To obtain and deliver a verbose debug report on trigger registration given a set of trigger debug data dataSet, an attribution trigger trigger, and a possibly null attribution source sourceToAttribute:

  1. Let debugDataList be a new list.

  2. For each data of dataSet:

    1. Let debugData be the result of running obtain verbose debug data on trigger registration with data’s data type, trigger, sourceToAttribute, and data’s report.

    2. If debugData is not null, append debugData to debugDataList.

  3. If debugDataList is empty, return.

  4. Run obtain and deliver a verbose debug report with debugDataList, trigger’s reporting origin, and trigger’s fenced.

To obtain and deliver an aggregatable debug report on trigger registration given a set of trigger debug data dataSet, an attribution trigger trigger, and a possibly null attribution source sourceToAttribute:

  1. If trigger’s fenced is true, return.

  2. Let config be trigger’s aggregatable debug reporting config.

  3. Let debugDataMap be config’s debug data.

  4. If debugDataMap is empty, return.

  5. Let sourceKeyPiece be 0.

  6. If sourceToAttribute is not null, then set sourceKeyPiece to sourceToAttribute’s aggregatable debug reporting config's key piece.

  7. Let contributions be a new list.

  8. Let contextKeyPiece be sourceKeyPiece bitwise-OR config’s key piece.

  9. For each data of dataSet:

    1. Let type be data’s data type.

    2. If debugDataMap[type] exists:

      1. Let keyPiece be contextKeyPiece bitwise-OR debugDataMap[type]'s key.

      2. Let contribution be a new aggregatable contribution with the items:

        key: keyPiece
        value: debugDataMap[type]'s value
        filtering ID: default filtering ID value

      3. Append contribution to contributions.

  10. Run obtain and deliver an aggregatable debug report on registration with contributions, trigger’s attribution destination, trigger’s reporting origin, sourceToAttribute, config’s aggregation coordinator, and trigger’s trigger time.

To obtain and deliver debug reports on trigger registration given a set of trigger debug data dataSet, an attribution trigger trigger, and a possibly null attribution source sourceToAttribute:

  1. Run obtain and deliver a verbose debug report on trigger registration with dataSet, trigger, and sourceToAttribute.

  2. Run obtain and deliver an aggregatable debug report on trigger registration with dataSet, trigger, and sourceToAttribute.

To check if an attribution source and attribution trigger have matching attribution scopes given an attribution source source and an attribution trigger trigger:

  1. If trigger’s attribution scopes is empty, return true.

  2. Return whether the intersection of source’s attribution scopes's values and trigger’s attribution scopes is not empty.

An attribution source a is higher-priority than an attribution source b if the following steps return true:

  1. If a’s priority is greater than b’s priority, return true.

  2. If a’s priority is less than b’s priority, return false.

  3. If a’s source time is greater than b’s source time, return true.

  4. Return false.
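The ordering above can be sketched as a comparator (plain dicts with illustrative field names):

```python
def is_higher_priority(a, b):
    # Higher priority wins; on a tie, the more recently registered
    # source (greater source time) wins.
    if a["priority"] != b["priority"]:
        return a["priority"] > b["priority"]
    return a["source_time"] > b["source_time"]
```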

To find matching sources given an attribution trigger trigger:

  1. Let matchingSources be a new list.

  2. For each source of the attribution source cache:

    1. If source’s attribution destinations does not contain trigger’s attribution destination, continue.

    2. If source’s reporting origin and trigger’s reporting origin are not same origin, continue.

    3. If source’s expiry time is less than or equal to trigger’s trigger time, continue.

    4. Append source to matchingSources.

  3. Let sourceToAttribute be null.

  4. For each source of matchingSources:

    1. If the result of checking if an attribution source and attribution trigger have matching attribution scopes with source and trigger is false, continue.

    2. If sourceToAttribute is null or source is higher-priority than sourceToAttribute, set sourceToAttribute to source.

  5. If sourceToAttribute is null, return the tuple (null, an empty list).

  6. Remove sourceToAttribute from matchingSources.

  7. Return the tuple (sourceToAttribute, matchingSources).

Note: We deliberately return all matching sources for deletion even if they don’t have matching attribution scopes with the attribution trigger to avoid creating multiple attribution reports from a single cross-site user interaction.
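A Python sketch of the selection (plain dicts and lists stand in for the spec's structs and caches; the attribution scopes struct is reduced to a flat list of values):

```python
def find_matching_sources(trigger, source_cache):
    def higher(a, b):
        # Higher priority wins; ties go to the later-registered source.
        if a["priority"] != b["priority"]:
            return a["priority"] > b["priority"]
        return a["source_time"] > b["source_time"]

    # Sources with a matching destination and reporting origin that
    # have not yet expired at the trigger time.
    matching = [
        s for s in source_cache
        if trigger["destination"] in s["destinations"]
        and s["reporting_origin"] == trigger["reporting_origin"]
        and s["expiry_time"] > trigger["trigger_time"]
    ]
    chosen = None
    for s in matching:
        # Empty trigger scopes match everything; otherwise require overlap.
        if trigger["scopes"] and not (set(s["scopes"]) & set(trigger["scopes"])):
            continue
        if chosen is None or higher(s, chosen):
            chosen = s
    if chosen is None:
        return None, []
    # The remaining matching sources are returned for deletion.
    return chosen, [s for s in matching if s is not chosen]
```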

To check if an attribution trigger contains aggregatable data given an attribution trigger trigger, run the following steps:

  1. If trigger’s aggregatable trigger data is not empty, return true.

  2. If any of trigger’s aggregatable values configurations's values is not empty, return true.

  3. Return false.

To trigger attribution given an attribution trigger trigger, run the following steps:

  1. Let hasAggregatableData be the result of checking if an attribution trigger contains aggregatable data with trigger.

  2. If trigger’s event-level trigger configurations is empty and hasAggregatableData is false, return.

  3. Let (sourceToAttribute, matchingSources) be the result of running find matching sources with trigger.

  4. If sourceToAttribute is null:

    1. Run obtain and deliver debug reports on trigger registration with « ("trigger-no-matching-source", null) », trigger, and sourceToAttribute set to null.

    2. If hasAggregatableData is true, then run generate null attribution reports with trigger and report set to null.

    3. Return.

  5. If the result of running match an attribution source against filters and negated filters with sourceToAttribute, trigger’s filters, trigger’s negated filters, and trigger’s trigger time is false:

    1. Run obtain and deliver debug reports on trigger registration with « ("trigger-no-matching-filter-data", null) », trigger, and sourceToAttribute.

    2. If hasAggregatableData is true, then run generate null attribution reports with trigger and report set to null.

    3. Return.

  6. For each item of matchingSources:

    1. Remove item from the attribution source cache.

  7. Let eventLevelResult be the result of running trigger event-level attribution with trigger and sourceToAttribute.

  8. Let aggregatableResult be the result of running trigger aggregatable attribution with trigger and sourceToAttribute.

  9. Let debugDataSet be a new set.

  10. If eventLevelResult’s debug data is not null, then append eventLevelResult’s debug data to debugDataSet.

  11. If aggregatableResult’s debug data is not null, then append aggregatableResult’s debug data to debugDataSet.

  12. Run obtain and deliver debug reports on trigger registration with debugDataSet, trigger, and sourceToAttribute.

  13. If hasAggregatableData and aggregatableResult’s status is "dropped", run generate null attribution reports with trigger and report set to null.

  14. Remove each entry from the attribution rate-limit cache for which the result of running can attribution rate-limit record be removed with entry and trigger’s trigger time is true.

Consider replacing debugDataSet with a list. [Issue #1287]
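The overall control flow of the trigger attribution algorithm can be sketched as follows. This is a non-normative sketch: each callback stands in for the correspondingly named sub-algorithm, the field names are illustrative, and debug reporting, source-cache removal, and rate-limit maintenance are omitted.

```python
# Non-normative control-flow sketch of "trigger attribution".
# Callbacks stand in for the named sub-algorithms; bookkeeping is omitted.

def has_aggregatable_data(trigger):
    # "check if an attribution trigger contains aggregatable data"
    return bool(trigger["aggregatable_trigger_data"]) or any(
        cfg["values"] for cfg in trigger["aggregatable_values_configurations"]
    )

def trigger_attribution(trigger, find_matching_sources, matches_filters,
                        attribute_event_level, attribute_aggregatable):
    has_agg = has_aggregatable_data(trigger)
    if not trigger["event_level_trigger_configurations"] and not has_agg:
        return None  # step 2: nothing to do
    source, _matching = find_matching_sources(trigger)
    if source is None or not matches_filters(source, trigger):
        # steps 4-5: at most (randomized) null aggregatable reports are sent
        return "null-reports" if has_agg else None
    # steps 7-8: attempt both kinds of attribution
    return (attribute_event_level(trigger, source),
            attribute_aggregatable(trigger, source))
```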

12.11. Establishing report delivery time

To check whether a moment falls within a window given a moment moment and a report window window:

  1. If moment is less than window’s start, return falls before.

  2. If moment is greater than or equal to window’s end, return falls after.

  3. Return falls within.

To obtain an event-level report delivery time given a report window list windows and a moment triggerTime:

  1. If automation local testing mode is true, return triggerTime.

  2. For each window of windows:

    1. If the result of check whether a moment falls within a window with triggerTime and window is falls within, return window’s end.

  3. Assert: not reached.
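The two window algorithms above can be sketched as follows. This is illustrative only: windows are modeled as (start, end) pairs of seconds, with the end exclusive, and the function names are assumptions.

```python
# Sketch of "check whether a moment falls within a window" and
# "obtain an event-level report delivery time". Windows are (start, end)
# pairs in seconds; end is exclusive, matching the steps above.

FALLS_BEFORE, FALLS_WITHIN, FALLS_AFTER = "falls before", "falls within", "falls after"

def moment_falls_within(moment, window):
    start, end = window
    if moment < start:
        return FALLS_BEFORE
    if moment >= end:
        return FALLS_AFTER
    return FALLS_WITHIN

def event_level_report_delivery_time(windows, trigger_time, local_testing=False):
    if local_testing:  # automation local testing mode skips all delay
        return trigger_time
    for window in windows:
        if moment_falls_within(trigger_time, window) == FALLS_WITHIN:
            return window[1]  # the matching window's end
    raise AssertionError("trigger time outside every report window")
```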

To obtain an aggregatable attribution report delivery time given an attribution trigger trigger, perform the following steps. They return a moment.

  1. Let triggerTime be trigger’s trigger time.

  2. If the result of running check if an aggregatable attribution report should be unconditionally sent with trigger is true, return triggerTime.

  3. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  4. Return triggerTime + r * randomized aggregatable attribution report delay.
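The randomized delivery time above can be sketched as follows. The delay constant is a hypothetical value, not the normative randomized aggregatable attribution report delay.

```python
import random

# Hypothetical delay constant; the normative value is vendor-defined.
RANDOMIZED_AGGREGATABLE_REPORT_DELAY = 600  # seconds

def aggregatable_report_delivery_time(trigger_time, unconditionally_sent=False):
    # Reports that must be sent unconditionally get no randomized delay.
    if unconditionally_sent:
        return trigger_time
    r = random.random()  # uniform in [0, 1)
    return trigger_time + r * RANDOMIZED_AGGREGATABLE_REPORT_DELAY
```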

12.12. Obtaining an event-level report

To obtain an event-level report given an attribution source source, a moment triggerTime, a possibly null non-negative 64-bit integer triggerDebugKey, a 64-bit integer priority priority, and a trigger spec map entry specEntry:

  1. Let reportTime be the result of running obtain an event-level report delivery time with specEntry’s value's event-level report windows and triggerTime.

  2. Let report be a new event-level report struct whose items are:

    event ID

    source’s event ID.

    trigger data

    specEntry’s key

    randomized trigger rate

    source’s randomized trigger rate.

    reporting origin

    source’s reporting origin.

    attribution destinations

    source’s attribution destinations.

    report time

    reportTime

    trigger priority

    priority.

    trigger time

    triggerTime.

    source ID

    source’s internal ID.

    external ID

    The result of generating a random UUID.

    internal ID

    The result of getting the next internal ID.

    attribution debug info

    (source’s debug key, triggerDebugKey).

  3. Return report.

12.13. Obtaining an aggregatable report’s required budget

An aggregatable report report’s required aggregatable budget is the total value of report’s contributions.

12.14. Obtaining an aggregatable attribution report

To obtain an aggregatable attribution report given an attribution source source and an attribution trigger trigger:

  1. Let reportTime be the result of running obtain an aggregatable attribution report delivery time with trigger.

  2. Let report be a new aggregatable attribution report struct whose items are:

    reporting origin

    source’s reporting origin.

    effective attribution destination

    trigger’s attribution destination.

    source time

    source’s source time.

    report time

    reportTime.

    external ID

    The result of generating a random UUID.

    internal ID

    The result of getting the next internal ID.

    attribution debug info

    (source’s debug key, trigger’s debug key).

    contributions

    The result of running create aggregatable contributions with source and trigger.

    aggregation coordinator

    trigger’s aggregation coordinator.

    source registration time configuration

    trigger’s aggregatable source registration time configuration.

    trigger context ID

    trigger’s trigger context ID

    filtering ID max bytes

    trigger’s aggregatable filtering ID max bytes

    source ID

    source’s internal ID.

  3. Return report.

12.15. Generating randomized null attribution reports

To obtain a null attribution report given an attribution trigger trigger and a moment sourceTime:

  1. Let reportTime be the result of running obtain an aggregatable attribution report delivery time with trigger.

  2. Let report be a new aggregatable attribution report struct whose items are:

    reporting origin

    trigger’s reporting origin

    effective attribution destination

    trigger’s attribution destination

    source time

    sourceTime

    report time

    reportTime

    external ID

    The result of generating a random UUID

    internal ID

    The result of getting the next internal ID.

    attribution debug info

    (null, trigger’s debug key)

    contributions

    «»

    aggregation coordinator

    trigger’s aggregation coordinator

    source registration time configuration

    trigger’s aggregatable source registration time configuration

    is null report

    true

    trigger context ID

    trigger’s trigger context ID

    filtering ID max bytes

    trigger’s aggregatable filtering ID max bytes

    source ID

    null

  3. Return report.

To obtain rounded source time given a moment sourceTime, return sourceTime in seconds since the UNIX epoch, rounded down to a multiple of a whole day (86400 seconds).
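The rounding above is plain integer floor division, as in this sketch:

```python
SECONDS_PER_DAY = 86400

def rounded_source_time(source_time_seconds):
    """Round a UNIX timestamp down to a whole-day multiple, per
    "obtain rounded source time" above."""
    return (source_time_seconds // SECONDS_PER_DAY) * SECONDS_PER_DAY
```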

To determine if a randomized null attribution report is generated given a double randomPickRate:

  1. Assert: randomPickRate is between 0 and 1 (both inclusive).

  2. Let r be a random double between 0 (inclusive) and 1 (exclusive) with uniform probability.

  3. If r is less than randomPickRate, return true.

  4. Otherwise, return false.
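The coin flip above is a single uniform draw, as in this sketch:

```python
import random

def null_report_generated(random_pick_rate):
    """Per "determine if a randomized null attribution report is generated"."""
    assert 0.0 <= random_pick_rate <= 1.0
    # random.random() is uniform in [0, 1), so a rate of 1 always fires
    # and a rate of 0 never does.
    return random.random() < random_pick_rate
```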

To generate null attribution reports given an attribution trigger trigger and a possibly null aggregatable attribution report report:

  1. Let nullReports be a new list.

  2. If trigger’s aggregatable source registration time configuration is "exclude":

    1. Let randomizedNullReportRate be randomized null attribution report rate excluding source registration time.

    2. If the result of running check if an aggregatable attribution report should be unconditionally sent with trigger is true, set randomizedNullReportRate to 1.

    3. If report is null and the result of determining if a randomized null attribution report is generated with randomizedNullReportRate is true:

      1. Let nullReport be the result of obtaining a null attribution report with trigger and trigger’s trigger time.

      2. Append nullReport to the aggregatable attribution report cache.

      3. Append nullReport to nullReports.

  3. Otherwise:

    1. Assert: trigger’s trigger context ID is null.

    2. Let maxSourceExpiry be valid source expiry range[1].

    3. Round maxSourceExpiry away from zero to the nearest day (86400 seconds).

    4. Let roundedAttributedSourceTime be null.

    5. If report is not null, set roundedAttributedSourceTime to the result of obtaining rounded source time with report’s source time.

    6. For each integer day of the range 0 to the number of days in maxSourceExpiry, inclusive:

      1. Let fakeSourceTime be trigger’s trigger time - day days.

      2. If roundedAttributedSourceTime is not null and equals the result of obtaining rounded source time with fakeSourceTime:

        1. Continue.

      3. If the result of determining if a randomized null attribution report is generated with randomized null attribution report rate including source registration time is true:

        1. Let nullReport be the result of obtaining a null attribution report with trigger and fakeSourceTime.

        2. Append nullReport to the aggregatable attribution report cache.

        3. Append nullReport to nullReports.

  4. Return nullReports.
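The "include source registration time" branch (step 3 above) can be sketched as follows. The helper names and the plain-integer representation of moments are illustrative; only the day loop and the skip for the attributed day are modeled.

```python
import random

SECONDS_PER_DAY = 86400

def _rounded(t):
    return (t // SECONDS_PER_DAY) * SECONDS_PER_DAY

def fake_source_times(trigger_time, max_expiry_days, attributed_source_time, pick_rate):
    """Return the fake source times for which a null report is generated when
    source registration time is included (step 3 above, sketch only)."""
    rounded_attributed = (None if attributed_source_time is None
                          else _rounded(attributed_source_time))
    times = []
    for day in range(max_expiry_days + 1):
        fake = trigger_time - day * SECONDS_PER_DAY
        if rounded_attributed is not None and rounded_attributed == _rounded(fake):
            continue  # the real report already covers this day
        if random.random() < pick_rate:
            times.append(fake)
    return times
```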

To shuffle a list list, reorder list’s elements such that each possible permutation has equal probability of appearance.

12.16. Deferring trigger attribution

To maybe defer and then complete trigger attribution given an attribution trigger trigger, run the following steps in parallel:

  1. Let navigation be the navigation that landed on the document from which trigger’s registration was initiated.

  2. If navigation is null, return.

  3. Let sources be all source registrations originating from background attributionsrc requests initiated by navigation.

  4. If sources is empty, return.

  5. Wait until all sources are processed.

  6. Queue a task on the networking task source to trigger attribution with trigger.

Specify this in terms of Navigation

13. Report delivery

The user agent MUST periodically run queue reports for delivery on the event-level report cache and aggregatable attribution report cache.

To queue reports for delivery given a set of attribution reports cache, run the following steps:

  1. Let reportsToSend be a new list.

  2. For each report of cache:

    1. If report’s report time is greater than the current wall time, continue.

    2. Remove report from cache.

      Note: In order to support sending, waiting, and retries across various forms of interruption, including shutdown, the user agent may need to persist reports that are in the process of being sent in some other storage.

    3. Append report to reportsToSend.

  3. Shuffle reportsToSend.

    Note: Shuffling ensures event-level reports for the same source with the same report time are never sent in the order they were created. This results in less information gained from a single attribution source.

  4. For each report of reportsToSend, run the following steps in parallel:

    1. Wait an implementation-defined random non-negative duration.

      Note: On startup, it is possible the user agent will need to send many reports whose report times passed while the browser was closed. Adding random delay prevents event IDs from different source origins from being joined based on the time they were received.

    2. Optionally, wait a further implementation-defined duration.

      Note: This is intended to allow user agents to optimize device resource usage.

    3. Run attempt to deliver a report with report.
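The queueing steps above can be sketched as follows. `deliver` stands in for "attempt to deliver a report", the per-report random send delays are omitted, and the field names are illustrative.

```python
import random
import time

def queue_reports_for_delivery(cache, deliver, now=None):
    """Sketch of "queue reports for delivery": collect due reports, remove
    them from the cache, shuffle to decorrelate creation order, and send."""
    now = time.time() if now is None else now
    reports_to_send = [r for r in cache if r["report_time"] <= now]
    for r in reports_to_send:
        cache.remove(r)
    random.shuffle(reports_to_send)  # never send in creation order
    for r in reports_to_send:
        deliver(r)
```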

13.1. Encode an unsigned k-byte integer

To encode an unsigned k-byte integer given an integer integerToEncode and an integer byteLength, return the representation of integerToEncode as a big-endian byte sequence of length byteLength, left padding with zeroes as necessary.
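This encoding maps directly onto Python's `int.to_bytes`, as in this sketch:

```python
def encode_unsigned(value, byte_length):
    """Big-endian byte sequence of the given length, left-padded with
    zeroes, per "encode an unsigned k-byte integer" above."""
    return value.to_bytes(byte_length, byteorder="big")
```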

13.2. Obtaining an aggregatable report’s debug mode

An aggregatable report report’s debug mode is the result of running the following steps:

  1. If report is an:

    aggregatable attribution report
    1. If the result of checking if attribution debugging can be enabled with report’s attribution debug info is true, return enabled.

    2. Return disabled.

    aggregatable debug report

    Return disabled.

13.3. Obtaining an aggregatable report’s shared info

An aggregatable report report’s shared info is the result of running the following steps:

  1. Let reportingOrigin be report’s reporting origin.

  2. Let api be null.

  3. If report is an:

    aggregatable attribution report

    Set api to "attribution-reporting".

    aggregatable debug report

    Set api to "attribution-reporting-debug".

  4. Let sharedInfo be a map of the following key/value pairs:

    "api"

    api

    "attribution_destination"

    report’s effective attribution destination, serialized

    "report_id"

    report’s external ID

    Note: The inclusion of "report_id" in the shared info is intended to allow the report recipient to perform deduplication and prevent double counting, in the event that the user agent retries reports on failure.

    "reporting_origin"

    reportingOrigin, serialized

    "scheduled_report_time"

    report’s report time in seconds since the UNIX epoch, serialized

    "version"

    "1.0"

    Note: The "version" value needs to be bumped if the aggregation service upgrades.

  5. If report’s debug mode is enabled, set sharedInfo["debug_mode"] to "enabled".

  6. If report is an aggregatable attribution report and report’s source registration time configuration is "include": set sharedInfo["source_registration_time"] to the result of obtaining rounded source time with report’s source time, serialized.

  7. Return the string resulting from executing serialize an Infra value to a JSON string on sharedInfo.
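The shared-info map above can be sketched as follows. The `report` dict, its field names, and the pre-serialized origin/destination strings are assumptions made for illustration; the JSON key names match the steps above.

```python
import json

def shared_info(report, api, version="1.0"):
    """Sketch of "obtaining an aggregatable report's shared info". The
    destination and reporting origin are assumed already serialized."""
    info = {
        "api": api,  # "attribution-reporting" or "attribution-reporting-debug"
        "attribution_destination": report["destination"],
        "report_id": report["external_id"],
        "reporting_origin": report["reporting_origin"],
        "scheduled_report_time": str(report["report_time"]),
        "version": version,
    }
    if report.get("debug_mode_enabled"):
        info["debug_mode"] = "enabled"
    return json.dumps(info)
```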

13.4. Obtaining an aggregatable report’s aggregation service payloads

To obtain the public key for encryption given an aggregation coordinator aggregationCoordinator:

  1. Let url be a new URL record.

  2. Set url’s scheme to aggregationCoordinator’s scheme.

  3. Set url’s host to aggregationCoordinator’s host.

  4. Set url’s port to aggregationCoordinator’s port.

  5. Set url’s path to «".well-known", "aggregation-service", "v1", "public-keys"».

  6. Return a user-agent-determined public key from url or an error in the event that the user agent failed to obtain the public key from url. This step may be asynchronous.

Specify this in terms of fetch.

Note: The user agent might enforce weekly key rotation. If there are multiple keys, the user agent might independently pick a key uniformly at random for every encryption operation. The key should be uniquely identifiable.
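The URL construction above (scheme, host, and port copied from the coordinator origin, with the fixed well-known path) can be sketched as follows; no fetching is modeled.

```python
from urllib.parse import urlsplit, urlunsplit

def public_key_url(aggregation_coordinator_origin):
    """Build the well-known public-key URL from an aggregation coordinator
    origin string, mirroring "obtain the public key for encryption"."""
    parts = urlsplit(aggregation_coordinator_origin)
    path = "/.well-known/aggregation-service/v1/public-keys"
    # netloc carries both host and port from the origin.
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))
```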

An aggregatable report report’s plaintext payload is the result of running the following steps:

  1. Let payloadData be a new list.

  2. Let maxContributions be null.

  3. If report is an:

    aggregatable attribution report

    Set maxContributions to max aggregation keys per source registration.

    aggregatable debug report

    Set maxContributions to max contributions per aggregatable debug report.

  4. Let contributions be report’s contributions.

  5. Assert: contributions’s size is less than or equal to maxContributions.

  6. While contributions’s size is less than maxContributions:

    1. Let nullContribution be a new aggregatable contribution with the items:

      key

      0

      value

      0

      filtering ID

      default filtering ID value

    2. Append nullContribution to contributions.

  7. For each contribution of contributions:

    1. Let contributionData be a map of the following key/value pairs:

      "bucket"

      The result of encoding an unsigned k-byte integer given contribution’s key and 16.

      "value"

      The result of encoding an unsigned k-byte integer given contribution’s value and 4.

      "id"

      The result of encoding an unsigned k-byte integer given contribution’s filtering ID and report’s filtering ID max bytes.

    2. Append contributionData to payloadData.

  8. Let payload be a map of the following key/value pairs:

    "data"

    payloadData

    "operation"

    "histogram"

  9. Return the byte sequence resulting from CBOR encoding payload.
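The padding and per-contribution encoding above can be sketched as follows. CBOR encoding itself is omitted; the returned list is what would be placed under the `"data"` key, and `max_contributions` stands in for the applicable vendor-defined limit.

```python
def plaintext_payload_data(contributions, max_contributions, filtering_id_max_bytes=1):
    """Sketch of the plaintext-payload steps: pad with null contributions
    so every payload has the same size, then encode each field big-endian."""
    assert len(contributions) <= max_contributions
    padded = list(contributions)
    while len(padded) < max_contributions:
        # Null contributions hide the true contribution count.
        padded.append({"key": 0, "value": 0, "filtering_id": 0})
    return [
        {
            "bucket": c["key"].to_bytes(16, "big"),
            "value": c["value"].to_bytes(4, "big"),
            "id": c["filtering_id"].to_bytes(filtering_id_max_bytes, "big"),
        }
        for c in padded
    ]
```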

To obtain the encrypted payload given an aggregatable report report and a public key pkR, run the following steps:

  1. Let plaintext be report’s plaintext payload.

  2. Let encodedSharedInfo be report’s shared info, encoded.

  3. Let info be the concatenation of «"aggregation_service", encodedSharedInfo».

  4. Set up HPKE sender’s context with pkR and info.

  5. Return the byte sequence or an error resulting from encrypting plaintext with the sender’s context.

To obtain the aggregation service payloads given an aggregatable report report, run the following steps:

  1. Let pkR be the result of running obtain the public key for encryption with report’s aggregation coordinator.

  2. If pkR is an error, return pkR.

  3. Let encryptedPayload be the result of running obtain the encrypted payload with report and pkR.

  4. If encryptedPayload is an error, return encryptedPayload.

  5. Let aggregationServicePayloads be a new list.

  6. Let aggregationServicePayload be a map of the following key/value pairs:

    "payload"

    encryptedPayload, base64 encoded

    "key_id"

    A string identifying pkR

  7. If report’s debug mode is enabled, set aggregationServicePayload["debug_cleartext_payload"] to report’s plaintext payload, base64 encoded.

  8. Append aggregationServicePayload to aggregationServicePayloads.

  9. Return aggregationServicePayloads.

13.5. Serialize attribution report body

To obtain an event-level report body given an attribution report report, run the following steps:

  1. Let data be a map of the following key/value pairs:

    "attribution_destination"

    report’s attribution destinations, serialized.

    "randomized_trigger_rate"

    report’s randomized trigger rate, rounded to 7 digits after the decimal point

    "source_type"

    report’s source type

    "source_event_id"

    report’s event ID, serialized

    "trigger_data"

    report’s trigger data, serialized

    "report_id"

    report’s external ID

    Note: The inclusion of "report_id" in the report body is intended to allow the report recipient to perform deduplication and prevent double counting, in the event that the user agent retries reports on failure.

    "scheduled_report_time"

    report’s report time in seconds since the UNIX epoch, serialized

  2. Run serialize an attribution debug info with data and report’s attribution debug info.

  3. Return data.

To serialize an event-level report report, run the following steps:

  1. Let data be the result of running obtain an event-level report body with report.

  2. Return the byte sequence resulting from executing serialize an Infra value to JSON bytes on data.

To obtain an aggregatable report body given an aggregatable report report, run the following steps:

  1. Assert: report’s effective attribution destination is not an opaque origin.

  2. Let aggregationServicePayloads be the result of running obtain the aggregation service payloads with report.

  3. If aggregationServicePayloads is an error, return aggregationServicePayloads.

  4. Let data be a map of the following key/value pairs:

    "shared_info"

    report’s shared info

    "aggregation_service_payloads"

    aggregationServicePayloads

    "aggregation_coordinator_origin"

    report’s aggregation coordinator, serialized

  5. Return data.

To serialize an aggregatable attribution report report, run the following steps:

  1. Let data be the result of running obtain an aggregatable report body with report.

  2. Run serialize an attribution debug info with data and report’s attribution debug info.

  3. If report’s trigger context ID is not null, set data["trigger_context_id"] to report’s trigger context ID.

  4. Return the byte sequence resulting from executing serialize an Infra value to JSON bytes on data.

To serialize an aggregatable debug report report, run the following steps:

  1. Let data be the result of running obtain an aggregatable report body with report.

  2. Return the byte sequence resulting from executing serialize an Infra value to JSON bytes on data.

To serialize an attribution report report, run the following steps:

  1. Assert: report is an event-level report or an aggregatable attribution report.

  2. If report is an:

    event-level report

    Return the result of running serialize an event-level report with report.

    aggregatable attribution report

    Return the result of running serialize an aggregatable attribution report with report.

13.6. Serialize verbose debug report body

To serialize a verbose debug report report, run the following steps:

  1. Let collection be a new list.

  2. For each debugData of report’s data:

    1. Let data be a map of the following key/value pairs:

      "type"

      debugData’s data type

      "body"

      debugData’s body

    2. Append data to collection.

  3. Return the byte sequence resulting from executing serialize an Infra value to JSON bytes on collection.

13.7. Get report request URL

To generate a report URL given a suitable origin reportingOrigin and a list of strings path:

  1. Let reportUrl be a new URL record.

  2. Set reportUrl’s scheme to reportingOrigin’s scheme.

  3. Set reportUrl’s host to reportingOrigin’s host.

  4. Set reportUrl’s port to reportingOrigin’s port.

  5. Let fullPath be «".well-known", "attribution-reporting"».

  6. Append path to fullPath.

  7. Set reportUrl’s path to fullPath.

  8. Return reportUrl.

To generate an attribution report URL given an attribution report report and an optional boolean isDebugReport (default false):

  1. Assert: report is an event-level report or an aggregatable attribution report.

  2. Let path be a new list.

  3. If isDebugReport is true, append "debug" to path.

  4. If report is an:

    event-level report

    Append "report-event-attribution" to path.

    aggregatable attribution report

    Append "report-aggregate-attribution" to path.

  5. Return the result of running generate a report URL with report’s reporting origin and path.
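The two URL-construction algorithms above can be sketched as follows, treating the reporting origin as a plain origin string for illustration:

```python
def generate_report_url(reporting_origin, path_segments):
    """Sketch of "generate a report URL": the well-known prefix followed
    by the given path segments."""
    full_path = [".well-known", "attribution-reporting"] + list(path_segments)
    return reporting_origin + "/" + "/".join(full_path)

def attribution_report_url(reporting_origin, report_kind, is_debug_report=False):
    """Sketch of "generate an attribution report URL"; report_kind is
    "event-level" or "aggregatable"."""
    path = []
    if is_debug_report:
        path.append("debug")
    path.append("report-event-attribution" if report_kind == "event-level"
                else "report-aggregate-attribution")
    return generate_report_url(reporting_origin, path)
```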

To generate a verbose debug report URL given a verbose debug report report:

  1. Let path be «"debug", "verbose"».

  2. Return the result of running generate a report URL with report’s reporting origin and path.

To generate an aggregatable debug report URL given an aggregatable debug report report:

  1. Let path be «"debug", "report-aggregate-debug"».

  2. Return the result of running generate a report URL with report’s reporting origin and path.

13.8. Creating a report request

To create a report request given a URL url and a byte sequence body:

  1. Let headers be a new header list containing a header named "Content-Type" whose value is "application/json".

  2. Let request be a new request with the following properties:

    method

    "POST"

    URL

    url

    header list

    headers

    body

    A body whose source is body.

    referrer

    "no-referrer"

    client

    null

    origin

    url’s origin

    window

    "no-window"

    service-workers mode

    "none"

    initiator

    ""

    mode

    "same-origin"

    unsafe-request flag

    set

    credentials mode

    "omit"

    cache mode

    "no-store"

  3. Return request.

13.9. Issuing a report request

This algorithm constructs a request and attempts to deliver it to a suitable origin.

To attempt to deliver a report given an attribution report report, run the following steps:

  1. Assert: Neither the event-level report cache nor the aggregatable attribution report cache contains report.

  2. The user agent MAY ignore the report; if so, return.

  3. Let url be the result of executing generate an attribution report URL on report.

  4. Let data be the result of executing serialize an attribution report on report.

  5. If data is an error, return.

  6. Let request be the result of executing create a report request on url and data.

  7. Queue a task to fetch request.

This fetch should use a network partition key for an opaque origin. [Issue #220]

A user agent MAY retry this algorithm in the event that there was an error. To prevent the report recipient from learning additional information about whether a user is online, retries might be limited in number and subject to random delays.

13.10. Issuing a debug report request

To attempt to deliver a debug report given an attribution report report:

  1. The user agent MAY ignore the report; if so, return.

  2. Let url be the result of executing generate an attribution report URL on report with isDebugReport set to true.

  3. Let data be the result of executing serialize an attribution report on report.

  4. If data is an error, return.

  5. Let request be the result of executing create a report request on url and data.

  6. Fetch request.

13.11. Issuing a verbose debug request

To attempt to deliver a verbose debug report given a verbose debug report report:

  1. The user agent MAY ignore the report; if so, return.

  2. Let url be the result of executing generate a verbose debug report URL on report.

  3. Let data be the result of executing serialize a verbose debug report on report.

  4. Let request be the result of executing create a report request on url and data.

  5. Fetch request.

13.12. Issuing an aggregatable debug request

To attempt to deliver an aggregatable debug report given an aggregatable debug report report:

  1. The user agent MAY ignore the report; if so, return.

  2. Let url be the result of executing generate an aggregatable debug report URL on report.

  3. Let data be the result of executing serialize an aggregatable debug report on report.

  4. Let request be the result of executing create a report request on url and data.

  5. Fetch request.

This fetch should use a network partition key for an opaque origin. [Issue #220]

A user agent MAY retry this algorithm in the event that there was an error.

14. Cross App and Web Algorithms

14.1. Get OS registrations

An OS registration is a struct with the following items:

URL

A URL

debug reporting enabled

A boolean

To get OS registrations from a header value given a header value header:

  1. Let values be the result of parsing structured fields with input_string set to header and header_type set to "list".

  2. If parsing failed, return an error.

  3. Let registrations be a new list.

  4. For each value of values:

    1. If value is not a string, continue.

    2. Let url be the result of running the URL parser on value.

    3. If url is failure or null, continue.

    4. Let debugReporting be false.

    5. Let params be the parameters associated with value.

    6. If params["debug-reporting"] exists and params["debug-reporting"] is a boolean, set debugReporting to params["debug-reporting"].

    7. Let registration be a new OS registration struct whose items are:

      URL

      url

      debug reporting enabled

      debugReporting

    8. Append registration to registrations.

  5. If registrations is empty, return an error.

  6. Return registrations.
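A deliberately simplified sketch of the header parsing above follows. A conforming implementation MUST use a real structured-field list parser; this toy version only handles comma-separated quoted-string members with an optional bare `debug-reporting` boolean parameter, and breaks on URLs containing commas or semicolons.

```python
def get_os_registrations(header):
    """Toy sketch of "get OS registrations from a header value".
    Returns None where the algorithm would return an error."""
    registrations = []
    for member in header.split(","):
        parts = [p.strip() for p in member.split(";")]
        value = parts[0]
        if not (value.startswith('"') and value.endswith('"')):
            continue  # step 4.1: skip non-string members
        url = value[1:-1]
        # A bare structured-field parameter means boolean true (?1).
        debug = any(p in ("debug-reporting", "debug-reporting=?1")
                    for p in parts[1:])
        registrations.append({"url": url, "debug_reporting": debug})
    return registrations or None  # step 5: empty list is an error
```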

14.2. Registrars

A registrar is one of the following:

"web"

The user agent supports web registrations.

"os"

The user agent supports OS registrations.

To get supported registrars:

  1. Let supportedRegistrars be a new list.

  2. If the user agent supports web registrations, append "web" to supportedRegistrars.

  3. If the user agent supports OS registrations, append "os" to supportedRegistrars.

  4. Return supportedRegistrars.

14.3. Deliver OS registration debug reports

To obtain and deliver debug reports on OS registrations given an OS debug data type dataType, a list of OS registrations registrations, an origin contextOrigin, and a boolean fenced:

  1. Assert: registrations is not empty.

  2. Let contextSite be the result of obtaining a site from contextOrigin.

  3. For each registration of registrations:

    1. If registration’s debug reporting enabled is false, continue.

    2. Let origin be registration’s URL's origin.

    3. If origin is not suitable, continue.

    4. Let body be a new map with the following key/value pairs:

      "context_site"

      contextSite, serialized.

      "registration_url"

      registration’s URL, serialized.

    5. Let data be a new verbose debug data with the items:

      data type

      dataType

      body

      body

    6. Run obtain and deliver a verbose debug report with « data », origin, and fenced.

15. User-Agent Automation

The user agent has an associated boolean automation local testing mode (default false).

For the purposes of user-agent automation and website testing, this document defines the below [WebDriver] extension commands to control the API configuration.

15.1. Set local testing mode

HTTP Method URI Template
POST /session/{session id}/ara/localtestingmode

The remote end steps are:

  1. If parameters is not a JSON-formatted Object, return a WebDriver error with error code invalid argument.

  2. Let enabled be the result of getting a property named "enabled" from parameters.

  3. If enabled is undefined or is not a boolean, return a WebDriver error with error code invalid argument.

  4. Set automation local testing mode to enabled.

  5. Return success with data null.

Note: Without this, reports would be subject to noise and delays, making testing difficult.

15.2. Send pending reports

HTTP Method URI Template
POST /session/{session id}/ara/sendpendingreports

The remote end steps are:

  1. For each cache of « event-level report cache, aggregatable attribution report cache »:

    1. For each report of cache:

      1. Remove report from cache.

      2. Attempt to deliver report.

  2. Return success with data null.

16. Security considerations

16.1. Same-Origin Policy

This section is non-normative.

Writes to the attribution source cache, event-level report cache, and aggregatable attribution report cache are separated by reporting origin, and the reports sent to a given origin are generated only from data that that origin itself wrote, via HTTP response headers.

However, the attribution rate-limit cache is not fully partitioned by origin. Reads from that cache involve grouping together data submitted by multiple origins. This is the case for the following limits:

These limits are explicit relaxations of the Same-Origin Policy, in that they allow different origins to influence the API’s behavior. In particular, one risk that is introduced with these shared limits is denial of service attacks, where a group of origins could collude to intentionally hit a rate-limit, causing subsequent origins to be unable to access the API.

This trades off security for privacy, in that the limits are there to reduce the efficacy of many origins colluding together to violate privacy. API deployments should monitor for abuse using these vectors to evaluate the trade-off.

The generation of verbose debug reports involves reads of the attribution source cache, event-level report cache, aggregatable attribution report cache, and attribution rate-limit cache. The verbose debug data sent to a given origin may therefore encode non-same-origin data, generated by grouping together data submitted by multiple origins, e.g. failures due to rate-limits, which is not fully compliant with the Same-Origin Policy. This is of greater concern for source registrations, as the source origin could intentionally hit a rate-limit to identify sensitive user data; such verbose debug data cannot be reported explicitly and may instead be reported as a source-success verbose debug report. This is a tradeoff between security and utility that mitigates the security concern with respect to the Same-Origin Policy. The risk is of less concern for trigger registrations, as attribution sources have to be registered to start with, which requires browsing activity on multiple sites.

The aggregatable debug reports may also encode non-Same-Origin data but in encrypted form. The security risk is further mitigated by the generation of null debug reports and the additive noise in the aggregation service.

16.2. Opting in to the API

This section is non-normative.

As a general principle, the API cannot be used purely at the HTTP layer without some level of opt-in from JavaScript or HTML. For HTML, this opt-in is in the form of the attributionSrc attribute, and for JavaScript, it is the various modifications to fetch, XMLHttpRequest, and the window open steps.

However, this principle is only strictly applied to registering attribution sources. For triggering attribution, we waive this requirement for the sake of compatibility with existing systems; see [Issue #347] for context.

17. Privacy considerations

17.1. Clearing site data

The attribution caches contain data about a user’s web activity. As such, the user agent MAY expose controls that allow the user to delete data from them.

17.2. Cross-site information disclosure

This section is non-normative.

The API is concerned with protecting arbitrary cross-site information from being passed from one site to another. For a given attribution source, any outcome associated with it, such as whether, when, and how it was triggered, is considered cross-site information.

The information embedded in the API output is arbitrary but can include things like browsing history and other cross-site activity. The API aims to provide some protection for this information, as described in the following subsections.

17.2.1. Event-level reports

Any given attribution source has a set of possible trigger states. The choice of trigger state may encode cross-site information. To protect against cross-site information disclosure, each attribution source is subject to a randomized response mechanism [RR], which will choose a state at random with a pick rate dependent on the source’s event-level epsilon, which is bounded above by the user agent’s max settable event-level epsilon.

This introduces some level of plausible deniability into the resulting event-level reports (or lack thereof), as there is always a chance that the output was generated from a random process. We can reason about the protection this gives an individual attribution source from the lens of differential privacy [DP].
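The mechanism above can be sketched as follows. This is a non-normative illustration: the `pick_rate` formula shown is the standard k-ary randomized response mapping from epsilon to a flip probability, and a user agent's exact mapping may differ.

```python
import math
import random

def pick_rate(epsilon: float, num_states: int) -> float:
    """Probability of replacing the true trigger state with a uniformly
    random one, for k-ary randomized response (illustrative formula;
    the exact mapping is vendor-defined)."""
    k = num_states
    return k / (k - 1 + math.exp(epsilon))

def randomized_response(true_state: int, epsilon: float, num_states: int) -> int:
    """With probability pick_rate, output a state chosen uniformly at
    random (possibly the true one); otherwise output the true state."""
    if random.random() < pick_rate(epsilon, num_states):
        return random.randrange(num_states)
    return true_state
```

Note that at epsilon = 0 the pick rate is 1 (the output is pure noise), and as epsilon grows the pick rate approaches 0, reducing the plausible deniability.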

Additionally, event-level reports limit the amount of relative cross-site information associated with a particular attribution source. We model this using the notion of channel capacity [CHAN]. For every attribution source, it is possible to model its output as a noisy channel. The number of input/output symbols is governed by its associated set of possible trigger states. With the randomized response mechanism, this allows us to analyze the output as a q-ary symmetric channel [Q-SC], with q equal to the size of the set of possible trigger states. This is normatively defined in the compute the channel capacity of a source algorithm.
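Under this model, the capacity of the q-ary symmetric channel induced by randomized response can be computed from the pick rate. The sketch below is illustrative only; the normative definition is the compute the channel capacity of a source algorithm.

```python
import math

def q_sc_capacity(q: int, r: float) -> float:
    """Channel capacity in bits of the q-ary symmetric channel induced
    by randomized response with pick rate r. Uniform input achieves
    capacity for a symmetric channel, so C = log2(q) - H(Y|X)."""
    if r == 0.0:
        return math.log2(q)  # noiseless: full log2(q) bits
    p_same = 1.0 - r + r / q   # probability the output equals the true state
    p_other = r / q            # probability of each of the q - 1 other states
    h_cond = -(p_same * math.log2(p_same)
               + (q - 1) * p_other * math.log2(p_other))
    return math.log2(q) - h_cond
```

With a pick rate of 1 the capacity is 0 bits (the output is independent of the input), and with a pick rate of 0 it is the full log2(q) bits.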

Note that navigation attribution sources and event attribution sources may have different channel capacities, given that event attribution sources can be registered without user activation or top-level navigation. Maximum capacity for each type is governed by the vendor-defined max event-level channel capacity per source.

17.2.2. Aggregatable attribution reports

Aggregatable attribution reports protect against cross-site information disclosure in two primary ways:

  1. For a given attribution trigger, whether it is attributed to a source is subject to one-way noise via generating null attribution reports with some probability. Note that because the noise never drops true reports, this is only a partial mitigation: if an attribution source never generates an aggregatable attribution report, an adversary can learn with 100% certainty that the attribution source was never matched with an attribution trigger.

  2. Cross-site information embedded in an aggregatable attribution report's contributions is encrypted with a public key, ensuring that individual contributions cannot be accessed until an aggregation service subjects them to aggregation and an additive noise process.
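The one-way nature of the noise in point 1 can be sketched as follows. This is a non-normative sketch with a hypothetical flip rate; the actual null-report probability is vendor-defined, and on the wire null reports are indistinguishable from real ones because the payloads are encrypted.

```python
import random

# Hypothetical flip rate; the actual probability is vendor-defined.
NULL_REPORT_RATE = 0.05

def reports_to_send(attributed_report):
    """One-way noise: a trigger may additionally emit a null report,
    but a real (attributed) report is never dropped. attributed_report
    is None when the trigger matched no source."""
    reports = []
    if attributed_report is not None:
        reports.append(attributed_report)  # true reports always survive
    if random.random() < NULL_REPORT_RATE:
        reports.append({"payload": "null"})  # indistinguishable once encrypted
    return reports
```

Because true reports always survive, the absence of any report over many triggers still reveals non-attribution with certainty, which is exactly the partial-mitigation caveat above.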

add links to the aggregation service noise addition algorithm.

model the channel capacity of a trigger registration.

17.2.3. Debug reports

Fine-grained cross-site information may be embedded in attribution reports sent with isDebugReport being true and in certain types of verbose debug reports. These reports are only allowed when third-party cookies are available, in which case the API caller already had the capability to learn the underlying information.

17.3. Protecting against cross-site recognition

This section is non-normative.

A primary privacy goal of the API is to make linking identity between two different top-level sites difficult. Such linking happens when either a request or a JavaScript environment has two user IDs from two different sites simultaneously. Both event-level reports and aggregatable attribution reports were designed to make this kind of recognition difficult:

17.3.1. Event-level reports

Event-level reports come bearing a fine-grained event id that can uniquely identify the source event, which may be joinable with a user’s identity. As such, to protect against the cross-site recognition risk, event-level reports contain only a small amount (measured via channel capacity) of relative cross-site information from any of the attribution destinations. By limiting the amount of relative cross-site information embedded in event-level reports, we make it difficult for an identifier to be passed through this channel alone to enable cross-site recognition.

17.3.2. Aggregatable attribution reports

Aggregatable attribution reports only contain fine-grained cross-site information in encrypted form. In cleartext, they contain only coarse-grained information from the source site and effective attribution destination. This makes it difficult for an aggregatable attribution report to be associated with a user from either site.

The cross-site recognition risk of the data encrypted in "aggregation_service_payloads" is mitigated by the additive noise addition in the aggregation service.

17.4. Mitigating against repeated API use

fill in this section

17.5. Protecting against browsing history reconstruction

fill in this section

17.6. Reporting-delay concerns

This section is non-normative.

Sending reports some time after attribution occurs enables side-channel leakage in some situations.

17.6.1. Cross-network reporting-origin leakage

A report may be stored while the browser is connected to one network but sent while the browser is connected to a different network, potentially enabling cross-network leakage of the reporting origin.

Example: A user runs the browser with a particular browsing profile on their home network. An attribution report with a particular reporting origin is stored with a scheduled report time in the future. After the scheduled report time is reached, the user runs the browser with the same browsing profile on their employer’s network, at which point the browser sends the report to the reporting origin. Although the report itself may be sent over HTTPS, the reporting origin may be visible to the network administrator via DNS or the TLS client hello (which can be mitigated with ECH). Some reporting origins may be known to operate only or primarily on sensitive sites, so this could leak information about the user’s browsing activity to the user’s employer without their knowledge or consent.

Possible mitigations:

  1. Only send reports with a given reporting origin when the browser has already made a request to that origin on the same network. This prevents the network administrator from gaining additional information from the Attribution Reporting API. However, it increases report loss and report delays, which reduces the utility of the API for the reporting origin. It might also increase the effectiveness of timing attacks, as the origin may be able to better link the report with the user’s request that allowed the report to be released.

  2. Send reports immediately: This reduces the likelihood of a report being stored and sent on different networks. However, it increases the likelihood that the reporting origin can correlate the original request made to the reporting origin for attribution to the report, which weakens the attribution-side privacy controls of the API. In particular, this destroys the differential privacy framework we have for event-level reports. It would also make the trigger priority functionality impossible, as there would be no way to replace a lower-priority report that was already sent.

  3. Use a trusted proxy server to send reports: This effectively moves the reporting origin into the report body, so only the proxy server would be visible to the network administrator.

  4. Require DNS over HTTPS: This effectively hides the reporting origin from the network administrator, but is likely impractical to enforce and is itself perhaps circumventable by the network administrator.

17.6.2. User-presence tracking

The browser only tries to send reports while it is running and while it has internet connectivity (even without an explicit connectivity check, sending a report naturally fails without one), so receiving or not receiving an event-level report at the expected time leaks information about the user’s presence. Additionally, because the report request inherently includes an IP address, this could reveal the user’s IP-derived whereabouts to the reporting origin, such as at-home vs. at-work status or approximate real-world geolocation, or reveal patterns in the user’s browsing activity.

Possible mitigations:

  1. Send reports immediately: This effectively eliminates the presence tracking, as the original request made to the reporting origin is in close temporal proximity to the report request. However, it increases the likelihood that the reporting origin can correlate the two requests, which weakens the attribution-side privacy controls of the API. It would also make the trigger priority functionality impossible, as there would be no way to replace a lower-priority report that was already sent.

  2. Send reports immediately to a trusted proxy server, which would itself send the report to the reporting origin with additional delay. This would effectively hide both the user’s IP address and their online-offline presence from the reporting origin. Compared to the previous mitigation, the proxy server could itself handle the trigger priority functionality, at the cost of increased complexity in the proxy.

17.7. Attribution scope

This section is non-normative.

It is possible for an adversary to register multiple navigation sources in response to a single navigation and use these sources, each with a different attribution scopes value, to gain additional information about a user based on which attribution scope is chosen. To prevent this abuse, the number of unique attribution scope sets per reporting origin per navigation needs to be limited.

Proposed mitigation:

Limit to 1 unique attribution scope set per reporting origin per navigation. Extraneous sources will be dropped.
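The proposed limit can be sketched as follows. This is a non-normative sketch: the `(reporting_origin, scope_set)` pair shape and the `enforce_scope_limit` helper are hypothetical, standing in for the user agent's internal source-registration bookkeeping.

```python
def enforce_scope_limit(sources):
    """Keep at most 1 unique attribution scope set per reporting origin
    for a single navigation; sources with any other scope set from the
    same origin are extraneous and dropped. Each source is modeled as a
    (reporting_origin, frozenset_of_scopes) pair (hypothetical shape)."""
    first_scope_set = {}  # reporting origin -> first scope set registered
    kept = []
    for origin, scope_set in sources:
        if origin not in first_scope_set:
            first_scope_set[origin] = scope_set
        if first_scope_set[origin] == scope_set:
            kept.append((origin, scope_set))
    return kept
```

Under this sketch, a reporting origin registering sources with two different scope sets in one navigation keeps only those matching the first set it registered, so it cannot learn extra information from which scope set was chosen.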

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[CLEAR-SITE-DATA]
Mike West. Clear Site Data. URL: https://w3c.github.io/webappsec-clear-site-data/
[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[ECMASCRIPT]
ECMAScript Language Specification. URL: https://tc39.es/ecma262/multipage/
[ENCODING]
Anne van Kesteren. Encoding Standard. Living Standard. URL: https://encoding.spec.whatwg.org/
[FENCED-FRAME]
Fenced Frame. Draft Community Group Report. URL: https://wicg.github.io/fenced-frame/
[FETCH]
Anne van Kesteren. Fetch Standard. Living Standard. URL: https://fetch.spec.whatwg.org/
[HR-TIME-3]
Yoav Weiss. High Resolution Time. URL: https://w3c.github.io/hr-time/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[PERMISSIONS-POLICY-1]
Ian Clelland. Permissions Policy. URL: https://w3c.github.io/webappsec-permissions-policy/
[REFERRER-POLICY]
Jochen Eisinger; Emily Stark. Referrer Policy. URL: https://w3c.github.io/webappsec-referrer-policy/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[RFC8949]
C. Bormann; P. Hoffman. Concise Binary Object Representation (CBOR). December 2020. Internet Standard. URL: https://www.rfc-editor.org/rfc/rfc8949
[RFC9180]
R. Barnes; et al. Hybrid Public Key Encryption. February 2022. Informational. URL: https://www.rfc-editor.org/rfc/rfc9180
[SECURE-CONTEXTS]
Mike West. Secure Contexts. URL: https://w3c.github.io/webappsec-secure-contexts/
[URL]
Anne van Kesteren. URL Standard. Living Standard. URL: https://url.spec.whatwg.org/
[WebDriver]
Simon Stewart; David Burns. WebDriver. URL: https://w3c.github.io/webdriver/
[WEBDRIVER2]
Simon Stewart; David Burns. WebDriver. URL: https://w3c.github.io/webdriver/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[XHR]
Anne van Kesteren. XMLHttpRequest Standard. Living Standard. URL: https://xhr.spec.whatwg.org/

Informative References

[BIN-ENT]
Binary entropy function. URL: https://en.wikipedia.org/wiki/Binary_entropy_function
[CHAN]
Channel capacity. URL: https://en.wikipedia.org/wiki/Channel_capacity
[DP]
Differential privacy. URL: https://en.wikipedia.org/wiki/Differential_privacy
[Q-SC]
Claudio Weidmann; Gottfried Lechner. q-ary symmetric channel. URL: https://arxiv.org/pdf/0909.2009.pdf
[RR]
Randomized response. URL: https://en.wikipedia.org/wiki/Randomized_response
[STORAGE]
Anne van Kesteren. Storage Standard. Living Standard. URL: https://storage.spec.whatwg.org/

IDL Index

interface mixin HTMLAttributionSrcElementUtils {
    [CEReactions, SecureContext] attribute USVString attributionSrc;
};

HTMLAnchorElement includes HTMLAttributionSrcElementUtils;
HTMLAreaElement includes HTMLAttributionSrcElementUtils;
HTMLImageElement includes HTMLAttributionSrcElementUtils;
HTMLScriptElement includes HTMLAttributionSrcElementUtils;

dictionary AttributionReportingRequestOptions {
  required boolean eventSourceEligible;
  required boolean triggerEligible;
};

partial dictionary RequestInit {
  AttributionReportingRequestOptions attributionReporting;
};

partial interface XMLHttpRequest {
  [SecureContext]
  undefined setAttributionReporting(AttributionReportingRequestOptions options);
};

Issues Index

More precisely specify which mutations are relevant for the attributionsrc attribute.
Use attributionSrcUrls and referrerPolicy with make a background attributionsrc request.
Use/propagate navigationSourceEligible to the navigation request's Attribution Reporting eligibility.
Enforce attribution-scope privacy limits.
Check permissions policy.
This would ideally be replaced by a more descriptive algorithm in Infra. See infra/201
Require preferredPlatformValue to be a token.
Audit other properties on request and set them properly.
Support header-processing on redirects. Due to atomic HTTP redirect handling, we cannot process registrations through integration with fetch. [Issue #839]
Check for transient activation with "navigation-source".
Consider allowing the user agent to limit the size of tokens.
Consider rejecting out-of-bounds values instead of silently clamping.
Invoke parse summary buckets and parse summary operator from this algorithm.
Determine proper charset-handling for the JSON header value.
Determine proper charset-handling for the JSON header value.
Consider performing should processing be blocked by reporting-origin limit from triggering attribution to avoid duplicate invocation from triggering event-level attribution and triggering aggregatable attribution. [Issue #1287]
This algorithm is not compatible with the behavior proposed for experimental Flexible Event support with differing event-level report windows for a given source.
Consider replacing debugDataSet with a list. [Issue #1287]
Specify this in terms of Navigation
Specify this in terms of fetch.
This fetch should use a network partition key for an opaque origin. [Issue #220]
This fetch should use a network partition key for an opaque origin. [Issue #220]
add links to the aggregation service noise addition algorithm.
model the channel capacity of a trigger registration.
fill in this section
fill in this section