Real Time Feedback and win notifications #175

Open
dialtone opened this issue May 3, 2021 · 10 comments
Comments

@dialtone
Collaborator

dialtone commented May 3, 2021

As discussed in a previous IWABG meeting, the current set of specifications in the sandbox doesn't provide a mechanism for the DSP, publisher, and SSP to receive a notification in real time (or close to real time) when a bid wins an auction.

Use cases that today depend, to a greater or lesser extent, on real-time feedback sent to all parties include:

  • Troubleshooting/Monitoring campaigns
  • Daily/Weekly/Monthly pacing
  • Transparency and accountability in billing (cost resolution)
  • Page-level frequency capping
  • Inventory private deals
  • Optimization and ML tuning

As an added benefit, in a first-price auction this mechanism would allow the DSP to reduce its bid price and avoid sub-optimal performance of its advertising campaigns.

The proposal presented in the IWABG meeting asked for a report_win callback, to be called only when a second-price auction was chosen (to avoid leakage), to allow for this feedback loop. During the discussion, though, it became clear that the preferred approach would be to introduce some noise into the process instead and potentially report regardless of auction type.
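
For concreteness, here is a minimal sketch of what such a report_win callback could look like in a FLEDGE-style reporting worklet. It borrows the shape of today's Protected Audience reporting functions; the endpoint, query parameters, and signal fields are illustrative assumptions, not part of any spec.

```ts
// Illustrative sketch only: the shape mirrors Protected Audience reporting
// worklets, but the endpoint, query parameters, and signal fields here are
// assumptions for this discussion.
declare function sendReportTo(url: string): void;

function reportWin(
  auctionSignals: unknown,
  perBuyerSignals: unknown,
  sellerSignals: unknown,
  browserSignals: { renderURL: string; bid: number }
): void {
  // Round the winning bid to cents; the browser could add noise here before
  // the report leaves the device, as suggested in the discussion above.
  const winPriceCents = Math.round(browserSignals.bid * 100);

  // sendReportTo() exists in the current reporting worklets; the DSP
  // endpoint and parameters below are hypothetical.
  sendReportTo(
    `https://dsp.example/win?price_cents=${winPriceCents}` +
      `&creative=${encodeURIComponent(browserSignals.renderURL)}`
  );
}
```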

Given how critical this feature is, it would be nice to see some of the suggested implementation details in the Privacy Sandbox specs.

@dialtone
Collaborator Author

dialtone commented May 3, 2021

@csharrison Adding you here for notification

@dialtone
Collaborator Author

dialtone commented May 4, 2021

I realize I didn't specify the required latencies here, but generally it would be particularly useful to stay within single-digit-minute latency. Imagine a campaign with a very small budget targeting a big set of FLoCs, a site, or a large interest group: in these cases there's a real chance of running out of budget before even 5 minutes have gone by if the customer sets the campaign up with the right (or should I say wrong) combination of parameters.

The win price to be communicated doesn't need to be any more precise than current CPM values rounded to the cent. Our current control mechanisms operate at a finer granularity than that, but overall, cent precision would be fine.
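
To make the latency requirement concrete, here is a rough back-of-the-envelope sketch (all numbers are hypothetical) of how the blind window translates into unbudgeted spend:

```ts
// Back-of-the-envelope sketch (hypothetical numbers): how much a DSP can
// overspend while it is blind to wins, as a function of feedback latency.
function worstCaseOverspendUsd(
  winRatePerSecond: number,      // wins the campaign keeps taking while blind
  avgWinPriceUsd: number,        // cleared price per win (CPM / 1000 for one impression)
  feedbackLatencySeconds: number
): number {
  return winRatePerSecond * avgWinPriceUsd * feedbackLatencySeconds;
}

// A small campaign winning 50 impressions/s at a $5 CPM:
console.log(worstCaseOverspendUsd(50, 5 / 1000, 300));  // 5 min blind  -> $75
console.log(worstCaseOverspendUsd(50, 5 / 1000, 1800)); // 30 min blind -> $450
```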

@michaelkleber
Collaborator

Per discussion in today's WICG call, @dialtone, please do add some more information about the latency needs for the various use cases. In particular, we discussed how use cases like individual-user frequency capping are probably not a good use of this sort of technology — that should be handled with entirely on-device information flow. But for the use cases that are about aggregating across many devices, we'd like to hear more about the latency needs.

@skaurus

skaurus commented Jun 23, 2021

In our case, the following features depend on campaign reporting:

  • total and per-day campaign limits by clicks/views
  • reporting for our team and for our clients
  • per-user frequency capping, obviously, but it is cookie-based, so it is irrelevant here

It is fairly easy to overspend the limit of a well-performing campaign, so we slow campaigns down as they get close to the limit: the closer, the slower (as sketched below).
It should also be noted that some DSPs set limits not in events but in money. In either case, overspending the limit is on the DSP's shoulders.

My guess is that a latency of half an hour should not harm these features for us.
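
As a purely illustrative sketch of the "closer the slower" throttling described above (all names and thresholds are hypothetical):

```ts
// Hypothetical sketch: scale the probability of entering an auction down as
// reported spend approaches the campaign limit.
function participationProbability(
  reportedSpend: number,   // spend known so far (subject to feedback latency)
  campaignLimit: number,
  slowdownStart = 0.8      // start throttling at 80% of the limit
): number {
  const fractionSpent = reportedSpend / campaignLimit;
  if (fractionSpent >= 1) return 0;              // limit reached: stop bidding
  if (fractionSpent <= slowdownStart) return 1;  // far from limit: full speed
  // Linearly ramp from 1 down to 0 over the last 20% of the budget.
  return (1 - fractionSpent) / (1 - slowdownStart);
}

// e.g. at 90% of the budget we bid on only half of the eligible requests:
console.log(participationProbability(90, 100)); // 0.5
```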

@skaurus

skaurus commented Jun 23, 2021

Maybe latency should be measured not only in time, but also in the number of events?
If there are a lot of events, it is easier to maintain privacy guarantees with a shorter latency. Also, under the same conditions, it is easier to overspend, so a shorter latency becomes more important.
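
A hypothetical sketch of what "latency measured in events" could mean in practice: hold win events until either a minimum batch size or a maximum delay is reached, whichever comes first (the thresholds below are illustrative):

```ts
// Hypothetical event-count-based batching: release an aggregate report when
// either enough win events have accumulated (good for privacy and for busy
// campaigns) or a maximum wall-clock delay has passed.
class WinReportBatcher {
  private events: number[] = [];           // win prices (cents) in this batch
  private firstEventAt: number | null = null;

  constructor(
    private minEvents = 50,                // flush early once 50 wins accrue
    private maxDelayMs = 30 * 60 * 1000    // never hold a batch > 30 minutes
  ) {}

  addWin(priceCents: number, now = Date.now()): number[] | null {
    if (this.firstEventAt === null) this.firstEventAt = now;
    this.events.push(priceCents);
    const enoughEvents = this.events.length >= this.minEvents;
    const tooOld = now - this.firstEventAt >= this.maxDelayMs;
    if (enoughEvents || tooOld) {
      const batch = this.events;
      this.events = [];
      this.firstEventAt = null;
      return batch;                        // caller sends the aggregate report
    }
    return null;                           // keep holding
  }
}
```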

@michaelkleber
Collaborator

Thank you @skaurus, all good points.

I agree that "slow down a campaign as it reaches its budget" is the sort of use case we ought to support. And I agree that the noise-related and timing-related privacy concerns are much smaller if there are a bunch of events in the time interval.

@dialtone
Collaborator Author

Of the use cases above:

  • Troubleshooting/Monitoring campaigns: probably fine with latency in the 5-10 minute range, depending on the campaigns and other control mechanisms. If campaigns have spikes in spend due to external causes, not being able to intervene immediately could cause extreme overspend scenarios.
  • Daily/Weekly/Monthly pacing: probably fine with a 10-20 minute delay, although data shows that the lower you can go, the better ML algorithms can adapt, which improves yield for publishers.
  • Transparency and accountability in billing (cost resolution): 20-30 minutes would probably be alright for this use case. The important aspect here is that the data is delivered to all three parties involved.
  • Inventory private deals: I can't say exactly, but we may be talking very low latency here, 5 minutes or even less. For publishers, exchanges, and DSPs to properly comply with private deals on inventory, this should be as close to real time as reasonably possible, or allow for error bounds that all parties agree to.
  • Optimization and ML tuning: 20-30 minutes is an acceptable latency.

Solvable separately

  • Page-level frequency capping: this could be solved via a browser flag indicating whether other auctions on the same page were already won by a given advertiser/creative in the same page view; see the sketch below.
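
A hypothetical sketch of that idea, assuming the browser exposed such a per-page-view signal to the buyer's bidding function (the signal name is invented; only the shape of generateBid mirrors the Protected Audience bidding worklet):

```ts
interface BrowserSignals {
  // Invented signal, not in the current Protected Audience spec: creatives
  // from this buyer that already won an auction during this page view.
  creativesWonOnThisPage?: string[];
}

function generateBid(
  interestGroup: { ads: { renderURL: string }[] },
  auctionSignals: unknown,
  perBuyerSignals: unknown,
  trustedBiddingSignals: unknown,
  browserSignals: BrowserSignals
) {
  const alreadyWon = new Set(browserSignals.creativesWonOnThisPage ?? []);
  const ad = interestGroup.ads.find(a => !alreadyWon.has(a.renderURL));
  if (!ad) return { bid: 0 };                // page-level cap reached: skip
  return { bid: 1.0, render: ad.renderURL }; // bid value is a placeholder
}
```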

@vladimanaev

This discussion from mid-2021 is a very important one.

It would be nice if we could see some of the suggested implementation details in the Privacy Sandbox specs for the mentioned use cases, both in the current framework and once the Private Aggregation API is in place.
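
For reference, a hedged sketch of how win feedback might be folded into the Private Aggregation API once it is available in the reporting worklets: contributeToHistogram is the current API surface, but the bucket scheme and value scaling below are assumptions for illustration only.

```ts
// Ambient declaration for the worklet-provided object; the bucket layout and
// scaling below are hypothetical, not a recommended encoding.
declare const privateAggregation: {
  contributeToHistogram(contribution: { bucket: bigint; value: number }): void;
};

function recordWinInHistogram(campaignId: bigint, winPriceCents: number): void {
  // Hypothetical bucket scheme: one bucket per campaign for spend and one
  // for win counts; the aggregation service sums and noises them later.
  privateAggregation.contributeToHistogram({
    bucket: campaignId * 2n,        // "spend in cents" bucket
    value: winPriceCents,
  });
  privateAggregation.contributeToHistogram({
    bucket: campaignId * 2n + 1n,   // "win count" bucket
    value: 1,
  });
}
```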

@ajvelasquez-privacy-sandbox
Collaborator

Hi @vladimanaev, thanks for bringing this up again and sorry for the late reply. We have actually been working on this subject for the last year and recently released what we call the Real-Time Monitoring API in Chrome stable at 100%. If you have more questions/feedback, we would love to hear from you after you catch up on the discussion of what was actually released here: #430

@omriariav
Contributor

@ajvelasquez-privacy-sandbox thank you, we will have someone from Taboola take a look.
