Performance of triggering UDF execution engine #43
Hi Fabian Höring, thanks for the question and sorry for the delay. We're currently working on getting back to you. Thank you.
Hi, sorry for the long delay.
Sample output -
We would like to help you achieve an accurate measurement. If you are interested in collaborating to measure more, please feel free to let us know what you think we may be able to help with.
Hello @peiwenhu, thanks for the information. I will come back to you with more information about the workloads. About compiling data-plane-shared-libraries locally, if I do this:

What command do I need to execute to compile this?
Hello!
For running benchmarks, you can simply use the specified command -
(For running your benchmarks, you will need to modify kv_server_udf_benchmark_test and include the code you want to benchmark.) If you run at HEAD for data-plane-shared-libraries, the following command can be used -
For running benchmarks, I don't think you have to modify your K/V server workspace. However, in general, the `local_repository` Bazel rule can be used. Let us know if any more information is needed from our side. Thanks!
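To make the `local_repository` suggestion concrete: a locally checked-out copy of data-plane-shared-libraries can be substituted into the build by overriding the existing external-repository declaration in the WORKSPACE file. A minimal sketch (the repository name and path below are placeholders; the name must match whatever the existing `http_archive`/`git_repository` declaration uses):

```starlark
# WORKSPACE (sketch) -- point the build at a local checkout instead of the
# remote archive. Repository name and path are illustrative placeholders.
local_repository(
    name = "google_privacysandbox_servers_common",
    path = "/path/to/local/data-plane-shared-libraries",
)
```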
Thanks. I was able to execute the benchmarks like this:
I get the same results of ~1 ms for executing an empty JS function. I also ran the multi-threaded Roma workloads, which give results of ~1000 requests per second. An additional question on that: I never managed to get dropped requests, even with a worker size of 1 and a queue size of 1. Is this expected?

Those tests seem to be done from the same machine, which means the client can impact the server and vice versa. We probably have to run the full server and drive the client with a web load injector like Gatling to get representative results.
In these benchmarks, for every request, we wait for the response individually. Hence, this behaviour is expected.
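That closed-loop behaviour can be sketched as a small deterministic model: when each request waits for its response before the next one is sent, at most one request is ever in flight, so throughput is capped at roughly 1/latency (~1000 QPS at ~1 ms) and a bounded queue can never overflow. A toy simulation (illustrative only, not the actual benchmark harness):

```javascript
// Closed-loop client model: one request in flight at a time. Throughput is
// bounded by 1 / serviceTime, and the queue never fills, so nothing drops.
function simulateClosedLoop({ serviceTimeMs, durationMs, queueCapacity }) {
  let clock = 0;
  let completed = 0;
  let dropped = 0;
  let queueDepth = 0; // requests waiting behind the one being served

  while (clock < durationMs) {
    if (queueDepth >= queueCapacity) {
      dropped++; // unreachable in a closed loop: the client never has 2 pending
    } else {
      queueDepth++;
      queueDepth--; // the (idle) worker picks the request up immediately
      clock += serviceTimeMs; // worker is busy for the service time
      completed++;
    }
  }
  return { qps: completed / (durationMs / 1000), dropped };
}

const r = simulateClosedLoop({ serviceTimeMs: 1, durationMs: 10_000, queueCapacity: 1 });
console.log(r); // ~1000 QPS, 0 dropped
```

This matches the observations above: ~1000 requests/sec for a ~1 ms UDF, and no drops even with a queue size of 1. An open-loop injector (constant arrival rate regardless of responses, as Gatling can do) is what produces queue growth and drops.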
**Release 0.11.0 (2023-07-11) (privacysandbox#40)**

### Features
* [Breaking change] Use UserDefinedFunctionsConfig instead of KVs for loading UDFs.
* [Sharding] Add hpke for s2s communication
* [Sharding] Allow for partial data lookups
* [Sharding] Making downstream requests in parallel
* Add bazel build flag --announce_rc
* Add bool parameter to allow routing V1 requests through V2.
* Add buf format pre-commit hook
* Add build time directive for reentrant parser.
* Add functions to retrieve instance information.
* Add internal run query client and server.
* Add JS hook for set query.
* Add lookup client and server for communication with shards
* Add MessageQueue for the request simulation system
* Add query grammar and interface for set queries.
* Add rate limiter for the request simulation system
* Add second map to store key value set and add set value update interfaces
* Add shard metadata for supporting sharded files
* Add simple microbenchmarks for key value cache
* Add UDF support for format data command.
* Add unit tests for query lexer.
* Adding cluster mappings manager
* Adding padding
* Apply custom lockings on the cache
* Connect InternalRunQuery to the parser
* Extend and simplify collect-logs to capture test outputs
* Extend use of scp deps via data-plane-shared repo
* Implement shard manager
* Move sharding function to public so it's available for file sharding
* Register a logging hook with the UDF.
* Register run query hook with udf framework.
* Sharding - realtime updates
* Sharding read flow fixes
* Simplify work done in set operations. Set operations can be passed by
* Snapshot files support UDF configs.
* Support reading and writing set queries to data files.
* Support reading and writing set values for csv files
* Support reading/writing DataRecords. Requires new DELTA format.
* Support writing sharded files
* Update data_loading.fb to support UDF code updates.
* Update pre-commit hook versions
* Update shard manager mappings continuously
* Upgrade build-system to release-0.28.0
* Upgrade build-system to v0.30.1
* Upgrade scp to 0.72.0
* Use Unix domain socket for internal lookup server.
* Utilize AWS deps via data-plane-shared repo

### Bug Fixes
* Add internal lookup client deadline.
* Catch error if insufficient args specified
* Fix aggregation logic for set values.
* Fix ASAN potential deadlock errors in key_value_cache_test
* Proper memory management of callback hook wrappers.
* Specify 2 workers for UDF execution.
* Upgrade pre-commit hooks
* Use shared pointer for UDF absl::Notification.

### Build Tools: Fixes
* **build:** Add scope-based sections in release notes

### Documentation
* Add docs for data loading capabilities
* Add explanation that access control is managed by IAM for writes.
* Point readme to a new sharding public explainer

Bug: 290798418
Change-Id: I691da695f5727a8517ed3e9f18a3a2d8c5b9e0bf
GitOrigin-RevId: 5958051464911b6da60c38bc2a83c3451adadf42
Co-authored-by: Privacy Sandbox Team <[email protected]>

**Release 0.11.1 (2023-08-02) (privacysandbox#42)**

Bug: b/293901782
Change-Id: I4487c821883756599b3d66bb7774cc6585e653dc
GitOrigin-RevId: e8cd94de2dbb081a724e9700c98fc61bbd511687
Co-authored-by: Privacy Sandbox Team <[email protected]>

---

Co-authored-by: Peiwen Hu <[email protected]>
Co-authored-by: Privacy Sandbox Team <[email protected]>
We have done several web load tests with Gatling (ramp-up, then constant load up to 100k QPS, in several steps of 120 seconds each).

**Performance test setup (C# vs JS UDF execution)**

We deployed KV server version 0.16 on our own infrastructure, in a container on 1 instance with 8 cores (16 threads) and 16 GB of memory, with the following components:
Javascript UDF code:
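(The actual UDF snippet did not survive in this copy of the thread. For orientation only, a minimal KV-server-style JavaScript UDF typically looks roughly like the following; the `HandleRequest` entry point and `getValues` hook come from the K/V server UDF docs, but the exact signatures vary by version, and this is not the benchmark UDF used above.)

```javascript
// Hypothetical minimal UDF: look up the requested keys and echo them back.
// Sketch only -- not the code that was actually benchmarked in this thread.
function HandleRequest(executionMetadata, ...udf_arguments) {
  const keyGroupOutputs = [];
  for (const arg of udf_arguments) {
    const keys = arg.data || [];
    // getValues is a hook the server injects into the JS context.
    const kvResult = JSON.parse(getValues(keys));
    keyGroupOutputs.push({ tags: arg.tags, keyValues: kvResult });
  }
  return { keyGroupOutputs, udfOutputApiVersion: 1 };
}
```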
We then implemented the same basic logic in a vanilla C# ASP.NET server and executed the same web load test for comparison.

**WebAssembly test setup**

We have deployed and benchmarked the provided C++ => WASM sample file from here (file size: 100 KB). We also tried out the provided Microsoft templates to compile C# to WASM (dotnet 9 required; file size: 30 MB, containing the C# runtime).
**Results**

**Conclusion**
The specification has some explanations here on how JS and WASM workloads would be handled by the UDF execution engine:
https://github.com/privacysandbox/protected-auction-services-docs/blob/main/bidding_auction_services_system_design.md#adtech-code-execution-engine
https://github.com/privacysandbox/data-plane-shared-libraries/tree/main/scp/cc/roma
This design looks interesting and I'm trying to find out if it would be able to handle workloads with thousands of QPS per instance and 10ms latency. I'm wondering in particular how it would work with managed languages like c# or Java compiled to WASM.
From what I understand, there will be N pre-allocated workers, each able to handle single-threaded workloads.
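A simplified model of that design (N pre-allocated single-threaded workers behind a bounded queue; the names and numbers here are illustrative, not Roma's actual API) also shows where dropped requests would come from:

```javascript
// Toy dispatcher: N workers, bounded queue, requests dropped when both are full.
class WorkerPool {
  constructor(numWorkers, queueCapacity) {
    this.freeWorkers = numWorkers;
    this.queueCapacity = queueCapacity;
    this.queue = [];
    this.dropped = 0;
  }
  submit(task) {
    if (this.freeWorkers > 0) {
      this.freeWorkers--;
      Promise.resolve().then(task).finally(() => this._onWorkerDone());
    } else if (this.queue.length < this.queueCapacity) {
      this.queue.push(task); // all workers busy: buffer up to capacity
    } else {
      this.dropped++; // back-pressure: overload is rejected, not buffered
    }
  }
  _onWorkerDone() {
    const next = this.queue.shift();
    if (next) {
      Promise.resolve().then(next).finally(() => this._onWorkerDone());
    } else {
      this.freeWorkers++;
    }
  }
}
```

With 1 worker and a queue of size 1, a third *concurrent* submission is dropped; a benchmark that waits for each response before sending the next request never produces that third concurrent request, which would explain why no drops were observed.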
The doc mentions this part about JS:

What exactly does recreating the context imply in terms of performance?
My understanding is that compiling C#/Java to WASM works like a self-contained executable, which means the runtime needs to be embedded inside the WASM file. If the runtime and garbage collector were initialized from scratch for each request, the overhead would very probably be prohibitive for the workloads mentioned above.
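One standard mitigation pattern from the WebAssembly JS API (not necessarily what Roma does) is to compile the module once and only instantiate per request, so decoding and compiling a large self-contained binary (e.g. a 30 MB C# bundle) is paid a single time. A sketch using the smallest valid (empty) module:

```javascript
// Smallest valid wasm module: magic number + version, no sections.
const emptyModule = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

async function main() {
  // Paid once: decode + compile (the dominant cost for a multi-MB module).
  const compiled = await WebAssembly.compile(emptyModule);

  // Paid per request: instantiation (fresh linear memory and globals). For a
  // managed language, the runtime/GC startup code inside the module would
  // still run on every instantiation -- the overhead the question is about.
  const instance = await WebAssembly.instantiate(compiled);
  return instance instanceof WebAssembly.Instance;
}
```

In other words, module compilation can be amortized, but per-request runtime and GC initialization inside the guest cannot, unless instances (and thus guest state) are reused across requests.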
Can you provide more information on how JS and WASM (Java, C#) workloads would be handled exactly by the UDF execution engine, and whether it could handle the workloads mentioned above?