demo(benchmark-react): Use response-size-based network simulation delays #3810
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff            @@
##           master    #3810   +/-   ##
=======================================
  Coverage   98.06%   98.06%
=======================================
  Files         151      151
  Lines        2843     2843
  Branches      556      556
=======================================
  Hits         2788     2788
  Misses         11       11
  Partials       44       44
```

☔ View full report in Codecov by Sentry.
Benchmark React
| Benchmark suite | Current: 3b3eac1 | Previous: ee441c6 | Ratio |
|---|---|---|---|
| data-client: getlist-100 | 62.11 ops/s (± 0.49) | 57.14 ops/s (± 0.21) | 0.92 |
| data-client: getlist-500 | 27.17 ops/s (± 1.99) | 26.25 ops/s (± 0.43) | 0.97 |
| data-client: update-entity | 217.39 ops/s (± 15.07) | 217.39 ops/s (± 2.62) | 1 |
| data-client: update-user | 185.19 ops/s (± 20.6) | 156.25 ops/s (± 12.54) | 0.84 |
| data-client: getlist-500-sorted | 23.04 ops/s (± 1.56) | 22.68 ops/s (± 1.77) | 0.98 |
| data-client: update-entity-sorted | 192.31 ops/s (± 22.09) | 188.68 ops/s (± 17.31) | 0.98 |
| data-client: update-entity-multi-view | 243.9 ops/s (± 6.74) | 238.1 ops/s (± 36.8) | 0.98 |
| data-client: list-detail-switch-10 | 8.1 ops/s (± 0.05) | 6.35 ops/s (± 1.18) | 0.78 |
| data-client: update-user-10000 | 21.28 ops/s (± 1.06) | 20.62 ops/s (± 0.23) | 0.97 |
| data-client: invalidate-and-resolve | 34.13 ops/s (± 1.04) | 31.85 ops/s (± 1.01) | 0.93 |
| data-client: unshift-item | 175.44 ops/s (± 19.77) | 169.49 ops/s (± 19.46) | 0.97 |
| data-client: delete-item | 172.41 ops/s (± 5.54) | 158.73 ops/s (± 1.67) | 0.92 |
| data-client: move-item | 163.93 ops/s (± 3.78) | 149.25 ops/s (± 8.97) | 0.91 |
This comment was automatically generated by a workflow using github-action-benchmark.
Replace fixed per-method network delays with a formula: 40ms base latency + 2ms per record in the response. This more realistically models how network time scales with payload size, naturally penalizing large list refetches relative to normalized cache propagation. Made-with: Cursor
Force-pushed b5b9bb1 to 3b3eac1
Motivation
The previous network simulation used fixed per-method delays (e.g. `fetchIssueList` 80ms, `updateIssue` 50ms), which didn't reflect how real network latency scales with payload size. A 10,000-item list response had the same delay as a 100-item list, understating the cost of large refetches.

Solution
Replace `NETWORK_SIM_DELAYS` (a fixed per-method map) with a formula-based `NETWORK_SIM_CONFIG`:

delay = 40ms base latency + ceil(recordCount / 20) ms

The worker's `respond()` function now computes the delay dynamically from the actual response data, using `Array.isArray(value) ? value.length : 1` for the record count. This means libraries that must refetch large lists after mutations are penalized proportionally to the data volume, while normalized propagation (data-client) bypasses refetching entirely.
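A minimal sketch of the delay computation described above. Only the 40ms base, the `ceil(recordCount / 20)` term, and the `Array.isArray` record-counting come from the PR description; the `NETWORK_SIM_CONFIG` field names and the `computeDelay` helper are hypothetical, and the actual worker code may be structured differently.

```typescript
// Hypothetical shape of the formula-based config; field names are
// assumptions, not the PR's actual definition.
const NETWORK_SIM_CONFIG = {
  baseLatencyMs: 40, // fixed latency applied to every response
  recordsPerMs: 20, // 1 extra ms per 20 records (rounded up)
};

// Compute the simulated network delay from the response payload itself:
// an array counts one record per element, anything else counts as one.
function computeDelay(value: unknown): number {
  const recordCount = Array.isArray(value) ? value.length : 1;
  return (
    NETWORK_SIM_CONFIG.baseLatencyMs +
    Math.ceil(recordCount / NETWORK_SIM_CONFIG.recordsPerMs)
  );
}

// Under this formula a 10,000-item list costs 40 + 500 = 540ms, while a
// single-entity update costs 40 + 1 = 41ms, so large refetches dominate.
```

With fixed per-method delays both cases would have cost a constant amount, which is exactly the distortion the PR removes.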
Updated the README results table with fresh measurements under the new delay model. Also increased the pre-mount timeout from 10s to 60s when network sim is enabled, since large list fetches can now exceed 10s.
Open questions
N/A