This repository contains benchmarks of Zarr V3 implementations.
> [!NOTE]
> Contributions are welcome for additional benchmarks, more implementations, or general cleanup of this repository.
> Also consider restarting development of the official Zarr benchmark repository: https://github.com/zarr-developers/zarr-benchmark
Benchmarked implementations:

- `zarrs/zarrs` via `zarrs/zarrs_tools`
  - Read executable: `zarrs_benchmark_read_sync`
  - Round trip executable: `zarrs_reencode`
- `google/tensorstore`
- `zarr-developers/zarr-python`
  - With and without the `ZarrsCodecPipeline` from `zarrs/zarrs-python`
  - With and without `dask`
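For orientation, here is a minimal sketch of the zarr-python read paths being compared, assuming zarr-python 3.x, `dask`, and `zarrs-python` are installed; the dataset path is hypothetical:

```python
import zarr
import dask.array as da
import zarrs  # noqa: F401 -- makes the Rust codec pipeline importable by name

# Plain zarr-python read of the whole array (hypothetical path).
arr = zarr.open_array("data/benchmark.zarr", mode="r")
data = arr[:]

# Opt in to the ZarrsCodecPipeline from zarrs-python for subsequent reads.
zarr.config.set({"codec_pipeline.path": "zarrs.ZarrsCodecPipeline"})

# The same read through dask: a lazy task graph over the chunk grid.
data = da.from_zarr("data/benchmark.zarr").compute()
```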
Implementation versions are listed in the benchmark charts.
> [!WARNING]
> Python benchmarks are subject to the overheads of Python and may not be using an optimal API/parameters.
> Please open a PR if you can improve these benchmarks.
The following recipes are available:

- `pydeps`: install Python dependencies (activating a venv first is recommended)
- `zarrs_tools`: install `zarrs_tools` (set `CARGO_HOME` to override the installation dir)
- `generate_data`: generate benchmark data
- `benchmark_read_all`: run the read-all benchmark
- `benchmark_read_chunks`: run the chunk-by-chunk read benchmark
- `benchmark_roundtrip`: run the round trip benchmark
- `benchmark_all`: run all benchmarks
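Assuming these are recipes for a task runner such as `just` (adjust the command if the repository drives them differently), a typical first run might look like:

```sh
just pydeps          # install Python dependencies (inside an activated venv)
just zarrs_tools     # install zarrs_tools
just generate_data   # generate the benchmark datasets
just benchmark_all   # run every benchmark
```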
All datasets are uint16 arrays.
| Name | Chunk / Shard Shape | Inner Chunk Shape | Compression | Size |
|---|---|---|---|---|
| Uncompressed | | - | None | 2.00 GB |
| Compressed | | - | `zstd` 0 | 83 MB |
| Compressed + Sharded | | | `zstd` 0 | 439 MB |
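As a rough illustration of how such datasets are laid out, here is a sketch using the zarr-python 3 API; the path and shapes below are made up for the example and are not the benchmark datasets:

```python
import numpy as np
import zarr
from zarr.codecs import ZstdCodec

# Illustrative shapes only; the real benchmark shapes are in the table above.
arr = zarr.create_array(
    store="data/example_sharded.zarr",  # hypothetical path
    shape=(256, 256, 256),
    dtype="uint16",
    shards=(128, 128, 128),  # shard (outer chunk) shape
    chunks=(32, 32, 32),     # inner chunk shape
    compressors=ZstdCodec(level=0),
)
arr[:] = np.random.default_rng(0).integers(0, 2**16, size=arr.shape, dtype="uint16")
```

With sharding, each stored object is a shard holding many independently compressed inner chunks; smaller inner chunks tend to compress less effectively, which is consistent with the sharded dataset being larger than the plain compressed one.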
- Dell 14 Pro Premium
- CPU: Intel(R) Core(TM) Ultra 7 268V (8) @ 5.00 GHz
- Memory: 32GB LPDDR5X 8533 MT/s
- SSD: 2TB EG6 KIOXIA
- OS: Arch Linux (kernel 6.18.2)
This benchmark measures the minimum time and peak memory usage to "round trip" a dataset (potentially chunk-by-chunk).
- The disk cache is cleared between each measurement
- These are best of 5 measurements
Table of raw measurements (benchmarks_roundtrip.md)
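The protocol shared by these benchmarks (cold cache, best of 5, peak memory) can be sketched as follows. This is an illustration of the methodology, not the repository's actual harness, and the cache drop assumes Linux with sudo rights:

```python
import resource
import subprocess
import time

def drop_page_cache() -> None:
    # Flush dirty pages and evict the page cache so reads hit the SSD (Linux, needs root).
    subprocess.run(["sudo", "sh", "-c", "sync; echo 3 > /proc/sys/vm/drop_caches"], check=True)

def best_of(n: int, run) -> float:
    """Best (minimum) wall-clock time over n cold-cache runs."""
    best = float("inf")
    for _ in range(n):
        drop_page_cache()
        start = time.perf_counter()
        run()
        best = min(best, time.perf_counter() - start)
    return best

# Peak resident set size of this process; ru_maxrss is reported in KiB on Linux.
peak_rss_bytes = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
```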
This benchmark measures the minimum time and peak memory usage to read a dataset chunk-by-chunk into memory.
- The disk cache is cleared between each measurement
- These are best of 5 measurements
Table of raw measurements (benchmarks_read_chunks.md)
> [!NOTE]
> zarr-python benchmarks with sharding are not visible in this plot
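A chunk-by-chunk read like the one measured above can be sketched with zarr-python as follows (hypothetical path):

```python
import itertools
import zarr

arr = zarr.open_array("data/benchmark_compressed.zarr", mode="r")  # hypothetical path

# Visit the chunk grid in order and decode one chunk at a time.
for origin in itertools.product(*(range(0, s, c) for s, c in zip(arr.shape, arr.chunks))):
    region = tuple(slice(o, o + c) for o, c in zip(origin, arr.chunks))
    chunk = arr[region]  # decoded into memory, then discarded
```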
This benchmark measures the minimum time and peak memory usage to read a dataset subchunk-by-subchunk into memory. A subchunk is an inner chunk within a sharded chunk.
- The disk cache is cleared between each measurement
- These are best of 5 measurements
Table of raw measurements (benchmarks_read_subchunks.md)
This benchmark measures the minimum time and peak memory usage to read an entire dataset into memory.
- The disk cache is cleared between each measurement
- These are best of 5 measurements
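For the read-all case, the tensorstore equivalent of a whole-array read (`arr[:]` in zarr-python) looks roughly like this, assuming the `zarr3` driver and a hypothetical path:

```python
import tensorstore as ts

dataset = ts.open({
    "driver": "zarr3",
    "kvstore": {"driver": "file", "path": "data/benchmark_compressed.zarr"},  # hypothetical path
}).result()

data = dataset.read().result()  # the entire array, decoded into one numpy buffer
```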