Releases: qdrant/qdrant
v1.14.0
Change log
Features 🍁
- https://github.com/qdrant/qdrant/milestone/23 - Allow server-side score boosting with user-defined formula. See [Docs]
- #6256 - New `sum_scores` recommendation strategy. Useful for implementing relevance feedback. See [Docs]
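The formula-based score boosting above can be sketched as a Query API request body. This is a hedged illustration, not an official example: the collection's vector values, the payload key `popularity`, and the weight `0.2` are hypothetical, and the `formula`/`sum`/`mult`/`$score` syntax follows the feature as described in the linked docs.

```python
# Sketch of a Query API body that re-scores prefetched candidates with a
# user-defined formula: final_score = $score + 0.2 * popularity.
# Vector, payload field, and weight are hypothetical.
formula_query = {
    "prefetch": {
        "query": [0.01, 0.45, 0.67],  # hypothetical query vector
        "limit": 100,                 # candidates to re-score
    },
    "query": {
        "formula": {
            "sum": [
                "$score",                       # original similarity score
                {"mult": [0.2, "popularity"]},  # boost from a payload field
            ]
        }
    },
    "limit": 10,
}
```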
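A minimal sketch of a recommendation request using the new `sum_scores` strategy, where each candidate is scored against every positive and negative example and the scores are summed. The point IDs below are hypothetical; the `recommend` object shape follows the Query API.

```python
# Relevance-feedback style recommendation: liked/disliked example point IDs
# (hypothetical) with the new "sum_scores" strategy.
recommend_query = {
    "query": {
        "recommend": {
            "positive": [42, 7],  # examples the user marked relevant
            "negative": [99],     # example the user marked irrelevant
            "strategy": "sum_scores",
        }
    },
    "limit": 10,
}
```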
Improvements 🌳
- #6325 - Incremental HNSW building. The segment optimizer will partially reuse the existing HNSW graph when merging segments.
- #6323, #6396 - Improve eviction strategy for unused disk cache
- #6303 - Minor optimizations for memory allocations
- #6385 - Minor internal optimizations
- #6357 - Rethink behavior of the `offset` parameter in case of a query with `prefetch`. Now offset is only applied to the prefetch result and is not propagated into the prefetch query.
- #6326 - Parallelize large segment search batches
- #6390 - Better organized `telemetry` detail levels, generating less overhead by excluding segment-level details
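The changed `offset` semantics for prefetch queries can be sketched with a request body. Everything here is hypothetical (vector names, values, limits); the point is that `offset` paginates only the final fused result, while each prefetch keeps its own full `limit`.

```python
# Hybrid query: two prefetches fused with RRF, then paginated.
# Per the new behavior, "offset" skips points only in the fused result;
# it is NOT pushed down into the prefetch queries below.
paged_query = {
    "prefetch": [
        {"query": [0.1, 0.2, 0.3], "limit": 100, "using": "dense"},
        {"query": {"indices": [1, 5], "values": [0.6, 0.4]},
         "limit": 100, "using": "sparse"},
    ],
    "query": {"fusion": "rrf"},
    "limit": 10,
    "offset": 20,  # skip the first 20 fused results only
}
```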
Bug Fixes 🌵
- #6289 - Scroll lock: make sure no segments are modified between scroll and retrieve
- #6297, #6293 - Prevent crash on failed shard recovery. Instead, Qdrant will load a dummy shard, which can be recovered via the API
- #6383 - Fix delayed flush wrapper. Before this fix, payload indexes might temporarily lose unflushed updates after a flush operation is started and before it is finished.
- #6364 - Abort resharding if any resharding replica is to be marked dead
Web-ui 🌾
- qdrant/qdrant-web-ui#272 - Full query auto-completion
v1.13.6
Change log
Improvements
- #6279 - In query API, read vectors/payloads once at shard level instead of in every segment, greatly improving search performance when there are many segments
- #6276 - In query API, don't send huge vectors/payloads over internal network, defer reads to greatly improve search performance
- #6260 - Improve performance of resharding transfers, make them faster on slow disks or with high memory pressure
v1.13.5
Change log
Improvements
- #6015 - Split CPU budget into CPU and IO to better saturate resources during optimization
- #6088 - Enhance payload indices to handle IsEmpty and IsNull conditions much more efficiently
- #6022, #6023 - Optimize ID tracker in immutable segments by compressing point mappings and versions
- #6056 - Apply undersampling at shard level, significantly improve query performance on large deployments with large search limit
- #6040 - Trigger optimizers more reliably on changes, prevent optimizers potentially getting stuck
- #6085 - Significantly improve performance of point delete propagation during resharding on large deployments
- #6021 - Configure memory barriers in GPU HNSW building to prevent potential race conditions
- #6074 - Use approximate point count at start of shard transfer to make them start quicker
- #6165 - Show log message if hardware reporting is enabled
Bug fixes
- #6212 - Fix user-defined sharding not being applied in consensus snapshots, potentially corrupting cluster
- #6209 - Fix malformed user-defined sharding data in consensus snapshots if using numeric shard keys, potentially corrupting cluster
- #6014 - Fix cluster metadata not being in consensus snapshots, potentially causing cluster state desync
- #6210 - Fix resharding state not being applied with consensus snapshots, potentially causing cluster state desync
- #6202 - Fix snapshot restore error when numeric user-defined shard keys are used
- #6086 - Fix potential panic while propagating point deletions during resharding
- #6032, #6069 - Don't load or restore segments from hidden files, prevent breakage on hidden files in storage by other tools
- #6037 - Fix search panic after GPU HNSW building on NVIDIA hardware
- #6029 - Fix write rate limit not being properly set in strict mode
- #6118 - Do not rate limit reads for shard transfers in strict mode, it's internal
- #6121 - Do not rate limit shard cleanup operation in strict mode, it's internal
- #6152 - Properly rate limit batch point updates
- #6038 - Keep existing shard configuration if snapshot restore failed to prevent panic on startup
- #6010 - Use configured CA certificate for internal snapshot transfers
- #6108, #6115, #6160 - Fix opt-in anonymization of various telemetry fields
- #6065 - Don't show warning if bootstrap URI is not provided
v1.13.4
Change log
Improvements
- #5967 - Set maximum number of points in a collection with strict mode
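The new strict mode limit above can be sketched as a collection update body. The field name `max_points_count` is an assumption based on this release note, and the cap of one million points is hypothetical; check the strict mode documentation for the exact schema.

```python
# Sketch of a collection update enabling strict mode with a point-count cap.
# Field name "max_points_count" and the limit value are assumptions.
strict_mode_update = {
    "strict_mode_config": {
        "enabled": True,
        "max_points_count": 1_000_000,  # reject writes beyond this many points
    }
}
```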
v1.13.3
Change log
Improvements
- #5903 - Enable consensus compaction by default, enables fast peer joining and recovery
- #5956, #5962 - Delete old point versions on update, prevent old points showing up in reads
- #5870 - Don't include unversioned points in reads, don't include partially persisted points in searches
- #5871 - Don't include unversioned points in writes, don't use partially persisted points in updates
- #5904, #5916, #5950 - Pass peer/bootstrap URI with environment variables, support simpler cluster setups
- #5728 - Improve consensus loop, prevent excessive Raft elections
- #5946 - Normalize URL paths in REST API
- #5917 - Add HTTP Retry-After header in REST response if rate limiter is exhausted
- #5915 - Simplify locks in RocksDB buffer wrapper, use a single lock rather than two
- #5942 - Add default log format property to configuration file
- #5906 - Update roadmap for 2025
Bug fixes
- #5938 - Fix panic when building of memory mapped sparse vector storage was interrupted
- #5877 - Rate limit prefetches in query API
- #5900, #5905, #5914 - Fix potential panic during consensus compaction
- #5910 - In query API, shortcut on empty retrieve query
- #5908 - Fix flush logic in RocksDB based vector storage, don't eagerly persist changes
v1.13.2
Change log
Improvements
- #5891 - Add support for GPUs not featuring half floats, falling back to full floats
v1.13.1
Change log
Improvements
- #5820 - Improve performance and memory usage of segment merging in optimizers
Bug fixes
- #5848 - Fix potential panic in search after GPU HNSW building
- #5847 - Fix potential panic in GPU HNSW building when having empty payload index
- #5819 - Fix set payload by key on in-memory payload storage not persisting properly
- #5861 - Fix memory mapped sparse vector storage not flushing mappings properly
- #5838 - Fix user-defined sharding not persisting numeric shard keys properly
- #5842 - Do not flush empty memory maps to prevent panic on macOS
- #5843 - Do not flush explicitly when unloading blob storage
- #5845 - Fix potential panic in full text index due to missed bound check
v1.13.0
Change log
Features 🎨
- milestone/18 - Add GPU support for super fast HNSW indexing
- milestone/3 - Add resharding in our cloud offering, change the number of shards at runtime
- milestone/13 - Add strict mode to restrict certain types of operations on collections
- #5303 - Add Has Vector filtering condition, check if a named vector is present on a point
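The Has Vector condition above can be sketched as a filter in a query body. The vector name `"image"` and the query values are hypothetical; the `has_vector` key follows the filter condition introduced in #5303.

```python
# Sketch of a filtered query that only matches points carrying the named
# vector "image" (name is hypothetical).
filter_body = {
    "query": [0.9, 0.1, 0.3],  # hypothetical query vector
    "filter": {
        "must": [
            {"has_vector": "image"}  # point must have this named vector
        ]
    },
    "limit": 10,
}
```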
Improvements 🚀
- #5783 - Switch to mmap storage for payloads by default to make it more efficient, eliminating unexpected latency spikes
- #5784 - Switch to mmap storage for sparse vectors to make it more efficient, allowing better resource management
- #5781 - Compress HNSW graph links
- #5796 - Remove peer metadata for removed peers
- #5634 - Allow setting `max_optimization_threads` back to automatic
- #5178 - Stream snapshots in snapshot transfer, don't put snapshots on disk first
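Resetting `max_optimization_threads` to automatic can be sketched as a collection update body. The literal value `"auto"` is an assumption based on this release note; verify the accepted values in the collection configuration docs.

```python
# Sketch of an optimizer-config update returning thread selection to
# automatic. The "auto" literal is an assumption.
optimizer_update = {
    "optimizers_config": {
        "max_optimization_threads": "auto"
    }
}
```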
Bug fixes 💢
- #5810 - Fix incorrect validation rules on deleted threshold in gRPC API
- #5808 - Don't allow conflicting names for dense and sparse vectors
Thanks to @gulshan-rs @ashwantmanikoth @kartik-gupta-ij @palash25 @redouan-rhazouani @weiwch @n0x29a @pedjak @agourlay @xzfc @JojiiOfficial @tellet-q @coszio @ffuugoo @KShivendu @joein @IvanPleshkov @generall @timvisee for their contributions!
v1.12.6
Change log
Improvements
- #5687 - Support 64-bit dimension indices for sparse vectors
- #5609 - Support issues API with limited API keys
- #5602 - Add support for logging in JSON format
- #5630 - Add web UI to Debian package
Bug fixes
- #5629 - Properly flush files with fsync to prevent storage issues
- #5627 - Atomically save quantization metadata
- #5628 - Atomically save chunked mmap configuration
- #5738 - Fix search panic due to unaligned data when loading old Qdrant snapshot (pre v1.8.2)
- #5643 - Improve collection validation, disallow replication factor 0, which caused a panic
- #5690 - Ignore empty filter conditions rather than showing an error
- #5676 - Enforce TLS for internal node communication if URL is not explicitly provided
v1.12.5
Change log
Improvements
- #5505 - Improve point retrieval across shards by streaming results
- #5521 - Improve point searches across segments by streaming results
- #5514 - Improve facet computing across shards by streaming results
- #5405 - Make `/readyz` catch up to latest consensus commit
- #5506 - Improve handling of non-transient errors, making shard transfers more robust
- #5478 - When peer starts, cancel all shard transfers related to it
- #5536 - Improve error message on sparse vector validation error
- #5546, #5580 - Improve error messages on various structure variants on validation error
- #5540 - Improve point counting on proxy segments by deduplicating point IDs
- #5522 - Expose shards keys in telemetry
- #5591 - Improve payload performance in gRPC API
- #5486 - Remove support for reading HNSW graphs of Qdrant 0.8.4 and older, simplifying behavior
- #5579 - Improve gRPC logging and error handling
- #5493 - Always log all errors when applying update to replica set
Bug fixes
- #5585 - Improve data consistency, fix point deduplication on start potentially removing newest point version
- #5527, #5528, #5531 - Improve data consistency, fix mixing point versions when applying updates, always use latest point
- #5543 - Improve data consistency, fix segment builder mixing point versions during segment optimization
- #5573 - Improve data consistency, fix proxy segment not using correct point versions when propagating point deletes
- #5581 - Improve data consistency, fix proxy segment not using correct versions when propagating payload index changes
- #5553 - Improve data consistency, fix proxy segment not flushing changes if sharing a write segment
- #5557 - Improve data consistency, fix flushing proxied segment acknowledging operations that were not actually persisted
- #5510 - Improve data consistency, fix marking points as deleted not working in all cases
- #5598 - Fix peer getting stuck on start if consensus snapshots are used
- #5484 - Fix panic on huge search limit
- #5600 - Fix prefetch with group-by internally using wrong request limit, significantly reducing search time
- #5488 - Fix incorrect timeout handling in distance matrix API
- #5495 - Fix incorrect timeout handling on remote shards
- #5596 - Fix broken replication factor in gRPC create shard key API
- #5551 - Fix potential division by zero in facets API
- #5593, #5603, #5605, qdrant/qdrant-web-ui#262 - Bump some dependencies to patch potential security vulnerabilities