A caching API built around the concept of a "dogpile lock", which allows continued access to an expiring data value while a single thread generates a new value.
dogpile.cache builds on the dogpile.core locking system, which implements the idea of "allow one creator to write while others read" in the abstract. Overall, dogpile.cache is intended as a replacement for the Beaker caching system, whose internals were written by the same author. All the ideas of Beaker that "work" are re-implemented in dogpile.cache in a more efficient and succinct manner, and all the cruft (Beaker's internals were first written in 2005) is relegated to the trash heap.
- A succinct API which encourages up-front configuration of pre-defined "regions", each one defining a set of caching characteristics including storage backend, configuration options, and default expiration time.
- Both a standard get/set/delete API and a function decorator API are provided.
- The mechanics of key generation are fully customizable. The function decorator API features a pluggable "key generator" to customize how cache keys are made to correspond to function calls, and an optional "key mangler" feature provides for pluggable mangling of keys (such as encoding, SHA-1 hashing) as desired for each region (a sketch follows this list).
- The dogpile lock, first developed as the core engine behind the Beaker caching system, is here vastly simplified, improved, and better tested. Some key performance issues that were intrinsic to Beaker's architecture, particularly that values would frequently be "double-fetched" from the cache, have been fixed.
- Backends implement their own version of a "distributed" lock, where the "distribution" matches the backend's storage system. For example, the memcached backends allow all clients to coordinate creation of values using memcached itself. The dbm file backend uses a lockfile alongside the dbm file. New backends, such as a Redis-based backend, can provide their own locking mechanism appropriate to the storage engine.
- Writing new backends or hacking on the existing backends is intended to be routine - all that's needed are basic get/set/delete methods. A distributed lock tailored towards the backend is an optional addition; otherwise dogpile.cache uses a regular thread mutex. New backends can be registered with dogpile.cache directly or made available via setuptools entry points (a sketch follows this list).
- Included backends feature three memcached backends (python-memcached, pylibmc, bmemcached), a Redis backend, a backend based on Python's anydbm, and a plain dictionary backend.
- Space for third-party plugins, including one that provides the dogpile.cache engine to Mako templates.
- Python 3 compatible in place - no 2to3 required.
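As a rough sketch of the key customization hooks described in the list above, a region might install the SHA-1 key mangler shipped in dogpile.cache.util, or supply a custom key generator. The memory backend and the my_key_generator helper here are illustrative only:

    from dogpile.cache import make_region
    from dogpile.cache.util import sha1_mangle_key

    # Every key sent to this region is SHA-1 hashed before it reaches the
    # backend, keeping keys short and free of unsafe characters.
    mangled_region = make_region(key_mangler=sha1_mangle_key).configure(
        'dogpile.cache.memory'
    )

    # A custom key generator controls how the decorator turns function
    # arguments into cache keys.
    def my_key_generator(namespace, fn, **kw):
        prefix = (namespace or "") + fn.__name__
        def generate_key(*args):
            return prefix + "|" + "|".join(str(a) for a in args)
        return generate_key

    keyed_region = make_region(
        function_key_generator=my_key_generator
    ).configure('dogpile.cache.memory')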
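A complete backend is similarly small. The sketch below stores values in a plain dictionary and registers itself directly with dogpile.cache; the mypackage.backend module path and backend name are hypothetical, and since no distributed lock is supplied, a regular thread mutex is used:

    from dogpile.cache.api import CacheBackend, NO_VALUE
    from dogpile.cache.region import register_backend

    class DictBackend(CacheBackend):
        """Illustrative backend keeping values in a local dict."""

        def __init__(self, arguments):
            # "arguments" is the dict passed to configure(arguments=...)
            self._cache = {}

        def get(self, key):
            # NO_VALUE is the canonical "cache miss" token
            return self._cache.get(key, NO_VALUE)

        def set(self, key, value):
            self._cache[key] = value

        def delete(self, key):
            self._cache.pop(key, None)

    # Direct registration; a setuptools entry point under the
    # "dogpile.cache" group would accomplish the same thing.
    register_backend("mypackage.dict", "mypackage.backend", "DictBackend")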
dogpile.cache features a single public usage object known as the CacheRegion. This object then refers to a particular CacheBackend. Typical usage generates a region using make_region(), which can then be used at the module level to decorate functions, or used directly in code with a traditional get/set interface. Configuration of the backend is applied to the region using configure() or configure_from_config(), allowing deferred config-file based configuration to occur after modules have been imported:
    from dogpile.cache import make_region

    region = make_region().configure(
        'dogpile.cache.pylibmc',
        expiration_time=3600,
        arguments={
            'url': ["127.0.0.1"],
            'binary': True,
            'behaviors': {"tcp_nodelay": True, "ketama": True}
        }
    )

    @region.cache_on_arguments()
    def load_user_info(user_id):
        return some_database.lookup_user_by_id(user_id)
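Deferred, config-file driven setup via configure_from_config(), together with the traditional get/set interface, might look roughly like the following; the "cache.local." prefix, settings values, and dbm filename are illustrative:

    from dogpile.cache import make_region
    from dogpile.cache.api import NO_VALUE

    # Settings would typically come from an ini/config file parsed at
    # application startup.
    settings = {
        "cache.local.backend": "dogpile.cache.dbm",
        "cache.local.expiration_time": "3600",
        "cache.local.arguments.filename": "/tmp/app_cache.dbm",
    }

    region = make_region()
    region.configure_from_config(settings, "cache.local.")

    # The traditional get/set interface, usable alongside the decorator.
    region.set("user_name_1", "some value")
    value = region.get("user_name_1")      # returns NO_VALUE if missing
    value = region.get_or_create(
        "user_name_2", lambda: "computed value"
    )
    region.delete("user_name_1")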
See the dogpile.cache documentation for full details.