Dask Installation¶
How to Install Dask¶
You can install Dask with conda, with pip, or install from source.
If you use the Anaconda distribution, Dask will be installed by default.
You can also install or upgrade Dask using the conda install command:
conda install dask
This installs Dask and all common dependencies, including pandas and NumPy.
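To confirm that the installation worked, a quick check (not part of the install command itself) is to import Dask and print its version:
python -c "import dask; print(dask.__version__)"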
Dask packages are maintained both on the defaults channel and on conda-forge. You can select the channel with the -c flag:
conda install dask -c conda-forge
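If you would rather keep Dask out of your base environment, the same package can be installed into a fresh conda environment; the environment name dask-env below is only an example:
conda create -n dask-env -c conda-forge dask
conda activate dask-env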
Optionally, you can obtain a minimal Dask installation using the following command:
conda install dask-core
This will install a minimal set of dependencies required to run Dask, similar to (but not exactly the same as) python -m pip install dask.
To install Dask with pip, run the following:
python -m pip install "dask[complete]" # Install everything
This installs Dask, the distributed scheduler, and common dependencies like pandas, NumPy, and others.
You can also install only the Dask library and no optional dependencies:
python -m pip install dask # Install only core parts of dask
Dask modules like dask.array, dask.dataframe, or dask.distributed won't work until you also install NumPy, pandas, or Tornado, respectively. This is uncommon for users but more common for downstream library maintainers.
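As a sketch of what this looks like in practice (the exact error wording may vary between versions), importing a collection without its dependencies raises an ImportError pointing at the missing packages, and installing the matching extra resolves it:
python -c "import dask.dataframe as dd"   # fails with an ImportError if pandas is missing
python -m pip install "dask[dataframe]"   # install pandas and the other dataframe dependencies
python -c "import dask.dataframe as dd"   # now succeeds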
We also maintain other dependency sets for different subsets of functionality:
python -m pip install "dask[array]" # Install requirements for dask array
python -m pip install "dask[dataframe]" # Install requirements for dask dataframe
python -m pip install "dask[diagnostics]" # Install requirements for dask diagnostics
python -m pip install "dask[distributed]" # Install requirements for distributed dask
We have these options so that users of the lightweight core Dask scheduler aren't required to download the more exotic dependencies of the collections (NumPy, pandas, Tornado, etc.).
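These extras can also be combined in a single command using standard pip syntax:
python -m pip install "dask[array,dataframe]"  # Install requirements for both dask array and dask dataframe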
To install Dask from source, clone the repository from GitHub:
git clone https://github.com/dask/dask.git
cd dask
python -m pip install .
You can also install all dependencies:
python -m pip install ".[complete]"
You can view the list of all dependencies within the project.optional-dependencies field of pyproject.toml.
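If Dask is already installed, one way to list the available extras from Python (a small sketch using only the standard library, not a command from the Dask docs) is to read the package metadata:
python -c "from importlib.metadata import metadata; print(metadata('dask').get_all('Provides-Extra'))"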
Or do a developer install by using the -e flag (see the Install section in the Development Guidelines):
python -m pip install -e .
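An editable install can be combined with an extras set in the same way, using standard pip syntax:
python -m pip install -e ".[complete]"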
Distributed Deployment¶
To run Dask on a distributed cluster you will also want to install the Dask cluster manager that matches your resource manager, like Kubernetes, SLURM, PBS, LSF, AWS, GCP, Azure, or similar technology.
Read more on this topic in the Deploy Documentation.
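For a single machine none of these cluster managers are needed; with dask[distributed] (or the distributed conda package) installed, a minimal sketch of starting a local cluster looks like this:
python -c "from dask.distributed import Client; client = Client(); print(client.dashboard_link)"
The Client() call with no arguments starts a scheduler and workers on the local machine, and dashboard_link gives the address of the diagnostic dashboard.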
Optional dependencies¶
Specific functionality in Dask may require additional optional dependencies. For example, reading from Amazon S3 requires s3fs. These optional dependencies and their minimum supported versions are listed below.
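Optional dependencies are installed next to Dask like any other package; for example, for the s3fs case mentioned above:
python -m pip install s3fs   # or: conda install s3fs -c conda-forge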
Dependency | Version | Description
---|---|---
bokeh | | Generate profiles of Dask execution (required for …)
 | | Used for dask arrays
cachey | | Use caching for computation
cityhash | | Use CityHash and FarmHash hash functions for array hashing (~2x faster than MurmurHash)
 | | Use …
cytoolz | | Faster cythonized implementation of internal iterators, functions, and dictionaries
 | | Required for …
dask-ml | | Common machine learning functions scaled with Dask
fastavro | | Storing and reading data from Apache Avro files
gcsfs | | Storing and reading data located in Google Cloud Storage
graphviz | | Graph visualization using the graphviz engine
h5py | | Storing array data in hdf5 files
ipycytoscape | | Graph visualization using the cytoscape engine
 | | Write graph visualizations made with graphviz engine to file
jinja2 | | HTML representations of Dask objects in Jupyter notebooks (required for …)
lz4 | | Transparent use of lz4 compression algorithm
matplotlib | | Color map support for graph visualization
mimesis | | Random bag data generation with …
mmh3 | | Use MurmurHash hash functions for array hashing (~8x faster than SHA1)
numpy | | Required for dask.array
pandas | | Required for dask.dataframe
psutil | | Factor CPU affinity into CPU count, intelligently infer blocksize when reading CSV files
pyarrow | | Support for Apache Arrow datatypes & engine when storing/reading Apache ORC or Parquet files
python-snappy | | Snappy compression to be used when storing/reading Avro or Parquet files
s3fs | | Storing and reading data located in Amazon S3
scipy | | Required for dask.array.stats
sparse | | Use sparse arrays as backend for dask arrays
sqlalchemy | | Writing and reading from SQL databases
tblib | | Serialization of worker traceback objects
tiledb | | Storing and reading data from TileDB files
xxhash | | Use xxHash hash functions for array hashing (~2x faster than MurmurHash, slightly slower than CityHash)
zarr | | Storing and reading data from Zarr files
Test¶
Test Dask with py.test:
cd dask
py.test dask
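Note that pytest itself is not installed by a plain python -m pip install dask; if the py.test command is missing, install it first:
python -m pip install pytest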
Installing Dask naively may not install all requirements by default (see the pip section above). You may choose to install the dask[complete] version, which includes all dependencies for all collections:
pip install "dask[complete]"
Alternatively, you may choose to test only certain submodules depending on the libraries within your environment. For example, to test only Dask core and Dask array we would run tests as follows:
py.test dask/tests dask/array/tests
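The usual pytest selection options also apply; for example, -k filters tests by name (the keyword below is only an example), and installing the pytest-xdist plugin lets the suite run across several processes:
py.test dask/array/tests -k "slicing"   # run only array tests whose names match "slicing"
python -m pip install pytest-xdist
py.test dask -n 4                       # run the suite on four worker processes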
See the section on testing in the Development Guidelines for more details.