Installation

RAFT currently provides libraries for C++ and Python. The C++ libraries, including the header-only and optional shared library, can be installed with Conda.

Both the C++ and Python APIs require CMake to build from source.

Installing C++ and Python through Conda

The easiest way to install RAFT is through conda, and several packages are provided:

  • libraft-headers - C++ headers
  • libraft - C++ shared library of pre-compiled template instantiations
  • pylibraft - (optional) Python library
  • raft-dask - (optional) Python library for deploying multi-node multi-GPU algorithms that use the RAFT raft::comms abstraction layer in Dask clusters

Use the following command, depending on your CUDA version, to install all of the RAFT packages with conda (replace rapidsai with rapidsai-nightly to install more up-to-date but less stable nightly packages). mamba is preferred over the conda command.

# for CUDA 11.8
mamba install -c rapidsai -c conda-forge -c nvidia raft-dask pylibraft cuda-version=11.8
# for CUDA 12.0
mamba install -c rapidsai -c conda-forge -c nvidia raft-dask pylibraft cuda-version=12.0

Note that the above commands will also install libraft-headers and libraft.

You can also install the conda packages individually using the mamba command above. For example, if you'd like to install RAFT's headers to use in your project:

# for CUDA 12.0
mamba install -c rapidsai -c conda-forge -c nvidia libraft-headers cuda-version=12.0

Installing Python through Pip

pylibraft and raft-dask both have packages that can be installed through pip.

For CUDA 11 packages:

pip install pylibraft-cu11 --extra-index-url=https://pypi.nvidia.com
pip install raft-dask-cu11 --extra-index-url=https://pypi.nvidia.com

And CUDA 12 packages:

pip install pylibraft-cu12 --extra-index-url=https://pypi.nvidia.com
pip install raft-dask-cu12 --extra-index-url=https://pypi.nvidia.com
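
A quick way to verify the installation is to import the packages from Python; the __version__ attribute is assumed to follow standard RAPIDS packaging and is shown here only as an illustrative check:

# confirm the packages import and report their versions
python -c "import pylibraft; print(pylibraft.__version__)"
python -c "import raft_dask; print(raft_dask.__version__)"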

Building C++ and Python from source

CUDA/GPU Requirements

  • cmake 3.26.4+
  • GCC 9.3+ (9.5.0+ recommended)
  • CUDA Toolkit 11.8+
  • NVIDIA driver 450.80.02+
  • Volta architecture or better (compute capability >= 7.0)
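
If you are unsure whether your system meets these requirements, the toolchain versions can be checked directly (these commands are only a convenience check and are not part of the RAFT build):

# build toolchain versions
cmake --version
gcc --version
# CUDA toolkit (nvcc) and NVIDIA driver
nvcc --version
nvidia-smi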

Build Dependencies

In addition to the libraries included with cudatoolkit 11.8+, there are some other dependencies below for building RAFT from source. Many of the dependencies are optional and depend only on the primitives being used. All of these can be installed with cmake or rapids-cpm and many of them can be installed with conda.

Required

  • RMM - RAPIDS Memory Manager
  • CCCL - CUDA C++ Core Libraries (Thrust, CUB, libcudacxx)
  • CUTLASS - needed by some parts of the RAFT API (see the note in the header-only section below)

Optional

  • NCCL - Used in raft::comms API and needed to build raft-dask.
  • UCX - Used in raft::comms API and needed to build raft-dask.
  • Googletest - Needed to build tests
  • Googlebench - Needed to build benchmarks
  • Doxygen - Needed to build docs

Conda environment scripts

Conda environment scripts are provided for installing the necessary dependencies to build both the C++ and Python libraries from source. It is preferred to use mamba, as it provides significant speedup over conda:

mamba env create --name rapids_raft -f conda/environments/all_cuda-125_arch-x86_64.yaml
mamba activate rapids_raft

All of RAFT's C++ APIs can be used header-only.

The process for building from source with CUDA 11 differs slightly in that your host system will also need a CUDA toolkit installed which is greater than, or equal to, the version you install into your conda environment. Installing the CUDA toolkit on your host system is necessary because nvcc is not provided with Conda's cudatoolkit dependencies for CUDA 11. The following example will create a CUDA 11.8 conda environment and install its dependencies:

mamba env create --name rapids_raft -f conda/environments/all_cuda-118_arch-x86_64.yaml
mamba activate rapids_raft

The recommended way to build and install RAFT from source is to use the build.sh script in the root of the repository. This script can build both the C++ and Python artifacts and provides CMake options for building and installing the headers, tests, benchmarks, and the pre-compiled shared library.

Header-only C++

build.sh uses rapids-cmake, which will automatically download any dependencies which are not already installed. It's important to note that while all the headers will be installed and available, some parts of the RAFT API depend on libraries like CUTLASS, which will need to be explicitly enabled in build.sh.

The following example will download the needed dependencies and install the RAFT headers into $INSTALL_PREFIX/include/raft.

./build.sh libraft

The -n flag can be passed to just have the build download the needed dependencies. Since RAFT's C++ headers are primarily used during build-time in downstream projects, the dependencies will never be installed by the RAFT build.

./build.sh libraft -n

Once installed, libraft headers (and dependencies which were downloaded and installed using rapids-cmake) can be uninstalled also using build.sh:

./build.sh libraft --uninstall

C++ Shared Library (optional)

A shared library must be built in order to build pylibraft. Pass the --compile-lib flag to build.sh to build the library:

./build.sh libraft --compile-lib

In the above example the shared library is installed by default into $INSTALL_PREFIX/lib. To disable this, pass the -n flag.

Once installed, the shared library, headers (and any dependencies downloaded and installed via rapids-cmake) can be uninstalled using build.sh:

./build.sh libraft --uninstall

ccache and sccache

ccache and sccache can be used to better cache parts of the build when rebuilding frequently, such as when working on a new feature. You can also use ccache or sccache with build.sh:

./build.sh libraft --cache-tool=ccache
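
The same flag accepts sccache, assuming it is available on your PATH:

# use sccache instead of ccache
./build.sh libraft --cache-tool=sccache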

C++ Tests

Compile the tests using the tests target in build.sh.

./build.sh libraft tests

The tests are broken apart by algorithm category, so you will find several binaries in cpp/build/ named *_TEST.
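
To see which test binaries were produced, you can list them directly (a convenience check, not a build.sh command):

# list the per-category test binaries produced by the build
ls cpp/build/*_TEST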

For example, to run the matrix tests:

./cpp/build/MATRIX_TEST

It can take some time to compile all of the tests. You can build individual tests by providing a semicolon-separated list to the --limit-tests option in build.sh:

./build.sh libraft tests -n --limit-tests="CORE_TEST;MATRIX_TEST"

C++ Primitives Microbenchmarks

Compile the benchmarks using the bench-prims target in build.sh:

./build.sh libraft bench-prims

The benchmarks are broken apart by algorithm category, so you will find several binaries in cpp/build/ named *_PRIMS_BENCH.

It can take some time to compile all of the benchmarks. You can build individual benchmarks by providing a semicolon-separated list to the --limit-bench-prims option in build.sh:

./build.sh libraft bench-prims -n --limit-bench-prims="NEIGHBORS_PRIMS_BENCH;MATRIX_PRIMS_BENCH;LINALG_PRIMS_BENCH"

Python libraries

The Python libraries can be built and installed using the build.sh script:

# to build pylibraft
./build.sh libraft pylibraft --compile-lib
# to build raft-dask (depends on pylibraft)
./build.sh libraft pylibraft raft-dask --compile-lib

setup.py can also be used to build the Python libraries manually:

cd python/raft-dask
python setup.py build_ext --inplace
python setup.py install

cd python/pylibraft
python setup.py build_ext --inplace
python setup.py install

The Python tests are installed along with the corresponding libraries. To run them:

cd python/raft-dask
py.test -s -v

cd python/pylibraft
py.test -s -v
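
To run only a subset of the tests, pytest's standard -k keyword filter can be used; the expression below is just a placeholder:

# run only tests whose names match the keyword expression
py.test -s -v -k "<keyword_expression>"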

The Python packages can also be uninstalled using the build.sh script:

./build.sh pylibraft raft-dask --uninstall

Using CMake directly

When building RAFT from source, the build.sh script offers a nice wrapper around the cmake commands to ease the burdens of manually configuring the various available cmake options. When more fine-grained control over the CMake configuration is desired, the cmake command can be invoked directly as the below example demonstrates.

CMAKE_INSTALL_PREFIX controls where RAFT is installed. The example below installs RAFT into the current Conda environment:

cd cpp
mkdir build
cd build
cmake -D BUILD_TESTS=ON -DRAFT_COMPILE_LIBRARY=ON -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX ../
make -j<parallel_level> install
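
If you prefer to drive the build through CMake rather than calling make directly, the equivalent command (run from the same build directory) is:

# equivalent to `make -j<parallel_level> install`
cmake --build . -j <parallel_level> --target install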

RAFT's CMake has the following configurable flags available:

| Flag | Possible Values | Default Value | Behavior |
| --- | --- | --- | --- |
| BUILD_TESTS | ON, OFF | ON | Compile Googletests |
| BUILD_PRIMS_BENCH | ON, OFF | OFF | Compile benchmarks |
| CUDA_ENABLE_KERNELINFO | ON, OFF | OFF | Enables kernelinfo in nvcc. This is useful for compute-sanitizer |
| CUDA_ENABLE_LINEINFO | ON, OFF | OFF | Enable the -lineinfo option for nvcc |
| CUDA_STATIC_RUNTIME | ON, OFF | OFF | Statically link the CUDA runtime |
| CUDA_STATIC_MATH_LIBRARIES | ON, OFF | OFF | Statically link the CUDA math libraries |
| DETECT_CONDA_ENV | ON, OFF | ON | Enable detection of conda environment for dependencies |
| raft_FIND_COMPONENTS | compiled distributed | | Configures the optional components as a space-separated list |
| RAFT_COMPILE_LIBRARY | ON, OFF | ON if either BUILD_TESTS or BUILD_PRIMS_BENCH is ON; otherwise OFF | Compiles all libraft shared libraries (these are required for Googletests) |
| RAFT_ENABLE_CUBLAS_DEPENDENCY | ON, OFF | ON | Link against cublas library in raft::raft |
| RAFT_ENABLE_CUSOLVER_DEPENDENCY | ON, OFF | ON | Link against cusolver library in raft::raft |
| RAFT_ENABLE_CUSPARSE_DEPENDENCY | ON, OFF | ON | Link against cusparse library in raft::raft |
| RAFT_ENABLE_CURAND_DEPENDENCY | ON, OFF | ON | Link against curand library in raft::raft |
| RAFT_NVTX | ON, OFF | OFF | Enable NVTX Markers |

Build documentation

The documentation requires that the C++ and Python libraries have been built and installed. The following will build the docs along with the C++ and Python packages:

./build.sh libraft pylibraft raft-dask docs --compile-lib

Using RAFT C++ in downstream projects

There are a few different strategies for including RAFT in downstream projects, depending on whether the required build dependencies have already been installed and are available on the lib and include search paths.

When using the GPU parts of RAFT, you will need to enable CUDA support in your CMake project declaration:

project(YOUR_PROJECT VERSION 0.1 LANGUAGES CXX CUDA)

Note that some additional compiler flags might need to be added when building against RAFT. For example, if you see an error such as "The experimental flag '--expt-relaxed-constexpr' can be used to allow this", the necessary flags can be set with CMake:

target_compile_options(your_target_name PRIVATE $<$<COMPILE_LANGUAGE:CUDA>:--expt-extended-lambda --expt-relaxed-constexpr>)

Further, it's important that the language level be set to at least C++17. This can be done with CMake:

set_target_properties(your_target_name
PROPERTIES CXX_STANDARD                        17
           CXX_STANDARD_REQUIRED               ON
           CUDA_STANDARD                       17
           CUDA_STANDARD_REQUIRED              ON
           POSITION_INDEPENDENT_CODE           ON
           INTERFACE_POSITION_INDEPENDENT_CODE ON)

CMake Targets

The raft::raft CMake target is made available when including RAFT into your CMake project. Additional CMake targets can be requested through the COMPONENTS option of CMake's find_package(raft) (refer to the CMake docs to learn more); the components should be separated by spaces. The raft::raft target is always available. Note that the distributed component also exports additional dependencies. A minimal usage sketch follows the table below.

| Component | Target | Description | Base Dependencies |
| --- | --- | --- | --- |
| n/a | raft::raft | Full RAFT header library | CUDA toolkit, RMM, NVTX, CCCL, CUTLASS |
| compiled | raft::compiled | Pre-compiled template instantiations and runtime library | raft::raft |
| distributed | raft::distributed | Dependencies for raft::comms APIs | raft::raft, UCX, NCCL |
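
As an illustration of how these targets are typically consumed, here is a minimal sketch of a downstream CMakeLists.txt; the target name your_target_name, the source file main.cu, and the choice of the compiled component are placeholders for this example, not part of RAFT itself:

# locate an installed RAFT and request the optional pre-compiled component
find_package(raft REQUIRED COMPONENTS compiled)

add_executable(your_target_name main.cu)

# raft::raft is always available; raft::compiled is only present because
# the "compiled" component was requested above
target_link_libraries(your_target_name PRIVATE raft::raft raft::compiled)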