matrix::select_k: move selection and warp-sort primitives #1085

Merged
42 commits merged on Jan 23, 2023

Changes from all commits (42 commits)
39c10a9
Make warp-level bitonic sort public
achirkin Dec 9, 2022
6cda736
Move spatial::*::select_topk to matrix::select_k
achirkin Dec 9, 2022
c5631bf
Fix includes style
achirkin Dec 9, 2022
fb88433
Use cmake-format
achirkin Dec 9, 2022
f64325b
Refactored warpsort module and made tests for all implementations in …
achirkin Dec 13, 2022
20d01d7
Resort to UVM when radix buffers are too big
achirkin Dec 14, 2022
4813bae
Adjust the dummy_block_sort_t to the changes in the warpsort impl
achirkin Dec 14, 2022
6cdb79a
Fix incorrect include
achirkin Dec 14, 2022
870fc86
Add benchmarks
achirkin Dec 14, 2022
2af45bf
Update CMakeLists.txt style
achirkin Dec 14, 2022
5b336ee
Update CMakeLists.txt style
achirkin Dec 14, 2022
b3e5d9c
Add mdspanified interface
achirkin Dec 15, 2022
164157b
Remove benchmarks for the legacy interface
achirkin Dec 15, 2022
69c81dd
Remove a TODO comment about a seemingly resolved bug
achirkin Dec 15, 2022
d64b12b
Merge remote-tracking branch 'rapidsai/branch-23.02' into enh-matrix-…
achirkin Dec 15, 2022
9d4476a
Fix the changed include extension
achirkin Dec 15, 2022
3e40435
Fix includes in tests
achirkin Dec 16, 2022
e20578e
Merge remote-tracking branch 'rapidsai/branch-23.02' into enh-matrix-…
achirkin Dec 16, 2022
b2c79f5
Merge branch 'branch-23.02' into enh-matrix-topk
achirkin Dec 20, 2022
98e2c2a
Address comments: bitonic_sort
achirkin Dec 20, 2022
af4c146
Replace stream argument with handle_t
achirkin Dec 20, 2022
471828e
rename files to select.* -> select_k.*
achirkin Dec 20, 2022
f6ff223
Use raft macros
achirkin Dec 20, 2022
066208d
Try to pass null and non-null arguments to select_k
achirkin Dec 20, 2022
aeaa1ef
Remove raw-pointer api from the public namespace
achirkin Dec 20, 2022
685b6bf
Updates public docs (add example usage)
achirkin Dec 21, 2022
5c42209
Merge remote-tracking branch 'rapidsai/branch-23.02' into enh-matrix-…
achirkin Jan 9, 2023
2cea50d
Add device_mem_resource
achirkin Jan 9, 2023
a31e61e
Add Doxygen docs
achirkin Jan 10, 2023
a8c5a70
Merge remote-tracking branch 'rapidsai/branch-23.02' into enh-matrix-…
achirkin Jan 10, 2023
8a5978b
Revert the memory_resource param changes in the detail namespace to a…
achirkin Jan 10, 2023
8e58cab
Merge remote-tracking branch 'rapidsai/branch-23.02' into enh-matrix-…
achirkin Jan 11, 2023
a01a75f
Remove device_mem_resource
achirkin Jan 11, 2023
c6256b7
Merge branch 'branch-23.02' into enh-matrix-topk
achirkin Jan 16, 2023
c25e859
Merge branch 'branch-23.02' into enh-matrix-topk
cjnolet Jan 19, 2023
6e56106
Merge branch 'branch-23.02' into enh-matrix-topk
achirkin Jan 20, 2023
c78d9b0
Reference a TODO issue
achirkin Jan 20, 2023
a55a6cb
Merge branch 'enh-matrix-topk' of github.com:achirkin/raft into enh-m…
achirkin Jan 20, 2023
307b113
Deprecation notice
achirkin Jan 20, 2023
c0ce160
Add [in] annotation to all arguments
achirkin Jan 20, 2023
e2cc7ad
Merge branch 'branch-23.02' into enh-matrix-topk
achirkin Jan 23, 2023
dc3043c
Merge branch 'branch-23.02' into enh-matrix-topk
cjnolet Jan 23, 2023
6 changes: 4 additions & 2 deletions cpp/bench/CMakeLists.txt
@@ -103,7 +103,10 @@ if(BUILD_BENCH)
bench/main.cpp
)

ConfigureBench(NAME MATRIX_BENCH PATH bench/matrix/argmin.cu bench/matrix/gather.cu bench/main.cpp)
ConfigureBench(
NAME MATRIX_BENCH PATH bench/matrix/argmin.cu bench/matrix/gather.cu bench/matrix/select_k.cu
bench/main.cpp
)

ConfigureBench(
NAME RANDOM_BENCH PATH bench/random/make_blobs.cu bench/random/permute.cu bench/random/rng.cu
@@ -127,7 +130,6 @@ if(BUILD_BENCH)
bench/neighbors/knn/ivf_pq_int8_t_int64_t.cu
bench/neighbors/knn/ivf_pq_uint8_t_uint32_t.cu
bench/neighbors/refine.cu
bench/neighbors/selection.cu
bench/main.cpp
OPTIONAL
DIST
133 changes: 133 additions & 0 deletions cpp/bench/matrix/select_k.cu
@@ -0,0 +1,133 @@
/*
* Copyright (c) 2022-2023, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

/**
* TODO: reconsider how to organize shared test+bench files better
* Related Issue: https://github.com/rapidsai/raft/issues/1153
* (although this header does not depend on any gtest headers)
*/
#include "../../test/matrix/select_k.cuh"

#include <common/benchmark.hpp>

#include <raft/core/handle.hpp>
#include <raft/random/rng.cuh>
#include <raft/sparse/detail/utils.h>
#include <raft/util/cudart_utils.hpp>

#include <raft/matrix/detail/select_radix.cuh>
#include <raft/matrix/detail/select_warpsort.cuh>
#include <raft/matrix/select_k.cuh>

#include <rmm/device_uvector.hpp>
#include <rmm/mr/device/per_device_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

namespace raft::matrix {

using namespace raft::bench; // NOLINT

template <typename KeyT, typename IdxT, select::Algo Algo>
struct selection : public fixture {
explicit selection(const select::params& p)
: params_(p),
in_dists_(p.batch_size * p.len, stream),
in_ids_(p.batch_size * p.len, stream),
out_dists_(p.batch_size * p.k, stream),
out_ids_(p.batch_size * p.k, stream)
{
raft::sparse::iota_fill(in_ids_.data(), IdxT(p.batch_size), IdxT(p.len), stream);
raft::random::RngState state{42};
raft::random::uniform(handle, state, in_dists_.data(), in_dists_.size(), KeyT(-1.0), KeyT(1.0));
}

void run_benchmark(::benchmark::State& state) override // NOLINT
{
handle_t handle{stream};
using_pool_memory_res res;
try {
std::ostringstream label_stream;
label_stream << params_.batch_size << "#" << params_.len << "#" << params_.k;
state.SetLabel(label_stream.str());
loop_on_state(state, [this, &handle]() {
select::select_k_impl<KeyT, IdxT>(handle,
Algo,
in_dists_.data(),
in_ids_.data(),
params_.batch_size,
params_.len,
params_.k,
out_dists_.data(),
out_ids_.data(),
params_.select_min);
});
} catch (raft::exception& e) {
state.SkipWithError(e.what());
}
}

private:
const select::params params_;
rmm::device_uvector<KeyT> in_dists_, out_dists_;
rmm::device_uvector<IdxT> in_ids_, out_ids_;
};

const std::vector<select::params> kInputs{
{20000, 500, 1, true}, {20000, 500, 2, true}, {20000, 500, 4, true},
{20000, 500, 8, true}, {20000, 500, 16, true}, {20000, 500, 32, true},
{20000, 500, 64, true}, {20000, 500, 128, true}, {20000, 500, 256, true},

{1000, 10000, 1, true}, {1000, 10000, 2, true}, {1000, 10000, 4, true},
{1000, 10000, 8, true}, {1000, 10000, 16, true}, {1000, 10000, 32, true},
{1000, 10000, 64, true}, {1000, 10000, 128, true}, {1000, 10000, 256, true},

{100, 100000, 1, true}, {100, 100000, 2, true}, {100, 100000, 4, true},
{100, 100000, 8, true}, {100, 100000, 16, true}, {100, 100000, 32, true},
{100, 100000, 64, true}, {100, 100000, 128, true}, {100, 100000, 256, true},

{10, 1000000, 1, true}, {10, 1000000, 2, true}, {10, 1000000, 4, true},
{10, 1000000, 8, true}, {10, 1000000, 16, true}, {10, 1000000, 32, true},
{10, 1000000, 64, true}, {10, 1000000, 128, true}, {10, 1000000, 256, true},
};

#define SELECTION_REGISTER(KeyT, IdxT, A) \
namespace BENCHMARK_PRIVATE_NAME(selection) \
{ \
using SelectK = selection<KeyT, IdxT, select::Algo::A>; \
RAFT_BENCH_REGISTER(SelectK, #KeyT "/" #IdxT "/" #A, kInputs); \
}

SELECTION_REGISTER(float, int, kPublicApi); // NOLINT
SELECTION_REGISTER(float, int, kRadix8bits); // NOLINT
SELECTION_REGISTER(float, int, kRadix11bits); // NOLINT
SELECTION_REGISTER(float, int, kWarpAuto); // NOLINT
SELECTION_REGISTER(float, int, kWarpImmediate); // NOLINT
SELECTION_REGISTER(float, int, kWarpFiltered); // NOLINT
SELECTION_REGISTER(float, int, kWarpDistributed); // NOLINT
SELECTION_REGISTER(float, int, kWarpDistributedShm); // NOLINT

SELECTION_REGISTER(double, int, kRadix8bits); // NOLINT
SELECTION_REGISTER(double, int, kRadix11bits); // NOLINT
SELECTION_REGISTER(double, int, kWarpAuto); // NOLINT

SELECTION_REGISTER(double, size_t, kRadix8bits); // NOLINT
SELECTION_REGISTER(double, size_t, kRadix11bits); // NOLINT
SELECTION_REGISTER(double, size_t, kWarpImmediate); // NOLINT
SELECTION_REGISTER(double, size_t, kWarpFiltered); // NOLINT
SELECTION_REGISTER(double, size_t, kWarpDistributed); // NOLINT
SELECTION_REGISTER(double, size_t, kWarpDistributedShm); // NOLINT

} // namespace raft::matrix
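
Each SELECTION_REGISTER invocation instantiates the selection fixture above for one key type, index type, and selection algorithm, and registers it against the shared kInputs grid; each kInputs entry presumably bundles {batch_size, len, k, select_min}, matching the "batch_size#len#k" label printed by the fixture. As a hedged sketch of what a single invocation produces (one macro level only; BENCHMARK_PRIVATE_NAME and RAFT_BENCH_REGISTER come from <common/benchmark.hpp> and are left unexpanded, and the namespace name is a placeholder), SELECTION_REGISTER(float, int, kRadix8bits) expands roughly to:

// Approximate one-level expansion of SELECTION_REGISTER(float, int, kRadix8bits).
// BENCHMARK_PRIVATE_NAME(selection) yields a unique namespace name; a placeholder is used here.
namespace selection_registered  // placeholder for BENCHMARK_PRIVATE_NAME(selection)
{
  using SelectK = selection<float, int, select::Algo::kRadix8bits>;
  // The registered benchmark name becomes "float/int/kRadix8bits", run over all kInputs cases.
  RAFT_BENCH_REGISTER(SelectK, "float" "/" "int" "/" "kRadix8bits", kInputs);
}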
123 changes: 0 additions & 123 deletions cpp/bench/neighbors/selection.cu

This file was deleted.

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2022, NVIDIA CORPORATION.
* Copyright (c) 2022-2023, NVIDIA CORPORATION.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
@@ -16,76 +16,76 @@

#pragma once

#include "topk/radix_topk.cuh"
#include "topk/warpsort_topk.cuh"
#include "select_radix.cuh"
#include "select_warpsort.cuh"

#include <raft/core/nvtx.hpp>

#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/device_memory_resource.hpp>

namespace raft::spatial::knn::detail {
namespace raft::matrix::detail {

/**
* Select k smallest or largest key/values from each row in the input data.
*
* If you think of the input data `in_keys` as a row-major matrix with len columns and
* batch_size rows, then this function selects k smallest/largest values in each row and fills
* in the row-major matrix `out` of size (batch_size, k).
* If you think of the input data `in_val` as a row-major matrix with `len` columns and
* `batch_size` rows, then this function selects `k` smallest/largest values in each row and fills
* in the row-major matrix `out_val` of size (batch_size, k).
*
* @tparam T
* the type of the keys (what is being compared).
* @tparam IdxT
* the index type (what is being selected together with the keys).
*
* @param[in] in
* @param[in] in_val
* contiguous device array of inputs of size (len * batch_size);
* these are compared and selected.
* @param[in] in_idx
* contiguous device array of inputs of size (len * batch_size);
* typically, these are indices of the corresponding in_keys.
* typically, these are indices of the corresponding in_val.
* @param batch_size
* number of input rows, i.e. the batch size.
* @param len
length of a single input array (row); also sometimes referred to as n_cols.
* Invariant: len >= k.
* @param k
* the number of outputs to select in each input row.
* @param[out] out
* @param[out] out_val
* contiguous device array of outputs of size (k * batch_size);
* the k smallest/largest values from each row of the `in_keys`.
* the k smallest/largest values from each row of the `in_val`.
* @param[out] out_idx
* contiguous device array of outputs of size (k * batch_size);
* the payload selected together with `out`.
* the payload selected together with `out_val`.
* @param select_min
* whether to select k smallest (true) or largest (false) keys.
* @param stream
* @param mr an optional memory resource to use across the calls (you can provide a large enough
* memory pool here to avoid memory allocations within the call).
*/
template <typename T, typename IdxT>
void select_topk(const T* in,
const IdxT* in_idx,
size_t batch_size,
size_t len,
int k,
T* out,
IdxT* out_idx,
bool select_min,
rmm::cuda_stream_view stream,
rmm::mr::device_memory_resource* mr = nullptr)
void select_k(const T* in_val,
const IdxT* in_idx,
size_t batch_size,
size_t len,
int k,
T* out_val,
IdxT* out_idx,
bool select_min,
rmm::cuda_stream_view stream,
rmm::mr::device_memory_resource* mr = nullptr)
{
common::nvtx::range<common::nvtx::domain::raft> fun_scope(
"matrix::select_topk(batch_size = %zu, len = %zu, k = %d)", batch_size, len, k);
"matrix::select_k(batch_size = %zu, len = %zu, k = %d)", batch_size, len, k);
// TODO (achirkin): investigate the trade-off for a wider variety of inputs.
const bool radix_faster = batch_size >= 64 && len >= 102400 && k >= 128;
if (k <= raft::spatial::knn::detail::topk::kMaxCapacity && !radix_faster) {
topk::warp_sort_topk<T, IdxT>(
in, in_idx, batch_size, len, k, out, out_idx, select_min, stream, mr);
if (k <= select::warpsort::kMaxCapacity && !radix_faster) {
select::warpsort::select_k<T, IdxT>(
in_val, in_idx, batch_size, len, k, out_val, out_idx, select_min, stream, mr);
} else {
topk::radix_topk<T, IdxT, (sizeof(T) >= 4 ? 11 : 8), 512>(
in, in_idx, batch_size, len, k, out, out_idx, select_min, stream, mr);
select::radix::select_k<T, IdxT, (sizeof(T) >= 4 ? 11 : 8), 512>(
in_val, in_idx, batch_size, len, k, out_val, out_idx, select_min, stream, mr);
}
}

} // namespace raft::spatial::knn::detail
} // namespace raft::matrix::detail
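
This moved header keeps the raw-pointer interface of the old spatial::knn::detail::select_topk, now named select_k in raft::matrix::detail: for k up to select::warpsort::kMaxCapacity it calls select::warpsort::select_k (unless the batch_size/len/k heuristic above deems radix faster), otherwise select::radix::select_k. A minimal usage sketch follows; the include path and the float/int instantiation are assumptions for illustration, and the mdspan-based raft::matrix::select_k added elsewhere in this PR is the intended public entry point.

// Hedged sketch: select the k smallest values per row with the detail-level raw-pointer API.
// The header path below is an assumption; this diff does not show the file name.
#include <raft/matrix/detail/select_k.cuh>

#include <rmm/cuda_stream_view.hpp>
#include <rmm/device_uvector.hpp>

void select_smallest_example(rmm::cuda_stream_view stream)
{
  const size_t batch_size = 100;   // number of input rows
  const size_t len        = 1000;  // values per row; the invariant len >= k must hold
  const int k             = 8;     // number of outputs per row

  rmm::device_uvector<float> in_val(batch_size * len, stream);  // values being compared
  rmm::device_uvector<int> in_idx(batch_size * len, stream);    // payload indices
  rmm::device_uvector<float> out_val(batch_size * k, stream);   // k smallest values per row
  rmm::device_uvector<int> out_idx(batch_size * k, stream);     // indices of the selected values
  // ... fill in_val / in_idx on the device ...

  raft::matrix::detail::select_k<float, int>(in_val.data(),
                                             in_idx.data(),
                                             batch_size,
                                             len,
                                             k,
                                             out_val.data(),
                                             out_idx.data(),
                                             /*select_min=*/true,
                                             stream);  // the optional memory_resource argument defaults to nullptr
}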