
Improve performance of select-top-k WARP_SORT implementation #606

Merged: 13 commits merged into rapidsai:branch-22.06 on May 16, 2022

Conversation

@achirkin (Contributor) commented Apr 1, 2022

A few simplifications and tricks to improve the performance of the kernel:

  • Promote some constants to static constexpr
  • Allow capacity < WarpSize (this and the previous point are sketched below)
  • Reduce the frequency of sort operations for filtered version
  • Remove warp_sort::load to simplify the api and implementation
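
A minimal illustrative stand-in for the first two bullets (the class and member names below are hypothetical, not the actual raft warp_sort code): a warp-width constant promoted to static constexpr so it folds at compile time, and a Capacity template parameter that may be smaller than the warp size because the per-thread buffer length rounds up to at least one element.

```cpp
#include <cstdio>

// Hypothetical sketch, not the raft implementation.
template <int Capacity, typename T>
struct warp_sort_sketch {
  // Promoted to a compile-time constant instead of a runtime value.
  static constexpr int kWarpWidth = 32;
  // Per-thread buffer length: ceil(Capacity / kWarpWidth), but never zero,
  // so Capacity < kWarpWidth (e.g. Capacity = 8 for small k) is valid.
  static constexpr int kMaxArrLen = (Capacity + kWarpWidth - 1) / kWarpWidth;

  T val[kMaxArrLen];
};

int main() {
  // Capacity below the warp size: every thread keeps exactly one slot.
  warp_sort_sketch<8, float> small_queue;
  std::printf("per-thread slots: %d\n",
              static_cast<int>(sizeof(small_queue.val) / sizeof(float)));
  return 0;
}
```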

@github-actions github-actions bot added the cpp label Apr 1, 2022
@achirkin achirkin added the enhancement, non-breaking, 2 - In Progress, and improvement labels and removed the cpp and enhancement labels Apr 1, 2022
@github-actions github-actions bot added the cpp label Apr 4, 2022
@achirkin achirkin force-pushed the enh-knn-topk-optimization branch from a45a2f1 to dba9c57 on April 4, 2022 06:55
@achirkin achirkin marked this pull request as ready for review April 6, 2022 09:27
@achirkin achirkin requested a review from a team as a code owner April 6, 2022 09:27
@achirkin (Contributor, Author) commented Apr 6, 2022

NB from previous discussions: I considered merging done() and store(...) together, but decided against that for two reasons:

  1. At the moment, store(...) is rather fast and can be used to store the content multiple times after done() has been called once (see the sketch after this list). That looks like useful behavior.
  2. I couldn't come up with a clean solution without virtual functions, and I'm afraid introducing those would incur extra overhead.
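
A hypothetical usage sketch of point 1 (the queue_sketch class is a stand-in, not the raft warp_sort API): done() runs the expensive final sort exactly once, after which store(...) is a cheap copy that can be repeated for several output buffers.

```cpp
#include <vector>

// Hypothetical stand-in illustrating the done()/store() split discussed above.
struct queue_sketch {
  void add(float /*v*/) { /* merge the value into the per-thread buffer */ }
  void done() { /* the expensive final sort across the warp, run once */ }
  void store(float* /*out*/) const { /* plain copy of the already-sorted result */ }
};

int main() {
  std::vector<float> in{5.f, 1.f, 3.f}, out_a(3), out_b(3);
  queue_sketch q;
  for (float v : in) q.add(v);
  q.done();               // the costly part happens exactly once
  q.store(out_a.data());  // ...while store(...) stays cheap and repeatable
  q.store(out_b.data());
  return 0;
}
```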

@achirkin achirkin requested a review from tfeher April 6, 2022 09:32
@achirkin achirkin force-pushed the enh-knn-topk-optimization branch from 631ed31 to 42470d5 on April 13, 2022 11:57
@achirkin (Contributor, Author) commented

NB: more tests are coming in #618

@achirkin achirkin removed the 2 - In Progress label May 5, 2022
@tfeher (Contributor) left a comment

Hi Artem, thanks for the PR! It looks good, I just found a few small things.

Question: is the following statement true? With these modifications, capacity < warpsize is allowed. Since the capacity is chosen automatically according to the k value, the existing tests cover these changes.

Please add to the PR description that the warp_sort::load functions are removed to simplify the API.

@achirkin (Contributor, Author) commented

Question: is the following statement true? With these modifications, capacity < warpsize is allowed. Since the capacity is chosen automatically according to the k value, the existing tests cover these changes.

Yes, that is true. Also, a few more tests are coming in #618 (which expands the API a little by allowing nullptr to be passed as the input indices).
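
To make the statement above concrete, a hedged sketch of how a capacity could be derived from k (the calc_capacity helper is an assumption for illustration, not the actual raft selection logic): rounding k up to the next power of two means small k values map to a capacity below the warp size, so the existing tests exercise the new code path.

```cpp
#include <cstdio>

// Hypothetical helper: round k up to the next power of two.
constexpr int calc_capacity(int k) {
  int capacity = 1;
  while (capacity < k) { capacity *= 2; }
  return capacity;
}

int main() {
  for (int k : {3, 16, 70}) {
    // k = 3 and k = 16 already yield capacities at or below a warp size of 32.
    std::printf("k = %2d -> capacity = %d\n", k, calc_capacity(k));
  }
  return 0;
}
```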

@achirkin achirkin requested a review from tfeher May 16, 2022 14:34
@tfeher (Contributor) left a comment

Thanks Artem for addressing the issues! The PR looks good to me.

@cjnolet (Member) commented May 16, 2022

@gpucibot merge

@rapids-bot rapids-bot bot merged commit a1ace03 into rapidsai:branch-22.06 May 16, 2022
Labels
3 - Ready for Review, cpp, improvement, non-breaking
Projects
None yet

3 participants