Add lds and sts inline ptx instructions to force vector instruction generation #273
Conversation
…r usage in all contraction based kernels so that n is along x dir and m is along y dir blocks
…kernels. --add launch config generator function to launch optimal grid size kernel for these pairwise dist kernels
…ed up over previous version. -- improve logic of the grid launch config generator for x-dir blocks
…ced val for pre-volta arch
… for subsequent gridStrideX variations. this overall improves perf of fusedL2NN to 1.85x over previous version. --Also remove checking keys only check values in fusedL2nn test case, as it may happen a row has multiple keys with same min val
…und in launchConfigGenerator. --Use constexpr in shmemSize.
…e sure next grid stride doesn't pollute shmem before completion of this calculation
…t iteration of grid stride
@teju85 to help with review.
@mdoijade did you get a chance to check whether this change causes any perf regressions, especially in the distance prims? (Asking because AFAIK those are the only ones using these methods in their kernels.)
@teju85 yes, I did check the performance; there is no regression in the distance prims due to this patch. Also, can you rerun the tests? It looks like one of the configs hit an intermittent CI issue with the Docker installation.
rerun tests
Changes LGTM
@teju85 can we merge this PR now? BTW, the failure above in "gpuCI/raft/gpu/cuda/11.0/python/3.7/ubuntu16.04" is a CI Docker installation issue.
@dantegd can we get some help getting CI to pass for this PR, please?
rerun tests
@teju85 can we merge this now?
@gpucibot merge
Adds inline PTX assembly for lds & sts instructions for float, float2, float4, double, and double2.
This ensures the compiler doesn't mistakenly generate non-vectorized instructions where we need the vectorized version.
It also ensures we always generate non-generic ld/st instructions, preventing the compiler from emitting generic ld/st.
These functions now require the given shmem pointer to be aligned to the vector length; for example, for float4 lds/sts the shmem pointer must be 16-byte aligned, otherwise the load/store may silently fail or cause a runtime error.
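A minimal sketch of what such wrappers look like (the function names and exact code here are illustrative, not necessarily the PR's implementation; shown for the float4 case only):

```cuda
#include <cuda_runtime.h>

// Vectorized 16-byte shared-memory store: emits st.shared.v4.f32 directly,
// so the compiler cannot fall back to scalar or generic stores.
// `addr` must point into __shared__ memory and be 16-byte aligned.
__device__ inline void sts(float* addr, const float4& x)
{
  // Convert the generic pointer to a shared-state-space address.
  auto s = __cvta_generic_to_shared(addr);
  asm volatile("st.shared.v4.f32 [%0], {%1, %2, %3, %4};"
               :
               : "l"(s), "f"(x.x), "f"(x.y), "f"(x.z), "f"(x.w));
}

// Vectorized 16-byte shared-memory load: emits ld.shared.v4.f32.
// Same alignment requirement applies to `addr`.
__device__ inline void lds(float4& x, float* addr)
{
  auto s = __cvta_generic_to_shared(addr);
  asm volatile("ld.shared.v4.f32 {%0, %1, %2, %3}, [%4];"
               : "=f"(x.x), "=f"(x.y), "=f"(x.z), "=f"(x.w)
               : "l"(s));
}
```

Because the PTX names the `.shared` state space explicitly, these wrappers also guarantee the non-generic form of the instruction, matching the description above; the caller is responsible for the alignment requirement.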