Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
Fast, differentiable sorting and ranking in PyTorch
Row-major matmul optimization
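Row-major layout determines which accesses coalesce: when consecutive threads in a warp handle consecutive output columns, their loads from B and stores to C hit contiguous addresses. A minimal sketch of that thread-to-element mapping (illustrative only, not the repository's code; M×K and K×N float matrices are assumed):

```cuda
// Naive row-major matmul: threadIdx.x walks columns, so neighbouring threads
// read neighbouring elements of B and write neighbouring elements of C
// (coalesced global memory accesses).
__global__ void matmul_row_major(const float *A, const float *B, float *C,
                                 int M, int N, int K) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k) {
            acc += A[row * K + k] * B[k * N + col];  // row-major indexing
        }
        C[row * N + col] = acc;
    }
}
```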
A performance comparison of standard matrix functions on CPU and GPU, written in C++ with Nvidia CUDA in Visual Studio
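Comparisons like this typically time the CPU path with std::chrono and the GPU path with CUDA events, since events capture kernel time without host-side overhead. A self-contained sketch under those assumptions, using an element-wise square as a stand-in for the compared matrix functions (not code from the repository above):

```cuda
#include <chrono>
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder "matrix function": element-wise square of n values.
__global__ void square_kernel(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

int main() {
    const int n = 1 << 20;
    float *h_in = new float[n], *h_out = new float[n];
    for (int i = 0; i < n; ++i) h_in[i] = static_cast<float>(i);

    // CPU timing with std::chrono.
    auto t0 = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < n; ++i) h_out[i] = h_in[i] * h_in[i];
    auto t1 = std::chrono::high_resolution_clock::now();
    double cpu_ms = std::chrono::duration<double, std::milli>(t1 - t0).count();

    // GPU timing with CUDA events (kernel time only, transfers excluded).
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    square_kernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float gpu_ms = 0.0f;
    cudaEventElapsedTime(&gpu_ms, start, stop);

    printf("CPU: %.3f ms, GPU kernel: %.3f ms\n", cpu_ms, gpu_ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_in);
    cudaFree(d_out);
    delete[] h_in;
    delete[] h_out;
    return 0;
}
```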
A custom CUDA kernel for windowed matrix multiplication
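One common reading of "windowed" is a banded output, as in sliding-window attention: C[i][j] is only needed where |i - j| <= w. The sketch below assumes that interpretation and is illustrative only; the repository may define its window differently.

```cuda
// Banded matmul sketch: only output elements within a window of half-width w
// around the diagonal are computed, everything else is left untouched.
// A is N x K, B is K x N, C is N x N.
__global__ void windowed_matmul(const float *A, const float *B, float *C,
                                int N, int K, int w) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int d = row - col;
    if (d < 0) d = -d;               // |row - col|
    if (row < N && col < N && d <= w) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}
```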
SNU CSE Scalable High Performance Computing (M1522.006700) - 2023 Autumn
Winning submission for StartHack 2024: HPC optimized multi-GPU/CPU inference
A beginner's guide to CUDA programming
Snippet repository for learning parallel GPU programming with CUDA.
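For reference, the example nearly every beginner's guide starts from is element-wise vector addition, where each thread computes one output element. A generic, self-contained version (not taken from either repository above):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Classic first CUDA kernel: one thread per element, with a bounds guard.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard against overrun
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float h_a[1024], h_b[1024], h_c[1024];
    for (int i = 0; i < n; ++i) { h_a[i] = i; h_b[i] = 2.0f * i; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    vector_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[42] = %.1f\n", h_c[42]);  // expect 126.0
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```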