GPU Computing For Data Science - John Joo

GPUs are well-suited for parallelizable tasks like Monte Carlo simulations, matrix operations, and deep learning. They have thousands of cores optimized for floating-point operations. While CPUs are better for sequential tasks, GPUs can accelerate data science workflows by offloading compute-intensive parts of the code. Programming frameworks like CUDA, OpenCL, and libraries in Python and R make it easier to leverage GPUs for applications like numerical analysis, machine learning, and financial modeling.


GPU Computing for Data Science

John Joo
[email protected]
Data Science Evangelist @ Domino Data Lab
Outline
• Why use GPUs?

• Example applications in data science

• Programming your GPU


Case Study: Monte Carlo Simulations
• Simulate behavior when randomness is a key component
• Average the results of many simulations
• Make predictions
Little Information in One “Noisy Simulation”
Price(t+1) = Price(t) · e^(InterestRate · dt) + noise

Many “Noisy Simulations” ➡ Actionable Information
Price(t+1) = Price(t) · e^(InterestRate · dt) + noise
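
As a rough illustration, the sketch below simulates many such noisy price paths with plain NumPy and averages the results; the interest rate, time step, noise scale, starting price, and path count are illustrative assumptions, not values from the deck.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not from the deck)
n_paths, n_steps = 10_000, 100
rate, dt, noise_scale, p0 = 0.05, 1.0 / 100, 0.01, 100.0

rng = np.random.default_rng(0)
price = np.full(n_paths, p0)

# Price(t+1) = Price(t) * e^(InterestRate * dt) + noise, applied to all paths at once
for _ in range(n_steps):
    price = price * np.exp(rate * dt) + noise_scale * rng.standard_normal(n_paths)

# One noisy path carries little information; the average over many paths is actionable
print("mean simulated price:", price.mean())
```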
Monte Carlo Simulations Are Often Slow
• Lots of simulation data is required to create valid models
• Generating lots of data takes time
• The CPU works sequentially

CPUs designed for sequential, complex tasks

Source: Mythbusters https://youtu.be/-P28LKWTzrI


GPUs designed for parallel, low-level tasks

Source: Mythbusters https://youtu.be/-P28LKWTzrI




Applications of GPU Computing in Data Science
• Matrix manipulation
• Numerical analysis
• Sorting
• FFT
• String matching
• Monte Carlo simulations
• Machine learning
• Search

Algorithms for GPU Acceleration
• Inherently parallel
• Matrix operations
• High floating-point operations per second (FLOPS)
GPUs Make Deep Learning Accessible

                     Google Datacenter    Stanford AI Lab
# of machines        1,000                3
# of CPUs or GPUs    2,000 CPUs           12 GPUs
Cores                16,000               18,432
Power used           600 kW               4 kW
Cost                 $5,000,000           $33,000

Adam Coates, Brody Huval, Tao Wang, David Wu, Bryan Catanzaro, Andrew Ng; JMLR W&CP 28 (3): 1337-1345, 2013
CPU vs GPU Architecture: Structured for Different Purposes

CPU: 4-8 high-performance cores
GPU: 100s-1,000s of bare-bones cores

Both CPU and GPU Are Required

The CPU handles everything else; the GPU handles the compute-intensive functions.
This model is called General Purpose GPU Computing (GPGPU), or Heterogeneous Computing.
Getting Started: Hardware
• Need a computer with a GPU
• The GPU should not be operating your display

Spin up a GPU/CPU computer with 1 click:
8 CPU cores, 15 GB RAM
1,536 GPU cores, 4 GB RAM
Getting Started: Software

Programming the CPU
• Sequential
• Write code top to bottom
• Can do complex tasks
• Independent

Programming the GPU
• Parallel
• Multi-threaded (race conditions)
• Low-level tasks
• Dependent on the CPU

Talking to your GPU

CUDA and OpenCL are GPU computing frameworks


Choosing How to Interface with the GPU: Simplicity vs Flexibility
• Application-specific libraries: highest simplicity, lowest flexibility
• General-purpose GPU libraries: middle ground on both
• Custom CUDA/OpenCL code: lowest simplicity, highest flexibility
Application-Specific Libraries

Python
• Theano - symbolic math (a minimal sketch follows this list)
• TensorFlow - ML
• Lasagne - NN
• Pylearn2 - ML
• mxnet - NN
• ABSsysbio - systems biology

R
• cudaBayesreg - fMRI
• mxnet - NN
• rpud - SVM
• rgpu - bioinformatics

Tutorial on using Theano, Lasagne, and nolearn:
http://blog.dominodatalab.com/gpu-computing-and-deep-learning/
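
As a minimal illustration of the symbolic-math style these libraries use, the sketch below defines a matrix product in Theano and compiles it into a callable function; at the time of this deck, setting THEANO_FLAGS=device=gpu would route it to the GPU. The matrix sizes here are arbitrary.

```python
import numpy as np
import theano
import theano.tensor as T

# Symbolic variables: no data yet, just a computation graph
x = T.matrix("x")
y = T.matrix("y")
z = T.dot(x, y)                      # symbolic matrix product

# Compile the graph; with THEANO_FLAGS=device=gpu it targets the GPU
matmul = theano.function([x, y], z)

a = np.random.randn(512, 512).astype(theano.config.floatX)
b = np.random.randn(512, 512).astype(theano.config.floatX)
print(matmul(a, b).shape)            # (512, 512)
```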
General Purpose GPU Libraries

• Python and R wrappers for basic matrix and linear algebra operations (a scikit-cuda example follows this list)
• scikit-cuda
• cudamat
• gputools
• HiPLARM
• Drop-in library
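
A minimal sketch of the general-purpose style, using scikit-cuda with PyCUDA to multiply two matrices on the GPU; the matrix sizes are arbitrary, and this assumes a working CUDA install with CUBLAS available.

```python
import numpy as np
import pycuda.autoinit               # initializes a CUDA context
import pycuda.gpuarray as gpuarray
import skcuda.linalg as linalg

linalg.init()                        # initialize the CUBLAS handles

a = np.random.randn(1024, 1024).astype(np.float32)
b = np.random.randn(1024, 1024).astype(np.float32)

# Move data to the GPU, multiply there, and copy the result back
a_gpu = gpuarray.to_gpu(a)
b_gpu = gpuarray.to_gpu(b)
c_gpu = linalg.dot(a_gpu, b_gpu)
c = c_gpu.get()

print(np.allclose(c, a @ b, atol=1e-2))
```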
Drop-in Library

NVIDIA's NVBLAS is a drop-in replacement for a CPU BLAS library, and it also works for Python:
http://scelementary.com/2015/04/09/nvidia-nvblas-in-numpy.html
Credit: NVIDIA
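
A rough sketch of what the drop-in approach looks like from Python, assuming NVBLAS is installed; the script itself is ordinary NumPy, and preloading the NVBLAS shared library (path below is illustrative) is one commonly documented way to intercept the BLAS calls.

```python
# Plain NumPy code; nothing GPU-specific in the script itself.
# Assumed invocation (library path is illustrative):
#   LD_PRELOAD=/usr/local/cuda/lib64/libnvblas.so python matmul.py
import numpy as np

a = np.random.randn(4096, 4096)
b = np.random.randn(4096, 4096)

# The underlying dgemm BLAS call is what NVBLAS intercepts and routes to the GPU
c = a.dot(b)
print(c.shape)
```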
Custom CUDA/OpenCL Code
1. Allocate memory on the GPU
2. Transfer data from CPU to GPU
3. Launch the kernel to operate on the GPU cores
4. Transfer results back to the CPU
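
A minimal PyCUDA sketch of these four steps, using a toy kernel that doubles each element of an array; the kernel and array size are illustrative, not from the deck.

```python
import numpy as np
import pycuda.autoinit                 # set up a CUDA context
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

a = np.random.randn(400).astype(np.float32)

# 1. Allocate memory on the GPU
a_gpu = cuda.mem_alloc(a.nbytes)

# 2. Transfer data from CPU to GPU
cuda.memcpy_htod(a_gpu, a)

# 3. Launch the kernel to operate on the GPU cores
mod = SourceModule("""
__global__ void double_it(float *a)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    a[idx] *= 2.0f;
}
""")
double_it = mod.get_function("double_it")
double_it(a_gpu, block=(400, 1, 1), grid=(1, 1))

# 4. Transfer results back to the CPU
result = np.empty_like(a)
cuda.memcpy_dtoh(result, a_gpu)
print(np.allclose(result, 2 * a))
```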


Example of Using Python and CUDA: Monte Carlo Simulations

• Using PyCUDA to interface Python and CUDA
• Simulating 3 million paths, 100 time steps each
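
The original slides show the presenter's CPU and PyCUDA implementations as screenshots, which are not reproduced in this text. As a stand-in, here is a rough sketch of how the GPU version might look using PyCUDA's gpuarray and curandom modules; the interest rate, noise scale, and starting price are assumptions, while the path and step counts follow the deck.

```python
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda import curandom

n_paths, n_steps = 3_000_000, 100          # as in the deck
rate, dt = 0.05, 1.0 / 100                 # assumed interest rate and time step
noise_scale, p0 = 0.01, 100.0              # assumed noise level and starting price

price = gpuarray.to_gpu(np.full(n_paths, p0, dtype=np.float32))
rng = curandom.XORWOWRandomNumberGenerator()
growth = float(np.exp(rate * dt))

# Price(t+1) = Price(t) * e^(rate * dt) + noise, for all 3 million paths in parallel
for _ in range(n_steps):
    noise = rng.gen_normal(n_paths, np.float32)
    price = price * growth + noise_scale * noise

final = price.get()                        # transfer results back to the CPU
print("mean final price:", final.mean())
```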
Python Code for CPU vs Python/PyCUDA Code for GPU

(Side-by-side code screenshots in the original slides; the GPU version needs 8 more lines of code.)

The extra lines are the four steps above:
1. Allocate memory on the GPU
2. Transfer data from CPU to GPU
3. Launch the kernel to operate on the GPU cores
4. Transfer results back to the CPU

Runtime: 26 sec on the CPU vs 1.5 sec on the GPU, a 17x speedup
Some sample Jupyter notebooks
• https://app.dominodatalab.com/johnjoo/gpu_examples
• Monte Carlo example using PyCUDA
• PyCUDA example compiling CUDA C for kernel instructions
• scikit-cuda example of matrix multiplication
• Calculating a distance matrix using rpud

More resources
• NVIDIA: https://developer.nvidia.com/how-to-cuda-python
• Berkeley GPU workshop: http://www.stat.berkeley.edu/scf/paciorek-gpuWorkshop.html
• Duke Statistics on GPU (Python): http://people.duke.edu/~ccc14/sta-663/CUDAPython.html
• Andreas Klockner's webpage (Python): http://mathema.tician.de/
• Summary of GPU libraries: http://fastml.com/running-things-on-a-gpu/

More resources
• Walk-through of CUDA programming in R: http://blog.revolutionanalytics.com/2015/01/parallel-programming-with-gpus-and-r.html
• List of libraries for GPU computing in R: https://cran.r-project.org/web/views/HighPerformanceComputing.html
• Matrix computations in machine learning: http://numml.kyb.tuebingen.mpg.de/numl09/talk_dhillon.pdf
Questions?
[email protected]

blog.dominodatalab.com