Tutorial: CUDA programming in Python with numba and cupy
- Published 11. 06. 2024
- Using the GPU can substantially speed up all kinds of numerical problems. Conventional wisdom dictates that for fast numerics you need to be a C/C++ whiz. It turns out that you can get quite far with only Python. In this video, I explain how you can use cupy together with numba to perform calculations on NVIDIA GPUs. Production quality is not the best, but I hope you may find it useful.
00:00 Introduction: GPU programming in python, why?
06:52 Cupy intro
08:39 Cupy demonstration in Google colab
19:54 Cupy summary
20:21 Numba.cuda and kernels intro
25:07 Grids, blocks and threads
27:12 Matrix multiplication kernel
29:20 Tiled matrix multiplication kernel and shared memory
34:31 Numba.cuda demonstration in Google colab
44:25 Final remarks
Edit 3/9/2021: the notebook used for the demonstration can be found here: colab.research.google.com/dri...
Edit 9/9/2021: at 23:56 one of the grid elements should be labeled 1,3 instead of 1,2. Thanks to _______ for pointing this out.
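As a taste of what the video demonstrates, here is a minimal sketch of the cupy drop-in idea (numpy is used as a stand-in so the snippet runs anywhere; on a machine with a CUDA GPU, swapping the import for cupy runs the same code on the device — array sizes and names are illustrative):

```python
import numpy as xp  # on a machine with an NVIDIA GPU: import cupy as xp

# cupy mirrors the numpy API, so the same array code runs on the GPU unchanged.
a = xp.linspace(0.0, 1.0, 1_000_000, dtype=xp.float32)
b = xp.linspace(1.0, 0.0, 1_000_000, dtype=xp.float32)

c = xp.sqrt(a**2 + b**2)   # element-wise operations become GPU kernels under cupy
total = float(c.sum())     # a reduction; under cupy this runs on the device
```

The point the video makes is exactly this API parity: most numpy code ports to cupy by changing the import, with the usual caveat that data lives on the GPU and host-device transfers cost time.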
I have been looking into GPU programming using numba and Python for a while; this seems to be the best tutorial I have been able to find so far. Thank you!
Thank you so much. Probably the best introduction to CUDA with Python. The example you use, while very basic, touches on usage of blocks, which is usually omitted in other introduction-level tutorials. Great stuff! Hope you return with some more videos. I have subscribed!
Cuda is bullshit closed source. Just wait for Tenstorrent, it's gonna be HUGE.
Definitely a lot of new material not seen elsewhere - not a run-of-the-mill video. Great job on originality.
This reminds me a lot of the mindset you need to program in assembly.
Really great introduction to GPU programming. I hope you make a new one soon.
wanted to comment that the information in this presentation is very well structured and the flow is excellent.
Thanks man!
Thank you so much, it is the best explanation I found. Please keep going and give us more information and examples on this.
Thank you so very much. This is the exact kind of material I was looking for on this very specific subject. Kudos.
Really nice video, thank you for sharing!
this was such an excellent video, thank you so much!
Thanks a lot! Still the best guide I could find.
Wait, I thought this was made by some popular channel, it's done pretty well... and then I saw: 29 subscribers.
You would be surprised what PowerPoint can do. To be honest, I don't enjoy making videos that much: it's a lot of work, it always turns out kind of shit (especially the audio and webcam quality), and I get nothing in return. But when I encounter a really niche topic that I struggled with myself and can't find many resources for, I figure I'll make one myself, hopefully such that it may be useful to someone else.
@@nickcorn93 "you would be surprised what powerpoint can do." Not only PowerPoint))))))
Just what I needed! Thanks!
Great video, nick!
Thank you so much sir, you are an amazing human being !
This is a great video!
Thank you so much. Keep up the hard work. Just hoping that more and more libraries in python will support GPU computations soon.
Really learnt a lot here, thanks!💪
thank you. good video!!! it was very helpful
Thank you, this is gold
Thanks a lot, this really got me started.
Very helpful, thank you.
Excellent explanation, keep going with this content man ;)
great tut ! thanks
Thanks for the video, I found the first half and the wrap up really excellent.
Would love to see a video on a few CUDA programming challenges.
fantastic video.
This is really helpful for my computing. Thank you.
Thanks for sharing INFO
thank you! super helpful
Perfect video! It was revealing to me to understand how it works. Thank you! I am a new subscriber of your channel. Regards from Buenos Aires, Argentina.
This was really good. Thanks for posting this!
Great intro for me. Waiting for my new GPU (likely 4060 Ti) for me to dig deeper into Python, CUDA, deep learning ...
Thank you very much
Great video
Very educational. One thing I've missed: is the matmul function running on the CPU or the GPU?
Thank you so much
Thank you for this tutorial, it has been very helpful! But since it is only an introduction could anyone tell me what I should watch or read next on this topic? Thanks in advance for the advice!
VERY helpful, thank you!!!!
Great tutorial, Nick! One minor critique: your pronunciation of ‘array’ was confusing…a more standard pronunciation is “uh-RAY”.
What about if you want to develop a library for neural networks?
A highly specialized library
Very good...
You say ARRay, I say arRAY. Let's call the whole thing off. But seriously, good stuff.
I kept thinking, "huh? what is he talking about?? Oh, he meant an ARRay!" lol
Other than that, awesome vid!
Interesting, so I've basically been pronouncing array incorrectly my whole life. Will try to watch out for that in the future.
@@nickcorn93 I've heard other people saying it your way too.
@@nickcorn93 it was very distracting. Work on it: google it and use the pronunciation feature.
Otherwise outstanding and very useful tutorial.
I am unable to install cupyx from pip, any help?
Can you do a tutorial series on how to accelerate things using cuda python?
I've thought about it but it's a lot of work to make and edit a silly video like this, and at the moment I really don't have the time. I don't get anything for making these videos.
hi, I have a program that I want to translate to numba. could you help me?
- what should the program do?
- who is the program for?
- what is it currently written in?
Thanks for the video, there isn't much information about this topic, sorry for my English.
is it only me or the cooling fan going brrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr.
Hi, I'm trying this on my local computer, but cannot install cupy. I have an NVIDIA GeForce RTX 3060. EDIT: Installed the CUDA 11.6 toolkit and it works now.
What is your OS? You may be having issues if you are using windows and pip. Easiest to install cupy in a conda virtual environment, as it will also install the cuda toolkit.
@@nickcorn93 Sorry to bother you, the problem was not installing the CUDA Toolkit. Seriously, I hate people who don't watch the full video closely and ask stupid questions... and now I'm one of them :D. Thanks a lot for this tutorial; in 2 months I will try to write my own GPU operator for my program, it would be interesting to see if it's faster than the CPU. (Btw, using normal VS Code with a Python 3.10 env on Win 11, so far so good. Although I have some code output delay problem when using OpenCV for some strange reason.)
Wait. At 12:10, the narrator says the timeit magic function reports a duration of 5 ms, but the number is only 0.01 ms away from 6 ms. The number is far closer to 6 than to 5. It should be 6 ms if he's rounding, not 5 ms. He's truncating the decimals to arrive at an integer.
Congratulations, you have invalidated the entire video by spotting this massive mistake ;) !
@@nickcorn93 🆗.
all these tutorials using light mode while I learn at night... I'm gonna go blind :X
Cupy does not install well through the use of pip
typically it is easier via conda yes.
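For reference, a typical conda-based install looks like this (the environment name is illustrative; the conda-forge cupy package pulls in compatible CUDA libraries, which is why it tends to be easier than pip on Windows):

```shell
# Create an isolated environment with cupy from conda-forge;
# this also installs the CUDA runtime libraries cupy needs.
conda create -n gpu-env -c conda-forge python=3.11 cupy
conda activate gpu-env

# Sanity check: should print the number of visible CUDA devices.
python -c "import cupy; print(cupy.cuda.runtime.getDeviceCount())"
```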
GPUs aren't general purpose... sigh... They are really good at executing the same operation on many banks of data in parallel. It just happens that graphics and machine learning have a similar type of need.
Isn't that what I say in this video? Did you even watch it?
Approximate arbitrary function? There are caveats.
Something is seriously off with your fast matmul implementation: it's 3 orders of magnitude slower than the built-in method (12.5 ms vs 8.82 µs)?
You probably have some host-device copying going on?
The matmul example shown is the example from the numba documentation, so I don't think it's wrong. It's (relatively) slow because matrix multiplication is so common that the available implementations are insanely optimized. You won't write a matrix multiplication with numba that's faster than cupy's. But if you have something custom to do, a custom kernel can be faster than a combination of cupy operations.
There is a Python OpenCL package (pyopencl):
import numpy
import pyopencl
import pyopencl.array
from pyopencl.reduction import ReductionKernel

ctx = pyopencl.create_some_context()
queue = pyopencl.CommandQueue(ctx)
a = pyopencl.array.arange(queue, 400, dtype=numpy.float32)
b = pyopencl.array.arange(queue, 400, dtype=numpy.float32)
krnl = ReductionKernel(ctx, numpy.float32, neutral="0",
                       reduce_expr="a+b", map_expr="x[i]*y[i]",
                       arguments="__global float *x, __global float *y")
my_dot_prod = krnl(a, b).get()  # dot product computed on the device
🙂 The benefit is that it works on ALL GPUs, not only NVIDIA (it works on Intel integrated GPUs and on AMD GPUs).