How to Run CUDA C or C++ on Jupyter (Google Colab)

Introduction:

If you’re looking to leverage the power of GPU-accelerated computing for your CUDA C or C++ projects, running them on Jupyter Notebooks in Google Colab can be a convenient and efficient solution. Follow these steps to get started:

Step 1: Setting up Google Colab for CUDA

First, open Google Colab (https://colab.research.google.com/) and create a new notebook.

Step 2: Installing CUDA Toolkit

Note: current Colab GPU runtimes already ship with a CUDA toolkit preinstalled (run !nvcc --version in a cell to check), so this step is often unnecessary. If you specifically need CUDA 9.2 (which targets Ubuntu 16.04), first remove any existing CUDA installation and then install it by running the following commands in a code cell:

!apt-get --purge remove cuda nvidia* libnvidia-*
!dpkg -l | grep cuda- | awk '{print $2}' | xargs -n1 dpkg --purge
!apt-get remove cuda-*
!apt autoremove
!apt-get update
!wget https://developer.nvidia.com/compute/cuda/9.2/Prod/local_installers/cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64 -O cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!dpkg -i cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb
!apt-key add /var/cuda-repo-9-2-local/7fa2af80.pub
!apt-get update
!apt-get install cuda-9.2
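After installation (or to confirm the preinstalled toolkit), you can verify that the compiler and the GPU driver are visible; the exact version numbers in the output will vary with your runtime:

```
!nvcc --version
!nvidia-smi
```

If `nvidia-smi` reports "command not found" or no devices, the runtime is not using a GPU yet (see the next step).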

Step 3: Compiling and Running CUDA C/C++ Code

Before compiling anything, make sure the notebook is actually running on a GPU: navigate to ‘Runtime’ -> ‘Change runtime type’ and select ‘GPU’ as the hardware accelerator (this restarts the runtime). You can then compile your CUDA C or C++ source files with nvcc and run the resulting binaries directly from notebook cells.
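A typical notebook workflow (the file name hello.cu here is just an example) is to write the source file with the %%writefile cell magic, then compile and run it with shell commands in a second cell:

```
%%writefile hello.cu
#include <stdio.h>

// Minimal kernel: each launched thread prints a message.
__global__ void hello() {
  printf("Hello from the GPU\n");
}

int main() {
  hello<<<1, 1>>>();          // launch 1 block of 1 thread
  cudaDeviceSynchronize();    // wait for the kernel (and its printf) to finish
  return 0;
}
```

```
!nvcc hello.cu -o hello
!./hello
```

The same two-cell pattern (write the .cu file, then compile and execute) works for any of the programs below.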

Step 4: Running a Sample CUDA Program

As a quick test, you can compile and run a minimal CUDA program that adds two integers on the GPU:

#include <stdio.h>

// Kernel: runs on the GPU and writes the sum of a and b into *c.
__global__ void add(int a, int b, int* c) {
  *c = a + b;
}

int main() {
  int a = 2, b = 7, c;
  int* d_c;

  // Allocate space for the result on the device.
  cudaMalloc((void**)&d_c, sizeof(int));

  // Launch the kernel with 1 block of 1 thread.
  add<<<1, 1>>>(a, b, d_c);

  // Copy the result back to the host (this also waits for the kernel).
  cudaMemcpy(&c, d_c, sizeof(int), cudaMemcpyDeviceToHost);

  printf("%d + %d = %d\n", a, b, c);  // prints "2 + 7 = 9"
  cudaFree(d_c);

  return 0;
}
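The program above adds only two scalars with a single thread. A sketch of genuine vector addition, which is where the GPU's parallelism pays off, might look like the following (the names N, vecAdd, and the block size of 256 are illustrative choices, not requirements):

```
#include <stdio.h>

#define N 1024

// Each thread computes one element: c[i] = a[i] + b[i].
__global__ void vecAdd(const int* a, const int* b, int* c, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) c[i] = a[i] + b[i];  // guard against extra threads in the last block
}

int main() {
  int a[N], b[N], c[N];
  for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

  int *d_a, *d_b, *d_c;
  cudaMalloc(&d_a, N * sizeof(int));
  cudaMalloc(&d_b, N * sizeof(int));
  cudaMalloc(&d_c, N * sizeof(int));

  cudaMemcpy(d_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
  cudaMemcpy(d_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

  // 256 threads per block; enough blocks to cover all N elements.
  int threads = 256;
  int blocks = (N + threads - 1) / threads;
  vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, N);

  cudaMemcpy(c, d_c, N * sizeof(int), cudaMemcpyDeviceToHost);

  // Verify the result on the host.
  for (int i = 0; i < N; i++) {
    if (c[i] != a[i] + b[i]) { printf("Mismatch at %d\n", i); return 1; }
  }
  printf("All %d sums correct\n", N);

  cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
  return 0;
}
```

The rounded-up block count, (N + threads - 1) / threads, is the standard idiom for covering an element count that is not a multiple of the block size; the bounds check inside the kernel keeps the surplus threads from writing out of range.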

By following these steps, you can seamlessly run your CUDA C or C++ code on Jupyter Notebooks in Google Colab, harnessing the power of GPU acceleration for your projects.