Running CUDA Samples on Kong


This tutorial demonstrates how to compile and run a GPU job using CUDA sample code.

Make a directory to hold the samples and change into it
kong-41 ~>: mkdir gpu
kong-42 ~>: cd gpu


Copy the sample files from AFS. Make sure to copy all of the files.
kong-43 gpu>: cp -r /afs/cad/linux/cuda-6.5.14/samples/ .
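
To confirm that everything copied, list the top-level sample categories; the matrixMul sample used below lives under 0_Simple:
<source lang="bash">
# Quick sanity check on the copied tree
ls samples/
ls samples/0_Simple | grep -i matrixmul
</source>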

Change directories to matrixMul
kong-44 gpu>: cd samples/0_Simple/matrixMul

Load the cuda module
kong-45 matrixMul>: module load cuda
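
To verify that the module set up the CUDA toolchain, check that nvcc is now on your PATH; the version reported should correspond to the toolkit the module points at (6.5 in this example):
<source lang="bash">
# Confirm the CUDA compiler is available after loading the module
which nvcc
nvcc --version
</source>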

Build the binary.
kong-46 matrixMul>: make
"/afs/cad/linux/cuda-6.5.14"/bin/nvcc -ccbin g++ -m64 -gencode arch=compute_11,code=sm_11 -gencode arch=compute_20,code=sm_20 -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_50,code=compute_50 -o matrixMul matrixMul.o
nvcc warning : The 'compute_11', 'compute_12', 'compute_13', 'sm_11', 'sm_12', and 'sm_13' architectures are deprecated, and may be removed in a future release.
mkdir -p ../../bin/x86_64/linux/release
cp matrixMul ../../bin/x86_64/linux/release
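
The nvcc deprecation warning is harmless; it appears because the default build targets older compute capabilities. If the sample Makefile honors the SMS variable (CUDA sample Makefiles of this era generally do, but check the Makefile before relying on it), you can optionally rebuild for just the compute capability of Kong's GPUs, which is 3.5 for the Tesla K20Xm shown in the job output below:
<source lang="bash">
# Optional: rebuild for compute capability 3.5 only
# (assumes the Makefile reads the SMS variable; verify in the Makefile first)
make clean
make SMS="35"
</source>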

Create a submit script named gpusubmit.sh in the matrixMul directory

<source lang="bash">
#!/bin/sh
# Usage: gpusubmit.sh
# Change job name and email address as needed
# -- job name --
#$ -N matrixMul
#$ -S /bin/sh
# Make sure that the .e and .o files arrive in the
# working directory
#$ -cwd
# Merge the standard out and standard error into one file
#$ -j y
# Send mail at submission and completion of script
#$ -m be
#$ -M UCID@njit.edu
# Request a GPU
#$ -l gpu=1

/bin/echo Running on host: `hostname`.
/bin/echo In directory: `pwd`
/bin/echo Starting on: `date`

# Load CUDA module
. /opt/modules/init/bash
module load cuda

# Full path to executable
/home/g/UCID/gpu/samples/0_Simple/matrixMul/matrixMul
</source>
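
Before submitting, you may want to confirm that the path on the last line of the script matches the binary that make just produced (substitute your own UCID):
<source lang="bash">
# The script's last line should point at this file
ls -l /home/g/UCID/gpu/samples/0_Simple/matrixMul/matrixMul
</source>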

Submit the job 
kong-47 matrixMul>: qsub gpusubmit.sh
Your job 390030 ("matrixMul") has been submitted
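
You can watch the job with Grid Engine's qstat; once it no longer appears in the listing, the job has finished and the output file is in the working directory:
<source lang="bash">
# Show the status of your queued and running jobs
qstat -u $USER
</source>
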
View the output 
kong-48 matrixMul>: cat matrixMul.o390030
Running on host: node151.
In directory: /home/g/UCID/gpu/samples/0_Simple/matrixMul
Starting on: Wed Nov 5 14:46:48 EST 2014
[Matrix Multiply Using CUDA] - Starting...
GPU Device 0: "Tesla K20Xm" with compute capability 3.5
MatrixA(320,320), MatrixB(640,320)
Computing result using CUDA Kernel...
done
Performance= 274.18 GFlop/s, Time= 0.478 msec, Size= 131072000 Ops, WorkgroupSize= 1024 threads/block
Checking computed result for correctness: Result = PASS
Note: For peak performance, please refer to the matrixMulCUBLAS example.