GPGPU

GPGPU stands for General-purpose computing on graphics processing units. In Linux, there are currently two major GPGPU frameworks: OpenCL and CUDA.

OpenCL

OpenCL (Open Computing Language) is an open, royalty-free parallel programming specification developed by the Khronos Group, a non-profit consortium.

The OpenCL specification describes a programming language, a general environment that is required to be present, and a C API to enable programmers to call into this environment.

Arch Linux provides multiple packages for all of these.

To execute programs that use OpenCL, you need to install a runtime compatible with your hardware:

  • opencl-nvidia: execute on your Nvidia GPU (official Nvidia runtime)
  • opencl-mesa: execute on AMD GPUs using the Mesa drivers (currently under development, your mileage may vary)
  • opencl-catalystAUR: execute on your AMD GPU (official AMD runtime)
  • intel-opencl-runtimeAUR: execute on your CPU (official Intel runtime, also supports non-Intel CPUs)
  • poclAUR: execute on your CPU (LLVM-based OpenCL implementation)
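
For example, on a machine with an Nvidia card, the runtime can be installed with (shown purely as an illustration; pick the package matching your hardware):

# pacman -S opencl-nvidia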

For OpenCL development, the bare minimum additional packages required are:

  • ocl-icd: OpenCL ICD loader implementation, up to date with the latest OpenCL specification.
  • opencl-headers: OpenCL C/C++ API headers.
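
Both packages are available in the official repositories, so a minimal development setup can be installed with:

# pacman -S ocl-icd opencl-headers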

The vendors' SDKs provide a multitude of tools and support libraries:

  • intel-opencl-sdkAUR: Intel's OpenCL SDK (old version, new OpenCL SDKs are included in the INDE and Intel Media Server Studio)
  • amdapp-sdkAUR: AMD's OpenCL SDK
  • cuda: Nvidia's GPU SDK which includes support for OpenCL 1.1.

OpenCL ICD loader (libOpenCL.so)

The OpenCL ICD loader is supposed to be a platform-agnostic library that provides the means to load device-specific drivers through the OpenCL API. Most OpenCL vendors provide their own implementation of an OpenCL ICD loader, and these should all work with the other vendors' OpenCL implementations. Unfortunately, most vendors do not provide completely up-to-date ICD loaders, and therefore Arch Linux has decided to provide this library from a separate project (ocl-icd) which currently provides a functioning implementation of the current OpenCL API.

The other ICD loader libraries are installed as part of each vendor's SDK. If you want to ensure the ICD loader from the ocl-icd package is used, you can create a file in /etc/ld.so.conf.d which adds /usr/lib to the dynamic program loader's search directories:

/etc/ld.so.conf.d/00-usrlib.conf
/usr/lib

This is necessary because all the SDKs add their runtime's lib directories to the search path through ld.so.conf.d files.
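
After creating the file, regenerate the dynamic linker cache and check which libOpenCL.so is picked up (the exact output depends on the installed ICD loaders and SDKs):

# ldconfig
$ ldconfig -p | grep libOpenCL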

The available packages containing various OpenCL ICDs are:

  • ocl-icd: recommended, most up-to-date
  • libopenclAUR: by AMD, provides version 2.0 of OpenCL. It is currently distributed by AMD under a restrictive license and therefore cannot be included in the official repositories.
  • intel-opencl-runtimeAUR: Intel's libCL, provides OpenCL 1.2.
Note: The ICD loader's vendor is mentioned only to identify each loader; it is otherwise completely irrelevant. ICD loaders are vendor-agnostic and may be used interchangeably, as long as they are implemented correctly.

Implementations

To see which OpenCL implementations are currently active on your system, use the following command:

$ ls /etc/OpenCL/vendors
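
Each .icd file in that directory is a plain-text file naming the vendor driver library that the ICD loader opens at runtime. To see which driver libraries are registered, you can simply inspect the files:

$ cat /etc/OpenCL/vendors/*.icd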

AMD

AMD's OpenCL implementation is known as the AMD APP SDK, formerly known as the AMD Stream SDK or ATI Stream.

It can be installed with the amdapp-sdkAUR package. The package installs to /opt/AMDAPP and, apart from the SDK files, also contains a number of code samples (/opt/AMDAPP/SDK/samples/). It also provides the clinfo utility, which lists the OpenCL platforms and devices present in the system and displays detailed information about them.

As the AMD APP SDK itself contains a CPU OpenCL driver, no extra driver is needed to execute OpenCL on CPU devices (regardless of their vendor). GPU OpenCL drivers are provided by the catalystAUR package (an optional dependency).

Code is compiled using llvm (a dependency of the package).

Mesa (Gallium)

This article or section is out of date.

Reason: How accurate is this part? (Discuss in Talk:GPGPU#)

OpenCL support from Mesa is in development (see http://www.x.org/wiki/GalliumStatus/). AMD Radeon cards are supported by the r600g driver.

Arch Linux ships OpenCL support as a separate package opencl-mesa. See http://dri.freedesktop.org/wiki/GalliumCompute/ for usage instructions.

You could also use lordheavy's repo. Install these packages:

  • ati-dri-git
  • opencl-mesa-git
  • libclc-git

Surprisingly, pyrit performs 20% better with radeon+r600g than with Catalyst 13.11 Beta1 (tested together with 7 CPU cores):

catalyst     #1: 'OpenCL-Device 'Barts'': 21840.7 PMKs/s (RTT 2.8)
radeon+r600g #1: 'OpenCL-Device 'AMD BARTS'': 26608.1 PMKs/s (RTT 3.0)

At the time of this writing (30 October 2013), one must apply patches [1] and [2] on top of Mesa commit ac81b6f2be8779022e8641984b09118b57263128 to get this performance improvement. The latest unpatched LLVM trunk was used (SVN rev 193660).

Nvidia

The Nvidia implementation is available as opencl-nvidia from the official repositories. It only supports Nvidia GPUs running the nvidia kernel module (nouveau does not support OpenCL yet).
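
You can verify that the proprietary kernel module (a prerequisite for this runtime) is loaded with:

$ lsmod | grep nvidia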

Intel

The Intel implementation, named simply Intel OpenCL SDK, provides optimized OpenCL support on Intel CPUs (mainly Core and Xeon), and on CPUs only. Install it with the intel-opencl-sdkAUR package; the runtime alone can be installed with the separate intel-opencl-runtimeAUR package. OpenCL for Intel integrated graphics hardware is available through the beignetAUR package for Ivy Bridge and newer hardware.

POCL

POCL is a CPU-only, LLVM-based OpenCL implementation. It is available as poclAUR.

Development

The packages required for OpenCL development are listed in the overview above. Installing a full SDK is optional; it is only needed if the runtime implementation you target is available solely as part of a vendor's SDK. Link your application against libOpenCL.so.
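
As a minimal sketch of such an application, the following C program (the file name cl_platforms.c is arbitrary) only enumerates the OpenCL platforms visible through the ICD loader:

cl_platforms.c
/* List the OpenCL platforms visible through the ICD loader. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint num_platforms = 0;

    /* First call: ask how many platforms are available. */
    if (clGetPlatformIDs(0, NULL, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }

    if (num_platforms > 16)
        num_platforms = 16;
    cl_platform_id platforms[16];

    /* Second call: retrieve the platform handles. */
    clGetPlatformIDs(num_platforms, platforms, NULL);

    for (cl_uint i = 0; i < num_platforms; i++) {
        char name[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", (unsigned)i, name);
    }
    return 0;
}

Assuming ocl-icd and opencl-headers are installed, it can be compiled and run with:

$ gcc cl_platforms.c -o cl_platforms -lOpenCL
$ ./cl_platforms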

Language bindings

CUDA

CUDA (Compute Unified Device Architecture) is Nvidia's proprietary, closed-source parallel computing architecture and framework. It requires an Nvidia GPU and consists of several components:

  • required:
    • proprietary Nvidia kernel module
    • CUDA "driver" and "runtime" libraries
  • optional:
    • additional libraries: CUBLAS, CUFFT, CUSPARSE, etc.
    • CUDA toolkit, including the nvcc compiler
    • CUDA SDK, which contains many code samples and examples of CUDA and OpenCL programs

The kernel module and the CUDA "driver" library are shipped in nvidia and opencl-nvidia. The "runtime" library and the rest of the CUDA toolkit are available in cuda. The runtime library is available only in a 64-bit version.

Development

Note: CUDA 7.5/8.0 is not compatible with GCC 6 (see FS#49272). This means that with the cuda and gcc packages from the official repositories, it is impossible to compile CUDA code. You will have to follow #Using CUDA with an older GCC.

The cuda package installs all components into the directory /opt/cuda. To compile CUDA code, add /opt/cuda/include to your include path, for example by adding -I/opt/cuda/include to the compiler flags. To use nvcc, NVIDIA's CUDA compiler driver (which invokes gcc for the host code), add /opt/cuda/bin to your PATH.
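
For example, a single-file CUDA program could then be compiled like this (a sketch; example.cu stands for your own source file):

$ export PATH=$PATH:/opt/cuda/bin
$ nvcc example.cu -o example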

To check whether the installation was successful and CUDA is up and running, you can compile the samples installed in /opt/cuda/samples (you can simply run make inside the directory, although it is good practice to copy the /opt/cuda/samples directory to your home directory before compiling) and run the compiled examples. A nice way to check the installation is to run one of the examples, called deviceQuery.
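
For example (the samples' directory layout may differ between CUDA versions):

$ cp -r /opt/cuda/samples ~/
$ cd ~/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery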

Using CUDA with an older GCC

Since CUDA often does not support the latest GCC version, you might need to install an older GCC to compile CUDA programs.

For CUDA 7.5/GCC 4.9, create the following symlinks, so CUDA will use the old compiler (for CUDA 8.0/GCC 5, replace 4.9 with 5):

# ln -s /usr/bin/gcc-4.9 /opt/cuda/bin/gcc
# ln -s /usr/bin/g++-4.9 /opt/cuda/bin/g++

You might also need to configure your build system to use the same GCC version for compiling host code.
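
Alternatively, nvcc can be pointed directly at a specific host compiler with its -ccbin option (example.cu again standing for your own source file):

$ nvcc -ccbin /usr/bin/g++-4.9 example.cu -o example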

Language bindings

Driver issues

It might be necessary to use the legacy driver nvidia-304xx or nvidia-304xx-lts to resolve permissions issues when running CUDA programs on systems with multiple GPUs.

List of OpenCL and CUDA accelerated software

This article or section needs expansion.

Reason: please use the first argument of the template to provide a brief explanation. (Discuss in Talk:GPGPU#)

Links and references