GPGPU stands for general-purpose computing on graphics processing units. On Linux, there are currently two major GPGPU frameworks: OpenCL and CUDA.
- OpenCL
- CUDA
- List of OpenCL and CUDA accelerated software
- Links and references
OpenCL (Open Computing Language) is an open, royalty-free parallel programming specification developed by the Khronos Group, a non-profit consortium.
The OpenCL specification describes a programming language, a general environment that is required to be present, and a C API to enable programmers to call into this environment.
To execute programs that use OpenCL, a compatible hardware runtime needs to be installed.
- free runtime for AMDGPU and Radeon
- AUR: proprietary standalone runtime for AMDGPU
- AUR: proprietary runtime for AMDGPU PRO
- AUR: AMD proprietary runtime, soon to be deprecated in favor of AMDGPU
- AUR: AMD CPU runtime
- official NVIDIA runtime
- AUR: official Intel CPU runtime, also supports non-Intel CPUs
- : open-source implementation for Intel IvyBridge+ iGPUs
- AUR: LLVM-based OpenCL implementation
OpenCL ICD loader (libOpenCL.so)
The OpenCL ICD loader is meant to be a platform-agnostic library that provides the means to load device-specific drivers through the OpenCL API. Most OpenCL vendors provide their own implementation of an OpenCL ICD loader, and these should all work with the other vendors' OpenCL implementations. Unfortunately, most vendors do not provide completely up-to-date ICD loaders, so Arch Linux has decided to provide this library from a separate project which currently provides a functioning implementation of the current OpenCL API.
The other ICD loader libraries are installed as part of each vendor's SDK. If you want to ensure that a particular ICD loader is used, you can install a file in /etc/ld.so.conf.d which adds /usr/lib to the dynamic program loader's search directories. This is necessary because the SDKs add their runtimes' lib directories to the search path through their own ld.so.conf.d entries.
The available packages containing various OpenCL ICDs are:
- : recommended, most up-to-date
- AUR, by AMD: provides OpenCL 2.0. It is distributed by AMD under a restrictive license and therefore cannot be included in the official repositories.
- AUR, by Intel: provides OpenCL 2.0.
For OpenCL development, the bare minimum of additional packages required is:
- : OpenCL ICD loader implementation, up to date with the latest OpenCL specification.
- : OpenCL C/C++ API headers.
The vendors' SDKs provide a multitude of tools and support libraries:
- Intel OpenCL SDK AUR (old version; newer OpenCL SDKs are included in the INDE and Intel Media Server Studio)
- AMD APP SDK AUR: installed as /opt/AMDAPP and, apart from SDK files, it also contains a number of code samples (/opt/AMDAPP/SDK/samples/). It also provides the clinfo utility, which lists the OpenCL platforms and devices present in the system and displays detailed information about them. As the AMD APP SDK itself contains a CPU OpenCL driver, no extra driver is needed to execute OpenCL on CPU devices (regardless of their vendor). GPU OpenCL drivers are provided by the AUR package (an optional dependency).
- : Nvidia's GPU SDK which includes support for OpenCL 1.1.
To see which OpenCL implementations are currently active on your system, use the following command:
$ ls /etc/OpenCL/vendors
- D: cl4d
- Haskell: OpenCLRaw AUR
- Java: JOCL (a part of JogAmp)
- Mono/.NET: Open Toolkit
- Go: OpenCL bindings for Go
- Racket: Racket has a native interface on PLaneT that can be installed via raco.
CUDA (Compute Unified Device Architecture) is NVIDIA's proprietary, closed-source parallel computing architecture and framework. It requires a Nvidia GPU. It consists of several components:
- proprietary Nvidia kernel module
- CUDA "driver" and "runtime" libraries
- additional libraries: CUBLAS, CUFFT, CUSPARSE, etc.
- CUDA toolkit, including the nvcc compiler
- CUDA SDK, which contains many code samples and examples of CUDA and OpenCL programs
The CUDA toolkit is installed in /opt/cuda. For compiling CUDA code, add /opt/cuda/include to your include path in the compiler instructions; for example, this can be accomplished by adding -I/opt/cuda/include to the compiler flags/options. To use nvcc, the gcc wrapper provided by NVIDIA, add /opt/cuda/bin to your path.
To check whether the installation was successful and CUDA is up and running, you can compile the samples installed in /opt/cuda/samples (you can simply run make inside the directory, although it is good practice to copy the /opt/cuda/samples directory to your home directory before compiling) and run the compiled examples.
nvcc supports only specific versions of GCC; if your default gcc is too new, you can symlink a supported older version into /opt/cuda/bin/ for the older version to be picked up by nvcc. You might also need to configure your build system to use the same GCC version for compiling host code.
- Fortran: PGI CUDA Fortran Compiler
- Haskell: The accelerate package lists available CUDA backends
- Java: JCuda
- Mathematica: CUDAlink
- Mono/.NET: CUDA.NET, CUDAfy.NET
- Perl: Kappa, CUDA-Minimal
- Python: or Kappa
- Ruby, Lua: Kappa
It might be necessary to use the legacy driver or to resolve permissions issues when running CUDA programs on systems with multiple GPUs.
List of OpenCL and CUDA accelerated software
- GIMP (experimental)
- OpenCL feature requires at least 1 GB of RAM on the GPU and Image support (check the output of the clinfo command).
- AUR - a GPU memtest. Despite its name, it supports both CUDA and OpenCL.
- Blender - CUDA support for Nvidia GPUs and OpenCL support for AMD GPUs.