Installation and usage of OpenCL and CUDA, the two major Linux GPGPU frameworks.
GPGPU stands for General-purpose computing on graphics processing units.
In Linux, there are currently two major GPGPU frameworks: OpenCL and CUDA.
OpenCL (Open Computing Language) is an open, royalty-free parallel programming framework developed by the Khronos Group, a non-profit consortium.
Distribution of the OpenCL framework generally consists of:
- Library providing the OpenCL API, known as libCL or libOpenCL
- OpenCL implementation(s), which contain:
- Device drivers
- OpenCL/C code compiler
- SDK *
- Header files *
* only needed for development
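The OpenCL/C code compiler mentioned above consumes device code written in OpenCL C. A minimal kernel might look like this (an illustrative sketch; the kernel and argument names are arbitrary):

```c
/* OpenCL C device code: adds two float vectors element-wise.
 * This is compiled at run time by the implementation's own OpenCL/C
 * compiler (typically via clBuildProgram), not by the host C compiler. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out)
{
    size_t i = get_global_id(0); /* index of this work-item */
    out[i] = a[i] + b[i];
}
```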
There are several choices of libCL. In the general case, installing libcl from [extra] should do:
# pacman -S libcl
However, there are situations when another libCL distribution is more suitable. The following section covers this more advanced topic.
The OpenCL ICD model
OpenCL offers the option to install multiple vendor-specific implementations on the same machine at the same time. In practice, this is implemented using the Installable Client Driver (ICD) model. The central point of this model is the libCL library, which in fact implements an ICD Loader. Through the ICD Loader, an OpenCL application is able to access all platforms and all devices present in the system.
Although itself vendor-agnostic, the ICD Loader still has to be provided by someone. In Arch Linux, there are currently two options:
- libcl in [extra], by Nvidia. Provides OpenCL version 1.0 and is thus slightly outdated. Its behaviour with OpenCL 1.1 code has not been tested yet.
- libopencl in the AUR, by AMD. Provides the up-to-date version 1.1 of OpenCL. It is currently distributed by AMD under a restrictive license and therefore could not be pushed into the official repositories.
(There is also Intel's libCL; it is currently not provided as a separate package, though.)
For basic usage, extra/libcl is recommended, as its installation and updating are convenient. For advanced usage, libopencl is recommended. Both libcl and libopencl should work with all the implementations.
To see which OpenCL implementations are currently active on your system, use the following command:
$ ls /etc/OpenCL/vendors
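Each file in that directory is a plain-text ICD file whose first line names the vendor's implementation library. As a hypothetical example, an nvidia.icd might contain nothing more than:

```text
libnvidia-opencl.so.1
```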
The OpenCL implementation from AMD is known as AMD APP SDK, formerly also known as AMD Stream SDK or ATi Stream.
For Arch Linux, AMD APP SDK is currently available in the AUR. The package installs into /opt/amdstream and, apart from SDK files, also contains a profiler (/opt/amdstream/bin/sprofile) and a number of code samples (/opt/amdstream/samples/opencl). It also provides the clinfo utility, which lists the OpenCL platforms and devices present in the system and displays detailed information about them.
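The host-side API that a tool like clinfo builds on can be sketched in a few lines of C. This is illustrative only: it assumes the OpenCL headers and a libCL are installed (build with -lOpenCL), and prints the name of each platform the ICD Loader exposes.

```c
/* List OpenCL platforms and their names via the ICD Loader.
 * Link with -lOpenCL; requires CL/cl.h from an installed SDK. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint n = 0;
    if (clGetPlatformIDs(0, NULL, &n) != CL_SUCCESS || n == 0) {
        printf("no OpenCL platforms found\n");
        return 0;
    }
    if (n > 8)
        n = 8; /* cap to the size of the array below */
    cl_platform_id ids[8];
    clGetPlatformIDs(n, ids, NULL);
    for (cl_uint i = 0; i < n; ++i) {
        char name[128];
        clGetPlatformInfo(ids[i], CL_PLATFORM_NAME,
                          sizeof name, name, NULL);
        printf("platform %u: %s\n", i, name);
    }
    return 0;
}
```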
As AMD APP SDK itself contains a CPU OpenCL driver, no extra driver is needed to execute OpenCL on CPU devices (regardless of their vendor). GPU OpenCL drivers are provided by AMD's proprietary driver package in the AUR (an optional dependency); the open-source driver does not support OpenCL.
Code is compiled using a compiler pulled in as a dependency.
The Nvidia implementation is available in [extra]. It only supports Nvidia GPUs running the proprietary nvidia kernel module (nouveau does not support OpenCL yet).
The Intel implementation, named simply Intel OpenCL SDK, provides optimized OpenCL performance on Intel CPUs (mainly Core and Xeon), and on CPUs only. There is no GPU support, as Intel GPUs do not support OpenCL/GPGPU. The package is available in the AUR.
For development of OpenCL-capable applications, a full installation of the OpenCL framework is needed: an implementation, drivers and the compiler, plus the SDK and header files. Bindings for languages other than C are also available:
- C++: A binding by Khronos is part of the official specs. It is included in the official OpenCL headers (cl.hpp).
- C++/Qt: An experimental binding named QtOpenCL is in Qt Labs - see Blog entry for more information
- Python: There are two bindings with the same name, PyOpenCL. One is in [extra]; for the other one, see SourceForge.
- D: cl4d
- Haskell: The OpenCLRaw package is available in the AUR.
- Java: JOCL (a part of JogAmp)
- Mono/.NET: Open Toolkit
CUDA (Compute Unified Device Architecture) is Nvidia's proprietary, closed-source parallel computing architecture and framework. It is made of several components:
- proprietary Nvidia kernel module
- CUDA "driver" and "runtime" libraries
- additional libraries: CUBLAS, CUFFT, CUSPARSE, etc.
- CUDA toolkit, including the nvcc compiler
- CUDA SDK, which contains many code samples and examples of CUDA and OpenCL programs
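To show how these components fit together, here is a minimal CUDA C sketch: a vector-add kernel plus the host code that drives it through the "runtime" library. It is illustrative only (names are arbitrary) and must be built with nvcc from the toolkit.

```c
/* CUDA C sketch: a vector-add kernel plus a minimal host program. */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void vec_add(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    /* Host buffers. */
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    /* Device buffers, managed through the CUDA "runtime" library. */
    float *d_a, *d_b, *d_out;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_out, n);
    cudaMemcpy(out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", out[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    free(a); free(b); free(out);
    return 0;
}
```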
The kernel module and the CUDA "driver" library are shipped in packages in [extra]. The "runtime" library and the rest of the CUDA toolkit are available in [community]. The SDK has been packaged too (in the AUR), even though it is not required for developing in CUDA.
When installing the package, the directory /opt/cuda is created, and all of the components live there. To compile CUDA code, add /opt/cuda/include to the compiler's include path, for example by adding -I/opt/cuda/include to the compiler flags/options.
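As a hypothetical illustration of those compiler flags (file names are arbitrary):

```shell
# nvcc (from the toolkit) handles .cu files directly:
nvcc -o vecadd vecadd.cu
# Plain C host code using the runtime API can be built with gcc by
# pointing it at the toolkit's headers and libraries:
gcc -I/opt/cuda/include -L/opt/cuda/lib64 -o host host.c -lcudart
```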
- Fortran: FORTRAN CUDA, PGI CUDA Fortran Compiler
- Python: PyCUDA, available in the AUR; also Kappa
- Perl: Kappa, CUDA-Minimal
- Haskell: The cuda package is available in the AUR; there is also the accelerate package, also in the AUR
- Java: jCUDA, JCuda
- Mono/.NET: CUDA.NET, CUDAfy.NET
- Mathematica: CUDAlink
- Ruby, Lua: Kappa
List of OpenCL and CUDA accelerated software
- GIMP (development in progress - see this notice)
- A GPU memtest, available in the AUR. Despite its name, it supports both CUDA and OpenCL