GPGPU
GPGPU stands for General-purpose computing on graphics processing units.
OpenCL
OpenCL (Open Computing Language) is an open, royalty-free parallel programming specification developed by the Khronos Group, a non-profit consortium.
The OpenCL specification describes a programming language, a general environment that is required to be present, and a C API to enable programmers to call into this environment.
Runtime
To execute programs that use OpenCL, a compatible hardware runtime needs to be installed.
AMD/ATI
- opencl-clover-mesa or opencl-rusticl-mesa: OpenCL support with clover and rusticl for mesa drivers
- rocm-opencl-runtime: Part of AMD's ROCm GPU compute stack, officially supporting a small range of GPU models (other cards may work with unofficial or partial support). To support cards older than Vega, you need to set the runtime variable ROC_ENABLE_PRE_VEGA=1 (see the example after this list). This is similar, but not quite equivalent, to specifying opencl=rocr in Ubuntu's amdgpu-install, because this package's ROCm version differs from Ubuntu's installer version.
- opencl-legacy-amdgpu-proAUR: Legacy Orca OpenCL repackaged from AMD's Ubuntu releases. Equivalent to specifying opencl=legacy in Ubuntu's amdgpu-install.
- opencl-amdAUR, opencl-amd-devAUR: ROCm components repackaged from AMD's Ubuntu releases. Equivalent to specifying opencl=rocr,legacy in Ubuntu's amdgpu-install.
- amdapp-sdkAUR: AMD CPU runtime
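For example, to expose a pre-Vega card to the ROCm OpenCL runtime for a single invocation, the variable can be set on the command line (shown here with clinfo; any OpenCL application can be substituted):
$ ROC_ENABLE_PRE_VEGA=1 clinfo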
NVIDIA
- opencl-clover-mesa or opencl-rusticl-mesa: OpenCL support with clover and rusticl for mesa drivers
- opencl-nvidia: official NVIDIA runtime
Intel
- intel-compute-runtime: a.k.a. the Neo OpenCL runtime, the open-source implementation for Intel HD Graphics GPU on Gen8 (Broadwell) and beyond.
- opencl-clover-mesa or opencl-rusticl-mesa: OpenCL support with clover and rusticl for mesa drivers
- beignetAUR: the open-source implementation for Intel HD Graphics GPUs on Gen7 (Ivy Bridge) and beyond. Deprecated by Intel in favour of the NEO OpenCL driver, but it remains the recommended solution for legacy hardware platforms (e.g. Ivy Bridge, Haswell).
- intel-openclAUR: the proprietary implementation for Intel HD Graphics GPUs on Gen7 (Ivy Bridge) and beyond. Deprecated by Intel in favour of the NEO OpenCL driver, but it remains the recommended solution for legacy hardware platforms (e.g. Ivy Bridge, Haswell).
- intel-opencl-runtimeAUR: the implementation for Intel Core and Xeon processors. It also supports non-Intel CPUs.
Others
- pocl: LLVM-based OpenCL implementation (hardware independent)
There are also a compiler and a translation layer that enable OpenCL applications to run on top of a Vulkan runtime:
- clspv-gitAUR: Clspv is a prototype compiler for a subset of OpenCL C to Vulkan compute shaders.
- clvk-gitAUR: clvk is a prototype implementation of OpenCL 3.0 on top of Vulkan using clspv as the compiler.
- xrt-binAUR: Xilinx Runtime (XRT) for Xilinx FPGAs
- fpga-runtime-for-opencl: FPGA runtime for OpenCL
32-bit runtime
To execute 32-bit programs that use OpenCL, a compatible hardware 32-bit runtime needs to be installed.
AMD/ATI
- lib32-opencl-clover-mesa or lib32-opencl-rusticl-mesa: OpenCL support for AMD/ATI Radeon mesa drivers (32-bit)
NVIDIA
- lib32-opencl-nvidia: OpenCL implementation for NVIDIA (32-bit)
ICD loader (libOpenCL.so)
The OpenCL ICD loader is supposed to be a platform-agnostic library that provides the means to load device-specific drivers through the OpenCL API. Most OpenCL vendors provide their own implementation of an OpenCL ICD loader, and these should all work with the other vendors' OpenCL implementations. Unfortunately, most vendors do not provide completely up-to-date ICD loaders, and therefore Arch Linux has decided to provide this library from a separate project (ocl-icd) which currently provides a functioning implementation of the current OpenCL API.
The other ICD loader libraries are installed as part of each vendor's SDK. If you want to ensure that the ICD loader from the ocl-icd package is used, you can create a file in /etc/ld.so.conf.d which adds /usr/lib to the dynamic program loader's search directories:
/etc/ld.so.conf.d/00-usrlib.conf
/usr/lib
This is necessary because all the SDKs add their runtime's lib directories to the search path through ld.so.conf.d files.
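After creating or changing files under /etc/ld.so.conf.d, rebuild the dynamic linker cache so the new search path takes effect (as root):
# ldconfig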
The available packages containing various OpenCL ICDs are:
- ocl-icd: recommended, most up-to-date
- intel-openclAUR by Intel. Provides OpenCL 2.0, deprecated in favour of intel-compute-runtime.
Development
For OpenCL development, the bare minimum of additional packages required is:
- ocl-icd: OpenCL ICD loader implementation, up to date with the latest OpenCL specification.
- opencl-headers: OpenCL C/C++ API headers.
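With these two packages installed, an OpenCL host program can be built by linking against the ICD loader. A minimal sketch, assuming a hypothetical host source file main.c that includes <CL/cl.h>:
$ gcc main.c -o main -lOpenCL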
The vendors' SDKs provide a multitude of tools and support libraries:
- intel-opencl-sdkAUR: Intel OpenCL SDK (old version, new OpenCL SDKs are included in the INDE and Intel Media Server Studio)
- amdapp-sdkAUR: This package is installed as /opt/AMDAPP and, apart from SDK files, it also contains a number of code samples (/opt/AMDAPP/SDK/samples/). It also provides the clinfo utility, which lists the OpenCL platforms and devices present in the system and displays detailed information about them. As the SDK itself contains a CPU OpenCL driver, no extra driver is needed to execute OpenCL on CPU devices (regardless of their vendor).
- cuda: NVIDIA's GPU SDK, which includes support for OpenCL 3.0.
Implementations
To see which OpenCL implementations are currently active on your system, use the following command:
$ ls /etc/OpenCL/vendors
To find out all possible (known) properties of the OpenCL platform and devices available on the system, install clinfo.
You can specify which implementations your application should see using ocl-icd-chooseAUR. For example:
$ ocl-icd-choose amdocl64.icd:mesa.icd davinci-resolve-checker
Rusticl
Rusticl is a new OpenCL implementation written in Rust, provided by opencl-rusticl-mesa. It can be enabled by setting the environment variable RUSTICL_ENABLE=driver, where driver is a Gallium driver such as radeonsi or iris.
Optionally, if OpenCL applications still do not detect Rusticl, use the following environment variable:
OCL_ICD_VENDORS=/etc/OpenCL/vendors/rusticl.icd
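For example, to verify that Rusticl is picked up on an AMD GPU driven by radeonsi (substitute the Gallium driver matching your hardware), list the detected platforms and devices:
$ RUSTICL_ENABLE=radeonsi clinfo -l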
Language bindings
- JavaScript/HTML5: WebCL
- Python: python-pyopencl
- D: cl4d or DCompute
- Java: Aparapi or JOCL (a part of JogAmp)
- Mono/.NET: Open Toolkit
- Go: OpenCL bindings for Go
- Racket: Racket has a native interface on PLaneT that can be installed via raco.
- Rust: ocl
- Julia: OpenCL.jl
SYCL
According to Wikipedia:SYCL:
- SYCL is a higher-level programming model to improve programming productivity on various hardware accelerators. It is a single-source embedded domain-specific language (eDSL) based on pure C++17.
- SYCL is a royalty-free, cross-platform abstraction layer that builds on the underlying concepts, portability and efficiency inspired by OpenCL that enables code for heterogeneous processors to be written in a “single-source” style using completely standard C++. SYCL enables single-source development where C++ template functions can contain both host and device code to construct complex algorithms that use hardware accelerators, and then re-use them throughout their source code on different types of data.
- While the SYCL standard started as the higher-level programming model sub-group of the OpenCL working group and was originally developed for use with OpenCL and SPIR, SYCL is a Khronos Group workgroup independent from the OpenCL working group since September 20, 2019 and starting with SYCL 2020, SYCL has been generalized as a more general heterogeneous framework able to target other systems. This is now possible with the concept of a generic backend to target any acceleration API while enabling full interoperability with the target API, like using existing native libraries to reach the maximum performance along with simplifying the programming effort. For example, the Open SYCL implementation targets ROCm and CUDA via AMD's cross-vendor HIP.
Implementations
- computecppAUR: Codeplay's proprietary implementation of SYCL 1.2.1. Can target SPIR, SPIR-V and, experimentally, PTX (NVIDIA) as device targets. Support ended on 1 September 2023, and it is being merged into Intel's LLVM implementation.
- trisycl-gitAUR: Open source implementation mainly driven by Xilinx.
- hipsycl-cuda-gitAUR and hipsycl-rocm-gitAUR: Free implementation built over AMD's HIP instead of OpenCL. Is able to run on AMD and NVIDIA GPUs.
- intel-oneapi-dpcpp-cpp: Intel's Data Parallel C++: the oneAPI Implementation of SYCL.
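With intel-oneapi-dpcpp-cpp installed, a single-source SYCL program can be compiled with the icpx driver and the -fsycl flag. A minimal sketch, assuming a hypothetical source file vector_add.cpp (the oneAPI environment script may need to be sourced first so that icpx is on the PATH):
$ icpx -fsycl vector_add.cpp -o vector_add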
Checking for SPIR support
Most SYCL implementations are able to compile the accelerator code to SPIR or SPIR-V. Both are intermediate languages designed by Khronos that can be consumed by an OpenCL driver. To check whether SPIR or SPIR-V are supported clinfo can be used:
$ clinfo | grep -i spir
Platform Extensions                             cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_fp64 cl_khr_image2d_from_buffer cl_intel_vec_len_hint
IL version                                      SPIR-V_1.0
SPIR versions                                   1.2
ComputeCpp additionally ships with a tool that summarizes the relevant system information:
$ computecpp_info
Device 0:
  Device is supported                     : UNTESTED - Untested OS
  CL_DEVICE_NAME                          : Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz
  CL_DEVICE_VENDOR                        : Intel(R) Corporation
  CL_DRIVER_VERSION                       : 18.1.0.0920
  CL_DEVICE_TYPE                          : CL_DEVICE_TYPE_CPU
Drivers known to at least partially support SPIR or SPIR-V include intel-compute-runtime, intel-opencl-runtimeAUR, pocl and amdgpu-pro-openclAUR[broken link: package not found].
Development
SYCL requires a working C++11 environment to be set up. There are a few open source libraries available:
- ComputeCpp SDK: Collection of code examples, cmake integration for ComputeCpp
- SYCL-DNN: Neural network performance primitives
- SYCL-BLAS: Linear algebra performance primitives
- VisionCpp: Computer Vision library
- SYCL Parallel STL: GPU implementation of the C++17 parallel algorithms
CUDA
CUDA (Compute Unified Device Architecture) is NVIDIA's proprietary, closed-source parallel computing architecture and framework. It requires an NVIDIA GPU, and consists of several components:
- Required:
  - Proprietary NVIDIA kernel module
  - CUDA "driver" and "runtime" libraries
- Optional:
  - Additional libraries: CUBLAS, CUFFT, CUSPARSE, etc.
  - CUDA toolkit, including the nvcc compiler
  - CUDA SDK, which contains many code samples and examples of CUDA and OpenCL programs
The kernel module and CUDA "driver" library are shipped in nvidia and opencl-nvidia. The "runtime" library and the rest of the CUDA toolkit are available in cuda. cuda-gdb needs ncurses5-compat-libsAUR to be installed, see FS#46598.
Development
The cuda package installs all components in the directory /opt/cuda. The script /etc/profile.d/cuda.sh sets the relevant environment variables so that all build systems that support CUDA can find it.
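To verify that the environment is set up (e.g. after logging in again), check that the nvcc compiler is found and prints the installed toolkit version:
$ nvcc --version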
To check whether the installation was successful and CUDA is up and running, you can compile the CUDA samples. One way to check the installation is to run the deviceQuery sample.
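A prebuilt deviceQuery binary may also be shipped with the toolkit's demo suite; the exact location can vary between CUDA versions, but the following path is one common place to look:
$ /opt/cuda/extras/demo_suite/deviceQuery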
Language bindings
- Fortran: PGI CUDA Fortran Compiler
- Haskell: The accelerate package lists available CUDA backends
- Java: JCuda
- Mathematica: CUDAlink
- Mono/.NET: CUDAfy.NET, managedCuda
- Perl: KappaCUDA, CUDA-Minimal
- Python: python-pycuda
- Ruby: rbcuda
- Rust: cuda-sys (bindings) or RustaCUDA (high-level wrapper)
ROCm
ROCm (Radeon Open Compute) is AMD's open-source parallel computing architecture and framework. Although it requires an AMD GPU, some ROCm tools are hardware-agnostic. See the ROCm for Arch Linux repository for more information.
- rocm-hip-sdk: Develop applications using HIP and libraries for AMD platforms.
- rocm-opencl-sdk: Develop OpenCL-based applications for AMD platforms.
HIP
The Heterogeneous Interface for Portability (HIP) is AMD's dedicated GPU programming environment for designing high performance kernels on GPU hardware. HIP is a C++ runtime API and programming language that allows developers to create portable applications on different platforms.
- rocm-hip-runtime: The base runtime, packages to run HIP applications on the AMD platform.
- hip-runtime-amd: The Heterogeneous Interface for AMD GPUs in ROCm. Supports GPUs from the Polaris architecture (RX 500 series) up to AMD's RDNA 2 architecture (RX 6000 series).
- miopen-hip: AMD's open source deep learning library with HIP backend.
- hip-runtime-nvidiaAUR: The Heterogeneous Interface for NVIDIA GPUs in ROCm.
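As a quick check that the HIP toolchain works on the AMD platform, a HIP C++ source file can be compiled with hipcc. A minimal sketch, assuming hip-runtime-amd is installed and a hypothetical source file saxpy.cpp:
$ hipcc saxpy.cpp -o saxpy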
OpenMP
The openmp-extrasAUR package provides AOMP - an open source Clang/LLVM based compiler with added support for the OpenMP API on AMD GPUs.
OpenCL
The rocm-opencl-runtime package is the part of the ROCm framework providing an OpenCL runtime.
OpenCL image support
Recent ROCm versions include OpenCL image support, which is used by GPGPU-accelerated software such as Darktable. ROCm together with the open source AMDGPU graphics driver is all that is required; AMDGPU PRO is not needed.
$ /opt/rocm/bin/clinfo | grep -i "image support"
Image support Yes
Troubleshooting
First check whether your GPU shows up in the output of /opt/rocm/bin/rocminfo. If it does not, ROCm might not support your GPU, or it might have been built without support for your GPU.
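For example, the output can be filtered for the GPU's ISA name (ROCm identifies GPUs by gfx identifiers such as gfx1030 in the rocminfo output):
$ /opt/rocm/bin/rocminfo | grep gfx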
PyTorch
To use PyTorch with ROCm, install python-pytorch-rocm.
$ python -c 'import torch; print(torch.cuda.is_available())'
True
ROCm pretends to be CUDA, so this should return True. If it does not, either PyTorch was not compiled with support for your GPU, or you might have conflicting dependencies. You can verify those by looking at ldd /usr/lib/libtorch.so - there should not be any missing .so files, nor multiple versions of the same .so.
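To additionally confirm which device the ROCm backend exposes, the reported device name can be printed through the same CUDA-compatible interface (torch.cuda.get_device_name is part of PyTorch's public API):
$ python -c 'import torch; print(torch.cuda.get_device_name(0))'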
List of GPGPU accelerated software
- Bitcoin
- Blender – CUDA support for Nvidia GPUs and HIP support for AMD GPUs. More information here.
- BOINC
- FFmpeg – more information here.
- Folding@home
- GIMP – experimental – more information here.
- HandBrake
- Hashcat
- LibreOffice Calc – more information here.
- mpv - See mpv#Hardware video acceleration.
- clinfo – Find all possible (known) properties of the OpenCL platform and devices available on the system.
- cuda_memtestAUR – a GPU memtest. Despite its name, it supports both CUDA and OpenCL.
- darktable – OpenCL feature requires at least 1 GB RAM on GPU and Image support (check output of clinfo command).
- DaVinci Resolve - a non-linear video editor. Can use both OpenCL and CUDA.
- imagemagick
- lc0AUR - Used for searching the neural network (supports tensorflow, OpenCL, CUDA, and openblas)
- opencv
- pyritAUR
- python-pytorch-cuda - PyTorch with CUDA backend
- tensorflow-cuda - Port of TensorFlow to CUDA
- tensorflow-computecppAUR - Port of TensorFlow to SYCL
- whisper.cpp-clblasAUR and whisper.cpp-cublasAUR - Port of OpenAI's Whisper model in C/C++ (with OpenCL and CUDA optimizations)
- xmrig - High-performance CryptoNote CPU and GPU (OpenCL, CUDA) miner