GPGPU stands for general-purpose computing on graphics processing units. On Linux, there are currently two major GPGPU frameworks: OpenCL and CUDA.


OpenCL

Overview

OpenCL (Open Computing Language) is an open, royalty-free parallel programming framework developed by the Khronos Group, a non-profit consortium.

A distribution of the OpenCL framework generally consists of:

  • Library providing the OpenCL API, known as libCL or libOpenCL (libOpenCL.so on Linux)
  • OpenCL implementation(s), which contain:
    • Device drivers
    • OpenCL/C code compiler
    • SDK *
  • Header files *

* only needed for development

OpenCL library

There are several choices for libCL. In the general case, installing libcl from [extra] should do:

# pacman -S libcl

However, there are situations when another libCL distribution is more suitable. The following section covers this more advanced topic.

The OpenCL ICD model

OpenCL offers the option to install multiple vendor-specific implementations on the same machine at the same time. In practice, this is implemented using the Installable Client Driver (ICD) model. The central point of this model is the libCL library, which in fact implements an ICD Loader. Through the ICD Loader, an OpenCL application is able to access all platforms and all devices present in the system.

Although itself vendor-agnostic, the ICD Loader still has to be provided by someone. In Arch Linux, there are currently two options:

  • extra/libcl by Nvidia. Provides OpenCL version 1.0 and is thus slightly outdated. Its behaviour with OpenCL 1.1 code has not been tested yet.
  • unsupported/libopencl by AMD. Provides the up-to-date version 1.1 of OpenCL. It is currently distributed by AMD under a restrictive license and therefore could not be pushed into the official repositories.

(There is also Intel's libCL; this one is currently not provided as a separate package though.)

Note: The ICD Loader's vendor is mentioned only to identify each loader; it is otherwise completely irrelevant. ICD Loaders are vendor-agnostic and may be used interchangeably (as long as they are implemented correctly).

For basic usage, extra/libcl is recommended, as its installation and updating are convenient. For advanced usage, libopencl is recommended. Both libcl and libopencl should still work with all implementations.
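To illustrate what the ICD Loader does in practice, here is a minimal C sketch (an illustrative example, not part of any package; the file name and array size are arbitrary). It asks the loader for every registered platform and prints its name and vendor, and it behaves the same whether libcl or libopencl provides libOpenCL.so:

/* icd_platforms.c - list every OpenCL platform registered with the ICD Loader.
 * Build (with opencl-headers and some libOpenCL installed):
 *   gcc icd_platforms.c -lOpenCL -o icd_platforms
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[16];
    cl_uint num_platforms = 0;
    cl_uint i;

    /* First ask the ICD Loader how many platforms are registered,
     * then fetch up to 16 of them. */
    clGetPlatformIDs(0, NULL, &num_platforms);
    if (num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }
    if (num_platforms > 16)
        num_platforms = 16;
    clGetPlatformIDs(num_platforms, platforms, NULL);

    /* One platform per installed implementation (AMD, Nvidia, Intel, ...). */
    for (i = 0; i < num_platforms; ++i) {
        char name[256], vendor[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);
        printf("Platform %u: %s (%s)\n", i, name, vendor);
    }
    return 0;
}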

Implementations

To see which OpenCL implementations are currently active on your system, list the installed ICD files:

$ ls /etc/OpenCL/vendors

Each installed implementation registers itself by placing an .icd file (containing the name of its vendor library) in that directory.

AMD

The OpenCL implementation from AMD is known as the AMD APP SDK, formerly also known as the AMD Stream SDK or ATi Stream.

For Arch Linux, the AMD APP SDK is currently available in the AUR. Apart from the SDK files, the package also contains a profiler and a number of code samples. It also provides a utility which lists the OpenCL platforms and devices present in the system and displays detailed information about them.

As the AMD APP SDK itself contains a CPU OpenCL driver, no extra driver is needed to execute OpenCL on CPU devices (regardless of their vendor). GPU OpenCL drivers are provided by the proprietary Catalyst driver (an optional dependency, packaged in the AUR); the open-source driver does not support OpenCL.

OpenCL/C code is compiled using a compiler installed as a dependency of the SDK package.

Nvidia

The Nvidia implementation is available in extra/opencl-nvidia. It only supports Nvidia GPUs running the proprietary nvidia kernel module (nouveau does not support OpenCL yet).

Intel

The Intel implementation, named simply the Intel OpenCL SDK, provides optimized OpenCL performance on Intel CPUs (mainly Core and Xeon), and on CPUs only. There is no GPU support, as Intel GPUs do not support OpenCL/GPGPU. The package is available in the AUR.

Development

For development of OpenCL-capable applications, a full installation of the OpenCL framework is needed, including an implementation, drivers and a compiler, plus the opencl-headers package. Link your code against libOpenCL.
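For a rough picture of what such a program looks like (a sketch only; the file name is arbitrary and error handling is minimal), the following C program picks the first device of the first available platform and prints its name. The gcc line shows the -lOpenCL link flag in action:

/* devname.c - print the name of the first available OpenCL device.
 * Build: gcc devname.c -lOpenCL -o devname
 */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];

    /* Take the first platform the ICD Loader reports and its first device. */
    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) != CL_SUCCESS)
        return 1;

    /* A real application would go on to create a context and command queue;
     * here we only query the device name to confirm that linking works. */
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Using device: %s\n", name);
    return 0;
}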

Language bindings

  • C++: A binding by Khronos is part of the official specs. It is included in opencl-headers.
  • C++/Qt: An experimental binding named QtOpenCL (http://qt.gitorious.org/qt-labs/opencl) is in Qt Labs - see http://labs.qt.nokia.com/2010/04/07/using-opencl-with-qt/ for more information.
  • JavaScript/HTML5: WebCL (http://www.khronos.org/webcl/)
  • Python: There are two bindings with the same name, PyOpenCL. One is in [extra] as python2-pyopencl; for the other one see http://sourceforge.net/projects/pyopencl/
  • D: cl4d (https://bitbucket.org/trass3r/cl4d/wiki/Home)
  • Haskell: The OpenCLRaw package is available in the AUR as haskell-openclraw.
  • Java: JOCL (http://jogamp.org/jocl/www/), a part of JogAmp (http://jogamp.org/)
  • Mono/.NET: Open Toolkit (http://sourceforge.net/projects/opentk/)

CUDA

CUDA (Compute Unified Device Architecture) is Nvidia's proprietary, closed-source parallel computing architecture and framework. It is made of several components:

  • required:
    • proprietary Nvidia kernel module
    • CUDA "driver" and "runtime" libraries
  • optional:
    • additional libraries: CUBLAS, CUFFT, CUSPARSE, etc.
    • CUDA toolkit, including the nvcc compiler
    • CUDA SDK, which contains many code samples and examples of CUDA and OpenCL programs

The kernel module and CUDA "driver" library are shipped in extra/nvidia and extra/opencl-nvidia. The "runtime" library and the rest of the CUDA toolkit are available in unsupported/cuda-toolkit. The SDK has been packaged too (cuda-sdk), even though it is not required for developing in CUDA.
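As a rough sketch of using the "runtime" library (the include and library paths below are assumptions and depend on where the toolkit package installs its files), the following C program enumerates the CUDA devices exposed by the nvidia kernel module. It contains no kernel code, so it can be built with nvcc or with gcc linked against libcudart:

/* cudaquery.c - enumerate CUDA devices through the CUDA runtime API.
 * Build with the toolkit compiler:  nvcc cudaquery.c -o cudaquery
 * or with gcc (adjust the paths to your toolkit installation):
 *   gcc cudaquery.c -I/opt/cuda-toolkit/include -L/opt/cuda-toolkit/lib64 -lcudart -o cudaquery
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    int i;

    /* The runtime library talks to the CUDA "driver" library, which in turn
     * needs the proprietary nvidia kernel module to be loaded. */
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }

    for (i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d, %lu MiB of global memory\n",
               i, prop.name, prop.major, prop.minor,
               (unsigned long)(prop.totalGlobalMem >> 20));
    }
    return 0;
}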

Language bindings

  • Fortran: FORTRAN CUDA (http://www.hoopoe-cloud.com/Solutions/Fortran/Default.aspx), PGI CUDA Fortran Compiler (http://www.pgroup.com/resources/cudafortran.htm)
  • Python: pycuda is available in the AUR; there is also Kappa (http://psilambda.com/download/kappa-for-python)
  • Perl: Kappa (http://psilambda.com/download/kappa-for-perl), CUDA-Minimal (https://github.com/run4flat/perl-CUDA-Minimal)
  • Haskell: The CUDA package is available in the AUR as haskell-cuda. There is also the accelerate package (http://hackage.haskell.org/package/accelerate)
  • Java: jCUDA (http://www.hoopoe-cloud.com/Solutions/jCUDA/Default.aspx), JCuda (http://www.jcuda.org/jcuda/JCuda.html)
  • Mono/.NET: CUDA.NET (http://www.hoopoe-cloud.com/Solutions/CUDA.NET/Default.aspx), CUDAfy.NET (http://www.hybriddsp.com/)
  • Mathematica: CUDALink (http://reference.wolfram.com/mathematica/CUDALink/tutorial/Overview.html)
  • Ruby, Lua: Kappa (http://psilambda.com/products/kappa/)

List of OpenCL and CUDA accelerated software

This article or section needs expansion.

  • Bitcoin
  • Pyrit (AUR)
  • aircrack-ng (AUR)
  • cuda_memtest (AUR) - a GPU memtest. Despite its name, it supports both CUDA and OpenCL.

Links and references