= Introduction =

OpenCL is a standard that defines a framework, an API and a programming language for parallel computation on heterogeneous systems such as client computer systems, high-performance computing servers and hand-held devices. The standard is maintained by the Khronos Group and supported by a large consortium of industry leaders including Apple, Intel, AMD, NVIDIA and ARM. Influenced by NVIDIA’s CUDA on the GPU side and by OpenMP, which originates from the classical CPU side, the open OpenCL standard is formulated abstractly enough to support both CPU and GPU computing resources. This is an ambitious goal, since providing an abstract interface together with peak performance is a challenging task. OpenCL strictly isolates the computational work into fundamental units, the kernels. Kernels are written in the OpenCL C programming language, a subset of C99 with some additional OpenCL-specific keywords. In general, kernels are hardware independent and are compiled by the OpenCL runtime when they are loaded. To fully exploit parallel execution of the kernel code, several kernel instances, the work items, are started to process a set of input values. The actual number of concurrently running work items is determined by the OpenCL system; how a concrete algorithm is partitioned into work items has to be decided by the programmer.
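
To make the kernel/work item model concrete, a minimal vector addition kernel in OpenCL C could look like the following sketch (the kernel name and signature are illustrative and not taken from the project code below). Each work item processes exactly one element of the input vectors:

<pre>// Illustrative sketch of an OpenCL C kernel, not the project's actual kernel.
__kernel void vector_add(__global const float* a,
                         __global const float* b,
                         __global float* c)
{
    // get_global_id(0) returns the index of this work item in the first
    // dimension of the global work size; each work item adds one pair
    // of elements.
    const size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
}</pre>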
= Reference Material =

<ul>
<li><p>Khronos Group OpenCL</p>
<p>http://www.khronos.org/opencl/</p></li>
<li><p>OpenCL 1.2 Quick Reference Card</p>
<p>http://www.khronos.org/files/opencl-1-2-quick-reference-card.pdf</p></li>
<li><p>OpenCL 1.2 Full Documentation</p>
<p>http://www.khronos.org/registry/cl/sdk/1.2/docs/man/xhtml/</p></li>
<li><p>Intel SDK for OpenCL</p>
<p>http://software.intel.com/en-us/articles/vcsource-tools-opencl-sdk/</p></li>
<li><p>AMD OpenCL Zone</p>
<p>http://developer.amd.com/zones/OpenCLZone/</p></li>
<li><p>NVIDIA OpenCL</p>
<p>http://www.nvidia.com/object/cuda_opencl_1.html</p></li></ul>

= Project: Bootstrapping OpenCL and Vector Addition =

In this project you will compile and run a small OpenCL test program, look at how the OpenCL compute context is created, and execute a simple vector addition kernel. Afterwards you can modify the kernel to experiment with other data types.
== Compiling and running the test program ==

Open the folder '''project_vectoradd''', create the build files using CMake and compile the application:
<pre>$ cd project_vectoradd/
[hauth@vdt-corei7avx project_vectoradd]$ cmake .
[hauth@vdt-corei7avx project_vectoradd]$ make
[100%] Built target vectoradd
[hauth@vdt-corei7avx project_vectoradd]$ ./vectoradd</pre>
Once the application has been compiled successfully, run it. The output should look something like this:
<pre>$ ./vectoradd
Testing Platform : Intel(R) OpenCL
&gt; Selected Compute Device : Intel(R) Core(TM) i7-3930K CPU @ 3.20GHz
Transferring data to device memory took 2e-06 s
Running vectorAdd kernel took 2e-06 s
Transferring data to host memory took 0 s
All done</pre>

== Running an OpenCL Kernel ==

The first task is to run a simple OpenCL kernel. To do so, open the file <tt>vectoradd.cpp</tt> in your favorite text editor; use <tt>nano</tt> if you are not sure which tool to use.

<pre>$ nano vectoradd.cpp</pre>

Take your time to familiarize yourself with the source code that is already in the file. Some of the initial steps of setting up the OpenCL system are already provided:

<ul>
<li><p>Creating the OpenCL compute context</p>
<p>An OpenCL platform is selected automatically, depending on the required device type. You can change the required device type by modifying the constant <tt>devType</tt>:</p>
<pre>// Desired Device type.
// can be CL_DEVICE_TYPE_GPU or CL_DEVICE_TYPE_CPU in this example
const cl_device_type devType = CL_DEVICE_TYPE_CPU;</pre></li></ul>
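
For orientation, the following is a minimal, generic sketch of how a platform and device matching <tt>devType</tt> can be selected and a compute context created, written here against the OpenCL C++ bindings (<tt>CL/cl.hpp</tt>). <tt>vectoradd.cpp</tt> already contains its own version of these steps, which may be structured differently.

<pre>// Sketch only: one common way to pick a platform/device for a given
// device type with the OpenCL C++ bindings; not the code from vectoradd.cpp.
#include &lt;CL/cl.hpp&gt;

#include &lt;iostream&gt;
#include &lt;vector&gt;

int main()
{
    // Desired device type, as in the project code.
    const cl_device_type devType = CL_DEVICE_TYPE_CPU;

    // Query all available OpenCL platforms.
    std::vector&lt;cl::Platform&gt; platforms;
    cl::Platform::get(&amp;platforms);

    for (size_t p = 0; p &lt; platforms.size(); ++p)
    {
        // Ask this platform for devices of the desired type.
        std::vector&lt;cl::Device&gt; devices;
        if (platforms[p].getDevices(devType, &amp;devices) != CL_SUCCESS || devices.empty())
            continue;

        std::cout &lt;&lt; "Testing Platform : "
                  &lt;&lt; platforms[p].getInfo&lt;CL_PLATFORM_NAME&gt;() &lt;&lt; std::endl;
        std::cout &lt;&lt; "&gt; Selected Compute Device : "
                  &lt;&lt; devices[0].getInfo&lt;CL_DEVICE_NAME&gt;() &lt;&lt; std::endl;

        // Create the compute context and a command queue on the first device.
        cl::Context context(devices);
        cl::CommandQueue queue(context, devices[0]);

        // ... build the program, set kernel arguments and enqueue work here ...
        return 0;
    }

    std::cerr &lt;&lt; "No OpenCL device of the requested type found" &lt;&lt; std::endl;
    return 1;
}</pre>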
== Modifications to play around with ==

* Switch to <tt>double</tt>. How does the runtime change for CPU/GPU?
* Switch to the <tt>float4</tt> vector type. Can you perform the same addition operations? (A possible kernel variant is sketched below.)
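
If you try the <tt>float4</tt> variant, a possible kernel could look like the following sketch (kernel and argument names are examples only and are not taken from the project code; it assumes the vector length is a multiple of four). The <tt>+</tt> operator works component-wise on OpenCL vector types, so each work item now adds four values at once:

<pre>// Illustrative float4 variant, not the project's actual kernel.
__kernel void vector_add4(__global const float4* a,
                          __global const float4* b,
                          __global float4* c)
{
    // Each work item adds one float4, i.e. four packed float values;
    // the global work size is a quarter of the element count.
    const size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
}</pre>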
= Project: N-Body Simulation =
===[[Internals:Multi-threaded|Technical specification/requirements]]===
 