OpenMP
From Wikipedia, the free encyclopedia
The OpenMP application programming interface (API) supports multi-platform shared memory multiprocessing programming in C/C++ and Fortran on many architectures, including Unix and Microsoft Windows platforms. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
Jointly defined by a group of major computer hardware and software vendors, OpenMP is a portable, scalable model that gives programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer.
A so-called hybrid model of parallel programming, using both OpenMP and MPI (Message Passing Interface), is often used to program computer clusters.
History
The OpenMP Architecture Review Board (ARB) published its first standard, OpenMP for Fortran 1.0, in October 1997. The C/C++ standard followed in October 1998. Version 2.0 of the Fortran standard appeared in 2000, and version 2.0 of the C/C++ standard in 2002. The current version, 2.5, is a combined C/C++/Fortran standard and was released in 2005.
The core elements
The core elements of OpenMP are the constructs for thread creation, workload distribution (work sharing), data environment management, thread synchronization, user-level runtime routines and environment variables.
- Thread creation: omp parallel. It is used to fork additional threads to carry out the work enclosed in the construct in parallel. The original thread is denoted the master thread and has thread ID 0.
Example: Display "Hello, world" using multiple threads.
#include <stdio.h>

int main(int argc, char* argv[])
{
  #pragma omp parallel
  printf("Hello, world.\n");

  return 0;
}
- work-sharing constructs: used to specify how to assign independent work to one or all of the threads.
- omp for or omp do: used to split loop iterations among the threads
- sections: assigns consecutive but independent code blocks to different threads
- single: specifies a code block that is executed by only one thread; a barrier is implied at the end
- master: similar to single, but the code block is executed by the master thread only, with no barrier implied at the end.
Example: initialize the value of a large array in parallel, using each thread to do a portion of the work
#define N 100000

int main(int argc, char *argv[])
{
  int i, a[N];

  #pragma omp parallel for
  for (i = 0; i < N; i++)
    a[i] = 2 * i;

  return 0;
}
- Data environment management: Since OpenMP is a shared-memory programming model, most variables in OpenMP code are visible to all threads by default. But sometimes private variables are necessary to avoid race conditions, and values need to be passed between the sequential part and the parallel region (the code block executed in parallel), so data environment management is provided via data clauses.
- shared: the data is shared, which means visible and accessible by all threads simultaneously
- private: the data is private to each thread, which means each thread will have a local copy and use it as a temporary variable. A private variable is not initialized and the value is not maintained for use outside the parallel region.
- firstprivate: the data is private to each thread, but initialized with the value of the variable of the same name in the master thread.
- lastprivate: the data is private to each thread. The value from the last iteration of the parallelized loop is copied back to the variable of the same name outside the parallel region. A variable can be both firstprivate and lastprivate.
- threadprivate: the data is global, but private within each parallel region at runtime. The difference between threadprivate and private is the global scope associated with threadprivate and the value preserved across parallel regions.
- copyin: similar to firstprivate, but for threadprivate variables, which are otherwise not initialized; copyin passes in the value of the corresponding variable in the master thread. No copyout is needed because the value of a threadprivate variable is maintained throughout the execution of the whole program.
- reduction: each thread has a local copy of the variable, and the local copies are combined (reduced) with a specified operator into a single shared variable at the end of the construct.
- synchronization constructs:
- critical section: the enclosed code block is executed by all threads, but only one thread at a time, never simultaneously. It is often used to protect shared data from race conditions.
- atomic: similar to a critical section, but advises the compiler to use special hardware instructions for better performance. The compiler may ignore this hint and use a critical section instead.
- barrier: each thread waits until all of the other threads of a team have reached this point
- ordered: the structured block is executed in the order in which its iterations would be executed in a sequential loop
- flush: the value of the variable is written back from registers to memory, so that the up-to-date value is visible to other threads outside the enclosing construct
- User-level runtime routines: used to modify/check the number of threads, detect whether the execution context is in a parallel region, query the number of processors in the current system, etc.
- Environment variables: a method to alter the execution features of OpenMP applications; used to control loop iteration scheduling, the default number of threads, etc.
Sample Programs
Hello World
C/C++
#include <omp.h>
#include <stdio.h>

int main (int argc, char *argv[])
{
  int id, nthreads;

  #pragma omp parallel private(id)
  {
    id = omp_get_thread_num();
    printf("Hello World from thread %d\n", id);
    #pragma omp barrier
    if ( id == 0 ) {
      nthreads = omp_get_num_threads();
      printf("There are %d threads\n", nthreads);
    }
  }

  return 0;
}
Fortran 77
      PROGRAM HELLO
      INTEGER ID, NTHRDS
      INTEGER OMP_GET_THREAD_NUM, OMP_GET_NUM_THREADS
C$OMP PARALLEL PRIVATE(ID)
      ID = OMP_GET_THREAD_NUM()
      PRINT *, 'HELLO WORLD FROM THREAD', ID
C$OMP BARRIER
      IF ( ID .EQ. 0 ) THEN
        NTHRDS = OMP_GET_NUM_THREADS()
        PRINT *, 'THERE ARE', NTHRDS, 'THREADS'
      END IF
C$OMP END PARALLEL
      END
Free form Fortran 90
program hello90
  use omp_lib
  integer :: id, nthreads

  !$omp parallel private(id)
  id = omp_get_thread_num()
  write (*,*) 'Hello World from thread', id
  !$omp barrier
  if ( id .eq. 0 ) then
    nthreads = omp_get_num_threads()
    write (*,*) 'There are', nthreads, 'threads'
  end if
  !$omp end parallel

end program
Pros and Cons of OpenMP
Pros
- Simple: there is no need to deal with message passing, as MPI requires
- Data layout and decomposition are handled automatically by directives.
- Incremental parallelism: one can parallelize one portion of the program at a time; no dramatic change to the code is needed.
- Unified code for both serial and parallel applications: OpenMP constructs are treated as comments when sequential compilers are used.
- Original (serial) code statements need not, in general, be modified when parallelized with OpenMP. This reduces the chance of inadvertently introducing bugs.
Cons
- Currently runs efficiently only on shared-memory multiprocessor platforms
- Requires a compiler that supports OpenMP. Visual C++ 2005 supports it, and so do the Intel compilers for their x86 and IPF product series. Sun Studio supports the latest OpenMP 2.5 specification with productivity enhancements for Solaris OS (UltraSPARC and x86/x64) and is planning similar support for the Linux platform in its next release. GCC 4.2 will support OpenMP, and some distributions (such as Fedora Core 5) already have support for OpenMP in their GCC 4.1 based compilers.
- Low parallel efficiency: OpenMP relies heavily on parallelizable loops, which can leave a relatively high percentage of non-loop code in the sequential part.
Performance expectations of OpenMP
One might expect an N-times speedup (N times less wall-clock execution time) when running a program parallelized using OpenMP on an N-processor platform. However, this is seldom the case, for the following reasons:
- A large portion of the program may not be parallelized by OpenMP, which sets a theoretical upper limit on speedup according to Amdahl's law.
- N processors in a SMP may bring N times computation power, but the memory bandwidth usually does not scale up N times. Quite often, the original memory path is shared by multiple processors and performance degradation may be observed when they compete for the shared memory bandwidth.
- Many other common problems affecting the final speedup in parallel computing also apply to OpenMP, like load balancing and synchronization overhead.
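The Amdahl's law bound mentioned above can be stated briefly. If a fraction P of the program's execution time is parallelizable across N processors, the achievable speedup is at most:

```latex
S(N) = \frac{1}{(1 - P) + P/N}
```

For example, with P = 0.9 (90% of the run time parallelizable) and N = 8 processors, the bound is 1 / (0.1 + 0.9/8) ≈ 4.7x, and no number of processors can exceed 1 / (1 - P) = 10x.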
OpenMP Benchmarks
There are some public domain OpenMP benchmarks for users to try.
- NAS parallel benchmark
- OpenMP validation suite
- OpenMP source code repository
- EPCC OpenMP Microbenchmarks
See also
External links
- The official site for OpenMP
- cOMPunity Community of OpenMP Users, Researchers, Tool Developers and Providers
- A simple OpenMP Tutorial to introduce OpenMP programming concepts
- TotalView A debugger for OpenMP programs
- Intel® Threading Tools
- Dynamic Performance Monitor for OpenMP
- Parawiki page for OpenMP
- PC Cluster Consortium
- GCC's OpenMP implementation
- IBM Octopiler with OpenMP support
- MSDN Magazine article on OpenMP
- Sun Studio multithreading tools for Solaris and Linux
- An article describing why OpenMP is well suited for parallel programming