Parallel Patterns

An All-in-One solution supporting multiple patterns of parallelism

Not all parallel applications are alike. Some are best described by decomposing the problem in the data domain, others in the task domain, using shared memory or message passing. Ateji® PX for Java covers most models of parallelism with a single tool:

Task parallelism

Statements or blocks of statements can be composed in parallel using the || operator inside a parallel block, introduced with square brackets:

[
|| a++;
|| b++;
]

or in short form:

[ a++; || b++; ]

Each parallel statement within the composition is called a branch. We purposely avoid the terms task and process, which mean very different things in different contexts.
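For readers who want to relate this to plain Java, a hand-written equivalent of the two-branch block above is sketched below with explicit threads and joins. It illustrates the intended semantics, assuming the parallel block completes only when all of its branches have completed; it is not the code generated by the Ateji PX compiler.

class TwoBranchesSketch {
    static int a, b;

    public static void main(String[] args) throws InterruptedException {
        // each branch of [ || a++; || b++; ] becomes an explicit thread
        Thread branch1 = new Thread(() -> a++);
        Thread branch2 = new Thread(() -> b++);
        branch1.start();
        branch2.start();
        // wait for both branches, as the end of the parallel block would
        branch1.join();
        branch2.join();
        System.out.println(a + ", " + b); // prints 1, 1
    }
}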

Data parallelism

Branches in a parallel composition can be quantified. This is used for performing the same operation on all elements of an array or a collection:

[
// increment all array elements in parallel
|| (int i : N) array[i]++;
]

The equivalent sequential code would be:

[
// increment all array elements one after the other
for(int i : N) array[i]++;
]

Quantification can introduce an arbitrary number of generators (iterators) and filters. Here is how we would update the upper left triangle of a matrix:

[
||(int i:N, int j:N, if i+j<N) matrix[i][j]++;
]
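For comparison, a rough plain-Java analogue of both quantified examples can be written with parallel streams over the index ranges. This is only a sketch of the idea (array, matrix and N are illustrative names), not what the Ateji PX compiler generates.

import java.util.stream.IntStream;

class QuantifiedBranchSketch {
    public static void main(String[] args) {
        final int N = 100;
        int[] array = new int[N];
        int[][] matrix = new int[N][N];

        // increment all array elements in parallel
        IntStream.range(0, N).parallel().forEach(i -> array[i]++);

        // update the upper-left triangle: two generators plus the filter i + j < N
        IntStream.range(0, N).parallel().forEach(i ->
            IntStream.range(0, N).filter(j -> i + j < N).forEach(j -> matrix[i][j]++));
    }
}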

Speculative parallelism

Parallelism is speculative when branches are created without knowing, at the time of creation, whether the result they produce will actually be needed. A basic example launches two different sort algorithms in parallel and returns from the computation as soon as one of them completes:

[
|| return mergeSort(array);
|| return quickSort(array);
]

The two return statements implement non-local exits out of the parallel composition. As soon as one of the branches returns, all other branches are interrupted and their results are discarded. Other non-local jump or exit statements, such as break, continue, and throw, behave similarly.
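In plain Java, the same first-result-wins behaviour is typically obtained with ExecutorService.invokeAny, which returns the result of the first task to complete successfully and cancels the others. The sketch below assumes hypothetical mergeSort and quickSort methods that each return a sorted copy of their input; it illustrates the pattern rather than Ateji PX's translation of it.

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SpeculativeSortSketch {
    // returns whichever sort finishes first; the slower branch is cancelled
    static int[] sortedCopy(int[] array) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            List<Callable<int[]>> branches = List.of(
                () -> mergeSort(array),
                () -> quickSort(array));
            return pool.invokeAny(branches);
        } finally {
            pool.shutdownNow();
        }
    }

    // stand-ins for the two competing algorithms (hypothetical helpers)
    static int[] mergeSort(int[] a) { int[] c = a.clone(); java.util.Arrays.sort(c); return c; }
    static int[] quickSort(int[] a) { int[] c = a.clone(); java.util.Arrays.sort(c); return c; }
}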

Recursive parallelism

Parallel branches can be created recursively. This is often used in divide-and-conquer algorithms. Here is a simple example computing Fibonacci numbers in parallel (a deliberately naive algorithm, kept simple for the purpose of exposition):

int fib(int n) {
    if (n <= 1) return 1;
    int fib1, fib2;
    // recursively create parallel branches
    [
        || fib1 = fib(n-1);
        || fib2 = fib(n-2);
    ]
    return fib1 + fib2;
}
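The closest counterpart in the standard Java library is the fork/join framework. The RecursiveTask sketch below computes the same values; it is shown only to relate the pattern to a familiar API, not as the code Ateji PX produces.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

class FibTask extends RecursiveTask<Integer> {
    private final int n;

    FibTask(int n) { this.n = n; }

    @Override
    protected Integer compute() {
        if (n <= 1) return 1;
        FibTask left = new FibTask(n - 1);
        FibTask right = new FibTask(n - 2);
        left.fork();                       // fib(n-1) runs in another worker
        int rightResult = right.compute(); // fib(n-2) runs in this worker
        return left.join() + rightResult;  // wait for the forked branch and combine
    }

    public static void main(String[] args) {
        System.out.println(new ForkJoinPool().invoke(new FibTask(10))); // prints 89
    }
}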

Distributed parallelism

While multi-threaded programs on multi-core processors can rely on shared memory for communication between threads, this is not the case for other hardware architectures based on distributed memory such as grids, meshes, and most future-generation parallel processors. When shared memory is not available, parallel branches can communicate by exchanging messages if the program has been written in a message-passing style.

Ateji PX provides message passing at the language level. This enables the compiler to map distributed programs onto various target architectures, and lets developers write code that is independent of any particular communication library.

With Ateji PX, source code written in message-passing style will also run without modification on computer clusters, MPI-based supercomputers, across a network, and in the Cloud. A distributed version of Ateji PX, in which parallel branches can run at remote locations, is in preparation. Watch www.ateji.com for announcements.

Message passing at the language level also makes it simple to express a wide range of parallel programming paradigms, including data-flow, stream programming, the Actor model, and MapReduce.
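Since this page does not show Ateji PX's channel syntax, the sketch below uses a plain-Java BlockingQueue purely to illustrate the message-passing style it refers to: two branches that share no mutable state and interact only by sending and receiving values over a channel.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class MessagePassingSketch {
    public static void main(String[] args) throws InterruptedException {
        // the channel is the only thing the two branches share
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) channel.put(i); // send
                channel.put(-1);                            // end-of-stream marker
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                int msg;
                while ((msg = channel.take()) != -1) {      // receive
                    System.out.println("received " + msg);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}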

 

