Book: Programming Massively Parallel Processors: A Hands-on Approach (Applications of GPU Computing Series)

by David B. Kirk and Wen-mei W. Hwu

I hadn’t been particularly interested in GPU parallelism, but this book offered some interesting ideas about parallel programming and the opportunities it opens up on everyday electronic devices.

Here are just two of the many things I found interesting:

Thread granularity:
An important algorithmic decision in performance tuning is the granularity of threads. It is often advantageous to put more work into each thread and use fewer threads. Such an advantage arises when there is redundant work between threads.
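The effect is easy to see in matrix multiplication, where adjacent threads computing neighboring output elements each fetch the same row of the first matrix. Below is a minimal Python sketch (my own illustration, not code from the book, which uses CUDA) that simulates this: the "coarse" version merges two fine-grained threads into one, loading the shared row once and reusing it, so the redundant load count is halved while the result is unchanged.

```python
# Illustrative simulation of thread coarsening (not real GPU code).
# In C = A x B, fine-grained threads computing C[i][j] and C[i][j+1]
# both fetch row A[i]; a coarser thread computing both elements
# fetches the row once.

def matmul_fine(A, B):
    """One 'thread' per output element; each fetches its row of A."""
    n = len(A)
    loads = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):           # one thread per (i, j)
            row = A[i]; loads += 1   # redundant fetch per thread
            C[i][j] = sum(row[k] * B[k][j] for k in range(n))
    return C, loads

def matmul_coarse(A, B):
    """One 'thread' per pair of output elements; row fetched once."""
    n = len(A)
    loads = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(0, n, 2):     # one thread per (i, j) and (i, j+1)
            row = A[i]; loads += 1   # single fetch, reused twice
            C[i][j] = sum(row[k] * B[k][j] for k in range(n))
            C[i][j + 1] = sum(row[k] * B[k][j + 1] for k in range(n))
    return C, loads

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_fine, fine_loads = matmul_fine(A, B)
C_coarse, coarse_loads = matmul_coarse(A, B)
# Same product, half the row fetches (4 vs 2 on this 2x2 example).
```

Of course, coarsening reduces the total number of threads, so on a real GPU it trades parallelism for reduced redundancy; the book's point is that when enough threads remain to saturate the hardware, the trade is worth making.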

Amdahl – Gustafson laws:
Amdahl’s law often motivates task-level parallelization. Although some of these smaller activities do not warrant fine-grained massive parallel execution, it may be desirable to execute some of these activities in parallel with each other when the dataset is large enough. This could be achieved by using a multicore host and executing each such task in parallel. This is an illustration of Gustafson’s Law, which states that any sufficiently large problem can be effectively parallelized. When the data set is large enough and the more demanding calculation has been parallelized, one can effectively parallelize the less demanding calculation. Alternatively, we could try to simultaneously execute multiple small kernels, each corresponding to one task.
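The contrast between the two laws can be stated in two one-line formulas. Amdahl assumes a fixed problem size, so the serial fraction caps the speedup; Gustafson assumes the problem grows with the machine, so speedup scales with the processor count. A quick sketch (standard textbook formulas, with p as the parallelizable fraction and n processors):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: fixed problem size.
    The serial fraction (1 - p) bounds speedup at 1 / (1 - p)."""
    return 1.0 / ((1.0 - p) + p / n)

def gustafson_speedup(p, n):
    """Gustafson's law: problem size scales with n.
    Speedup grows roughly linearly in the processor count."""
    return (1.0 - p) + p * n

# With 95% parallelizable work on 100 processors, Amdahl caps the
# speedup near 17x, while Gustafson's scaled-workload view gives ~95x.
a = amdahl_speedup(0.95, 100)
g = gustafson_speedup(0.95, 100)
```

This is exactly why the passage above reaches for Gustafson: once the dataset is large enough, even the "less demanding" serial-looking pieces carry enough work to parallelize profitably.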

Multi-core processors are no longer the future of computing; they are the present-day reality. A typical mass-produced CPU features multiple processor cores, while a GPU (Graphics Processing Unit) may have hundreds or even thousands of cores. With the rise of multi-core architectures has come the need to teach advanced programmers a new and essential skill: how to program massively parallel processors.

Programming Massively Parallel Processors: A Hands-on Approach shows both student and professional alike the basic concepts of parallel programming and GPU architecture. Various techniques for constructing parallel programs are explored in detail. Case studies demonstrate the development process, which begins with computational thinking and ends with effective and efficient parallel programs.

Teaches computational thinking and problem-solving techniques that facilitate high-performance parallel computing.
Utilizes CUDA (Compute Unified Device Architecture), NVIDIA’s software development tool created specifically for massively parallel environments.
Shows you how to achieve both high-performance and high-reliability using the CUDA programming model as well as OpenCL.

Technorati tags: MPI, CUDA, Parallel, Parallel Computing, GPU, GPGPU
