CS525: Parallel Computing
Ananth Grama, firstname.lastname@example.org, 494 6964
MWF 9:30 - 10:20 AM
Office hours: W, 1:30 - 3:00, and by appointment.
TA: Bo Sang
Office hours: Tuesday 9:00am - 11:00am, LWSN B132 #12
Important announcements relating to the course will be made here. Please
check this area of the web page periodically. Announcements will include
(but are not limited to) the release of assignments, errata, and grades.
CS525, Parallel Computing, deals with emerging trends in the use of large-scale
computing platforms, ranging from desktop multicore processors and tightly
coupled SMPs to message-passing platforms and state-of-the-art virtualized
cloud computing environments. Please read this policy before starting, as I
intend to enforce it. The course consists of four major parts:
Parallel Programming: Programming models and language support for parallel
platforms are discussed, including message passing using MPI, thread-based
programming using POSIX threads, directive-based programming using OpenMP,
and GPU programming in CUDA.
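As a small taste of the directive-based model covered in this part, here is a
minimal OpenMP sketch in C (an illustration only, not course material): a loop
whose iterations are divided among threads, with a reduction combining the
per-thread partial sums.

#include <stdio.h>
#include <omp.h>

/* Sum the integers 1..n in parallel using an OpenMP reduction. */
int main(void) {
    const int n = 1000000;
    long long sum = 0;

    /* Each thread accumulates a private partial sum; OpenMP combines them at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= n; i++) {
        sum += i;
    }

    printf("sum = %lld (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}

Compile with, e.g., gcc -fopenmp sum.c.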
Parallel and Distributed Platforms: This part of the class outlines parallel
computing hardware. Topics covered include processor and memory architectures;
multicore, SMP, and message-passing hardware; interconnection networks; and
evaluation metrics for architectures. Cost models for communication are also
developed.
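As an example of the kind of cost model developed here (a common
simplification, not necessarily the exact model used in lecture): sending a
message of m words between two nodes is charged t_s + t_w * m time, where t_s
is the startup (latency) cost and t_w is the per-word transfer time. Under
this model, a binomial-tree broadcast of m words to p processes takes roughly
(t_s + t_w * m) * log2(p) time.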
Parallel and Distributed Algorithms: Starting from design principles, this
part develops parallel algorithms for a variety of problems and discusses
metrics for evaluating these algorithms.
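By way of illustration of such metrics (standard definitions, given here only
as an example): speedup is S = T_serial / T_parallel and efficiency is
E = S / p, so a computation that takes 10 seconds on one processor and 2
seconds on 8 processors has speedup 5 and efficiency 5/8 = 0.625.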
Applications: A variety of parallel applications from diverse domains such
as data analysis, graphics and visualization, particle dynamics, and
discrete event and direct numerical simulations will be discussed.
To be discussed in class.