Important announcements relating to the course will be made here. Please
look at this area of the web page periodically. Announcements will include
(but are not limited to) release of assignments, errata, and grades.
Please read this policy before starting, as I intend to enforce it.
CS525, Parallel Computing, deals with emerging trends in the use of large-scale
computing platforms, ranging from tightly coupled SMPs and message passing
parallel computers to loosely connected clusters and multiclusters. The
course consists of four major parts:
Parallel computing platforms: This part of the class outlines parallel
computing hardware. Topics covered include processor and memory architectures,
SMP and message passing hardware, interconnection networks, network hardware,
and evaluation metrics for architectures. Cost models for communication are
also discussed.
Parallel Algorithms: Starting from design principles for parallel algorithms,
this part develops parallel algorithms for a variety of problems. Various
metrics for evaluating these algorithms are also discussed.
Parallel Programming: Programming models and language support for programming
parallel platforms are discussed in this part. Message passing using MPI,
thread-based programming using POSIX threads, and directive-based programming
using OpenMP will be discussed. In addition, CORBA, Java RMI, NI, and threads
will also be covered. System software issues relating to threads and
distributed object systems will be studied.
Applications: A variety of parallel applications from diverse domains such
as data analysis, graphics and visualization, particle dynamics, and
discrete event and direct numerical simulations will be discussed.