CS525: Parallel Computing
Instructor: Ananth Grama, email@example.com, 494 6964
Lectures: MWF 9:30 - 10:20 AM
Office hours: W, 1:30 - 3:00, and by appointment
TA: Bo Sang
TA hours: Tuesday 9:00am - 11:00am, LWSN B132 #12
Important announcements relating to the course will be made here. Please
look at this area of the web page periodically. Announcements will include
(but are not limited to) release of assignments, errata, and grades.
CS525, Parallel Computing, deals with emerging trends in the use of large-scale
computing platforms, ranging from desktop multicore processors and tightly
coupled SMPs to message-passing platforms and state-of-the-art virtualized
cloud computing environments. The course consists of four major parts:
Please read this policy before starting, as I intend to enforce it.
Parallel Programming: Programming models and language support for programming
parallel platforms are discussed. Message passing using MPI,
thread-based programming using POSIX threads, directive-based programming
using OpenMP, and GPU programming in CUDA are covered.
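To give a flavor of the thread-based model, here is a minimal sketch using Python's threading module as a stand-in for POSIX threads (the chunking scheme and parameter values are illustrative, not from the course materials):

```python
import threading

# Sketch of the thread-based model (Python threading as a stand-in
# for POSIX threads): each thread sums one chunk of an array, and a
# lock protects the shared accumulator.

data = list(range(1000))
total = 0
lock = threading.Lock()

def partial_sum(chunk):
    global total
    s = sum(chunk)   # local work, no synchronization needed
    with lock:       # critical section: update the shared total
        total += s

n_threads = 4
chunk_size = len(data) // n_threads
threads = [
    threading.Thread(target=partial_sum,
                     args=(data[i * chunk_size:(i + 1) * chunk_size],))
    for i in range(n_threads)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # same result as the serial sum
```

The lock illustrates the central concern of shared-memory programming: serializing updates to shared state while keeping the bulk of the work (the per-chunk sums) fully parallel.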
Parallel and distributed platforms: This part of the class outlines parallel
computing hardware. Topics covered include processor and memory architectures,
multicore, SMP, and message passing hardware, interconnection networks,
and evaluation metrics for architectures. Cost models for communication are
also discussed.
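As a small illustration of such cost models, the sketch below compares the standard store-and-forward and cut-through routing cost formulas for an m-word message over l links; the parameter values are made up for illustration:

```python
# Two standard point-to-point communication cost models
# (parameter values below are illustrative, not measured):
#   store-and-forward: t_s + l * (t_h + m * t_w)
#   cut-through:       t_s + l * t_h + m * t_w

t_s = 50.0   # startup latency (microseconds)
t_h = 1.0    # per-hop latency (microseconds)
t_w = 0.5    # per-word transfer time (microseconds)

def store_and_forward(m, l):
    # The full message is received and forwarded at every hop.
    return t_s + l * (t_h + m * t_w)

def cut_through(m, l):
    # The header establishes the route; the message is pipelined.
    return t_s + l * t_h + m * t_w

m, l = 1000, 10  # 1000-word message over 10 links
print(store_and_forward(m, l))  # 5060.0
print(cut_through(m, l))        # 560.0
```

The comparison shows why cut-through routing dominates for long paths: the per-hop term no longer multiplies the message length.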
Parallel and Distributed Algorithms: Starting from design principles for
parallel algorithms, this part develops algorithms for a variety of
problems. Various metrics for evaluating these algorithms are also discussed.
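Two of the standard metrics, speedup S = T_serial / T_parallel and efficiency E = S / p, can be computed as in this sketch (the timings are made-up illustrative values):

```python
# Two standard parallel performance metrics
# (timings below are made-up illustrative values):
#   speedup    S = T_serial / T_parallel
#   efficiency E = S / p

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    return speedup(t_serial, t_parallel) / p

T_s = 100.0   # serial runtime (seconds)
T_p = 16.0    # runtime on p = 8 processors (seconds)
p = 8

print(speedup(T_s, T_p))        # 6.25
print(efficiency(T_s, T_p, p))  # 0.78125
```

An efficiency below 1 reflects overheads (communication, idling, redundant work); an efficiency above 1 would signal superlinear speedup, which usually has a cache or memory explanation.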
Applications: A variety of parallel applications from diverse domains such
as data analysis, graphics and visualization, particle dynamics, and
discrete event and direct numerical simulations will be discussed.
To be discussed in class.
Assignment 1: (Due Sept 25, in class) Problems 2.6, 2.12, 2.13, 2.20, 2.23,
2.24, and 2.25 of the text "Introduction to Parallel Computing" by Grama et al.
Additional problem: Consider message transmission between source and
destination nodes 20 hops apart, where each link has an error rate of 5%.
Consider two cases: (i) In the first case, the message is sent in its entirety
all the way to the destination, which checks for errors and asks for a
retransmission if required. (ii) In the second case, each intermediate node
checks for errors and asks for retransmission only from the previous node
(since the message moves to the next node only once it has correctly reached
the previous one). In each case, compute the data transmission overhead,
measured in link-bytes (the number of bytes multiplied by the number of links
those bytes traverse), for a message of size 1KB. Repeat the computation for a
10% link error rate. What is the probability that the message gets to its
destination for the 10% link error rate?
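One way to sanity-check hand calculations for such problems is an analytic expected-value model. The sketch below assumes independent links and geometric retransmission (retry until success); these modeling assumptions are mine and may not match the intended interpretation of the problem:

```python
# Expected link-byte overhead under two retransmission schemes
# (independent links, geometric retries -- illustrative assumptions):

def end_to_end_link_bytes(msg_bytes, hops, link_error):
    # Case (i): the whole path must succeed in one go, so the
    # expected number of end-to-end attempts is geometric.
    p_success = (1 - link_error) ** hops
    expected_attempts = 1 / p_success
    return msg_bytes * hops * expected_attempts

def per_hop_link_bytes(msg_bytes, hops, link_error):
    # Case (ii): each link is retried independently until it
    # succeeds, so each link is traversed 1/(1-e) times on average.
    expected_traversals_per_link = 1 / (1 - link_error)
    return msg_bytes * hops * expected_traversals_per_link

kb = 1024
for err in (0.05, 0.10):
    print(err,
          end_to_end_link_bytes(kb, 20, err),
          per_hop_link_bytes(kb, 20, err))
```

The contrast between the two schemes (end-to-end retransmission pays for the whole path on every failure, per-hop retransmission pays for one link) is the point of the exercise.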
Homework 2: Due October 9.