CS525: Parallel Computing
Ananth Grama, email@example.com, 494 6964
MWF 9:30 - 10:20 AM
W, 1:30 - 3:00, and by appointment.
TA: Bo Sang
Tuesday 9:00am - 11:00am, LWSN B132 #12
Important announcements relating to the course will be made here. Please
look at this area of the web page periodically. Announcements will include
(but are not limited to) release of assignments, errata, and grades.
CS525, Parallel Computing deals with emerging trends in the use of large
scale computing platforms ranging from desktop multicore processors,
tightly coupled SMPs, message passing platforms, and state-of-the-art
virtualized cloud computing environments. The course consists of four major components.
Please read this policy before starting, as I intend to enforce it.
Parallel Programming: This part covers programming models and language support
for parallel platforms, including message passing using MPI,
thread-based programming using POSIX threads, directive-based programming
using OpenMP, and GPU programming in CUDA.
Parallel and distributed platforms: This part of the class outlines parallel
computing hardware. Topics covered include processor and memory architectures,
multicore, SMP, and message passing hardware, interconnection networks,
and evaluation metrics for architectures. Cost models for communication are
also developed.
Parallel and Distributed Algorithms: Starting from design principles for
parallel algorithms, this part develops parallel algorithms for a variety of
problems. Various metrics for evaluating these algorithms are also discussed.
Applications: A variety of parallel applications from diverse domains such
as data analysis, graphics and visualization, particle dynamics, and
discrete event and direct numerical simulations will be discussed.
To be discussed in class.
Assignment 1: (Due Sept 25, in class) Problems 2.6, 2.12, 2.13, 2.20, 2.23,
2.24, and 2.25 of the text "Introduction to Parallel Computing", by Grama et al.
Additional problem: Consider message transmission between source and
destination nodes 20 hops apart. Each hop has an error rate of 5%. Consider two cases:
(i) In the first case, the message is sent in its entirety all the way to the destination,
which checks for errors and asks for a retransmission, if required. (ii) In the second
case, each intermediate node checks for errors and asks for retransmission only from the
previous processor (since the message goes to the next processor only when it has correctly
reached the previous processor). In each case, compute the data transmission overhead measured in
terms of link-bytes (number of bytes multiplied by the number of links the bytes traversed)
for a message of size 1KB. Repeat the experiment for 10% link error rate. What is the
probability that the message gets to its destination for the 10% link error rate?
Assignment 2 Due October 9
Assignment 3: Repeat Assignment 2, using pthreads. In this case, you will only have
two performance data points: one with one thread (single core) and one that uses all
cores (use the number of threads that minimizes your execution time). Your performance targets
are A for 66% efficiency, B for 50% efficiency, and C for 33% efficiency. No grade is given for
non-functional programs.
Due October 30.
Note on Assignment 2: You may also submit a revised Assignment 2 on Oct 30 with a
25% grade penalty.
Assignment 4: In this assignment, you will use discrete random walks, one initiated at each
node, to estimate PageRank. You will initiate a random walk of length k (k is an input parameter
to your program). Each walker performs k random transitions from its starting node (i.e., there
are n random walkers, each of which performs k transitions). Each time a walker visits a node,
you add one to a counter at that node. The counter keeps track of the number of visits by walkers
at each node.
At the end of the process, you are required to identify and print the node ids of the s
most visited nodes (s is another input parameter to your program).
Write a pthreads program to do this. Test it on the input graphs from assignment 3.
Your grading rubric is the same as assignment 3.
Due date: Nov 21, 2005.