This course examines the design and implementation of programming
languages and computing systems that support concurrency and parallelism,
with a focus on foundations and principles. It has four main focuses.
First, we will discuss the principles of concurrency, starting with
mutual exclusion. We will also introduce concepts and mechanisms for
describing and reasoning about correctness, such as data races,
linearizability, deadlocks, livelocks, and serializability.
We will briefly study memory models, which describe modern processor
behavior in the presence of concurrency.
Second, we will introduce various parallel data structures and
algorithms. We will consider a number of basic parallel algorithms over
concurrent variants of representative data structures such as stacks,
queues, matrices, trees, lists, hash tables, and graphs. Parallel sorting
and searching algorithms over these structures serve as representative
examples, along with a detailed examination of their run-time
complexity.
Third, abstractions used in modern concurrent and parallel languages can be
broadly classified as either message-passing or shared-memory based. While
most of the preceding discussion assumes shared memory, we will introduce
the basic concepts of message passing, using languages like
Erlang and libraries like MPI.
Fourth, while concurrency is introduced as a program structuring device,
parallelism is motivated as a mechanism to improve performance.
In this context, we will discuss cost models, notions of
speedup and efficiency, and tradeoffs among concurrency, latency, and
throughput.
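The standard definitions behind these notions can be stated briefly. If $T_1$ is the running time of the best sequential algorithm and $T_p$ the running time on $p$ processors, then

\[
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},
\]

where $S_p$ is the speedup and $E_p$ the efficiency. Amdahl's law bounds the achievable speedup when a fraction $f$ of the work is inherently sequential:

\[
S_p \le \frac{1}{f + (1 - f)/p}.
\]

For example, with $f = 0.1$ the speedup can never exceed $10$, no matter how many processors are used.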
Students are expected to gain substantial experience in parallel programming
through the course projects, using languages such as C/C++, Java, and Erlang.
Requirements
A series of programming projects will be assigned.
Each student will give a 20-minute presentation on a paper.
There will be a few quizzes and a cumulative take-home final.
Lecture 1: Course Introduction.
Lecture 2: POSIX Threads (from the textbook Introduction to Parallel Computing).
Lecture 3: Mutual Exclusion (Chapter 2 of The Art of Multiprocessor Programming).
Lecture 4: Concurrent Objects; Data Races, Atomicity Violations, and Linearizability Violations (in part from Chapter 3 of The Art of Multiprocessor Programming).
Lecture 5: Spin Locks and Low-Level Atomics (Chapter 7 of The Art of Multiprocessor Programming).
Lecture 6: Lock-Free Data Structures (Chapters 9 and 10 of The Art of Multiprocessor Programming); a lock-free implementation from 2001.
Lecture 7: Message Passing Using MPI (from the textbook Introduction to Parallel Computing).
Lecture 8: Tasks and Work-Stealing.
Lecture 9: Program Parallelization.