CS 353: Principles of Concurrency and Parallelism

Instructor

Prof. Xiangyu Zhang
Room 3154K
Lawson Computer Science Building
Ph: 496-9415

xyzhang@cs.purdue.edu

Office Hours: by Appointment

Piazza Course Page

Syllabus

This course examines the design and implementation of programming languages and computing systems that support concurrency and parallelism, with a focus on foundations and principles. The course has four main focuses. First, we will discuss the principles of concurrency, in particular mutual exclusion. We will also introduce concepts and mechanisms for describing and reasoning about correctness, such as data races, linearizability, deadlocks, livelocks, and serializability. We will briefly study memory models that describe modern processor behavior in the presence of concurrency.

Second, we will introduce various parallel data structures and algorithms. We consider a number of basic parallel algorithms over concurrent variants of representative data structures such as stacks, queues, matrices, trees, lists, hash tables, and graphs. Parallel sorting and searching algorithms over these structures are presented as representative examples, along with a detailed examination of their run-time complexity.

Third, abstractions used in modern concurrent and parallel languages can be broadly classified as either message-passing or shared-memory based. While most of the preceding discussion is shared-memory based, we will introduce the basic concepts of message passing, using languages like Erlang and libraries like MPI.

Fourth, while concurrency is introduced as a program structuring device, parallelism is motivated as a mechanism to improve performance. In this context, we will discuss cost models, notions of speedup and efficiency, and tradeoffs among concurrency, latency, and throughput.

Students are expected to gain substantial experience in parallel programming through the course projects, using languages such as C/C++, Java, and Erlang.

Requirements

A series of programming projects will be assigned. Each student will give a 20-minute presentation about a paper. There will be a few quizzes and a cumulative take-home final.

Grading

Quizzes                     15%
Presentation                20%
Programming Assignments     65%

Lectures

Lecture 1:   Course Introduction.
Lecture 2:   Posix Threads (from the textbook Introduction to Parallel Computing).
Lecture 3:   Mutual Exclusion. (Chapter 2 of The Art of Multiprocessor Programming)
Lecture 4:   Concurrent Objects, Data Races, Atomicity Violations, and Linearizability Violations. (Partly from Chapter 3 of The Art of Multiprocessor Programming)
Lecture 5:   Spin Locks and Low-Level Atomics. (Chapter 7 of The Art of Multiprocessor Programming)
Lecture 6:   Lock-Free Data Structures. (Chapters 9 and 10 of The Art of Multiprocessor Programming) A lock-free implementation from 2001
Lecture 7:   Message Passing Using MPI (from the textbook Introduction to Parallel Computing).
Lecture 8:   Tasks and Work-Stealing.
Lecture 9:   Program parallelization.

Programming Assignments

Posix threads. Due Date: September 8, 2014.

Deterministic Scheduler. Due Date: September 29, 2014. Slides from the 9/22 lecture

CHESS and Data race detection. Due Date: October 16, 2014.

Lock free data structures. Due Date: October 30, 2014.

Parallelizing CHESS using MPI. Due Date: November 18, 2014.

Program parallelization. Due Date: December 5, 2014.

Quiz

Quiz one

Presentation Schedule

Time, clocks, and the ordering of events in a distributed system Yinchen Yang, November 12

FastTrack: Efficient and Precise Dynamic Race Detection

Thin Locks: Featherweight Synchronization for Java Michael A Hlista, October 27

Lock allocation Jason Salter, October 20

A flexible framework for implementing software transactional memory Scott, Dec. 1

LEAP: The Lightweight Deterministic Multi-processor Replay of Concurrent Java Programs Josh, November 24

Dataflow Analysis for Concurrent Programs using Datarace Detection Michael Greene, Dec. 8

Learning from mistakes: a comprehensive study on real world concurrency bug characteristics Yinglai Wang, November 3

Readings

Threads and Events
  1. Why Threads are a Bad Idea (for most purposes), Ousterhout (1996)
  2. Cooperative Task Management without Manual Stack Management, Adya, Howell, Theimer, Bolosky and Douceur (2002)
  3. Why Events are a Bad Idea (for high-concurrency servers), von Behren, Condit, and Brewer (2003)
Message-Passing and Erlang
  1. Communicating Sequential Processes, Hoare (1978)
  2. Concurrent Programming in Erlang, Armstrong et al. (1996)
Shared-Memory Programming and Posix
  1. Posix Threads Tutorial
  2. Amdahl's Law in the Multicore Era, Hill and Marty (2008)
  3. Monitors: An Operating System Structuring Concept, Hoare (1974)
Mutual Exclusion and Locking
  1. Art of Multiprocessor Programming (slides) (2009)
Scheduling and Work-Stealing
  1. Cilk 5.4.6 Reference Manual
  2. Implementation of the Cilk-5 Multithreaded Language
  3. Intel TBB
Data Races
  1. Eraser: A Dynamic Data Race Detector for Multithreaded Programs
  2. FastTrack: Efficient and Precise Dynamic Race Detection
  3. Learning from Mistakes - A Comprehensive Study on Real-World Concurrency Bugs