
Analysis of Concurrent and Distributed Programs

CS 4405, Spring 2023

Course description:

Software systems are becoming highly concurrent and distributed to exploit modern multicore architectures and the increasing speed and bandwidth of networks. Shared-memory concurrency in multicore programs and message-passing concurrency in distributed programs share many common abstractions and problems.

In the multicore era, all performance-critical software employs some form of concurrent programming, typically shared-memory concurrency. In this setting, programmers use a number of primitives to develop efficient and correct concurrent programs. To do so, they have to understand the behavior of these primitives and reason about it. It is also important to match programming paradigms to the underlying architectures. For instance, programmers have traditionally assumed that a multithreaded program executes simply by interleaving the executions of its threads, a model known as sequential consistency (SC). This assumption is, however, invalidated both by mainstream multicore architectures, which often execute instructions out of order, and by compilers, whose optimizations affect the outcomes of concurrent programs. As a result, concurrent programs have more outcomes than SC allows.
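A classic illustration is the store-buffering litmus test: two threads each write one shared variable and then read the other. Under SC at least one thread must observe the other's write, but weak memory models and compiler reorderings allow both reads to return the initial value. The following is a minimal sketch in Java (the class and field names are illustrative, not part of the course material); because x and y are plain, non-volatile fields, the non-SC outcome r1 == 0 && r2 == 0 is occasionally observable on real hardware.

```java
public class StoreBuffering {
    static int x = 0, y = 0;   // shared variables, intentionally non-volatile (plain accesses)
    static int r1, r2;         // values read by each thread in one run

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 100_000; i++) {
            x = 0; y = 0;
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join();  t2.join();
            // Under SC, at least one of r1, r2 must be 1 in every run.
            if (r1 == 0 && r2 == 0) {
                System.out.println("Non-SC outcome (r1 == 0 && r2 == 0) at iteration " + i);
                return;
            }
        }
        System.out.println("Only SC outcomes observed; the weak outcome is rare.");
    }
}
```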

In the distributed setting, the units of concurrency are independent processes that do not share memory but communicate by exchanging asynchronous messages. The execution of such a system involves two main sources of nondeterminism: concurrency and partial failures. As the processes run concurrently, the exchanged messages can be delivered and processed in many different orders. The distributed set of processes is also prone to network or process failures. The trade-off between the system's availability in the presence of failures and the consistency among the processes gives rise to a spectrum of weak consistency notions. Reasoning about concurrency, possible failures, and consistency guarantees is essential to implement distributed programs correctly and understand their behavior.
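To make the ordering problem concrete, the sketch below (a hypothetical example in Java, not taken from the course material) shows two replicas that receive the same two update messages but in different delivery orders; because the updates do not commute, the replicas diverge, which is exactly the kind of anomaly that consistency notions are meant to constrain.

```java
import java.util.List;

public class DeliveryOrder {
    // A toy replica whose state is a single integer; the two operations below
    // do not commute, so applying them in different orders yields different states.
    static class Replica {
        int value = 0;
        void apply(String op) {
            if (op.equals("add10"))  value += 10;
            if (op.equals("double")) value *= 2;
        }
    }

    public static void main(String[] args) {
        Replica a = new Replica();
        Replica b = new Replica();

        // Both replicas receive the same two messages, but the asynchronous
        // network delivers them in different orders.
        for (String op : List.of("add10", "double")) a.apply(op);
        for (String op : List.of("double", "add10")) b.apply(op);

        // Prints: replica a = 20, replica b = 10 -- the replicas have diverged.
        System.out.println("replica a = " + a.value + ", replica b = " + b.value);
    }
}
```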

This course aims to explore analysis techniques for concurrent and distributed programs.

Course organization:

Study goals:

By the end of this course, you should be able to:

Teachers:

Teaching assistants:

Course schedule:

Date      | Topic                                              | Lecturer | Materials
Feb 13    | Introduction to concurrency                        | BKO      |
Feb 15    | Concurrency primitives and bugs                    | SC       | Slides
Feb 20    | Concurrency analysis for multithreaded programs    | SC       | Slides
Feb 22    | Weak-memory concurrency – I                        | SC       | Slides
Feb 27    | Weak-memory concurrency – II                       | SC       | Slides
Mar 01    | Distributed concurrency                            | BKO      | Introduction to distributed systems; Time, order, and causality in distributed systems
Mar 06    | Concurrency analysis for distributed systems – I   | BKO      | Distributed consensus; Concurrency analysis for distributed systems
Mar 08    | Concurrency analysis for distributed systems – II  | BKO      | Probabilistic concurrency testing; SC and Linearizability
Mar 13-15 | No lectures                                        |          |
Mar 20    | Project discussion                                 | SC       |
Mar 22    | Weak consistency and isolation                     | BKO      | Slides
Apr 03-05 | In-progress project evaluation                     |          |
Apr 17-19 | Final project presentations                        |          |

The schedule is subject to small changes during the term.

You can access last year’s edition here.

Course projects:

Alternatively, you can suggest your own project that involves replicating an existing work or proposing a new approach to an existing problem. If you have exciting ideas for the course project, contact the TAs and the teachers to determine the scope of your project.

Example projects from the previous editions: