Analysis of Concurrent and Distributed Programs

CS 4405, Spring 2022

Course description:

Software systems are becoming highly concurrent and distributed to exploit modern multicore architectures and the increasing speed and bandwidth of networks. Shared-memory concurrency in multicore programs and message-passing concurrency in distributed programs share many common abstractions and problems.

In the multicore era, all performance-critical software employs some form of concurrent programming, typically shared-memory concurrency. In this setting, programmers use a number of primitives to develop efficient and correct concurrent programs. To do so, they must understand the behavior of these primitives and reason about it. It is also important to match programming paradigms to the underlying architectures. For instance, programmers have traditionally assumed that a multithreaded program executes simply by interleaving the executions of its threads—a model known as sequential consistency (SC). This assumption is, however, invalidated both by mainstream multicore architectures, which often execute instructions out of order, and by compilers, whose optimizations affect the outcomes of concurrent programs. As a result, concurrent programs have more outcomes than SC allows.
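The classic illustration of this point is the store-buffering litmus test: two threads each write one shared variable and then read the other. The sketch below (an illustrative enumeration, not course material) lists every SC interleaving of the test and shows that the outcome r1 = r2 = 0, which real hardware can produce, never appears under SC.

```python
# Store-buffering litmus test, enumerated under sequential consistency.
# Thread A: x = 1; r1 = y        Thread B: y = 1; r2 = x
# Shared x and y start at 0. Under SC, (r1, r2) = (0, 0) is impossible,
# yet x86 store buffers (and compiler reorderings) can produce it.
A = [("write", "x", 1), ("read", "y", "r1")]
B = [("write", "y", 1), ("read", "x", "r2")]

def interleavings(a, b):
    """Yield every interleaving that preserves each thread's program order."""
    if not a:
        yield list(b)
    elif not b:
        yield list(a)
    else:
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

outcomes = set()
for schedule in interleavings(A, B):
    mem = {"x": 0, "y": 0}
    regs = {}
    for op, var, arg in schedule:
        if op == "write":
            mem[var] = arg
        else:
            regs[arg] = mem[var]
    outcomes.add((regs["r1"], regs["r2"]))

print(sorted(outcomes))  # [(0, 1), (1, 0), (1, 1)] -- (0, 0) is absent
```

Running the same two threads natively with relaxed atomics can yield (0, 0), which is exactly the gap between SC and what hardware and compilers actually guarantee.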

In the distributed setting, the units of concurrency are independent processes that do not share memory but communicate by exchanging asynchronous messages. The execution of such a system involves two main sources of nondeterminism: concurrency and partial failures. As the processes run concurrently, the exchanged messages can be delivered and processed in many different orderings. The distributed set of processes is also prone to network or process failures. The trade-off between the system’s availability in the presence of failures and the consistency between the processes gives rise to a spectrum of weak consistency notions. It is important to reason about concurrency, possible failures, and consistency guarantees to implement distributed programs correctly and understand their behavior.
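To see how delivery-order nondeterminism alone can make replicas disagree, consider two processes that each broadcast one write to a replicated last-writer-wins register. The sketch below (illustrative, not from the course) enumerates the independent delivery orders at two replicas and shows that, without an ordering protocol such as total-order broadcast, the replicas can end up in different states.

```python
from itertools import permutations

# Two processes each broadcast one write ("set:a" and "set:b") to a
# replicated register. With no ordering guarantee, each replica may
# deliver the two messages in either order.
messages = ["set:a", "set:b"]

def final_state(delivery_order):
    """Apply the messages in the given order; the last write wins."""
    state = None
    for msg in delivery_order:
        state = msg.split(":")[1]
    return state

# Every (replica1, replica2) pair of final states over independently
# chosen delivery orders:
pairs = {(final_state(p1), final_state(p2))
         for p1 in permutations(messages)
         for p2 in permutations(messages)}

print(sorted(pairs))  # includes diverging pairs such as ('a', 'b')
```

The diverging pairs are precisely what consistency protocols rule out, at the cost of availability when the network partitions, which is the trade-off the CAP lecture later in the schedule examines.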

This course aims to explore analysis techniques for concurrent and distributed programs.

Course organization:

Study goals:

By the end of this course, you should be able to:


Teaching assistants

Dennis Sprokholt

Course schedule:

Feb 09
Introduction to concurrency
  1. Course intro
Feb 11
Primitives for concurrent programs
  1. Slide
Feb 16
Symptoms of concurrency bugs
  1. Slide
Feb 18
Concurrency analysis for multithreaded programs
  1. Slide
Feb 23
Lock-free synchronization
  1. Slide
Feb 25
Weak-memory consistency – I
  1. Slide
Mar 02
Weak-memory consistency – II
  1. Slide
Mar 04
Many flavors of concurrency
  1. Asynchronous and event-driven programming
Mar 09
Distributed concurrency
  1. Introduction to distributed systems
Mar 11
More on distributed concurrency
  1. Distributed decision making
Mar 16
Concurrency analysis for distributed systems
  1. Time and order in distributed systems
  2. Concurrency analysis for distributed systems
Mar 18
Strong consistency: SC and Linearizability
  1. SC and Linearizability
Mar 23-25
In-progress project evaluation
Mar 30
CAP theorem & weak consistency
  1. CAP, weak consistency, weak isolation
Apr 01
Active research directions
Apr 06-08
No lectures
Apr 13-15
Final project presentations

The schedule is subject to small changes during the term.

Course projects:

Alternatively, you can suggest your own project that involves replicating an existing work or proposing a new approach to an existing problem. If you have exciting ideas for the course project, contact the TAs and the teachers to determine the scope of your project.