Reliable, Secure, and Efficient Multithreading

The massive number of services powered by cloud computing and the rise of multicore hardware have made multithreaded programs increasingly pervasive and critical. Yet these programs are extremely difficult to write, optimize, test, debug, and verify, and often contain hard-to-reproduce “heisenbugs” that compromise security and reliability. We are developing techniques and methodologies to make parallel programs run reliably, securely, and efficiently. One ongoing idea we are investigating is to memoize thread schedules and reuse them on future inputs whenever possible, so that the schedules we test are the ones that actually run, and concurrency errors become much easier to reproduce. Moreover, the memoized schedules effectively “predict” the future of an execution, allowing operating system schedulers to optimize thread scheduling and placement.
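To make the idea concrete, here is a minimal, hypothetical sketch of schedule memoization in Python. It is not the mechanism from the papers below (which memoize synchronization orders inside real runtimes); the `ScheduleMemoizer` class and `sync` hook are illustrative names. On the first run for a given input key, the order in which threads reach a synchronization point is recorded; later runs with the same key replay that order, making the interleaving deterministic.

```python
import threading

class ScheduleMemoizer:
    """Toy sketch: memoize the thread order observed at a sync point
    on the first run of an input, then enforce that same order on
    future runs of the same input (hypothetical API, for illustration)."""

    def __init__(self):
        self.schedules = {}  # input key -> recorded order of thread names

    def run(self, key, names):
        recorded = self.schedules.get(key)  # None => record mode
        order = []                          # order taken on this run
        cond = threading.Condition()
        turn = [0]                          # index into recorded schedule

        def sync(name):
            with cond:
                if recorded is None:
                    # Record mode: log whatever order the OS gives us.
                    order.append(name)
                else:
                    # Replay mode: block until it is this thread's turn.
                    while recorded[turn[0]] != name:
                        cond.wait()
                    order.append(name)
                    turn[0] += 1
                    cond.notify_all()

        threads = [threading.Thread(target=sync, args=(n,)) for n in names]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        if recorded is None:
            self.schedules[key] = order  # memoize for future runs
        return order
```

After the first run, every subsequent run with the same key reproduces the memoized interleaving, which is the property that makes testing and bug reproduction tractable.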

Sound and Precise Analysis of Parallel Programs through Schedule Specialization

Proceedings of the 33rd ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI ’12), June 2012

Abstract

PDF


Efficient Deterministic Multithreading through Schedule Relaxation

Proceedings of the 23rd ACM Symposium on Operating Systems Principles (SOSP ’11), October 2011

Abstract

PDF


Stable Deterministic Multithreading through Schedule Memoization

Proceedings of the Ninth Symposium on Operating Systems Design and Implementation (OSDI ’10), October 2010

Abstract

PDF


Bypassing Races in Live Applications with Execution Filters

Proceedings of the Ninth Symposium on Operating Systems Design and Implementation (OSDI ’10), October 2010

Abstract

PDF


Concurrency Attacks

Proceedings of the Fourth USENIX Workshop on Hot Topics in Parallelism (HotPar ’12), June 2012

Abstract

PDF


Columbia University Department of Computer Science