The objective of this project is to take a holistic approach to creating novel program analysis/protection techniques and a system called DASH to secure multithreaded programs and harden traditional defense techniques in concurrent environments. We will do so by selectively combining static and dynamic techniques, getting the best of both worlds. We anticipate numerous contributions from this project; the main ones are: (1) a thorough understanding of concurrency attacks and their implications for traditional defense techniques; (2) accurate and effective techniques to detect, avoid, and survive concurrency vulnerabilities; and (3) hardening of traditional defense techniques for multithreaded programs.
PI: Prof. Junfeng Yang, Columbia University
Just as errors in sequential programs can lead to security exploits, errors in concurrent programs can lead to concurrency attacks. Questions such as whether these attacks are real and what characteristics they have remain largely unanswered. In this paper, we present a preliminary study of concurrency attacks and the security implications of real concurrency errors. Our study yields several interesting findings. For instance, we observe that the exploitability of a concurrency error depends on the duration of the timing window within which the error may occur. We further observe that attackers can increase this window through carefully crafted inputs. We also find that four out of five commonly used sequential defense mechanisms become unsafe when applied to concurrent programs. Based on our findings, we propose new defense directions and fixes to existing defenses.
[download_link link="http://systems.cs.columbia.edu/files/concurrency-attacks_errors.tar.bz2" variation="green"]errors.tar.bz2[/download_link]
Parallel programs are known to be difficult to analyze. A key reason is that they typically have an enormous number of execution interleavings, or schedules. Static analysis over all schedules requires over-approximations, resulting in poor precision; dynamic analysis rarely covers more than a tiny fraction of all schedules. We propose an approach called schedule specialization to analyze a parallel program over only a small set of schedules for precision, and then enforce these schedules at runtime for soundness of the static analysis results. We build a schedule specialization framework for C/C++ multithreaded programs that use Pthreads. Our framework avoids the need to modify every analysis to be schedule-aware by specializing a program into a simpler program based on a schedule, so that the resultant program can be analyzed with stock analyses for improved precision. Moreover, our framework provides a precise schedule-aware def-use analysis on memory locations, enabling us to build three highly precise analyses: an alias analyzer, a data-race detector, and a path slicer. Evaluation on 17 programs, including 2 real-world programs and 15 popular benchmarks, shows that analyses using our framework reduced may-aliases by 61.9%, false race reports by 69%, and path slices by 48.7%; and detected 7 unknown bugs in well-checked programs.
This work is supported by the United States Air Force Office of Scientific Research (AFOSR) through Contract FA9550-12-1-0346. Opinions, findings, conclusions and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the US Government or AFOSR.