Publications from 2006
ACM Transactions on Computer Systems, Volume 24, Issue 4, November 2006
This paper shows how to use model checking to find serious errors in file systems. Model checking is a formal verification technique tuned for finding corner-case errors by comprehensively exploring the state spaces defined by a system. File systems have two dynamics that make them attractive for such an approach. First, their errors are some of the most serious, since they can destroy persistent data and lead to unrecoverable corruption. Second, traditional testing needs an impractical, exponential number of test cases to check that the system will recover if it crashes at any point during execution. Model checking employs a variety of state-reducing techniques that allow it to explore such vast state spaces efficiently.
We built a system, FiSC, for model checking file systems. We applied it to four widely-used, heavily-tested file systems: ext3, JFS, ReiserFS, and XFS. We found serious bugs in all of them, 33 in total. Most have led to patches within a day of diagnosis. For each file system, FiSC found demonstrable events leading to the unrecoverable destruction of metadata and entire directories, including the file system root directory "/".
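To see why exhaustively testing crash recovery is so hard, note that if a crash may expose any subset of the writes still buffered in memory, the number of possible on-disk crash states grows exponentially with the number of pending writes. Below is a minimal Python sketch of the systematic-exploration idea (an illustration of the approach, not FiSC itself; `recover` and `invariant` stand in for a file system's fsck and consistency predicate):

```python
# Illustrative sketch only, not FiSC: enumerate every disk state a crash
# could leave behind and run recovery on each one.
from itertools import combinations

def crash_states(disk, pending_writes):
    """Yield each possible post-crash disk image: any subset of the
    buffered (not-yet-flushed) writes may have reached disk."""
    for r in range(len(pending_writes) + 1):
        for subset in combinations(pending_writes, r):
            state = dict(disk)
            for block, value in subset:
                state[block] = value
            yield state

def check_recovery(recover, invariant, disk, pending_writes):
    """Run the recovery routine (e.g., fsck) on every crash state and
    collect those whose recovered image violates the invariant."""
    return [s for s in crash_states(disk, pending_writes)
            if not invariant(recover(s))]
```

With n pending writes this enumerates 2^n states, which is exactly the blow-up that model checking's state-reduction techniques are designed to tame.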
Proceedings of the Seventh Symposium on Operating Systems Design and Implementation (OSDI '06), November 2006
Storage systems such as file systems, databases, and RAID systems have a simple, basic contract: you give them data, they do not lose or corrupt it. Often they store the only copy, making its irrevocable loss almost arbitrarily bad. Unfortunately, their code is exceptionally hard to get right, since it must correctly recover from any crash at any program point, no matter how their state was smeared across volatile and persistent memory.
This paper describes eXplode, a system that makes it easy to systematically check real storage systems for errors. It takes user-written, potentially system-specific checkers and uses them to drive a storage system into tricky corner cases, including crash recovery errors. eXplode uses a novel adaptation of ideas from model checking, a comprehensive, heavy-weight formal verification technique, that makes its checking more systematic (and hopefully more effective) than a pure testing approach while being just as lightweight.
eXplode is effective. It found serious bugs in a broad range of real storage systems (without requiring source code): three version control systems, Berkeley DB, an NFS implementation, ten file systems, a RAID system, and the popular VMware GSX virtual machine. We found bugs in every system we checked, 36 bugs in total, typically with little effort.
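For a flavor of what a user-written checker might look like, here is a hypothetical Python rendering of the drive-then-verify structure the abstract describes (the class, method names, and harness below are illustrative assumptions, not the actual eXplode interface):

```python
import os

class SyncFileChecker:
    """Hypothetical checker: mutate() drives the storage system into new
    states; check() verifies an invariant after the checking harness
    simulates a crash and runs recovery."""
    def __init__(self, mountpoint):
        self.path = os.path.join(mountpoint, "afile")

    def mutate(self):
        # Perform a small operation whose outcome we can verify later.
        fd = os.open(self.path, os.O_CREAT | os.O_WRONLY)
        try:
            os.write(fd, b"hello")
            os.fsync(fd)  # once fsync returns, the data must be durable
        finally:
            os.close(fd)

    def check(self):
        # After a simulated crash and recovery, fsync'd data must survive.
        with open(self.path, "rb") as f:
            assert f.read() == b"hello", "fsync'd data lost after crash"
```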
Proceedings of the 2006 Americas Display Engineering and Applications Conference (ADEAC 2006), October 2006
The proposed Net2Display™ VESA standard is intended for remoting displays and USB I/O devices with responsiveness, performance, and a motion video user experience comparable to a local PC. This proposed standard will enable client displays to connect over wired or wireless networks to host computers located centrally in businesses or homes, or remotely at service facilities.
Proceedings of the IEEE International Conference on Services Computing (SCC 2006), September 2006
Mobile handheld devices such as PDAs and smartphones are increasingly being used by service providers to deliver application functionality similar to that found in traditional desktop computing environments. However, these handheld applications can be quite slow and often lack important functionality compared to their desktop counterparts. We have developed PASSPORT (PDA Application Streaming Service PORTal), a thin-client solution that leverages more powerful servers to run full-function desktop applications and then simply stream screen updates to the PDA for display. PASSPORT uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes. PASSPORT also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented PASSPORT on Windows PDAs and demonstrate that it can provide superior application performance and functionality compared to the traditional approach of running applications directly on handheld devices.
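The server-side scaling the abstract mentions reduces, at its core, to choosing a uniform scale factor and orientation that best fit the server framebuffer onto a small client screen. A toy sketch of that computation (an illustration under assumed inputs, not PASSPORT's actual code):

```python
# Illustrative only: pick the orientation and largest uniform scale that
# fit a server framebuffer onto the client's screen.
def best_fit(server_w, server_h, client_w, client_h):
    def scale(w, h):
        return min(w / server_w, h / server_h)
    landscape = scale(max(client_w, client_h), min(client_w, client_h))
    portrait = scale(min(client_w, client_h), max(client_w, client_h))
    if landscape >= portrait:
        return "landscape", landscape
    return "portrait", portrait

# A 1024x768 desktop on a 320x240 PDA:
# best_fit(1024, 768, 320, 240) -> ('landscape', 0.3125)
```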
Proceedings of the IEEE Computer Society Signature Conference on Software Technology and Applications (COMPSAC 2006), September 2006
We present DeskPod, a portable system that provides a highly reliable desktop computing environment for mobile users by leveraging rapid improvements in capacity, cost, and size of portable storage devices. DeskPod enables a user's live computing environment to be suspended to portable storage, carried around, easily copied for fault-resilience, and resumed from the storage device to provide the user with the same persistent, personalized computing environment on another computer. DeskPod achieves this by providing a virtualization and checkpoint/restart mechanism that decouples a desktop computing environment from any single hardware device so that it can be stored and executed anywhere, improving desktop computing reliability by eliminating a potential single point of failure. We have implemented a Linux DeskPod prototype and demonstrate its ability to quickly suspend and resume desktop sessions, enabling a seamless mobile experience.
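The suspend/copy/resume life cycle described above can be sketched in a few lines. In the sketch below the "session" is just a picklable Python object, purely for illustration; DeskPod itself checkpoints a full virtualized Linux desktop session rather than an application object:

```python
import pickle
import shutil

def suspend(session, device_path):
    # Freeze the session's state onto the portable storage device.
    with open(device_path, "wb") as f:
        pickle.dump(session, f)

def replicate(device_path, backup_path):
    # Cheap copy of the checkpoint for fault-resilience.
    shutil.copyfile(device_path, backup_path)

def resume(device_path):
    # Revive the same session on any other machine.
    with open(device_path, "rb") as f:
        return pickle.load(f)
```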
Measuring and Managing the Remote Client Perceived Response Time for Web Transactions using Server-side Techniques
Ph.D. Thesis, Department of Computer Science, Columbia University, August 2006
As businesses continue to grow their dependence on the World Wide Web, it is increasingly vital for them to accurately measure and manage the response time of their Web services. This dissertation shows that it is possible to determine the remote client perceived response time for Web transactions using only server-side techniques, and that doing so is useful and essential for the management of latency-based service level agreements. First, we present Certes, a novel modeling algorithm that accurately estimates connection establishment latencies as perceived by remote clients, even in the presence of admission control drops. We present a non-linear optimization that models this effect and then an O(c) time and space online approximation algorithm. Second, we present ksniffer, an intelligent traffic monitor that accurately determines the pageview response times experienced by a remote client without any changes to existing systems or Web content. Novel algorithms are presented for inferring the remote client perceived response time on a per-pageview basis, taking into account network loss, RTT, and incomplete information. Third, we present Remote Latency-based Management (RLM), a system that controls the latencies experienced by the remote client by manipulating the packet traffic into and out of the Web server complex. RLM tracks the progress of each pageview download in real time, as each embedded object is requested, making fine-grained decisions on the processing of each request as it pertains to the overall pageview latency. RLM introduces fast SYN and SYN/ACK retransmission and embedded object rewrite and removal techniques to control the latency perceived by the remote client. We have implemented these mechanisms in Linux and demonstrate their effectiveness across a wide range of realistic workloads. Our experimental results show for the first time that server-side response time measurements can be made in real time at gigabit traffic rates to within 5% of that perceived by the remote client, an order of magnitude better than common application-level techniques run at the Web server. Our results also demonstrate for the first time how both the mean and the shape of the per-pageview client perceived response time distribution can be dynamically controlled at the server complex.
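One intuition behind Certes can be shown with a small back-of-the-envelope model: the server only observes the SYN it finally accepts, while a client whose earlier SYNs were dropped has been waiting through TCP's exponential retransmission backoff. The sketch below assumes a 3-second initial retransmission timeout that doubles on each retry (typical TCP behavior of the era); the thesis's actual model is considerably more involved:

```python
def perceived_setup_latency(syn_drops, rtt, initial_rto=3.0):
    """Client-perceived connection setup time when the client's first
    `syn_drops` SYNs were dropped (e.g., by admission control)."""
    waited = sum(initial_rto * (2 ** i) for i in range(syn_drops))
    return waited + rtt  # retransmit waits plus the successful handshake

# Two dropped SYNs and a 50 ms RTT: 3 + 6 + 0.05 = 9.05 s perceived by
# the client, even though the server saw an apparently fast accept.
```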
Proceedings of the 25th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC 2006), July 2006
We present Grouped Distributed Queues (GDQ), the first proportional share scheduler for multiprocessor systems that scales well with a large number of processors and processes. GDQ uses a distributed queue architecture, and achieves accurate proportional fairness scheduling with only O(1) scheduling overhead. GDQ takes a novel approach to distributed queuing: instead of creating per-processor queues that need to be constantly balanced to achieve any measure of proportional sharing fairness, GDQ uses a simple grouping strategy to organize processes into groups based on similar processor time allocation rights, and then assigns processors to groups based on aggregate group shares. Group membership of processes is static, and fairness is achieved by dynamically migrating processors among groups. The set of processors working on a group use simple, low-overhead round-robin queues, while processor reallocation among groups is achieved using a new multiprocessor adaptation of Weighted Fair Queuing. By commoditizing processors and decoupling their allocation from process scheduling, GDQ provides, with only constant scheduling overhead, fairness within a constant of the ideal generalized processor sharing model for process weights with a fixed upper bound. We have implemented GDQ in Linux and measured its performance. Our experimental results show that GDQ has low overhead and scales well with the number of processors and processes.
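The core allocation step, assigning processors to groups in proportion to aggregate group shares, can be illustrated with a toy largest-remainder split (a sketch of the idea only; GDQ itself migrates processors dynamically via its Weighted Fair Queuing adaptation rather than computing a one-shot static split):

```python
def allocate_processors(groups, nprocs):
    """groups: {name: aggregate share}; returns {name: cpu count}
    proportional to shares, resolving fractions by largest remainder."""
    total = sum(groups.values())
    exact = {g: nprocs * s / total for g, s in groups.items()}
    alloc = {g: int(x) for g, x in exact.items()}
    leftovers = nprocs - sum(alloc.values())
    # Hand leftover CPUs to the groups with the largest fractional parts.
    for g in sorted(groups, key=lambda g: exact[g] - alloc[g],
                    reverse=True)[:leftovers]:
        alloc[g] += 1
    return alloc

# allocate_processors({"A": 5, "B": 3, "C": 2}, 8)
#   -> {'A': 4, 'B': 2, 'C': 2}   (shares 5:3:2 across 8 CPUs)
```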
Proceedings of the Joint International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS/Performance 2006), June 2006
Understanding and managing the response time of web services is of key importance as dependence on the World Wide Web continues to grow. We present Remote Latency-based Management (RLM), a novel server-side approach for managing pageview response times as perceived by remote clients, in real time. RLM passively monitors server-side network traffic, accurately tracks the progress of page downloads and their response times in real time, and dynamically adapts connection setup behavior and web page content as needed to meet response time goals. To manage client perceived pageview response times, RLM builds a novel event node model to guide the use of several techniques for manipulating the packet traffic in and out of a web server complex, including fast SYN and SYN/ACK retransmission, and embedded object removal and rewrite. RLM operates as a stand-alone appliance that simply sits in front of a web server complex, without any changes to existing web clients, servers, or applications. We have implemented RLM on an inexpensive, commodity, Linux-based PC and present experimental results that demonstrate its effectiveness in managing client perceived pageview response times on transactional e-commerce web workloads.
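The per-object decisions RLM makes as a page downloads can be caricatured as a simple budget check: when the time already spent plus a projected fetch cost would blow the pageview's latency target, degrade what can be degraded. The sketch below is an illustration of that control idea under assumed object attributes, not RLM's actual event node model:

```python
def handle_request(obj, elapsed, target, projected_cost):
    """Decide how to serve one embedded object, given the time already
    spent on the pageview and its overall latency target (seconds)."""
    if elapsed + projected_cost <= target:
        return "serve"        # on track: serve the object as-is
    if obj.get("optional"):
        return "remove"       # drop an optional embedded object
    if obj.get("smaller_variant"):
        return "rewrite"      # substitute a cheaper variant
    return "serve"            # required object: serve it anyway

# A decorative banner requested 1.8 s into a 2 s budget:
# handle_request({"optional": True}, 1.8, 2.0, 0.5) -> 'remove'
```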
Department of Computer Science, Columbia University Technical Report CUCS-027-06, June 2006
Proceedings of the 2006 IEEE Symposium on Security and Privacy (SP '06), May 2006
Many current systems allow data produced by potentially malicious sources to be mounted as a file system. File system code must check this data for dangerous values or invariant violations before using it. Because file system code typically runs inside the operating system kernel, even a single unchecked value can crash the machine or lead to an exploit. Unfortunately, validating file system images is complex: they form DAGs with complex dependency relationships across massive amounts of data bound together with intricate, undocumented assumptions. This paper shows how to automatically find bugs in such code using symbolic execution. Rather than running the code on manually-constructed concrete input, we instead run it on symbolic input that is initially allowed to be "anything." As the code runs, it observes (tests) this input and thus constrains its possible values.
We generate test cases by solving these constraints for concrete values. The approach works well in practice: we checked the disk mounting code of three widely-used Linux file systems, ext2, ext3, and JFS, and found bugs in all of them where malicious data could either cause a kernel panic or form the basis of a buffer overflow attack.
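The core loop of the technique (run the code on unconstrained input, record the constraints each branch imposes, then solve for concrete values that reach a bug) can be shown with a toy example. The sketch below brute-forces a tiny concrete domain instead of invoking a real constraint solver, and `mount_check` is a made-up stand-in for kernel mount code (0xEF53 is ext2's actual magic number):

```python
def mount_check(superblock):
    """Stand-in for mount code: each branch effectively adds a
    constraint on the inputs that can reach the code below it."""
    if superblock["magic"] != 0xEF53:
        return "rejected"
    if superblock["block_size"] == 0:
        # Path constraints so far: magic == 0xEF53 and block_size == 0.
        raise ZeroDivisionError("panic: division by zero block size")
    return "mounted"

def find_crashing_inputs():
    # "Solve" the accumulated constraints by searching a small domain.
    for magic in (0x0000, 0xEF53):
        for block_size in (0, 1, 4096):
            sb = {"magic": magic, "block_size": block_size}
            try:
                mount_check(sb)
            except ZeroDivisionError:
                yield sb  # a concrete image that panics the "kernel"

# list(find_crashing_inputs()) yields exactly one image: the one with
# the valid magic (0xEF53) and a zero block size.
```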