- About: This should describe the Systems Research Collaboration and present the overall research goals of the new group.
- People: Here are the different labs in the SRC…
- Publications: A page where you will find categorized publications.
- Projects: A page where you will find our projects.
- Resources: Various resources for prospective students, current students, and alumni. Maybe put something here about life in NYC and at Columbia…
Proceedings of the 12th NASA Goddard / 21st IEEE Conference on Mass Storage Systems and Technologies (MSST), April 2004
Storage management costs continue to increase despite the decrease in hardware costs. We propose a system to reduce storage maintenance costs by reducing the amount of data backed up and reclaiming disk space using various methods (e.g., transparently compressing old files). Our system also provides a rich set of policies, which allows administrators and users to select the appropriate methods for reclaiming space. Our performance evaluation shows that the overheads under normal use are negligible. We report space savings on modern systems ranging from 25% to 76%, which result in extending storage lifetimes by 72%.
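To make the reclamation idea concrete, here is a minimal user-space sketch of one such method: compressing files that have not been accessed recently. It is only an illustration; the actual system applies its policies transparently inside the file system, and the 90-day threshold and function names here are assumptions.

```python
import gzip
import os
import shutil
import time

# Hypothetical policy: compress regular files not accessed in 90 days.
AGE_THRESHOLD_SECS = 90 * 24 * 3600

def reclaim_by_compression(root):
    """Walk a directory tree and gzip-compress cold files.

    A user-space approximation of one reclamation method; the real
    system does this transparently, so applications still see the
    uncompressed file.
    """
    now = time.time()
    saved = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith(".gz") or not os.path.isfile(path):
                continue
            st = os.stat(path)
            if now - st.st_atime < AGE_THRESHOLD_SECS:
                continue  # file is still "hot"; leave it alone
            with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                shutil.copyfileobj(src, dst)
            saved += st.st_size - os.path.getsize(path + ".gz")
            os.remove(path)
    return saved  # bytes reclaimed under this policy
```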
Proceedings of the 1st USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI 2004), March 2004
We have developed SWAP, a system that automatically detects process dependencies and accounts for such dependencies in scheduling. SWAP uses system call history to determine possible resource dependencies among processes in an automatic and fully transparent fashion. Because some dependencies cannot be precisely determined, SWAP associates confidence levels with dependency information that are dynamically adjusted using feedback from process blocking behavior. SWAP can schedule processes using this imprecise dependency information in a manner that is compatible with existing scheduling mechanisms and ensures that actual scheduling behavior corresponds to the desired scheduling policy in the presence of process dependencies. We have implemented SWAP in Linux and measured its effectiveness on microbenchmarks and real applications. Our results show that SWAP has low overhead, effectively solves the priority inversion problem, and can provide substantial improvements in system performance in scheduling processes with dependencies.
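The core bookkeeping can be pictured with a small model: dependencies guessed from system calls start at a middling confidence that feedback then raises or lowers. This sketch is hypothetical (the class name, scores, and thresholds are invented for illustration; SWAP does this inside the Linux kernel):

```python
# Toy model of SWAP-style dependency tracking: dependencies guessed
# from system-call history carry a confidence score that is adjusted
# by feedback from observed blocking behavior.

class DependencyTable:
    BOOST, DECAY = 0.2, 0.1  # illustrative adjustment steps

    def __init__(self):
        # (waiter_pid, holder_pid) -> confidence in [0.0, 1.0]
        self.conf = {}

    def observe_syscall(self, waiter, holder):
        """A syscall (e.g., a read on a pipe another process writes)
        suggests 'waiter' may depend on 'holder'."""
        self.conf.setdefault((waiter, holder), 0.5)

    def feedback(self, waiter, holder, unblocked):
        """After running 'holder' on behalf of 'waiter', raise the
        confidence if 'waiter' actually became runnable, else lower it."""
        key = (waiter, holder)
        if key in self.conf:
            delta = self.BOOST if unblocked else -self.DECAY
            self.conf[key] = min(1.0, max(0.0, self.conf[key] + delta))

    def likely_holders(self, waiter, threshold=0.5):
        """Processes worth scheduling early to unblock 'waiter'."""
        return [h for (w, h), c in self.conf.items()
                if w == waiter and c >= threshold]
```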
ACM Transactions on Computer Systems (TOCS), Volume 22, Issue 1, February 2004
While many application service providers have proposed using thin-client computing to deliver computational services over the Internet, little work has been done to evaluate the effectiveness of thin-client computing in a wide-area network. To assess the potential of thin-client computing in the context of future commodity high-bandwidth Internet access, we have used a novel, noninvasive slow-motion benchmarking technique to evaluate the performance of several popular thin-client computing platforms in delivering computational services cross-country over Internet2. Our results show that using thin-client computing in a wide-area network environment can deliver acceptable performance over Internet2, even when client and server are located thousands of miles apart on opposite ends of the country. However, performance varies widely among thin-client platforms and not all platforms are suitable for this environment. While many thin-client systems are touted as being bandwidth efficient, we show that network latency is often the key factor in limiting wide-area thin-client performance. Furthermore, we show that the same techniques used to improve bandwidth efficiency often result in worse overall performance in wide-area networks. We characterize and analyze the different design choices in the various thin-client platforms and explain which of these choices should be selected for supporting wide-area computing services.
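A back-of-the-envelope calculation shows why round trips dominate once bandwidth is plentiful. The numbers below are illustrative, not figures from the paper:

```python
# Simple model of an interactive thin-client operation: it pays the
# round-trip time once per protocol round trip, plus transfer time
# for the display payload.

def op_time_ms(round_trips, payload_kb, rtt_ms, bandwidth_mbps):
    transfer_ms = payload_kb * 8 / (bandwidth_mbps * 1000) * 1000
    return round_trips * rtt_ms + transfer_ms

# Hypothetical cross-country path: ~66 ms RTT, ample bandwidth.
for trips in (1, 4, 8):
    t = op_time_ms(trips, payload_kb=100, rtt_ms=66, bandwidth_mbps=100)
    print(f"{trips} round trip(s): {t:.1f} ms")
# 1 trip: 74 ms; 8 trips: 536 ms. Doubling bandwidth barely helps;
# each extra round trip adds a full RTT, which is why trading round
# trips for extra bandwidth can win in wide-area networks.
```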
Department of Computer Science, Columbia University Technical Report CUCS-005-04, January 2004
Existing applications often contain security holes that are not patched until after the system has already been compromised. Even when software updates are applied to address security issues, they often result in system services being unavailable for some time. To address these system security and availability issues, we have developed peas and pods. A pea provides a least-privilege environment that can restrict processes to the minimal subset of system resources needed to run. This mechanism enables the creation of environments for privileged program execution that can help with intrusion prevention and containment. A pod provides a group of processes and associated users with a consistent, machine-independent virtualized environment. Pods are coupled with a novel checkpoint-restart mechanism which allows processes to be migrated across minor operating system kernel versions with different security patches. This mechanism gives system administrators the flexibility to patch their operating systems immediately without worrying about potential data loss or needing to schedule system downtime. We have implemented peas and pods in Linux without requiring any application or operating system kernel changes. Our measurements on real-world desktop and server applications demonstrate that peas and pods impose little overhead and enable secure isolation and migration of untrusted applications.
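As a rough user-space analogue of a pea, the sketch below runs a helper under standard POSIX restrictions (an unprivileged uid, a confined working directory, and tight resource limits). The actual peas are enforced by the OS with no application changes; the uid and limits here are assumptions for the example:

```python
# Illustrative least-privilege wrapper built from standard POSIX
# knobs available in Python. Must itself start with enough privilege
# to call setuid(); all values below are example choices.

import os
import resource
import subprocess

def run_in_pea(argv, workdir="/var/empty", uid=65534):
    def confine():
        os.chdir(workdir)                                    # confine fs view
        resource.setrlimit(resource.RLIMIT_NPROC, (0, 0))    # no fork()
        resource.setrlimit(resource.RLIMIT_NOFILE, (8, 8))   # few open fds
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))      # 5 s of CPU
        os.setuid(uid)                                       # drop privilege
    # confine() runs in the child just before exec
    return subprocess.run(argv, preexec_fn=confine)

# e.g. run_in_pea(["/usr/bin/some-untrusted-filter"])
```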
Proceedings of the American Medical Informatics Association (AMIA) 2003 Annual Symposium, November 2003
Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and that optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer in modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance under conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.
Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS '03), October 2003
This paper describes a system and annotation language, MECA, for checking security rules. MECA is expressive and designed for checking real systems. It provides a variety of practical constructs to effectively annotate large bodies of code. For example, it allows programmers to write programmatic annotators that automatically annotate large bodies of source code. As another example, it lets programmers use general predicates to determine if an annotation is applied; we have used this ability to easily handle kernel backdoors and other false-positive-inducing constructs. Once code is annotated, MECA propagates annotations aggressively, allowing a single manual annotation to derive many additional annotations (e.g., over one hundred in our experiments), freeing programmers from the heavy manual effort required by most past systems.
MECA is effective. Our most thorough case study was a user-pointer checker that used 75 annotations to check thousands of declarations in millions of lines of code in the Linux system. It found over forty errors, many of which were serious, with only eight false positives.
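The propagation idea can be reduced to a few lines: one manual annotation flows through assignments so that unannotated variables inherit it, and unchecked uses get flagged. MECA itself operates on C source with a much richer rule language; this Python reduction is purely illustrative:

```python
# Miniature version of annotation propagation: a manually annotated
# ("tainted") user pointer propagates through copies, and any
# dereference of a tainted, unsanitized variable is reported.

def check(assignments, seeds, unsafe_uses, sanitized):
    """assignments: (dst, src) copies in program order.
    seeds: variables manually annotated as user-controlled.
    unsafe_uses: variables dereferenced in kernel context.
    sanitized: variables validated by a checking function first."""
    tainted = set(seeds)
    for dst, src in assignments:          # propagate the annotation
        if src in tainted:
            tainted.add(dst)
    return [v for v in unsafe_uses if v in tainted and v not in sanitized]

# One manual annotation on 'arg' derives annotations for 'p' and 'q':
errors = check(assignments=[("p", "arg"), ("q", "p")],
               seeds={"arg"}, unsafe_uses=["q", "local"], sanitized=set())
print(errors)  # ['q'] -> dereference of an unchecked user pointer
```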
Proceedings of the 2003 ACM Workshop on Survivable and Self-Regenerative Systems, October 2003
We present SABER (Survivability Architecture: Block, Evade, React), a proposed survivability architecture that blocks, evades and reacts to a variety of attacks by using several security and survivability mechanisms in an automated and coordinated fashion. Contrary to the ad hoc manner in which contemporary survivable systems are built (using isolated, independent security mechanisms such as firewalls, intrusion detection systems and software sandboxes), SABER integrates several different technologies in an attempt to provide a unified framework for responding to the wide range of attacks malicious insiders and outsiders can launch. This coordinated multi-layer approach will be capable of defending against attacks targeted at various levels of the network stack, such as congestion-based DoS attacks, software-based DoS or code-injection attacks, and others. Our fundamental insight is that while multiple lines of defense are useful, most conventional, uncoordinated approaches fail to exploit the full range of available responses to incidents. By coordinating the response, the ability to survive successful security breaches increases substantially. We discuss the key components of SABER, how they will be integrated together, and how we can leverage the promising results of the individual components to improve survivability in a variety of coordinated attack scenarios. SABER is currently in the prototyping stages, with several interesting open research topics.
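In miniature, the coordination idea looks like a dispatch from one detector alert to responses at several layers at once. The component names and dispatch table below are invented for illustration; the real architecture wires together full systems (IDS, firewall, sandboxing, migration, patching):

```python
# Toy coordinator: instead of each defense reacting alone, one alert
# fans out to every relevant response layer.

RESPONSES = {
    "congestion_dos": ["rate_limit_at_firewall", "migrate_service"],
    "code_injection": ["sandbox_process", "generate_patch", "restart_service"],
    "insider_probe":  ["tighten_acls", "increase_audit_level"],
}

def coordinate(alert_type, target):
    """Fan a single detector alert out across the response layers."""
    for action in RESPONSES.get(alert_type, ["alert_operator"]):
        print(f"[{target}] {action}")

coordinate("code_injection", "web-frontend-2")
```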
Department of Computer Science, Columbia University Technical Report CUCS-021-03, July 2003
We present SABER (Survivability Architecture: Block, Evade, React), a proposed survivability architecture that blocks, evades and reacts to a variety of attacks by using several security and survivability mechanisms in an automated and coordinated fashion. Contrary to the ad hoc manner in which contemporary survivable systems are built (using isolated, independent security mechanisms such as firewalls, intrusion detection systems and software sandboxes), SABER integrates several different technologies in an attempt to provide a unified framework for responding to the wide range of attacks malicious insiders and outsiders can launch. This coordinated multi-layer approach will be capable of defending against attacks targeted at various levels of the network stack, such as congestion-based DoS attacks, software-based DoS or code-injection attacks, and others. Our fundamental insight is that while multiple lines of defense are useful, most conventional, uncoordinated approaches fail to exploit the full range of available responses to incidents. By coordinating the response, the ability to survive even in the face of successful security breaches increases substantially. We discuss the key components of SABER, how they will be integrated together, and how we can leverage the promising results of the individual components to improve survivability in a variety of coordinated attack scenarios. SABER is currently in the prototyping stages, with several interesting open research topics.
Department of Computer Science, Columbia University Technical Report CUCS-018-03, June 2003
We introduce Group Round-Robin (GRR) scheduling, a hybrid scheduling framework based on a novel grouping strategy that narrows the traditional tradeoff between fairness and computational complexity. GRR combines its grouping strategy with a specialized round-robin scheduling algorithm that utilizes the properties of GRR groups to schedule flows within groups in a manner that provides bounds on fairness with only O(1) time complexity. Under the practical assumption that GRR employs a small constant number of groups, we apply GRR to popular fair queueing scheduling algorithms and show how GRR can be used to achieve constant bounds on fairness and time complexity for these algorithms.
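The structure that makes this possible can be sketched briefly: flows are binned into a few groups by weight, an intergroup policy picks among the groups in proportion to their total weight, and a plain round robin runs within each group. The sketch below substitutes a smooth weighted round robin for the paper's intergroup policy and elides the fairness machinery; it only shows why per-decision work stays constant when the number of groups is constant:

```python
from collections import defaultdict, deque

def group_of(weight):
    return weight.bit_length() - 1     # similar weights share a group

class GRRSketch:
    def __init__(self, flows):         # flows: {flow_id: weight}
        self.groups = defaultdict(deque)
        self.weight = defaultdict(int)
        for fid, w in flows.items():
            g = group_of(w)
            self.groups[g].append(fid)
            self.weight[g] += w
        self.credit = {g: 0 for g in self.groups}

    def next_flow(self):
        # Smooth weighted round robin across the (constantly few)
        # groups, then plain round robin within the chosen group:
        # per-decision work is independent of the number of flows.
        for g in self.credit:
            self.credit[g] += self.weight[g]
        g = max(self.credit, key=self.credit.get)
        self.credit[g] -= sum(self.weight.values())
        flow = self.groups[g][0]
        self.groups[g].rotate(-1)
        return flow

sched = GRRSketch({"a": 1, "b": 2, "c": 8})
print("".join(sched.next_flow() for _ in range(11)))
# 'c' is chosen 8 of 11 times, matching its share of the total weight.
```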
Proceedings of the 12th International World Wide Web Conference (WWW 2003), May 2003
Web applications are becoming increasingly popular for mobile wireless systems. However, wireless networks can have high packet loss rates, which can degrade web browsing performance on wireless systems. An alternative approach is wireless thin-client computing, in which the web browser runs on a remote thin server with a more reliable wired connection to the Internet. A mobile client then maintains a connection to the thin server to receive display updates over the lossy wireless network. To assess the viability of this thin-client approach, we compare the web browsing performance of thin clients against fat clients that run the web browser locally in lossy wireless networks. Our results show that thin clients can operate quite effectively over lossy networks. Surprisingly, compared to fat clients running web browsers locally, thin clients can be faster and more resilient on web applications over lossy wireless LANs despite having to send more data over the network. We characterize and analyze different design choices in various thin-client systems and explain why these approaches can yield superior web browsing performance in lossy wireless networks.
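For intuition about why loss punishes a local browser fetching many objects (this is not the paper's model), the standard Mathis et al. approximation says steady-state TCP throughput falls off with the square root of the loss rate:

```python
# Rough intuition only: TCP throughput ~ MSS / (RTT * sqrt(p)), so
# even modest loss rates sharply cap each connection, and every new
# short transfer pays slow start again.

from math import sqrt

def tcp_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    """Mathis approximation: rate ~ MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8 / (rtt_s * sqrt(loss_rate))) / 1e6

for p in (0.0001, 0.01, 0.05):
    print(f"loss {p:>6}: {tcp_throughput_mbps(1460, 0.05, p):6.2f} Mbps")
# loss 0.0001: ~23 Mbps; loss 0.05: ~1 Mbps. A thin client that keeps
# one long-lived connection for display updates avoids restarting this
# ramp-up for each embedded web object.
```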