- About: This page should describe the systems research collaboration and present the overall research goals of the new group.
- People: Here are the different labs in the SRC…
- Publications: A page where you will find categorized publications.
- Projects: A page where you will find our projects.
- Resources: Various resources for prospective students, current students, and alumni. Maybe put something here about life in NYC and at Columbia…
Proceedings of the 12th International World Wide Web Conference (WWW 2003), May 2003
Web applications are becoming increasingly popular for mobile wireless systems. However, wireless networks can have high packet loss rates, which can degrade web browsing performance on wireless systems. An alternative approach is wireless thin-client computing, in which the web browser runs on a remote thin server with a more reliable wired connection to the Internet. A mobile client then maintains a connection to the thin server to receive display updates over the lossy wireless network. To assess the viability of this thin-client approach, we compare the web browsing performance of thin clients against fat clients that run the web browser locally in lossy wireless networks. Our results show that thin clients can operate quite effectively over lossy networks. Compared to fat clients running web browsers locally, thin clients can, surprisingly, be faster and more resilient on web applications over lossy wireless LANs despite having to send more data over the network. We characterize and analyze different design choices in various thin-client systems and explain why these approaches can yield superior web browsing performance in lossy wireless networks.
Department of Computer Science, Columbia University Technical Report CUCS-005-03, April 2003
Cooperating processes are increasingly used to structure modern applications in common client-server computing environments. This cooperation among processes often results in dependencies such that a certain process cannot proceed until other processes finish some tasks. Despite the popularity of using cooperating processes in application design, operating systems typically ignore process dependencies and schedule processes independently. This can result in poor system performance due to the actual scheduling behavior contradicting the desired scheduling policy. To address this problem, we have developed SWAP, a system that automatically detects process dependencies and accounts for such dependencies in scheduling. SWAP uses system call history to determine possible resource dependencies among processes in an automatic and fully transparent fashion. Because some dependencies cannot be precisely determined, SWAP associates confidence levels with dependency information that are dynamically adjusted using feedback from process blocking behavior. SWAP can schedule processes using this imprecise dependency information in a manner that is compatible with existing scheduling mechanisms and ensures that actual scheduling behavior corresponds to the desired scheduling policy in the presence of process dependencies. We have implemented SWAP in Linux and measured its effectiveness on microbenchmarks and real applications. Our experimental results show that SWAP has low overhead and can provide substantial improvements in system performance in scheduling processes with dependencies.
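The confidence-level mechanism the abstract describes can be illustrated with a small sketch. This is a hypothetical simplification, not the paper's implementation: the class name, the update rule, and the threshold are all assumptions. A waiter-to-holder dependency edge gains confidence when the waiter actually blocks as predicted, and decays when it does not.

```python
# Hypothetical sketch of SWAP-style dependency tracking. The gain/decay
# update rule and the 0.5 boost threshold are illustrative assumptions,
# not taken from the paper.

class DependencyTracker:
    def __init__(self, gain=0.25, decay=0.5):
        self.conf = {}          # (waiter, holder) -> confidence in [0, 1]
        self.gain = gain        # additive reward when a prediction is confirmed
        self.decay = decay      # multiplicative penalty when it is not

    def observe_syscall(self, waiter, holder):
        # A system call suggests (but does not prove) a resource dependency.
        self.conf.setdefault((waiter, holder), 0.0)

    def feedback(self, waiter, holder, blocked):
        # Dynamically adjust confidence from observed blocking behavior.
        c = self.conf.get((waiter, holder), 0.0)
        if blocked:
            c = min(1.0, c + self.gain)
        else:
            c *= self.decay
        self.conf[(waiter, holder)] = c

    def boost_candidates(self, waiter, threshold=0.5):
        # Holders confident enough to be scheduled on the waiter's behalf.
        return [h for (w, h), c in self.conf.items()
                if w == waiter and c >= threshold]
```

The point of the feedback loop is that scheduling decisions tolerate imprecise dependency information: a wrong guess decays quickly and stops influencing the scheduler.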
Department of Computer Science, Columbia University Technical Report CUCS-012-03, April 2003
Proportional share resource management provides a flexible and useful abstraction for multiplexing timeshared resources. However, previous proportional share mechanisms have either weak proportional sharing accuracy or high scheduling overhead. We present Group Ratio Round-Robin (GR3), a proportional share scheduler that can provide high proportional sharing accuracy with O(1) scheduling overhead.
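The grouping idea behind a scheduler like GR3 can be sketched briefly. This is an illustration under assumed details, not the paper's algorithm: clients are binned into groups by the power of two of their weight, so each group needs only plain round-robin internally, while proportionality across groups is handled separately by the intergroup mechanism (omitted here).

```python
import math
from collections import defaultdict

# Illustrative sketch of weight grouping for a ratio round-robin scheduler.
# Group g holds clients whose weight lies in [2^g, 2^(g+1)); within a group,
# weights differ by less than 2x, so ordinary round-robin is accurate enough.

def group_of(weight):
    # Integer part of log2(weight) picks the group bin.
    return int(math.log2(weight))

def build_groups(clients):
    # clients: dict mapping client name -> positive integer weight
    groups = defaultdict(list)
    for name, w in clients.items():
        groups[group_of(w)].append(name)
    return dict(groups)
```

Because membership changes touch only one group's list, and the number of groups grows logarithmically with the largest weight, operations stay cheap regardless of the number of clients.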
Dept. of Computer Science, Stony Brook University Technical Report , FSL-03-01, March 2003
Storage consumption continues to grow rapidly, especially with the popularity of multimedia files. Worse, current disk technologies are reaching physical media limitations. Storage hardware costs represent a small fraction of overall management costs, which include backups, quota maintenance, and constant interruptions due to upgrades to incrementally larger storage. HSM systems can extend storage lifetimes by migrating infrequently-used files to less expensive storage. Although HSMs can reduce overall management costs, they also add costs due to additional hardware. Our key approach to reducing total storage management costs is to reduce actual storage consumption. We achieve this in two ways. First, whereas files often have persistent lifetimes, we classify files into categories of importance, and allow the system to reclaim some space based on a file's importance (e.g., transparently compress old files). Second, our system provides a rich set of policies. We allow users to tailor their disk usage policies, offloading some of the management burdens from the system and its administrators. We have implemented the system and evaluated it. Performance overheads under normal use are negligible. We report space savings on modern systems ranging from 20% to 75%, which result in extending storage lifetimes by up to 72%.
ACM Transactions on Computer Systems (TOCS), Volume 21, Issue 1, February 2003
Modern thin-client systems are designed to provide the same graphical interfaces and applications available on traditional desktop computers while centralizing administration and allowing more efficient use of computing resources. Despite the rapidly increasing popularity of these client-server systems, there are few reliable analyses of their performance. Industry standard benchmark techniques commonly used for measuring desktop system performance are ill-suited for measuring the performance of thin-client systems because these benchmarks only measure application performance on the server, not the actual user-perceived performance on the client. To address this problem, we have developed slow-motion benchmarking, a new measurement technique for evaluating thin-client systems. In slow-motion benchmarking, performance is measured by capturing network packet traces between a thin client and its respective server during the execution of a slow-motion version of a conventional benchmark application. These results can then be used either independently or in conjunction with conventional benchmark results to yield an accurate and objective measure of the performance of thin-client systems. We have demonstrated the effectiveness of slow-motion benchmarking by using this technique to measure the performance of several popular thin-client systems in various network environments on Web and multimedia workloads. Our results show that slow-motion benchmarking solves the problems with using conventional benchmarks on thin-client systems and is an accurate tool for analyzing the performance of these systems.
Proceedings of the 5th Symposium on Operating Systems Design and Implementation (OSDI 2002), December 2002
We have created Zap, a novel system for transparent migration of legacy and networked applications. Zap provides a thin virtualization layer on top of the operating system that introduces pods, which are groups of processes that are provided a consistent, virtualized view of the system. This decouples processes in pods from dependencies to the host operating system and other processes on the system. By integrating Zap virtualization with a checkpoint-restart mechanism, Zap can migrate a pod of processes as a unit among machines running independent operating systems without leaving behind any residual state after migration. We have implemented a Zap prototype in Linux that supports transparent migration of unmodified applications without any kernel modifications. We demonstrate that our Linux Zap prototype can provide general-purpose process migration functionality with low overhead. Our experimental results for migrating pods used for running a standard user's X windows desktop computing environment and for running an Apache web server show that these kinds of pods can be migrated with subsecond checkpoint and restart latencies.
Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS 2002), June 2002
As businesses continue to grow their World Wide Web presence, it is becoming increasingly vital for them to have quantitative measures of the client perceived response times of their web services. We present Certes (CliEnt Response Time Estimated by the Server), an online server-based mechanism for web servers to measure client perceived response time, as if measured at the client. Certes is based on a model of TCP that quantifies the effect that connection drops have on perceived client response time, by using three simple server-side measurements: connection drop rate, connection accept rate and connection completion rate. The mechanism does not require modifications to http servers or web pages, does not rely on probing or third party sampling, and does not require client-side modifications or scripting. Certes can be used to measure response times for any web content, not just HTML. We have implemented Certes and compared its response time measurements with those obtained with detailed client instrumentation. Our results demonstrate that Certes provides accurate server-based measurements of client response times in HTTP 1.0/1.1 environments, even with rapidly changing workloads. Certes runs online in constant time with very low overhead. It can be used at web sites and server farms to verify compliance with service level objectives.
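The intuition for why connection drops inflate client-perceived response time can be shown with a small sketch. This is an illustration of the general idea, not the paper's actual model: it assumes each dropped SYN is independent with a probability derived from the server-measured drop rate, and uses TCP's exponential SYN retransmission backoff (roughly 3 s, 6 s, 12 s between retries).

```python
# Hedged sketch: inflate server-measured response time by the expected
# client-side delay from dropped connection attempts. The backoff values
# and the independence assumption are simplifications for illustration.

SYN_BACKOFF = [3.0, 6.0, 12.0]   # seconds a client waits before each SYN retry

def expected_setup_delay(drop_rate):
    # Expected extra connection-setup delay per connection, assuming each
    # SYN is independently dropped with probability drop_rate.
    delay, p_retry = 0.0, 1.0
    for timeout in SYN_BACKOFF:
        p_retry *= drop_rate        # probability this retry is needed
        delay += p_retry * timeout  # client waited this long before retrying
    return delay

def client_response_time(server_time, drop_rate):
    # Server-visible time plus the setup delay the server never sees.
    return server_time + expected_setup_delay(drop_rate)
```

Even a modest drop rate adds seconds of delay that are invisible to purely server-side timing, which is why the model needs the drop rate as an input.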
Proceedings of the 2002 USENIX Annual Technical Conference, June 2002
The growing popularity of thin-client systems makes it important to determine the factors that govern the performance of these thin-client architectures. To assess the viability of the thin-client computing model, we measured the performance of six popular thin-client platforms (Citrix MetaFrame, Microsoft Terminal Services, Sun Ray, Tarantella, VNC, and X) running over a wide range of network access bandwidths. We find that thin-client systems can perform well on web and multimedia applications in LAN environments, but the efficiency of the thin-client protocols varies widely. We analyze the differences in the various approaches and explain the impact of the underlying remote display protocols on overall performance. Our results quantify the impact of different approaches in display encoding primitives, display update policies, and display caching and compression techniques across a broad range of thin-client systems.
Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS 2002), June 2002
While many application service providers have proposed using thin-client computing to deliver computational services over the Internet, little work has been done to evaluate the effectiveness of thin-client computing in a wide-area network. To assess the potential of thin-client computing in the context of future commodity high-bandwidth Internet access, we have used a novel, non-invasive slow-motion benchmarking technique to evaluate the performance of several popular thin-client computing platforms in delivering computational services cross-country over Internet2. Our results show that using thin-client computing in a wide-area network environment can deliver acceptable performance over Internet2, even when client and server are located thousands of miles apart on opposite ends of the country. However, performance varies widely among thin-client platforms and not all platforms are suitable for this environment. While many thin-client systems are touted as being bandwidth efficient, we show that network latency is often the key factor in limiting wide-area thin-client performance. Furthermore, we show that the same techniques used to improve bandwidth efficiency often result in worse overall performance in wide-area networks. We characterize and analyze the different design choices in the various thin-client platforms and explain which of these choices should be selected for supporting wide-area computing services.
Department of Computer Science, Columbia University Technical Report CUCS-014-02, June 2002
We introduce elastic quotas, a disk space management technique that makes disk space an elastic resource like CPU and memory. Elastic quotas allow all users to use unlimited amounts of available disk space while still providing system administrators the ability to control how the disk space is allocated among users. Elastic quotas maintain existing persistent file semantics while supporting user-controlled policies for removing files when the file system becomes too full. We have implemented an elastic quota system in Solaris and measured its performance. The system is simple to implement, requires no kernel modifications, and is compatible with existing disk space management methods. Our results show that elastic quotas are an effective, low-overhead solution for flexible file system management.
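The policy-driven reclamation the abstract describes might look something like the following sketch. The field names, water marks, and least-recently-used ordering are assumptions for illustration, not the paper's mechanism: when usage crosses a high-water mark, only files a user's policy has marked elastic are reclaimed, oldest first, until usage falls below a low-water mark.

```python
# Hypothetical sketch of an elastic-quota reclamation pass. The "elastic"
# flag, the water marks, and LRU ordering are illustrative assumptions.

def reclaim(files, used, capacity, high_water=0.9, low_water=0.8):
    # files: list of dicts with "size", "atime", and "elastic" (True if the
    # owner's policy allows the system to reclaim this file's space).
    if used <= high_water * capacity:
        return used, []                     # plenty of room; do nothing
    victims = sorted((f for f in files if f["elastic"]),
                     key=lambda f: f["atime"])   # least recently used first
    reclaimed = []
    for f in victims:
        if used <= low_water * capacity:
            break                           # back under the low-water mark
        used -= f["size"]
        reclaimed.append(f)
    return used, reclaimed
```

Non-elastic files are never touched, which is how the scheme preserves ordinary persistent file semantics while still letting administrators bound total usage.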