Department of Computer Science, Columbia University Technical Report CUCS-009-05, February 2005
The increasing popularity of distance learning and online courses has highlighted the lack of collaborative tools for student groups. In addition, the introduction of lecture videos into the online curriculum has drawn attention to the disparity in the network resources used by students. We present an e-Learning architecture and adaptation model called AI2 TV (Adaptive Internet Interactive Team Video), a system that allows borderless, virtual students, possibly some or all disadvantaged in network resources, to collaboratively view a video in synchrony. AI2 TV upholds the invariant that each student will view semantically equivalent content at all times. Video player actions, like play, pause and stop, can be initiated by any of the students and the results of those actions are seen by all the other students. These features allow group members to review a lecture video in tandem to facilitate the learning process. We show in experimental trials that our system can successfully synchronize video for distributed students while, at the same time, optimizing the video quality given actual (fluctuating) bandwidth by adaptively adjusting the quality level for each student.
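The adaptation rule amounts to picking, per student, the highest quality level the measured bandwidth can sustain, while every client indexes frames off a shared clock so all students see semantically equivalent content. A minimal Python sketch of that idea; the quality levels, bitrates, and helpers are illustrative assumptions, not the AI2 TV implementation:

```python
# Hypothetical sketch of per-client quality adaptation under a shared clock.
# Level names and bitrates are made up for illustration.

QUALITY_LEVELS = [  # (name, required bandwidth in kbit/s), best first
    ("high", 768),
    ("medium", 384),
    ("low", 128),
]

def pick_quality(available_kbps):
    """Return the best quality level this client's bandwidth can sustain."""
    for name, required in QUALITY_LEVELS:
        if available_kbps >= required:
            return name
    return QUALITY_LEVELS[-1][0]  # degrade to lowest rather than stall

def frame_for(shared_clock_secs, frames_per_sec=1):
    """All clients index frames off the same shared clock, so everyone views
    semantically equivalent content regardless of quality level."""
    return int(shared_clock_secs * frames_per_sec)
```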
Proceedings of the 36th ACM Technical Symposium on Computer Science Education (SIGCSE 2005), February 2005
Operating system courses teach students much more when they provide hands-on kernel-level project experience with a real operating system. However, enabling a large class of students to do kernel development can be difficult. To ad- dress this problem, we created a virtual kernel development environment in which operating systems can be developed, debugged, and rebooted in a shared computer facility with- out affecting other users. Using virtual machines and remote display technology, our virtual kernel development labora- tory enables even distance learning students at remote loca- tions to participate in kernel development projects with on- campus students. We have successfully deployed and used our virtual kernel development environment together with the open-source Linux kernel to provide kernel-level project experiences for over nine hundred students in the introduc- tory operating system course at Columbia University.
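The mechanism is easy to approximate today: boot each student's kernel inside a VM and export its console over a remote display protocol, so a crash takes down only that student's guest. A hedged sketch using QEMU and VNC, which stand in for whatever VM and display stack the course actually used; the paths are hypothetical:

```python
# Illustrative only: QEMU and these paths are stand-ins, not the course's
# actual infrastructure.
import subprocess

def boot_student_kernel(kernel_image, disk_image, vnc_display):
    """Boot a student-compiled kernel in an isolated VM, exported over VNC
    so remote (distance-learning) students can interact with it."""
    subprocess.run([
        "qemu-system-x86_64",
        "-kernel", kernel_image,    # the kernel under development
        "-hda", disk_image,         # per-student root filesystem
        "-m", "256",                # small guest; crashes stay contained
        "-vnc", f":{vnc_display}",  # remote display for off-campus access
    ], check=True)

# boot_student_kernel("build/bzImage", "students/alice/root.img", 5)
```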
Proceedings of the 12th Annual Network and Distributed System Security Symposium (NDSS 2005), February 2005
We present a solution to the denial of service (DoS) problem that does not rely on network infrastructure support, conforming to the end-to-end (e2e) design principle. Our approach is to combine an overlay network, which allows us to treat authorized traffic preferentially, with a lightweight process-migration environment that allows us to move services easily between different parts of a distributed system. Functionality residing on a part of the system that is subjected to a DoS attack migrates to an unaffected location. The overlay network ensures that traffic from legitimate users, who are authenticated before they are allowed to access the service, is routed to the new location. We demonstrate the feasibility and effectiveness of our approach by measuring the performance of an experimental prototype against a series of attacks using PlanetLab, a distributed experimental testbed. Our preliminary results show that the end-to-end latency remains at acceptable levels during regular operation, increasing only by a factor of 2 to 3, even for large overlays. When a process migrates due to a DoS attack, the disruption of service for the end user is on the order of a few seconds, depending on the network proximity of the servers involved in the migration.
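The core loop is simple: detect the attack, move the service to an unaffected node, and repoint the overlay's route so that traffic from authenticated users follows it. A toy Python sketch; the node names, the attack detector, and checkpoint_and_restore() are hypothetical stand-ins for the paper's process-migration environment:

```python
# Toy sketch of the migrate-and-reroute idea; everything here is illustrative.

overlay_routes = {"web-service": "node-a"}   # overlay maps service -> host
healthy_nodes = ["node-a", "node-b", "node-c"]

def under_attack(node):
    """Placeholder for attack detection; assumed given here."""
    return node == "node-a"

def checkpoint_and_restore(service, src, dst):
    print(f"migrating {service}: {src} -> {dst}")  # stands in for real migration

def migrate_if_attacked(service):
    current = overlay_routes[service]
    if not under_attack(current):
        return
    # Move the service to an unaffected location...
    target = next(n for n in healthy_nodes if not under_attack(n))
    checkpoint_and_restore(service, current, target)
    # ...then repoint the overlay so authenticated users' traffic follows.
    overlay_routes[service] = target
```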
Ph.D. Thesis, Department of Computer Science, Columbia University, February 2005
Introduced are two novel schemes for inter-networking, and an opportunity they present to diagonalize the Internet architecture, i.e. orthogonalize its components at multiple levels, so as to make it simpler as well as inherently more general, dynamic and scalable. The first is client-side virtualization of Internet Protocol (IP) addresses, representing a functional inverse of network address translation (NAT) that instantly enables unlimited effective extension of Layer 3 address space as independent realms, analogous to per-process virtual addressing in Unix, thus also eliminating the current need for global coordination of the Layer 3 space. The second is a namespace providing IP-like routing semantics instead of mere translation to lower layer addresses, sufficing for inter-realm addressing and routing independently of Layer 3. It is further shown to be a natural coordinate system by construction, thus obviating the express numbering of nodes as in IP, and canonical with respect to networking, in the sense of requiring the least configurational information of any networking, i.e. addressing and routing, scheme. These properties make it ideal as an inter-domain network and protocol, and for confining IP to individual domains or realms using VAS. Simplicity results for network operators by the elimination of all need to coordinate IP addresses, including for application servers, requiring only locally unique labelling of nodes and link-local configuration. Generality includes full multi-realm access to unmodified IP hosts and applications, via local VAS mapping of foreign destinations addressed by name. Flexibility lies in the capability for multiple application-specific secondary namespaces and for bottom-up evolution of newer inter-networks even over existing infrastructure by linking separate deployments, as coordinated numbering is eliminated. The dynamic nature includes the instant effectiveness of name bindings and deletions. Scalability is assured by the elimination of hard limits and generally by the localization of both configuration and traffic. Route discovery and automatic subscriptions to namespace changes are envisaged for performance and efficiency, and filesystem-like ownership and access control as a simpler security model. The basic ideas are implemented in a prototype.
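The first scheme can be pictured as on-demand binding of foreign destination names to locally allocated virtual IP addresses, handing unmodified applications ordinary-looking addresses that are meaningful only within the local realm. A rough Python sketch under that reading; the address range and function names are assumptions, not from the thesis:

```python
# Sketch of client-side virtual IP binding, the "functional inverse of NAT":
# foreign destinations, addressed by name, get locally unique virtual IPs.
import ipaddress

_virtual_pool = (str(ip) for ip in ipaddress.ip_network("10.255.0.0/16").hosts())
_bindings = {}  # name -> locally allocated virtual IP

def resolve(name):
    """Give unmodified IP applications an ordinary-looking address, while the
    real inter-realm routing operates on the name, not the number."""
    if name not in _bindings:
        _bindings[name] = next(_virtual_pool)
    return _bindings[name]

# resolve("printer.lab-b")  -> e.g. "10.255.0.1", valid only within this realm
```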
Proceedings of the Sixth Symposium on Operating Systems Design and Implementation (OSDI '04), December 2004
This paper shows how to use model checking to find serious errors in file systems. Model checking is a formal verification technique tuned for finding corner-case errors by comprehensively exploring the state spaces defined by a system. File systems have two dynamics that make them attractive for such an approach. First, their errors are some of the most serious, since they can destroy persistent data and lead to unrecoverable corruption. Second, traditional testing needs an impractical, exponential number of test cases to check that the system will recover if it crashes at any point during execution. Model checking employs a variety of state-reducing techniques that allow it to explore such vast state spaces efficiently.
We built a system, FiSC, for model checking file systems. We applied it to three widely-used, heavily-tested file systems: ext3, JFS, and ReiserFS. We found serious bugs in all of them, 32 in total. Most have led to patches within a day of diagnosis. For each file system, FiSC found demonstrable events leading to the unrecoverable destruction of metadata and entire directories, including the file system root directory "/".
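The exploration loop can be caricatured as: apply each file-system operation, simulate a crash at every intermediate point, replay recovery, and check invariants, pruning revisited states by hashing. A toy Python sketch with hypothetical stand-ins for FiSC's internals:

```python
# Toy model-checking loop: all callables here are assumed stand-ins.
# States must be hashable; each op returns the resulting state plus every
# intermediate on-disk state at which a crash could occur.

def explore(initial_state, operations, recover, check_invariants):
    frontier = [initial_state]
    seen = set()
    while frontier:
        state = frontier.pop()
        if hash(state) in seen:           # state hashing prunes the space
            continue
        seen.add(hash(state))
        for op in operations:
            next_state, crash_points = op(state)
            for crashed in crash_points:      # crash at every intermediate point
                recovered = recover(crashed)
                check_invariants(recovered)   # e.g. no lost directories
            frontier.append(next_state)
```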
Department of Computer Science, Columbia University Technical Report CUCS-050-04, December 2004
Software that covertly monitors a user's actions, also known as spyware, has become a first-level security threat due to its ubiquity and the difficulty of detecting and removing it. Such software may be inadvertently installed by a user that is casually browsing the web, or may be purposely installed by an attacker, or even by the owner of a system to spy on other users of the system. This is particularly problematic in the case of utility computing, early manifestations of which are Internet cafes and thin-client computing. Traditional trusted computing approaches offer a partial solution to this by significantly increasing the size of the trusted computing base (TCB) to include the operating system and other software. We examine the problem of protecting a user accessing specific services in such an environment. We focus on secure video conferencing and remote desktop access when using any convenient, and often untrusted, terminal as two example applications. We posit that, at least for such applications, the TCB can be confined to a suitably modified graphics processing unit (GPU). Specifically, to prevent spyware on untrusted clients from accessing the user's data, we investigate the possibility of restricting the boundary of trust required to the client's GPU, and evaluate the possibility of moving decryption into GPUs. We discuss the applicability of GPU-based decryption in these two sample scenarios and identify the limitations of the current generation of GPUs. We propose straightforward modifications to future GPUs that will allow the realization of our architecture.
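The architecture reduces to a trust-boundary claim: the untrusted host only ever handles ciphertext, and plaintext exists solely inside the GPU. A deliberately abstract Python sketch of that data flow; the stubbed GPU object is hypothetical, since, as the report itself notes, current GPUs need modification to do this for real:

```python
# Sketch of the trust boundary, not of real GPU decryption.

def untrusted_host_loop(network_socket, gpu):
    """Everything visible to host software (and hence to spyware) is ciphertext."""
    while True:
        ciphertext = network_socket.recv(4096)   # spyware sees only this
        if not ciphertext:
            break
        gpu.decrypt_to_framebuffer(ciphertext)   # plaintext never reaches host RAM

class StubGPU:
    """Stand-in for a GPU that holds the session key inside the TCB."""
    def __init__(self, session_key):
        self._key = session_key                  # never exposed to the host
    def decrypt_to_framebuffer(self, ciphertext):
        ...  # decrypt and scan out directly to the display; omitted here
```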
Proceedings of the 6th Symposium on Operating Systems Design and Implementation (OSDI 2004), December 2004
As dependence on the World Wide Web continues to grow, so does the need for businesses to have quantitative measures of the client perceived response times of their Web services. We present ksniffer, a kernel-based traffic monitor capable of determining pageview response times as perceived by remote clients, in real-time at gigabit traffic rates. ksniffer is based on novel, online mechanisms that take a "look once, then drop" approach to packet analysis to reconstruct TCP connections and learn client pageview activity. These mechanisms are designed to operate accurately with live network traffic even in the presence of packet loss and delay, and can be efficiently implemented in kernel space. This enables ksniffer to perform analysis that exceeds the functionality of current traffic analyzers while doing so at high bandwidth rates. ksniffer needs only to passively monitor network traffic and can be integrated with systems that perform server management to achieve specified response time goals. Our experimental results demonstrate that ksniffer can run on an inexpensive, commodity, Linux-based PC and provide online pageview response time measurements, across a wide range of operating conditions, that are within five percent of the response times measured at the client by detailed instrumentation.
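The "look once, then drop" idea can be illustrated with a few timestamps per connection: inspect each packet once, remember when the request went out and when the last response byte arrived, and discard the payload. A much-simplified Python sketch, far from the in-kernel pageview reconstruction ksniffer actually performs; the packet fields are assumptions:

```python
# Toy per-connection bookkeeping: look at each packet once, keep only
# timestamps, drop everything else.

connections = {}  # (client_ip, client_port) -> dict of timestamps

def on_packet(pkt, now):
    key = (pkt["src"], pkt["sport"])
    c = connections.setdefault(key, {})
    if pkt.get("syn"):
        c["syn"] = now                       # connection setup begins
    elif pkt.get("http_request"):
        c["request"] = now                   # client asked for the page
    elif pkt.get("payload_len", 0) > 0:
        c["last_data"] = now                 # latest byte the server sent

def response_time(key):
    c = connections[key]
    return c["last_data"] - c["request"]     # client-perceived latency
```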
Proceedings of the 12th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT '04/FSE-12), November 2004
Static program checking tools can find many serious bugs in software, but due to analysis limitations they also frequently emit false error reports. Such false positives can easily render the error checker useless by hiding real errors amidst the false. Effective error report ranking schemes mitigate the problem of false positives by suppressing them during the report inspection process. In this way, ranking techniques provide a complementary method to increasing the precision of the analysis results of a checking tool. A weakness of previous ranking schemes, however, is that they produce static rankings that do not adapt as reports are inspected, ignoring useful correlations amongst reports. This paper addresses this weakness with two main contributions. First, we observe that both bugs and false positives frequently cluster by code locality. We analyze clustering behavior in historical bug data from two large systems and show how clustering can be exploited to greatly improve error report ranking. Second, we present a general probabilistic technique for error ranking that (1) exploits correlation behavior amongst reports and (2) incorporates user feedback into the ranking process. In our results we observe a factor of 2-8 improvement over randomized ranking for error reports emitted by both intra-procedural and inter-procedural analysis tools.
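The feedback mechanism can be sketched as follows: once a user marks an inspected report as a true bug or a false positive, reports co-located in the same file are promoted or demoted before the next round of inspection. A toy Python sketch with made-up weights; only the use of code locality mirrors the paper's probabilistic technique:

```python
# Feedback-driven re-ranking that exploits clustering by code locality.
from collections import defaultdict

def rerank(reports, feedback, boost=2.0, penalty=0.5):
    """reports: list of (id, file, prior_score); feedback: id -> bool
    (True = confirmed bug, False = false positive)."""
    file_factor = defaultdict(lambda: 1.0)
    for rid, f, _ in reports:
        if rid in feedback:                  # user inspected this report
            file_factor[f] *= boost if feedback[rid] else penalty
    scored = [(score * file_factor[f], rid) for rid, f, score in reports]
    return [rid for _, rid in sorted(scored, reverse=True)]

# Marking one false positive in a file demotes its uninspected neighbours:
# rerank([(1, "fs.c", .9), (2, "fs.c", .8), (3, "mm.c", .7)], {1: False})
```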
Department of Computer Science, Columbia University Technical Report CUCS-047-04, November 2004
We present WebPod, a portable device for managing web browsing sessions. WebPod leverages capacity improvements in portable solid state memory devices to provide a consistent environment to access the web. WebPod provides a thin virtualization layer that decouples a user's web session from any particular end-user device, allowing users freedom to move their work environments around. We have implemented a prototype in Linux that works with existing unmodified applications and operating system kernels. Our experimental results demonstrate that WebPod has very low virtualization overhead and can provide a full featured web browsing experience, including support for all helper applications and plug-ins one expects. WebPod is able to efficiently migrate a user's web session. This enables improved user mobility while maintaining a consistent work environment.
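The simplest related idea, well short of WebPod's full session virtualization, is carrying the browser's state on the portable device itself. A hedged Python sketch; the mount point and the use of Firefox's -profile flag are assumptions for illustration, not WebPod's mechanism:

```python
# Illustrative only: launch a browser whose profile lives on portable storage,
# so cookies, history, and plug-in configuration follow the user across hosts.
import subprocess

def launch_portable_session(device_mount="/mnt/webpod"):
    subprocess.run(
        ["firefox", "-profile", f"{device_mount}/profile"],
        check=True,
    )
```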
Ph.D. Thesis, Department of Computer Science, Columbia University, October 2004
The combined force behind ubiquitous mobile computing and storage devices and universal network access has created a unique era of mobile network computing, in which computation units ranging from a single process to an entire host can move while communicating with each other across the network. A key problem therefore is how to preserve the ongoing network communication between two computation units when they move from one place to another, because current network infrastructure and protocols are designed to support stationary communication endpoints only. We have developed MOVE, a fine-grain end-to-end connection migration architecture, to address the problem. The most distinguishing characteristic of MOVE is that it achieves, in a single system, several essential goals of a mobile communication architecture: (1) an entirely end-system design with no infrastructure demands, transport-protocol independence, and backward compatibility; (2) fine-grain connection migration and unlimited mobility scope; (3) secure migration with both handoff and suspension/resumption support; and (4) very low performance overhead both before and after migration.

We first analyze the key technical problems of end-to-end network communication caused by mobility: state inconsistency, conflict, and synchronization; and we develop a simple and elegant namespace abstraction called CELL to resolve these problems. CELL provides a virtual, private, and labeled namespace for individual connection states so that they can be transparently migrated anywhere, free of the problems mentioned above. We then develop a unique handoff signaling protocol called H2O, which can hand off a connection securely in a single one-way end-to-end trip with minimal impact on the connection characteristics perceived by the transport protocols. H2O achieves this by combining the simple connection redirection mechanism afforded by the CELL abstraction with a low-overhead security mechanism, which is based on the Diffie-Hellman protocol but computes session keys only at migration time. We finally integrate MOVE seamlessly with a process migration mechanism to fully exploit MOVE's fine-grain connection migration capability and enable support for new application scenarios. For example, we show how the integration can provide high service availability in proxy-based server clusters by allowing server applications and their persistent connections to be migrated during server maintenance to avoid service disruption.

We have implemented MOVE on a commodity OS without requiring any changes to the OS or applications, and conducted various performance measurements, including handoff performance, scalability, and virtualization and virtual-physical mapping overhead. Our results show that MOVE handoff incurs minimal performance impact on the migrating connection, MOVE does not adversely affect system scalability, and MOVE virtualization and mapping overhead is very low. We also tested MOVE with a suite of popular off-the-shelf network applications, all of which work out of the box.
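The CELL abstraction can be caricatured as a level of indirection in the connection table: transport state is keyed by a stable virtual endpoint, and an H2O handoff reduces to atomically rebinding that endpoint to a new physical address. An illustrative Python sketch; the class and method names are assumptions, not MOVE's:

```python
# Sketch of virtual/physical endpoint indirection behind connection migration.

class ConnectionTable:
    def __init__(self):
        self._phys = {}   # virtual endpoint -> current physical (ip, port)
        self._state = {}  # virtual endpoint -> transport state, untouched by moves

    def open(self, vep, phys_addr, state):
        self._phys[vep] = phys_addr
        self._state[vep] = state

    def migrate(self, vep, new_phys_addr):
        """A handoff boils down to an atomic rebinding here; the transport
        state, named by the virtual endpoint, never changes."""
        self._phys[vep] = new_phys_addr

    def send_to(self, vep):
        return self._phys[vep]  # packets chase the current physical address
```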