Fall 2004
September 24, 2004
As the Internet evolves towards ubiquity and commodity, the basic services it offers are unlikely to change. Structural modifications required for additional functionality take a long time to happen, and even then often go unused (e.g., IP Multicast) due to infrastructure scalability or security concerns. In this talk, we show that overlay networks offer a viable approach to extending the services and performance provided by the Internet.
Even though network capacity grows exponentially over time, latency is difficult to improve. We present an overlay approach that can substantially decrease the number of delayed packets in wide-area reliable communication while still maintaining fairness guarantees for external traffic.
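The abstract leaves the mechanism open, but one common way an overlay can cut the number of late packets is hop-by-hop loss recovery: a retransmission crosses one short overlay hop instead of the whole end-to-end path. The numbers below are assumed purely for illustration; this is a back-of-the-envelope sketch, not the protocol presented in the talk.

```python
# Toy illustration (not the talk's actual protocol): why hop-by-hop
# recovery on an overlay can reduce the number of late packets.
# Assumed scenario: a 100 ms end-to-end path split into four 25 ms
# overlay hops; one packet is lost once in transit and retransmitted.

E2E_RTT = 0.100   # seconds, direct end-to-end round-trip time (assumed)
HOP_RTT = 0.025   # seconds, round-trip time of one overlay hop (assumed)
ONE_WAY = E2E_RTT / 2

# End-to-end reliability: the receiver's loss report travels the whole
# path back, so one loss costs roughly one extra end-to-end round trip.
e2e_recovery = ONE_WAY + E2E_RTT

# Hop-by-hop reliability: the loss is detected and repaired on the hop
# where it happened, costing roughly one extra hop round trip.
hop_recovery = ONE_WAY + HOP_RTT

print(f"end-to-end recovery: {e2e_recovery * 1000:.0f} ms")  # ~150 ms
print(f"hop-by-hop recovery: {hop_recovery * 1000:.0f} ms")  # ~75 ms
```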
Network performance is often a limiting factor in deploying new services over the Internet. Although demand for Voice over IP (VoIP) is growing among both users and phone carriers, it is far from reaching the quality and stability of regular telephony. Our approach uses overlay routers that understand the stringent requirements of VoIP and run protocols specifically designed to improve both voice communication quality and network reliability.
Finally, we present a generic overlay network research platform that allows experimentation with, and deployment of, new protocols and ideas over wide-area networks, and end with challenges that overlay network systems face in the increasingly hostile environment of the Internet.
October 7, 2004
My Dad runs a standard Windows XP computer on recent hardware. Despite our frequent efforts, this machine is often infected with a great deal of malware. This is not my Dad’s fault. Millions of people are routinely running dangerous software, and often don’t understand the downside of a badly-infected computer.
In January 2002 Bill Gates committed Microsoft to improving the security of its software. There are indications that Microsoft is trying very hard to improve its security, and the recently released Service Pack 2 is a good step towards cleaning its Augean stables. How far has the company gone, and what are the prospects for my Dad’s computing environment and for improved security of corporate and government intranets?
How is Internet security faring in general? I am optimistic that many of our current problems will subside to a more manageable level, but it will take many years.
Speaker Biography: Ches has been out and about in the Internet security field since the late 1980s. He is known for his early work on firewalls and proxies, and for the book he co-authored with Steve Bellovin and, in its second edition, Avi Rubin. In summer 2000 Ches helped spin off the Internet cartography work he did at Bell Labs with Hal Burch into a startup, Lumeta Corporation, which explores the extent and perimeter hosts of corporate and government intranets.
October 14, 2004
As machines and systems become more complex, conventional feedforward methods of design and construction become inadequate. Future engineering of complex systems that must remain robust and autonomous without human intervention requires methods and materials for intrinsic self-construction and self-repair that make use of environmental feedback. As a first step towards such methods, we demonstrate how a simple multicellular organism with Braitenberg-like behavior can assemble itself by replication from a single cell. Each cell contains the same small set of chemical reactors, which produce and consume chemicals as a function of their local chemical environment, and a small set of modular competences. These sets, and the conditions under which they are activated, are encoded in a single gene-like specification replicated into each cell. Overall, this system is roughly analogous to the transcription/translation mechanisms of biological cells. We show how the gradually increasing complexity of the environment, produced as the successively differentiating cells organize themselves, gives rise to the serial unfolding of the final functional organism, which is able to detect and follow a trace of food while retaining the ability to repair itself after significant damage.
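As a rough illustration of the “single gene-like specification” idea (a toy sketch of my own, not the talk’s actual model), every cell can carry the same rule table and choose its next action purely from its local chemical environment:

```python
# Toy sketch (not the talk's model): each cell carries the same
# gene-like rule table; behavior differs only because local chemical
# environments differ, which is what drives differentiation.

RULES = [
    # (condition on local chemicals, action) -- hypothetical entries
    (lambda env: env.get("growth", 0) > 0.5, "replicate"),
    (lambda env: env.get("food", 0) > 0.2, "become_sensor"),
    (lambda env: True, "idle"),
]

class Cell:
    def __init__(self):
        self.role = "undifferentiated"

    def step(self, env):
        # Every cell evaluates the same "genome"; the first rule whose
        # condition matches the local environment fires.
        for condition, action in RULES:
            if condition(env):
                return action

print(Cell().step({"growth": 0.1, "food": 0.4}))  # -> become_sensor
```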
October 15, 2004
Software developers must build programs that are both functional and secure. Yet most current efforts to secure software put protection at odds with usability and usefulness. Good software engineering principles seem to suggest that the solution is to address security as a separate concern. This view, however, is misleading. A major source of our current vulnerability stems from the excess authority routinely granted to applications. The solution is to design software according to the principle of least authority (POLA). Adapting the familiar access matrix model, we show how the recursive application of POLA down to the object level can significantly reduce vulnerabilities. Existing modularity and abstraction mechanisms, together with good software engineering discipline practiced for the sake of information hiding, bring about sparse “knowledge-of” relationships within systems. Object-capability mechanisms and discipline leverage these practices, for the sake of POLA, to bring about correspondingly sparse “access-to” relationships.
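To make the “access-to follows knowledge-of” point concrete, here is a minimal sketch (my illustration, not the speaker’s code; note that plain Python, unlike a true object-capability language, does not actually enforce this discipline): a component is handed exactly the one object it needs, rather than ambient authority over the whole filesystem.

```python
# Minimal POLA sketch: the Logger's "access-to" graph is exactly its
# "knowledge-of" graph -- it can affect only the one sink it was given.

class Logger:
    """Knows about, and can therefore write to, only one sink."""
    def __init__(self, sink):
        self._sink = sink          # the only authority this object holds

    def log(self, message):
        self._sink.write(message + "\n")

# The caller, not the Logger, decides which authority to delegate:
with open("/tmp/app.log", "a") as log_file:
    logger = Logger(log_file)      # grant write access to one file only
    logger.log("started")          # no ability to open or delete anything else
```

In a capability-secure language, this delegation pattern is enforced by the language itself rather than by programmer convention; that enforcement gap is exactly the kind of “additional step” the abstract alludes to.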
Rather than treating security separately, the consistent application of POLA requires that we more tightly integrate security concerns with system design concerns. Rather than requiring cumbersome security mechanisms, it instead requires the more consistent application of good software engineering practices. Many well-designed languages, including Oz, come tantalizingly close to providing the linguistic foundations needed to support POLA-based programming practices. We explain what additional steps are needed, and why this is a path worth taking.
Distinguished Lecturer
October 21, 2004
A new kind of receipt sets a far higher standard of security by letting voters verify the correctness of the election outcome.
In the voting booth, the voter can see his or her choices clearly indicated on the receipt. After taking it out of the booth, the voter can use the receipt to ensure that the votes it contains are included correctly in the final tally. But because the choices are safely encrypted before the receipt can be removed from the booth, it cannot be used to show others how the voter voted.
Several ways to realize such receipts will be described. Some are well suited to being added on to current touch-screen voting machines. A very simple type, similar in use to current optical-scan voting systems, will also be introduced.
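As a toy model of the receipt’s two essential properties (it binds the system to the vote without revealing the vote), consider a hash commitment. This is emphatically not the construction described in the talk; it is only the simplest stand-in that makes the “committed but encrypted” intuition concrete.

```python
# Toy commitment sketch (NOT the talk's scheme): the receipt commits the
# system to the vote (binding) while revealing nothing about it (hiding).
import hashlib
import secrets

def make_receipt(vote: str) -> tuple[str, bytes]:
    nonce = secrets.token_bytes(16)                    # kept by the system
    digest = hashlib.sha256(nonce + vote.encode()).hexdigest()
    return digest, nonce                               # voter keeps the digest

def verify(receipt: str, vote: str, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + vote.encode()).hexdigest() == receipt

receipt, nonce = make_receipt("candidate A")
# The receipt alone reveals nothing, so it cannot prove the vote to a
# coercer; but the system cannot later claim a different vote matches it.
assert verify(receipt, "candidate A", nonce)
assert not verify(receipt, "candidate B", nonce)
```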
October 29, 2004
The expansion of the Internet to the point where it is a ubiquitous component of most daily activities has brought with it the need to transition its service model from centralized client-server architectures to distributed services. Distributed solutions are the natural choice for serving large amounts of data to large numbers of clients: they reduce the load on servers, they provide better latency to clients, and, perhaps most importantly, they provide better fault tolerance by eliminating single points of failure.
However, despite the acknowledged need for distribution, a closer examination of some critical applications reveals that much of the infrastructure, and significant parts of current solutions, in reality retain a considerable degree of centralization. Database replication is performed mostly through variations of master-slave techniques; cluster availability and load balancing often rely on a centralized dispatcher; and the Domain Name System (DNS) employs a master-secondary approach for zone management and uses a hierarchical structure that is exposed to a single logical point of failure at the root servers.
In this talk we recognize the importance of maintaining distributed consistent state efficiently, and we show that purely distributed solutions can be practical without compromising on consistency requirements. We substantiate our argument by tackling two complex problems: synchronous peer database replication and the Domain Name System.
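The abstract does not spell out the machinery, but a standard ingredient in masterless, synchronous peer replication is delivering every update to every replica in a single agreed total order (for example, via a group communication layer). The following is a minimal sketch of that invariant, assuming the ordering layer already exists; it is an illustration of the general technique, not the talk’s system.

```python
# Sketch: if every replica applies every update in the same agreed total
# order, all replicas reach the same state without a master.

class Replica:
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.applied = 0          # sequence number of last applied update

    def deliver(self, seq, key, value):
        # An ordering layer (group communication / consensus) is assumed
        # to hand every replica the same updates with the same numbers.
        assert seq == self.applied + 1, "updates must apply in total order"
        self.state[key] = value
        self.applied = seq

replicas = [Replica("a"), Replica("b"), Replica("c")]
for seq, (key, value) in enumerate([("x", 1), ("y", 2), ("x", 3)], start=1):
    for r in replicas:            # same order everywhere => same state
        r.deliver(seq, key, value)

assert len({str(r.state) for r in replicas}) == 1  # all replicas agree
```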
November 12, 2004
Most mathematical models for the spread of disease use differential equations based on uniform-mixing assumptions or ad hoc models of the contact process. We explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale, individual-based urban traffic simulations built on actual census, land-use, and population-mobility data. We find that the contact network among people is a strongly connected, small-world-like graph, and we present provably good algorithms for outbreak detection via sensor placement, along with their empirical performance. Within this large-scale simulation framework, we then analyze the relative merits of a number of proposed mitigation strategies for disease spread.
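A small sketch of the bipartite construction (the data layout is assumed for illustration): movement records tie people to the locations and times they visit, and people who overlap at a location become edges of the person-to-person contact network over which disease can spread.

```python
# Sketch: build the people-locations bipartite structure from movement
# records, then project it onto people to obtain the contact network.
from collections import defaultdict
from itertools import combinations

# (person, location, arrive_hour, depart_hour) -- hypothetical records
visits = [
    ("ann", "school", 8, 15), ("bob", "school", 9, 14),
    ("bob", "store", 16, 17), ("cho", "store", 16, 18),
]

at_location = defaultdict(list)
for person, loc, arrive, depart in visits:
    at_location[loc].append((person, arrive, depart))

contacts = set()
for loc, people in at_location.items():
    for (p1, a1, d1), (p2, a2, d2) in combinations(people, 2):
        if a1 < d2 and a2 < d1:          # overlapping time at same place
            contacts.add(frozenset((p1, p2)))

print(contacts)  # two contact edges: ann-bob (school) and bob-cho (store)
```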
The talk will mostly be based on the following two papers, and will also briefly touch upon ongoing work:
“Modelling Disease Outbreaks in Realistic Urban Social Networks”, by S. Eubank, H. Guclu, V. S. A. Kumar, M. V. Marathe, A. Srinivasan, Z. Toroczkai and N. Wang. Nature, Vol. 429, 180-184, May 2004;
“Structural and Algorithmic Aspects of Massive Social Networks”, by S. Eubank, V. S. A. Kumar, M. V. Marathe, A. Srinivasan, and N. Wang. Proc. ACM-SIAM Symposium on Discrete Algorithms (SODA), 711-720, 2004.
Distinguished Lecturer
November 18, 2004
The Border Gateway Protocol (BGP) allows an autonomous system (AS) to apply diverse local policies for selecting routes and propagating reachability information to other domains. However, BGP permits ASes to have conflicting policies that can lead to routing instability. This talk proposes a set of guidelines for an AS to follow in setting its routing policies, without requiring coordination with other ASes. Our approach exploits the Internet’s hierarchical structure and the commercial relationships between ASes to impose a partial order on the set of routes to each destination. The guidelines conform to conventional traffic-engineering practices of ISPs, and provide each AS with significant flexibility in selecting its local policies. Furthermore, the guidelines ensure route convergence even under changes in the topology and routing policies. Drawing on a formal model of BGP, we prove that following our proposed policy guidelines guarantees route convergence. We also describe how our methodology can be applied to new types of relationships between ASes, how to verify the hierarchical AS relationships, and how to realize our policy guidelines. Our approach has significant practical value since it preserves the ability of each AS to apply complex local policies without divulging its BGP configurations to others. The end of the talk briefly summarizes follow-up studies that have built on this work.
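A simplified sketch of the flavor of such guidelines, using the customer/peer/provider classification commonly used for commercial AS relationships (the precise conditions, caveats, and the convergence proof are the substance of the talk): prefer routes learned from customers, and re-advertise peer- or provider-learned routes only to customers.

```python
# Sketch of relationship-based policy guidelines (simplified; the talk's
# exact conditions and proof of convergence are not reproduced here).

PREFERENCE = {"customer": 0, "peer": 1, "provider": 2}  # lower is better

def best_route(routes):
    """routes: list of (learned_from_relationship, as_path) tuples.
    Prefer customer over peer over provider; break ties on path length."""
    return min(routes, key=lambda r: (PREFERENCE[r[0]], len(r[1])))

def may_export(learned_from, send_to):
    # Customer routes go to everyone; routes learned from peers or
    # providers are re-advertised only to customers (no free transit).
    return learned_from == "customer" or send_to == "customer"

routes = [("provider", ["AS7", "AS9"]), ("customer", ["AS3", "AS5", "AS9"])]
print(best_route(routes))             # the customer route wins despite length
print(may_export("peer", "peer"))     # False: no transit between peers
```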
Distinguished Lecturer
November 19, 2004
I will present research on using statistical methods to model properties of human attention and memory, and on harnessing such cognitive models in computing applications. I will focus first on the construction of predictive models of workload and interruptability. I will review user studies of disruptions in computing settings, and discuss the Priorities, BusyBody, and Notification Platform projects. Then, I will turn to efforts to build probabilistic models of memory landmarks, reviewing work on the MemoryLens project. I will present several prototype applications to demonstrate how such models can be applied. Finally, I will discuss longer-term directions in the use of probabilistic methods to model multiple aspects of human cognition.
Speaker Biography: Eric Horvitz is a Senior Researcher at Microsoft Research, where he manages the Adaptive Systems and Interaction group. His interests include principles of sensing, learning, and reasoning under uncertainty, and applications of probability and utility in problem solving, communications, and human-computer interaction. He is an Associate Editor of the Journal of the ACM, Chairman of the Association for Uncertainty in Artificial Intelligence, and serves on the Naval Research Advisory Committee (NRAC). He has been elected a Councilor and Fellow of the American Association for Artificial Intelligence. He received PhD and MD degrees from Stanford University.
December 2, 2004
A classic goal of human-computer interfaces is the conversational computer, which can freely interact with users through natural dialog. Advances in speech and language processing have made systems for single-user conversation with a close-talking microphone almost commonplace, but when multiple speakers and noisy conditions are encountered, more information is needed. Visual cues can make conversational interfaces feasible in these environments, providing robust cues for speech and critical information about turn-taking, intent, and physical reference. In this talk I’ll review recent research in our lab on computer vision techniques that can provide these cues. I’ll describe work in progress on estimating pose-invariant mouth features for visual speechreading; on head tracking for inferring turn-taking cues, agreement gestures, and conversational intent; and on body pose estimation for resolving object deixis. Finally, I’ll describe our research on using vision to create perceptive mobile devices, which can use image appearance to recognize locations or objects in the user’s environment and present relevant information from the web or from application-specific databases. Together, these systems for visually-aware conversation and perceptive devices allow us to interact with complex computational resources using natural and intuitive gestures.