Fall 2003
Distinguished Lecturer
September 11, 2003
The Internet is the first computational artifact that was not designed by a single entity, but emerged from the complex interaction of many. Hence, it must be approached as a mysterious object, akin to the universe and the cell, to be understood by observation and falsifiable theories. Game theory plays an important role in this endeavor, since the entities involved in the Internet are optimizing interacting agents in various and varying degrees of collaboration and competition.
We survey recent work that considers the Internet and its protocols as equilibria in appropriate games, and that strives to explain phenomena such as the power-law degree distributions of the Internet topology in terms of the complex optimization problems faced by each node.
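As a purely illustrative aside (the talk explains power laws via per-node optimization; the toy model below does not), a few lines of Python reproduce the kind of heavy-tailed degree distribution observed in the Internet topology using simple preferential attachment:

```python
import random
from collections import Counter

def preferential_attachment(n, m=2, seed=0):
    """Grow a graph where each new node attaches to m existing nodes,
    chosen with probability roughly proportional to current degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}          # start from a single edge 0--1
    targets = [0, 1]               # each node appears once per incident edge
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(targets))
        degree[new] = len(chosen)
        for t in chosen:
            degree[t] += 1
            targets.extend([new, t])
    return degree

deg = preferential_attachment(10_000)
hist = Counter(deg.values())
for d in sorted(hist)[:10]:
    print(f"degree {d}: {hist[d]} nodes")   # counts decay roughly as a power law
```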
Distinguished Lecturer
September 18, 2003
As the Internet continues to grow in size and importance, recent years have witnessed an ever-increasing number of new threats, such as widespread software viruses, malicious attacks, and even disastrous failures caused by operational errors. Can we achieve resilient Internet services in the face of expected as well as unexpected faults and attacks? In this talk I’ll share insights gained from our last few years of research efforts. We argue that the Internet’s enormous growth in scale calls for a fundamental change in network and protocol design considerations.
September 25, 2003
Given n data items in a general metric space, how can we efficiently partition them into k clusters of “similar” items? There are many models for this ubiquitous problem, which arises in application areas such as data mining, image recognition, medical imaging, and web analysis. One measure of the quality of a k-clustering is the sum of all pairwise distances inside the clusters, which is to be minimized. We discuss techniques and algorithms, first for the complementary problem, which can be seen as a metric analog of Max-Cut in graphs, then for 2-clustering, and finally sketch extensions to variants with other objective functions or with cardinality constraints. The algorithms are based on random sampling.
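To make the objective concrete, the following Python sketch (an assumed illustration, not the algorithms presented in the talk) computes the sum of intra-cluster pairwise distances for points in the plane and uses naive random sampling of two candidate “centers” to search for a good 2-clustering:

```python
import itertools, math, random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def cost(clusters):
    """Sum of all pairwise distances inside each cluster."""
    return sum(dist(p, q)
               for c in clusters
               for p, q in itertools.combinations(c, 2))

def random_sample_2clustering(points, trials=50, seed=0):
    """Pick two random 'centers', assign each point to the nearer one,
    and keep the split with the smallest intra-cluster cost."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(trials):
        a, b = rng.sample(points, 2)
        c1 = [p for p in points if dist(p, a) <= dist(p, b)]
        c2 = [p for p in points if dist(p, a) > dist(p, b)]
        c = cost([c1, c2])
        if c < best_cost:
            best, best_cost = (c1, c2), c
    return best, best_cost

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(60)]
_, value = random_sample_2clustering(pts)
print("best sampled 2-clustering cost:", round(value, 2))
```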
October 2, 2003
Building supercomputers powerful enough to run complex simulations is a challenge that the National Nuclear Security Administration’s (NNSA’s) Advanced Simulation and Computing Program (ASC, formerly ASCI) has met since its beginning in 1995. The ASC Program has been remarkably successful in meeting its goals, and its machines are the workhorse computers for running complex two-dimensional and three-dimensional physics codes for the nation’s Stockpile Stewardship Program. As the capabilities of ASCI machines have grown, so have the sophistication of applications and the number of users. Costs have fallen as technologies advance with each machine, but vendor-integrated systems using workstation processors and proprietary software have now matured to the point where cost per teraflop improves at only a slow exponential rate dictated by Moore’s law. Lawrence Livermore National Laboratory (LLNL), as part of the ASC Program and its own Multiprogrammatic and Institutional Computing Initiative, is looking to new technologies that can be exploited for better cost-performance. This talk presents an overview of current and planned ASCI systems, and explores some promising approaches, including open source software, cluster architectures, and system-on-a-chip designs that promise cost-effective platforms to run the next generation of complex scientific simulations.
Speaker Biography: Steve Louis is Assistant Department Head for Integrated Computing and Communications at LLNL. He has recently completed a one-year assignment in Washington as a technical advisor for the NNSA ASC Program Office for new computer procurements and software development. From 2000 to 2002 he led installation, integration and early use efforts for the 12 TeraOP ASCI White platform at LLNL, and has been the Principal Investigator for the LLNL Problem Solving Environment Program since 1999. Prior to these positions, he was Project Leader for High Performance Archival Storage and Group Leader for File Storage Systems at LLNL’s National Energy Research Supercomputer Center (NERSC) for ten years. He received an A.B. in computer science from the University of California at Berkeley in 1974 before joining LLNL to become a founding staff member at NERSC, one of the first supercomputer centers in the nation.
October 3, 2003
“15 seconds of fame” is an interactive art installation that elevates the face of a randomly selected observer into a “work of art” for 15 seconds. The installation was inspired by Andy Warhol’s statement that “In the future everybody will be famous for 15 minutes” as well as by the pop-art style of his works. The critical part of the installation is the detection of faces, which enables cropping faces from the wide-angle images of people standing in front of the installation. The method for color-based face detection, which must be reliable under different illumination conditions and scenes, will be described, as well as the Internet dissemination of the resulting portraits.
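Although the installation’s actual detector is not reproduced here, the general flavor of color-based face detection can be sketched in a few lines of Python: classify pixels with a common rule-of-thumb RGB skin test (an assumed heuristic, not the talk’s method), producing a mask from which face candidates would later be grouped and cropped:

```python
def is_skin(r, g, b):
    """A widely used heuristic RGB skin rule (assumed for illustration)."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_mask(pixels, width, height):
    """pixels: flat, row-major list of (r, g, b) tuples."""
    return [[is_skin(*pixels[y * width + x]) for x in range(width)]
            for y in range(height)]

# Toy 2x2 "image": one skin-like pixel, three background pixels.
demo = [(210, 150, 120), (20, 20, 20), (0, 120, 255), (255, 255, 255)]
print(skin_mask(demo, width=2, height=2))   # [[True, False], [False, False]]
```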
Speaker Biography: Franc Solina is a professor of computer science at the University of Ljubljana and Head of the Computer Vision Laboratory at the Faculty of Computer and Information Science. He received a B.Sc. and an M.Sc. degree in Electrical Engineering from the University of Ljubljana, Slovenia, in 1979 and 1982, respectively, and a Ph.D. degree in computer science from the University of Pennsylvania in 1987. His research interests include range image interpretation, 3D shape reconstruction, panoramic imaging, and applications of computer vision in the arts.
Distinguished Lecturer
October 17, 2003
The IETF working group on mobile ad-hoc networks [manet] has attempted to standardize routing protocols to support the formation and maintenance of communications in ad hoc networks. This is particularly challenging due to rapid changes in topology caused by broken links, which then require repair during the course of a single application. Strategies and new techniques for overcoming routing problems will be described in this tutorial. In particular, I will explain in depth the details of AODV (RFC 3561), an on-demand, distance-vector routing protocol for ad hoc networks. Differences between some of the leading candidates for standardization will be described, so that members of the audience will gain an appreciation for the breadth of research in the topic area.
After discussion of routing techniques for disconnected ad hoc networks, I will then describe more recent results for connecting ad hoc networks to the Internet, and mention some post-Experimental protocol modifications to AODV that are in progress for imminent publication.
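To give a feel for the on-demand style of route discovery in AODV, here is a heavily simplified Python sketch (sequence numbers, expanding-ring search, timers, and route maintenance, all central to RFC 3561, are omitted): a route request (RREQ) floods hop by hop while reverse routes are recorded, and a route reply (RREP) is traced back along them.

```python
from collections import deque

def discover_route(neighbors, src, dst):
    """neighbors: dict node -> set of nodes currently in radio range."""
    reverse = {src: None}                  # node -> next hop back toward src
    queue = deque([src])
    while queue:                           # RREQ flood (breadth-first)
        node = queue.popleft()
        if node == dst:
            break
        for n in neighbors[node]:
            if n not in reverse:           # first copy of the RREQ wins
                reverse[n] = node
                queue.append(n)
    if dst not in reverse:
        return None                        # destination unreachable
    path, node = [dst], dst                # RREP walks the reverse pointers
    while reverse[node] is not None:
        node = reverse[node]
        path.append(node)
    return list(reversed(path))            # forward route src -> ... -> dst

topology = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(discover_route(topology, "A", "D"))  # ['A', 'B', 'C', 'D']
```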
Speaker Biography: Charles E. Perkins is a Nokia Fellow in the Communication Systems Laboratory at Nokia Research Center, investigating mobile wireless networking and dynamic configuration protocols. He is the editor for several ACM and IEEE journals for areas related to wireless networking. He is serving as document editor for the mobile-IP working group of the Internet Engineering Task Force (IETF), and is author or co-author of standards-track documents in the mobileip, manet, IPv6, and seamoby (Seamless Mobility) working groups. Charles has served on the Internet Architecture Board (IAB) of the IETF and on various committees for the National Research Council. He is also associate editor for Mobile Communications and Computing Review, the official publication of ACM SIGMOBILE, and has served on the editorial staff for IEEE Internet Computing magazine. Charles has authored and edited books on Mobile IP and Ad Hoc Networking, and has published a number of papers and award winning articles in the areas of mobile networking, ad-hoc networking, route optimization for mobile networking, resource discovery, and automatic configuration for mobile computers. See http://people.nokia.net/charliep for further details.
October 30, 2003
November 6, 2003
A ubiquitous and efficient multicast mechanism is essential to the success of large-scale group communication applications. Traditional IP Multicast is implemented at the IP layer on routers, which achieves great transmission efficiency but also poses significant challenges to deployment. End-host Multicast moves the functionality to the transport or application layer on end hosts in order to minimize the deployment barrier, but incurs a performance penalty due to duplicate packet transmission and inefficient routing. Between these two extremes lies a spectrum of other schemes, such as Source-Specific Multicast and multicast server overlays, which try to strike a better trade-off between system performance and deployability. Nevertheless, the lack of ubiquitous multicast support on the current Internet hinders the development of multicast applications, which in turn reduces incentives for multicast deployment.
In this dissertation research, we designed and implemented the Universal Multicast (UM) framework to provide ubiquitous multicast delivery service on the Internet. It is conceivable that the deployment of multicast will take several stages. Unlike previous work, we do not implement multicast functionality at a fixed place in the network for a particular deployment stage; instead, UM is a general framework within which applications can be provided with IP Multicast delivery immediately, and native multicast support can gradually spread from the network edges into the network core. Therefore the trade-off between system performance and deployability is not fixed by the design, but can evolve over time, driven by user demand for the service.
Our approach is to build an end-host overlay that connects IP Multicast “islands” by adaptive unicast tunnels, and to use native IP Multicast within each island. The design consists of three major components: the Host Multicast Tree Protocol (HMTP) for inter-island routing, the Host Group Management Protocol (HGMP) for intra-island management, and a user Agent implementing these functions on each host. UM takes full advantage of network support where available and utilizes End-host Multicast where needed. It is fully distributed, self-organized, and independent of underlying routing protocols. We evaluated system performance through simulation, implemented a prototype, and tested it with existing multicast applications.
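As a rough illustration of the greedy, measurement-driven tree construction used by host-multicast schemes such as HMTP (the details below are assumed for the sketch and omit degree limits, loop detection, and tree maintenance), a newcomer island’s representative walks down from the root toward ever-closer members and tunnels to the closest one it finds:

```python
def join_tree(children, rtt, root, newcomer):
    """children: node -> list of child nodes; rtt(a, b): measured delay."""
    current = root
    while True:
        kids = children.get(current, [])
        if not kids:
            break
        best = min(kids, key=lambda k: rtt(newcomer, k))
        if rtt(newcomer, best) >= rtt(newcomer, current):
            break                       # no child is closer; stop here
        current = best
    children.setdefault(current, []).append(newcomer)
    return current                      # chosen parent for the unicast tunnel

# Toy example with synthetic delays between island representatives.
delays = {("E", "R"): 80, ("E", "A"): 40, ("E", "B"): 90, ("E", "C"): 25}
tree = {"R": ["A", "B"], "A": ["C"]}
parent = join_tree(tree, lambda a, b: delays[(a, b)], "R", "E")
print(parent, tree)                     # newcomer E attaches under C
```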
Speaker Biography: Beichuan Zhang received his PhD in computer science from UCLA in 2003. He received his M.S. from UCLA and his B.S. from Beijing University, China. His research interests include Internet routing dynamics, multicast, application-level overlays, and network measurement and performance. He is now with USC/ISI-East as a postdoctoral researcher.
November 13, 2003
This talk describes the use of a content delivery network as a testbed for experimental networking. In particular, it presents two recent studies that were performed using Akamai’s network of servers. The network is unique in two ways. First, the scale and diversity of server deployment is unmatched: over 14,000 servers on over 2000 networks in over 70 countries. Second, the network collects voluminous data in the course of providing content delivery services to over 1000 customers. The first study examines the benefits of “multihoming”, i.e., the practice of subscribing to multiple Internet service providers. Multihoming was originally employed by stub networks to enhance the reliability of their network connectivity. With the advent of commercial “intelligent route control” products, stubs can now also leverage multihoming to improve performance. This study aims to quantify the extent to which multihoming can improve reliability and performance. The second study is an attempt to characterize a significant fraction of all interdomain HTTP traffic flows. The key idea is that for each request received by an Akamai server for an image that appears on a customer’s web page, it is possible to infer a request to the customer’s origin server from the same client for that page. The latter requests, from clients worldwide to servers operated by content providers, constitute a large body of interdomain flows.
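The potential benefit of multihoming can be illustrated with a toy calculation (synthetic numbers, not the study’s data or methodology): compare the best single provider against an idealized multihomed stub that routes each interval over whichever provider currently performs best, the upper bound that “intelligent route control” aims for.

```python
def best_single(latencies):
    """latencies: provider -> list of per-interval latencies (ms)."""
    return min(sum(v) / len(v) for v in latencies.values())

def ideal_multihomed(latencies):
    samples = list(zip(*latencies.values()))       # per-interval tuples
    return sum(min(s) for s in samples) / len(samples)

obs = {
    "ISP-1": [40, 42, 300, 41, 45],   # transient congestion in interval 3
    "ISP-2": [55, 50, 52, 400, 51],
}
single = best_single(obs)
multi = ideal_multihomed(obs)
print(f"best single-homed avg: {single:.1f} ms, multihomed avg: {multi:.1f} ms")
```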
Speaker Biography: Bruce Maggs received the S.B., S.M., and Ph.D. degrees in computer science from the Massachusetts Institute of Technology in 1985, 1986, and 1989, respectively. After spending one year as a Postdoctoral Associate at MIT, he worked as a Research Scientist at NEC Research Institute in Princeton from 1990 to 1993. In 1994, he moved to Carnegie Mellon, where he is now an Associate Professor in the Computer Science Department. While on a two-year leave-of-absence from Carnegie Mellon, Maggs helped to launch Akamai Technologies, serving as its Vice President for Research and Development, before returning to Carnegie Mellon. He retains a part-time role at Akamai as Vice President for Research.
Maggs’s research focuses on networks for parallel and distributed computing systems. In 1986, he became the first winner (with Charles Leiserson) of the Daniel L. Slotnick Award for Most Original Paper at the International Conference on Parallel Processing, and in 1994 he received an NSF National Young Investigator Award. He was co-chair of the 1993-1994 DIMACS Special Year on Massively Parallel Computation and has served on the program committees of SPAA, SODA, STOC, PODC, and many other technical conferences.
Distinguished Lecturer
November 14, 2003
Nature provides a strong existence proof that complex, robust behavior can be produced from remarkably simple programs. Although nearly all species become extinct, those that survive manage to do so in a hostile environment, competing for limited resources and facing predators and parasites evolving against them.
This talk will present some observations about the security problems faced by natural systems and their solutions, with a focus on programming. Our research in swarm computing explores programming models that provide intrinsic robustness and adaptability by describing programs that produce desired functionality through local interactions. Our simple programming model abstracts biological cells as entities that can change state, diffuse chemicals, sense properties of their surroundings, and divide. This talk will describe our programming model and simulator, and report on preliminary experimental results illustrating the robustness of programs constructed using this approach.
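A much-simplified caricature of such a cell-based programming model (the details below are assumed and are not the speaker’s simulator) can be written in a few lines of Python: cells on a grid secrete a diffusing chemical, sense the local concentration, and divide into empty neighboring sites while the concentration is low.

```python
import random

SIZE = 20
cells = {(SIZE // 2, SIZE // 2): "active"}          # a single founding cell
chem = [[0.0] * SIZE for _ in range(SIZE)]

def neighbors(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def step(rng):
    # 1. every cell secretes one unit of chemical at its own site
    for (x, y) in cells:
        chem[y][x] += 1.0
    # 2. crude in-place diffusion: each site averages with its four neighbors
    for y in range(SIZE):
        for x in range(SIZE):
            around = [chem[ny][nx] for nx, ny in neighbors(x, y)]
            chem[y][x] = 0.5 * chem[y][x] + 0.5 * sum(around) / len(around)
    # 3. cells sense the local concentration and divide where it is still low
    for (x, y) in list(cells):
        if chem[y][x] < 2.0:
            empty = [p for p in neighbors(x, y) if p not in cells]
            if empty:
                cells[rng.choice(empty)] = "active"

rng = random.Random(0)
for _ in range(10):
    step(rng)
print("cells after 10 steps:", len(cells))
```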
Distinguished Lecturer
November 19, 2003
The emergence of the Internet as one of the most important arenas for resource sharing between parties with diverse and selfish interests has led to a number of fascinating and new algorithmic problems. In these problems, one must solicit the inputs to each computation from participants (or agents) whose goal is to manipulate the computation to their own advantage. Until fairly recently, failure models in computer science have not dealt with the notion of selfish participants who “play by the rules” only when it fits them. To deal with this, algorithms must be designed so as to provide motivation to the participants to “play along”.
Recent work in this area has drawn on ideas from game theory and microeconomics, and specifically from the field of mechanism design. The goal is to design protocols so that rational agents will be motivated to adhere to the protocol. A specific focus has been on truthful mechanisms, in which selfish agents are motivated to reveal their true inputs. In this talk, we survey recent work in this exciting new area and present a number of interesting directions for future research.
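The canonical example of a truthful mechanism is the second-price (Vickrey) auction: the winner pays the second-highest bid, so no bidder can gain by misreporting their value. The short Python sketch below checks this numerically for one bidder against fixed competing bids (an illustration, not a result from the talk):

```python
def vickrey(bids):
    """bids: dict bidder -> bid.  Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

def utility(my_value, my_bid, others):
    bids = dict(others, me=my_bid)
    winner, price = vickrey(bids)
    return my_value - price if winner == "me" else 0.0

others = {"a": 60.0, "b": 45.0}
true_value = 70.0
truthful = utility(true_value, true_value, others)
# No deviation (over- or under-bidding) does better than bidding the truth.
best_deviation = max(utility(true_value, b, others) for b in range(0, 151))
print(truthful, best_deviation)   # both 10.0
```

Bidding truthfully and the best possible deviation yield the same utility, which is the sense in which truth-telling is a dominant strategy here.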
Distinguished Lecturer
November 20, 2003
With rapid advances in technology, it appears that computing and communication devices based on quantum principles may become available in the not too distant future. Accompanying this anticipation is the emergence of an energetic new interdisciplinary field known as quantum information processing. In this talk, we give an exposition of some recent progress made in this area, with special focus on the algorithmic issues for communication tasks. Examples are taken from quantum communication complexity and quantum cryptography. Although understanding the theory fully would require specialized knowledge, its essence can be grasped and enjoyed by any computer scientist.
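As one concrete taste of quantum cryptography, the BB84 key-distribution protocol can be simulated classically in a few lines of Python (this sketch only models the bookkeeping of bases and bits; no eavesdropper or physical qubits are involved):

```python
import random

def bb84(n_qubits=32, seed=0):
    rng = random.Random(seed)
    # Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
    alice_bits = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_qubits)]
    # Bob measures each qubit in a random basis of his own.
    bob_bases = [rng.randint(0, 1) for _ in range(n_qubits)]
    bob_bits = [b if ab == bb else rng.randint(0, 1)   # wrong basis => random
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # They publicly compare bases and keep only the positions that match.
    key_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_bob = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return key_alice, key_bob

ka, kb = bb84()
print(ka == kb, len(ka), "shared key bits from 32 qubits")
```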