Spring 2004
February 5, 2004
Encryption is a tool for achieving information privacy. It is usually analyzed in the single-user setting, where only a single recipient of encrypted data is considered. In the real world, however, there are many users sending each other encrypted data. We investigate the crucial question of whether protocols’ various security properties hold in this real “multi-user” setting.

First, we address data privacy of public-key encryption schemes in the multi-user setting, namely in the presence of attacks involving the encryption of related messages under different public keys, as exemplified by Hastad’s classical attacks on RSA. We provide a model for measuring security in this setting and prove that security in the single-user setting implies security in the multi-user setting, as long as the former is interpreted in the strong sense. This reassuring result pinpoints many schemes guaranteed to be secure in the multi-user setting. We then highlight the practical importance of achieving good concrete security, and present improved concrete security results for two popular schemes.

While hiding data was traditionally considered the only goal of encryption, emerging concerns about users’ privacy rights in the digital world have highlighted the importance of another property: hiding the identities of the intended recipients of encrypted data. Next, we study this property of encryption in the multi-user setting, which we call “anonymity” or “key-privacy”. Finally, we propose and explore an interesting technique that offers important performance and bandwidth benefits in scenarios where a sender needs to encrypt messages for several receivers. We also provide a general test that helps determine when the technique can be securely applied.
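For intuition, here is a rough sketch of the kind of multi-user indistinguishability experiment involved (the notation is illustrative and not taken verbatim from the talk): the adversary sees n independently generated public keys and, for a single hidden bit b, gets a left-or-right encryption oracle under each key; the scheme is secure if the advantage in guessing b is small.

$$
\mathbf{Adv}^{\text{multi}}_{\mathcal{AE},n}(A) \;=\; 2\Pr\Bigl[\, b' = b \;:\; (pk_i, sk_i) \leftarrow \mathcal{K},\ i=1,\dots,n;\ b \leftarrow \{0,1\};\ b' \leftarrow A^{\,\mathcal{E}_{pk_1}(\mathrm{LR}(\cdot,\cdot,b)),\,\dots,\,\mathcal{E}_{pk_n}(\mathrm{LR}(\cdot,\cdot,b))}(pk_1,\dots,pk_n) \,\Bigr] \;-\; 1
$$

The single-user-implies-multi-user result can then be read as a hybrid argument bounding this advantage by roughly n times the number of encryption queries times the ordinary single-user advantage, which is where the concrete-security question becomes important.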
Speaker Biography: Alexandra Boldyreva received her B.S. and M.S. degrees in Applied Mathematics from the St. Petersburg State Technical University, Russia. She is currently a Computer Science Ph.D. candidate in the Cryptography and Security Laboratory at the University of California, San Diego. Her research focuses on cryptography and information security, in particular, the design and analysis of efficient provably-secure protocols.
February 13, 2004
A recent trend in routing research is to avoid inefficiencies in network-level routing by allowing hosts to choose routes themselves using either source routing or overlay routing. Such end-to-end route selection schemes are selfish by nature, because routing decisions are no longer based on system-wide criteria, but are instead designed to optimize host-based or overlay-based metrics. A series of theoretical results showing that selfish routing can result in sub-optimal performance has cast doubt on this approach.
In this talk, I will present our study of the performance of selfish routing in Internet-like environments, focusing on intra-domain scenarios. Using realistic topologies and traffic demands, I will show that, in contrast to the theoretical worst cases, selfish routing achieves close to optimal average latency in such environments. However, this performance benefit comes at the expense of significantly increased congestion on certain links. Moreover, the adaptive nature of selfish overlays can significantly reduce the effectiveness of traffic engineering by making network traffic less predictable. I will conclude my talk with a brief overview of other networking research I have done, and a summary of my future research interests.
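A concrete instance of the theoretical gap alluded to above is the standard Pigou example (included here only for illustration; it is not taken from the talk): one unit of traffic is routed from s to t over two parallel links with latency functions

$$
\ell_1(x) = 1, \qquad \ell_2(x) = x .
$$

Selfish routing sends all traffic on the second link, for an average latency of 1, whereas splitting the traffic evenly gives average latency 1/2·1 + 1/2·1/2 = 3/4, so the selfish outcome is worse than optimal by a factor of 4/3. The results in the talk measure how far real Internet-like settings are from such worst cases.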
February 20, 2004
The last decade has witnessed tremendous advances in distributed and mobile sensing technologies, which have made it possible to deploy mobile robots and sensor networks in complex environments. To take advantage of these capabilities, it is crucial to design algorithms that perform high-level tasks in an autonomous fashion. A fundamental problem that arises in this context is pursuit-evasion. In a typical pursuit-evasion game, a pursuer tries to capture an evader who, in turn, actively tries to avoid capture. Practical applications of pursuit-evasion games include surveillance, search-and-rescue, collision avoidance and air traffic control. Designing pursuit strategies that incorporate visibility constraints in a complex environment is a major challenge.
In this talk, I will present pursuit strategies for two such games. In the first game, one or more hunters seek to capture an evading rabbit on a graph. At the beginning of each round, the rabbit tries to gather information about the location of the hunters; it can see the hunters only if they are located on adjacent nodes. The second game is played in a simply-connected polygon. The players can move one unit at a time, and they can see each other if their view is not obstructed by the polygon.
For both games we will answer the following questions:
- Given an environment, how many pursuers are needed to capture the evader?
- Given k pursuers, what is the class of environments where k pursuers suffice for capture?
I will conclude the talk with an overview of solutions we have obtained to various other algorithmic problems in the area of distributed and mobile sensing.
February 26, 2004
This talk examines the security and resilience of two fundamental infrastructure protocols: the BGP routing system that provides global reachability and (briefly) the Domain Name System (DNS) that provides essential naming information. Despite the Internet’s tremendous growth and fundamental change in form, these critical network infrastructure protocols remain tied to a simple fault model. In today’s complex large-scale system, the core Internet protocols face frequent operational errors, incorrect protocol implementations (or protocol under-specifications), unexpected complex interactions between elements, and intentional attacks. This talk first examines some BGP routing problems observed in the Internet, such as the complex interactions between routing and events like the recent worm attacks.

While one could argue that the current problems call for a complete redesign, it is important to note that the core Internet protocols have clearly succeeded in achieving their original aims, and their deployed base is measured in millions of systems and billions of dollars. Rather than restarting from scratch, the challenge lies in advancing the system in fundamental ways while still respecting operational constraints and the deployed base. To address these challenges, the talk presents mechanisms that can be deployed to improve resilience through techniques such as consistency checking and augmenting protocols with diagnosis information. Using these techniques, the talk shows how path-vector algorithms can be enhanced to improve convergence by orders of magnitude, exploit existing data for better protection against human error, and ultimately provide better data delivery. For DNS, the talk shows how cryptographic solutions can be added to the system for authentication.
In the broader sense, the results suggest that a multi-fence framework for building a truly resilient Internet infrastructure is both achievable and effective.
March 1, 2004
Advances in wireless networking technologies are enabling wireless camera networks, which have applications ranging from surveillance and situational awareness for emergency responders to tele-immersion and tele-surgery. Though wireless camera networks are a natural extension of the idea of a sensor network, camera networks introduce numerous challenges because of the high dimensionality of images. These challenges include efficient transmission of multiple images of the same scene as well as the unsupervised analysis and aggregation of an ensemble of images. In this talk I will introduce tools based on a combination of multiple view geometry and harmonic analysis to address some of the issues in a camera network, wireless or otherwise. Whereas multiple view geometry is traditionally based on discrete geometric features, we demonstrate that the constraints which encode multiple view relations lie in a space amenable to a type of Fourier transform. This framework enables fast convolution algorithms with which we can analyze camera geometries in a global, non-parametric manner. No familiarity with harmonic analysis or computer vision is assumed.
March 3, 2004
The traveling salesman problem is NP-hard, as are several other optimization problems such as group Steiner tree and k-server. On the other hand, many such problems are easier to solve when the underlying graph is simple, say a tree. A natural approach to obtaining approximately optimal solutions for these problems is to approximate the input graph by a simpler graph and solve the problem on the simpler graph.
We show how to approximate any graph metric by a distribution over trees, such that internode distances are preserved up to a logarithmic factor (in expectation). This settles a long-standing open question and gives simpler and improved approximation algorithms for several problems. We also show how our techniques lead to a quasi-polynomial-time approximation scheme for the traveling salesman problem on low-dimensional metrics.
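Written out, the embedding guarantee described above has the following form (standard notation, added here for clarity): for every n-point metric (V, d) there is a distribution $$\mathcal{D}$$ over tree metrics on V such that every tree dominates the original distances and expected tree distances are within a logarithmic factor of the originals,

$$
d_T(u,v) \;\ge\; d(u,v) \quad \text{for every } T \text{ in the support of } \mathcal{D}, \qquad
\mathbb{E}_{T \sim \mathcal{D}}\bigl[d_T(u,v)\bigr] \;\le\; O(\log n)\, d(u,v) \quad \text{for all } u, v \in V .
$$

An algorithm can then be run on a sampled tree, and for problems whose objectives are linear in the distances the solution’s expected cost on the original metric exceeds the optimum by only the O(log n) factor.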
March 4, 2004
A key part of making our society’s information infrastructure work is enabling the parties involved—human users as well as programs—to make effective trust judgments about each other. Should A trust B for action X? If it’s all just wires and bits, how can A know? This problem is made even messier by the emerging multiplicity of users, roles, machines, administrative domains, application contexts, and opinions about what constitutes valid grounds for trust.
Over the past several years, my students and I have been exploring the technological issues underlying effective trust judgments. This talk will discuss some of our research:
- Why should we trust what’s happening at a remote server? (I’ll discuss our experimental work in building private information servers.)
- Why should we trust what’s happening at a local client? (I’ll discuss some of our experiences regarding the surprising insecurity of browsers and SSL.)
This talk will also briefly survey some of our other research in trusted computing platforms and applications, and in using PKI in wireless networking and Internet routing.
Speaker Biography: Sean Smith has been a scientist at Los Alamos National Laboratory and at IBM T.J. Watson Research Center, where he designed the security architecture for (and helped code and test) the IBM 4758 secure coprocessor, and then led the formal modeling and verification work that earned it the world’s first FIPS 140-1 Level 4 security validation. In July 2000, Sean left IBM for Dartmouth College. Sean was educated at Princeton (B.A., Math) and CMU (M.S., Ph.D., Computer Science).
March 5, 2004
The Metric Labeling problem is an elegant and powerful mathematical model capturing a wide range of classification problems. The input to the problem consists of a set of labels and a weighted graph. Additionally, a metric distance function on the labels is defined, and for each label and each vertex, an assignment cost is given. The goal is to find a minimum-cost assignment of the vertices to the labels. The cost of the solution consists of two parts: the assignment costs of the vertices and the separation costs of the edges (each edge pays its weight times the distance between the two labels to which its endpoints are assigned).
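In symbols (notation added here for concreteness), with vertex set V, label set L, assignment costs c(v, a), edge weights w(u, v), and label metric d, the goal is

$$
\min_{f : V \to L} \;\; \sum_{v \in V} c\bigl(v, f(v)\bigr) \;+\; \sum_{(u,v) \in E} w(u,v)\, d\bigl(f(u), f(v)\bigr),
$$

where the first sum is the assignment cost and the second is the separation cost described above.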
Due to its simple structure and the variety of its applications, the problem and its special cases (with various distance functions on the labels) have received a lot of attention recently. Metric Labeling has a known logarithmic approximation, and it has been an open question for several years whether a constant-factor approximation exists. We refute this possibility and show a $$\sqrt{\log n}$$ hardness of approximation.
Joint work with Seffi Naor.
March 8, 2004
The resources available for improving software are always limited, and through sheer numbers a program’s user community will uncover many flaws not captured during the program’s development. We propose a low-overhead sampling infrastructure for gathering information from the runs experienced by a program’s user community. Monitoring is both sparse and random, so complete information is never available about any single run. However, the instrumentation strategy is also lightweight and therefore practical to deploy to user communities numbering in the thousands or millions. Additionally, we have developed a body of techniques, called “statistical debugging,” for isolating the causes of bugs using this sparsely sampled data. Statistical debugging combines statistical modeling, machine learning, and program analysis to identify those program behaviors that are highly predictive of program failure. We will also briefly consider other potential applications of sampling deployed software.
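As a rough illustration of the sampling idea (a minimal sketch with hypothetical names, not the actual instrumentation infrastructure from the talk): each instrumentation site records its predicate only with small probability, and reports from many runs, labeled by whether the run failed, are aggregated to score predicates by how strongly they predict failure.

```python
import random
from collections import defaultdict

SAMPLING_RATE = 1.0 / 100   # each instrumentation site reports only rarely

counts = defaultdict(int)   # predicate name -> times observed true in this run

def observe(predicate_name, value):
    """Sparsely and randomly sampled instrumentation site."""
    if value and random.random() < SAMPLING_RATE:
        counts[predicate_name] += 1

def score(pred, failing_reports, passing_reports):
    """Crude bug-predictor score: fraction of the reports that observed
    `pred` which came from failing runs. Reports are dicts like `counts`."""
    fail = sum(1 for r in failing_reports if r.get(pred, 0) > 0)
    ok = sum(1 for r in passing_reports if r.get(pred, 0) > 0)
    return fail / max(fail + ok, 1)

# Example: a predicate observed in 3 of 4 failing runs but only 1 of 50
# passing runs scores highly and is flagged as a likely bug predictor.
```

Deployed implementations typically make the sampling decision far more cheaply than a per-call coin flip, but the statistical aggregation step is of the kind sketched here.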
March 9, 2004
We consider the all-or-nothing multicommodity flow problem in undirected graphs. We are given a capacitated undirected graph and k pairs of vertices, each with a unit demand. The objective is to find a largest subset of the pairs for which we can simultaneously send a flow of one unit between each pair. This problem is closely related to the classical edge-disjoint paths problem (EDP) but differs in that we do not insist on integral flow paths for the pairs. It is also related to several other well-studied multicommodity flow problems.
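In symbols (notation added for concreteness), given G = (V, E) with capacities c and pairs (s_1, t_1), ..., (s_k, t_k), the problem asks for

$$
\max\; |S|,\quad S \subseteq \{1,\dots,k\},\quad \text{such that there exist flows } f_i \text{ with } f_i(s_i \to t_i) = 1 \ \forall i \in S
\ \text{and}\ \sum_{i \in S} f_i(e) \le c(e) \ \forall e \in E,
$$

where each f_i may be split fractionally over many paths; EDP is the harder variant in which every routed pair must use a single path.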
In this talk we describe a poly-logarithmic approximation for the all-or-nothing problem in general undirected graphs. The best previously known approximation ratio was polynomial, the same as that for EDP. The emphasis will be on connections to related problems, including the recent work of Racke on oblivious routing.
This is joint work with Sanjeev Khanna and Bruce Shepherd.
March 11, 2004
Attacks against Internet routing are increasing in number and severity. Contributing greatly to these attacks is the absence of origin authentication: there is no way to validate if an entity using an address has the right to do so. This vulnerability is not only a conduit for malicious behavior, but indirectly allows seemingly inconsequential misconfigurations to disrupt large portions of the Internet. This talk discusses the semantics, design, and costs of origin authentication in interdomain routing. A formalization of address usage and delegation is presented and broad classes of cryptographic proof systems appropriate for origin authentication are considered.
The costs of origin authentication are largely determined by the form and stability of the served address space. However, prior to this work, little was known about the relevant characteristics of address use on the Internet. Developed from collected interdomain routing data and presented in this talk, our approximate delegation hierarchy shows that current IP address delegation is dense and relatively static. One notable result shows that as few as 16 entities are the source of 80% of the delegation on the Internet. We further show via simulation that these features can be exploited to efficiently implement Internet-scale origin authentication. The talk concludes with thoughts on major problems in routing security and other related future work.
Speaker Biography: Patrick McDaniel is a Senior Technical Staff Member of the Secure Systems Group at AT&T Labs-Research. He received his Ph.D. from the University of Michigan in 2001 where he studied the form, algorithmic limits, and enforcement of security policy. Patrick’s recent research efforts have focused on security management in distributed systems, network security, and public policy and technical issues in digital media. Patrick is a past recipient of the NASA Kennedy Space Center fellowship, a frequent contributor to the IETF security standards, and has authored many papers and book chapters in various areas of systems security. Prior to pursuing his Ph.D. in 1996, Patrick was a software architect and program manager in the telecommunications industry.
March 12, 2004
We present a framework for overlay network design that combines many advantages of client/server and peer-to-peer networks. Our approach, the supervised peer-to-peer overlay networking (SPON) model, uses a single supervisor node, assumed to be reliable, to organize the remaining participants in the network. The supervisor is far less powerful than a server: at any given time it is aware of very few (O(1) or O(log n)) members of the network, and it is only involved in node join and leave operations. Furthermore, a multicast group of redundant supervisors can be used in any SPON system to remove the reliability assumption at very little additional cost.
We present a structure based on a forest of binary trees that can guarantee broadcasting in a dynamic network against a limited-power adversary; we also use this structure for provably efficient storage management as part of a monitoring system for intrusion detection or other security applications. We present two structures based on the de Bruijn graph which function as distributed hash tables for nodes with uniform or arbitrary non-uniform load demands.
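For readers unfamiliar with de Bruijn graphs, here is a minimal sketch (the generic textbook construction, not necessarily the exact structures from the talk) of why they make attractive DHT topologies: with 2^b nodes labeled by b-bit strings, every node has constant out-degree, and any destination is reached in at most b hops by shifting in the destination's bits.

```python
def debruijn_neighbors(x: int, b: int):
    """Out-neighbors of node x in the binary de Bruijn graph on 2**b nodes."""
    return [(2 * x) % (1 << b), (2 * x + 1) % (1 << b)]

def debruijn_route(src: int, dst: int, b: int):
    """Route by shifting in the destination's bits, most significant first.
    Reaches dst in at most b hops (O(log n) for n = 2**b nodes)."""
    path = [src]
    cur = src
    for i in range(b - 1, -1, -1):
        bit = (dst >> i) & 1
        cur = ((2 * cur) | bit) % (1 << b)
        path.append(cur)
    return path

# Example: 16-node graph (b = 4); route from node 0b1010 to node 0b0111.
print(debruijn_route(0b1010, 0b0111, 4))   # [10, 4, 9, 3, 7]
```

Each hop in the printed path follows a de Bruijn edge, so lookups complete in logarithmically many steps while routing tables stay constant-sized per node.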
March 23, 2004
The development of the next-generation Internet is driven by the demand for high-speed wireless connectivity from anywhere at any time. As one of the two main ongoing efforts for wide-area wireless Internet access, 3G infrastructure provides almost ubiquitous coverage but at low data rates. On the other hand, high-data-rate IEEE 802.11b devices continue to proliferate, but the access infrastructure is designed to cover local areas or hot spots. In this talk, I will present a new network architecture called UCAN (unified cellular and ad-hoc network) that combines the best of both. In UCAN networks, high-data-rate IEEE 802.11b devices relay traffic through each other to improve the throughput of their connections with the 3G base station. At the same time, the 3G infrastructure provides centralized coordination for the IEEE 802.11b ad-hoc networks to improve connectivity and performance.
I will present our solutions to two specific problems in UCAN. First, to ensure the performance of the IEEE 802.11b ad-hoc network in relaying bandwidth-demanding and delay-sensitive Internet traffic, I will present our two-tier service model for channel bandwidth allocation; fully distributed packet scheduling algorithms and localized protocols are implemented to realize this service model. I will then present our base-station-assisted routing, which establishes relay paths in the IEEE 802.11b ad-hoc networks under the dynamics of 3G downlink channel quality and node mobility. At the end of the talk I will summarize my current work and outline future directions in wireless and sensor networking research.
March 25, 2004
In this talk, I will focus on finding efficient algorithms for classic game theoretic notions, such as market equilibrium, Shapley value and core, and show their relevance to Internet computing. I will also talk about the structural properties of the Internet and their impact on its performance.
Speaker Biography: Amin Saberi is a 4th-year Ph.D. student at the Georgia Institute of Technology, advised by Professors Vijay Vazirani and Milena Mihail. His research interests include algorithms, especially approximation algorithms, algorithmic game theory, and their applications in the context of the Internet. He received his B.S. from Sharif Institute of Technology in 2000.
March 26, 2004
Over the past two decades, the advent of the World Wide Web and improvements in technology for the acquisition and visualization of 3D models have combined to effect an explosion in the availability of 3D models to everyday users. In this context, the challenge has changed from “How do we generate 3D models?” to “How do we find them?”
In this talk, we describe research in the area of shape matching. We provide a general overview of approaches aimed at addressing this challenge and focus specifically on the difficulty of matching 3D models across different orientations. In particular, we present a method for obtaining rotation invariant representations of 3D models. We show that this method is well suited to the task of shape matching by providing a compact representation that allows for efficient and discriminating retrieval from large repositories of 3D shapes.
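One standard way to obtain such rotation invariance, sketched here to fix intuition (the talk's exact construction may differ in its details), is to describe a model by functions on concentric spheres and decompose each function into its spherical-harmonic frequency components:

$$
f \;=\; \sum_{l \ge 0} f_l, \qquad f_l \in \operatorname{span}\{\,Y_l^m : -l \le m \le l\,\}, \qquad \mathrm{SH}(f) \;=\; \bigl(\|f_0\|,\ \|f_1\|,\ \|f_2\|,\ \dots\bigr).
$$

Because a rotation maps each frequency subspace to itself, the vector of per-frequency energies SH(f) is unchanged by rotation, so two models can be compared without first aligning their orientations.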
March 29, 2004
The potential harm to privacy stemming from the use of data processing systems has been understood since computers were first applied to organize personal information. David Chaum’s seminal paper demonstrated the potential of cryptographic protocols to provide services such as user authentication and resource control while maintaining anonymity, reducing the need to distribute personal information or user passwords to remote servers.
In 1991, Chaum and van Heyst introduced the notion of group signatures. These cryptographic primitives provide revocable anonymity — in other words, the privacy of specific transactions can be revoked for legitimate reasons, mitigating tensions between system security and user privacy concerns. For these reasons, group signatures are considered one of the most flexible and promising cryptographic primitives for privacy.
Until recently, all known practical group signature schemes were based on RSA-type constructions. However, anonymous transactions that cross organization boundaries are facilitated by the use of discrete logarithm-type constructions. (In the e-cash setting, this was demonstrated by Stefan Brands’ system.) In this talk, I describe the first group signature scheme based on the discrete logarithm problem.