Equitable, Group Strategyproof Cost Allocations via Primal-Dual-Type Algorithms
We build on the well-studied egalitarian cost-sharing method of Dutta and Ray [1987] as follows. Using a primal-dual-type algorithm, we show how to efficiently construct a class of cost-sharing methods for a submodular cost function, parameterized by $n$ functions, one for each user. Submodular functions capture the notion of economies of scale, and such cost functions arise naturally in practice, e.g., in multicast routing.
All methods in our class are cross-monotone and therefore lead to group strategyproof mechanisms (i.e., the dominant strategy for each user is to report her true utility, even in the presence of collusion).
Our class contains cost-sharing methods satisfying a wide range of fairness criteria, hence the name ``equitable''. For instance, if the $n$ functions are identical, we get the Dutta-Ray egalitarian method, which attempts to equalize the amounts charged to the users. If they are probability distributions from which users pick their utilities, we get a new method that attempts to equalize the probabilities of users getting service.
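As a toy illustration of the two properties at play, submodularity and cross-monotonicity, the Python sketch below checks them for the symmetric cost function c(S) = sqrt(|S|), for which the egalitarian method reduces to an equal split. The cost function and brute-force checks are illustrative assumptions, not the primal-dual construction from the talk.

from itertools import combinations
from math import sqrt

users = range(6)
cost = lambda S: sqrt(len(S))   # concave in |S|, hence submodular (economies of scale)

def is_submodular(cost, users):
    # Marginal cost of adding user i must not increase as the set grows.
    subsets = [set(c) for r in range(len(users) + 1) for c in combinations(users, r)]
    for S in subsets:
        for T in subsets:
            if S <= T:
                for i in users:
                    if i not in T and cost(S | {i}) - cost(S) < cost(T | {i}) - cost(T) - 1e-12:
                        return False
    return True

def share(i, S):
    # Equal split: each served user pays cost(S)/|S| (the egalitarian method
    # for this symmetric toy example).
    return cost(S) / len(S) if i in S else 0.0

print(is_submodular(cost, users))   # True
# Cross-monotonicity: user 0's share never increases as more users are served.
print(all(share(0, set(range(k + 1))) >= share(0, set(range(k + 2))) for k in range(4)))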
(This is joint work with Vijay Vazirani).
"Compressed Bloom filters" and "Compressing the Web graph"
Compressed Bloom Filters (in PODC 01)
A Bloom filter is a simple space-efficient randomized data structure for representing a set that supports approximate membership queries. Bloom filters are increasingly being used in distributed systems; for example, in a distributed Web cache, proxies do not need to share the exact URL lists of their caches, but can instead periodically broadcast Bloom filters representing their caches. We introduce compressed Bloom filters, which improve on Bloom filters in distributed settings.
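For readers who have not seen the data structure, here is a minimal Bloom filter sketch in Python. The parameter choices, the way the k indexes are derived from one digest, and the example URLs are illustrative assumptions; the compression step that the paper adds on top is not shown.

import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m = m                  # number of bits
        self.k = k                  # number of hash functions (here k <= 8)
        self.bits = bytearray(m)

    def _indexes(self, item):
        # Derive k indexes from a single digest; the analysis usually assumes
        # idealized independent hash functions.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        # May return a false positive, never a false negative.
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter(m=1024, k=4)
bf.add("http://example.com/cached-page")
print("http://example.com/cached-page" in bf)   # True
print("http://example.com/other-page" in bf)    # False, except for a small false-positive probability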
Compressing the Web Graph (joint work with Micah Adler, in the 2001 Data Compression Conference)
We consider the problem of compressing
graphs of the link structure of the World Wide Web. We provide
efficient algorithms for such compression that are motivated by
recently proposed random graph models for describing the Web. We also
provide hardness results demonstrating limitations on natural
extensions of our approach.
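One natural idea in this spirit, offered as an illustration rather than as the paper's exact scheme, is to encode each page's adjacency list by reference to a similar page plus the differences; this is cheap when many pages copy their links, as the copying-style random graph models suggest.

def encode(adjacency):
    # adjacency: dict mapping node -> list of out-neighbors
    encoded = {}
    for node, links in adjacency.items():
        best_ref, best_cost = None, len(links)      # cost of listing the links directly
        for ref in encoded:                         # candidate reference nodes seen so far
            diff = set(links) ^ set(adjacency[ref])
            if len(diff) + 1 < best_cost:           # +1 for naming the reference node
                best_ref, best_cost = ref, len(diff) + 1
        encoded[node] = (best_ref, best_cost)
    return encoded

pages = {
    "a": [1, 2, 3, 4],
    "b": [1, 2, 3, 4, 5],   # nearly a copy of "a", so it is cheap to encode by reference
    "c": [7, 8],
}
for node, (ref, num_symbols) in encode(pages).items():
    print(node, "reference:", ref, "symbols stored:", num_symbols)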
Asymmetric Maximum Travelling Salesperson Problem
The Asymmetric Maximum Travelling Salesperson problem is that of finding a maximum-weight tour in a complete directed graph with non-negative edge weights.
We propose a polynomial-time approximation algorithm for the problem with a 5/8 approximation guarantee. This improves upon
a chain of previous results with approximation guarantees
1/2 < 4/7 < 38/63 < 8/13. The previous algorithms are all combinatorial.
Our solution is partially combinatorial and partially LP-based.
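To make the objective concrete, here is a tiny brute-force illustration on a made-up weight matrix; the talk's algorithm achieves the 5/8 guarantee in polynomial time, whereas the enumeration below takes exponential time.

from itertools import permutations

# w[i][j] = weight of the directed edge i -> j (asymmetric, non-negative); made-up values
w = [[0, 5, 1, 2],
     [2, 0, 6, 1],
     [1, 3, 0, 7],
     [4, 1, 2, 0]]
n = len(w)

def tour_weight(order):
    # Total weight of the directed cycle visiting the cities in the given order.
    return sum(w[order[i]][order[(i + 1) % n]] for i in range(n))

best = max(permutations(range(n)), key=tour_weight)
print(best, tour_weight(best))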
Joint work with: Maxim Sviridenko
Algorithmic Aspects of Bandwidth Trading
We study algorithmic problems that are motivated by bandwidth trading in
next generation networks. Typically, bandwidth trading involves sellers
(e.g., network operators) interested in selling bandwidth pipes that offer
a guaranteed service level agreement during a specified time interval.
The buyers (e.g., bandwidth brokers) are looking to procure bandwidth
pipes to satisfy the reservation requests of end-users (e.g., internet
subscribers). Depending on what is available in the bandwidth exchange,
the goal of a buyer is either to spend the least amount of money to buy
enough bandwidth pipes to satisfy all the reservations, or to maximize
its revenue from whatever reservations can be satisfied.
We model the above as a real-time non-preemptive scheduling problem in
which machine types correspond to bandwidth pipes and jobs correspond to
the end-user reservation requests. Each machine type is available during
a specified time interval, and multiple machines of each type are
available. Each job specifies a set of machine types and a time interval.
A given job may be scheduled on a given machine if the job's interval is
contained in the interval during which the machine is available and the
machine type is one of the types specified by the job. (Several
non-intersecting jobs may be scheduled on a single machine.) The
objective is either to allocate a minimum-cost set of machines on which
all jobs can be scheduled, or to schedule a maximum-profit set of jobs
on a fixed set of machines.
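The following sketch restates this model in Python; the names and the first-fit placement rule are illustrative assumptions, used only to fix the definitions, and are not the approximation algorithms referred to below.

from dataclasses import dataclass, field

@dataclass
class Machine:
    mtype: str                      # the bandwidth pipe's type
    avail: tuple                    # (start, end) interval during which the machine exists
    jobs: list = field(default_factory=list)

@dataclass
class Job:
    interval: tuple                 # (start, end) requested by the end-user
    allowed_types: set              # machine types this job may be scheduled on

def fits(job, machine):
    s, e = job.interval
    ms, me = machine.avail
    type_ok = machine.mtype in job.allowed_types
    contained = ms <= s and e <= me
    disjoint = all(e <= js or je <= s for js, je in (j.interval for j in machine.jobs))
    return type_ok and contained and disjoint

def first_fit(jobs, machines):
    # First-fit placement, only to illustrate feasibility; it optimizes neither cost nor profit.
    unscheduled = []
    for job in sorted(jobs, key=lambda j: j.interval[0]):
        target = next((m for m in machines if fits(job, m)), None)
        if target is not None:
            target.jobs.append(job)
        else:
            unscheduled.append(job)
    return unscheduled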
We present constant-factor approximation algorithms for variants of these problems.
Joint work with Randeep Bhatia, Julia Chuzhoy, and Seffi Naor.
Towards an Algorithmic Theory of Self-Assembly
Self-assembly is the ubiquitous process by which objects autonomously assemble into complexes. It has been suggested that self-assembly will ultimately become a useful technology, enabling the fabrication of great quantities of intricate objects such as computer circuits from inexpensive components such as DNA and inorganic nano-crystals. In this talk we will outline some of our recent progress towards developing an algorithmic theory of self-assembly.
Recently, Rothemund and Winfree considered the program size complexity of constructing n x n squares by self-assembly of simple tiles. In this talk we will extend their model to include a natural notion of running time. We also modify the construction of Rothemund and Winfree to obtain the optimum running time of $\Theta(n)$ and the optimum program size of $\Theta(\log n/\log\log n)$ for assembling a square.
We will then discuss two combinatorial optimization problems in self-assembly. The first is the Minimum Tile Set Problem: given a shape, find a tile system with the smallest number of tile types that self-assembles the shape. Preliminary results show that the problem is NP-hard in general and solvable in polynomial time on tree-shapes and squares. The second is the Tile Concentrations Problem: assign relative concentrations to the tile types in a system so as to obtain the fastest assembly. We conjecture that this problem is #P-hard in general, and will present an O(log n)-approximation algorithm for a large class of assemblies.
We will conclude with a brief discussion of self-assembly as a dynamical system, and with a list of several open problems and directions.
The above represents joint work with Len Adleman, Qi Cheng, Matthew Cook, Ming-deh Huang, David Kempe, Pablo Moisset de Espanes, Paul Rothemund, and Hal Wasserman.
An information statistics approach to data-stream and communication complexity
Alice and Bob are given a bit each, and wish to probabilistically compute the AND of their bits by exchanging messages; at the same time they would like the transcript of their messages to reveal as little information about their bits as possible.
We formulate questions of this nature precisely, and present a technique to obtain sharp lower bounds on the amount of information that needs to be revealed. Our technique is based on studying distances between probability distributions.
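As a warm-up illustration of the quantity being bounded (and not of the lower-bound technique itself), the sketch below computes how much a single noisy message reveals about Alice's bit: she sends her bit flipped with probability eps, and the mutual information between her bit and the message is 1 - H(eps).

import math

def h2(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_revealed(eps):
    # X is a uniform bit and M = X xor Bernoulli(eps), so I(X; M) = H(M) - H(M | X) = 1 - h2(eps).
    return 1.0 - h2(eps)

for eps in (0.0, 0.1, 0.25, 0.5):
    print(eps, info_revealed(eps))
# eps = 0   -> 1 bit revealed (the bit is sent in the clear)
# eps = 0.5 -> 0 bits revealed, but then the message is useless for computing AND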
The motivation for studying these problems arises from a brand new method that we present for obtaining communication complexity lower bounds. The latter have applications for deriving lower bounds for data stream algorithms. Specifically, we show that computing the frequency moments F_k for k > 2 requires polynomial space in the data stream model; this resolves the conjecture of Alon, Matias and Szegedy (1996). In addition, we show that computing L_p norms to within a factor of n^c requires n^{1 - 4c - 2/p} space in the data stream model; this generalizes a bound of Saks and Sun (2002).
The talk will be completely self-contained.
I will explain the models, motivations, and the mathematics from scratch.
(Joint work with Ziv Bar-Yossef, T.S. Jayram and Ravi Kumar)
On Parisi's Conjecture for the Finite Random Assignment Problem
The average case analysis of the classical random assignment problem
has received a lot of interest in the recent literature, mainly due
to the following pleasing conjecture of Parisi: The average value of
the minimum-cost permutation in an n x n matrix with i.i.d. exp(1)
entries equals $\sum_{i=1}^n{1\over i^2}$. This crisp conjecture appears
difficult to resolve despite the recent proof of Aldous that in the
limit as $n\to\infty$, the average minimum cost equals $\pi^2/6$.
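The conjectured value is easy to test numerically for small n; the Monte Carlo sketch below (a toy check, not part of the analysis in the talk) compares the empirical average against the conjectured sum.

import numpy as np
from scipy.optimize import linear_sum_assignment

def avg_min_assignment(n, trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        costs = rng.exponential(1.0, size=(n, n))    # i.i.d. exp(1) entries
        rows, cols = linear_sum_assignment(costs)    # minimum-cost permutation
        total += costs[rows, cols].sum()
    return total / trials

n = 4
print(avg_min_assignment(n))                          # empirical average
print(sum(1.0 / i ** 2 for i in range(1, n + 1)))     # conjectured value: 1 + 1/4 + 1/9 + 1/16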
In this talk, we present some interesting structural properties of
matchings and discuss their role in resolving Parisi's conjecture.
Our analysis has led us to formulate a simpler conjecture which
highlights the role played by the exp(1) costs, and which implies
Parisi's conjecture. Our approach also yields combinatorial results
regarding the structure of minimum-cost permutations in rectangular
matrices. For instance, we find that the elements which participate
in the minimum-cost permutations belong to a class of canonical templates,
which we identify and characterize.
Joint work with Balaji Prabhakar
A Scaling Hypothesis: Or, how the performance of a network scales as its physical parameters are scaled
In this talk we consider the effect of physical scaling on
network performance. By the "physical scaling" of a network
we mean reducing its link speeds and buffer sizes by a factor p and applying a proportion p of the original traffic to this scaled version.
Our main finding is this: Physical scaling induces a performance
scaling, in that queueing delays, drop probabilities, and the
distribution of the number of concurrent connections are left
virtually unchanged. We verify this through analysis and simulations.
We discuss the implications of this for (i) the recently proposed
deterministic, differential equations-based models of networks
supporting long-lived flows, (ii) the M/GI type models of network
flows, and (iii) the scalable prediction of network performance.
Joint work with: Rong Pan, Costas Psounis and Damon Wischik
Nearly Optimal FIFO Buffer Management for DiffServ
We consider a FIFO buffer with finite storage space. An arbitrary
input stream of packets arrives at the buffer, but the output stream
rate is bounded, so overflows may occur. Motivated by DiffServ, we
assume that each packet has value either 1 or alpha, for some
alpha>1. The buffer management task is to decide which packets to
drop so as to minimize the total value of lost packets, subject to
the buffer space bound, and to the FIFO order of sent packets. We
consider push-out buffers, where the algorithm may eject packets
from anywhere in the buffer. The best lower bound on the
competitive ratio of on-line algorithms for buffer management is
approximately 1.28. In this paper we present an on-line algorithm
whose competitive ratio is approximately 1.30 for the worst case
alpha. The best previous general upper bound was about 1.888.
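To make the model concrete, here is a toy push-out rule in Python: when the buffer is full, a value-1 packet is evicted to make room for an arriving value-alpha packet. This simple greedy rule is only an illustration of the setting, not the algorithm whose competitive ratio is analyzed in the talk.

from collections import deque

def run_buffer(arrivals, buffer_size, alpha):
    # arrivals: one list of packet values (each 1 or alpha) per time slot;
    # one packet is transmitted from the head of the FIFO queue per slot.
    buf, sent_value, lost_value = deque(), 0.0, 0.0
    for slot in arrivals:
        for value in slot:
            if len(buf) < buffer_size:
                buf.append(value)
            elif value == alpha and 1 in buf:
                buf.remove(1)               # push out a low-value packet, keeping FIFO order
                buf.append(value)
                lost_value += 1
            else:
                lost_value += value         # the arriving packet is dropped
        if buf:
            sent_value += buf.popleft()     # FIFO transmission
    return sent_value, lost_value

print(run_buffer([[1, 1, 5], [5, 5], []], buffer_size=2, alpha=5))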
Joint work with Zvi Lotker.
Competitive Analysis of Buffer Management Policies for QoS Switches
We consider buffer management policies for
shared memory packet switches supporting
Quality of Service (QoS).
There are two interesting dimensions in which the
setting may differ. The first is whether or not
the policy is allowed to preempt packets that have already been admitted to the buffer. The second is the value of the packets: do all packets have the same value (Best Effort), or do different packets have different values (DiffServ)? We study the
performance of a buffer management policy by means of
competitive analysis, where the goal is to
maximize the total value of packets transmitted.
For the preemptive model with fixed-value packets we show that the well-known Longest Queue Drop (LQD) policy is $2$-competitive, and we present a lower bound of $4/3$. For the case of variable-value packets we derive a partition policy whose competitive ratio is logarithmic in the ratio of the maximal to the minimal value. For the non-preemptive model we introduce a general buffer management scheme and propose a new Harmonic policy based on this scheme. We demonstrate that its throughput competitive ratio is almost optimal for the case of fixed-value packets.
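As an illustration of the Longest Queue Drop policy mentioned above for the preemptive, fixed-value setting, here is a small Python sketch of its push-out rule; the setup and tie-breaking are illustrative assumptions, and none of the competitive analysis is reproduced.

from collections import deque

def lqd_arrival(queues, shared_capacity, port):
    # queues: dict mapping output port -> FIFO queue; all queues share one memory pool.
    total = sum(len(q) for q in queues.values())
    if total < shared_capacity:
        queues[port].append("pkt")
        return
    # Memory full: push out from the tail of the currently longest queue,
    # unless the arriving packet's own queue would be (one of) the longest.
    longest = max(queues, key=lambda p: len(queues[p]))
    if len(queues[longest]) > len(queues[port]) + 1:
        queues[longest].pop()               # preempt an already admitted packet
        queues[port].append("pkt")
    # otherwise the arriving packet itself is dropped

queues = {0: deque(), 1: deque()}
for _ in range(5):
    lqd_arrival(queues, shared_capacity=3, port=0)
lqd_arrival(queues, shared_capacity=3, port=1)
print({p: len(q) for p, q in queues.items()})       # {0: 2, 1: 1}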