Spring 2016, 7 April 2016, Huang 300, 4:15pm
Jon Kleinberg (Cornell)
Planning Problems for Agents with Behavioral Biases
There are many settings where people set long-range goals and make plans to achieve them. Such long-range planning is becoming an integral part of the experience in many on-line contexts, where for example people work toward reputational milestones in question-answering sites, build up to administrative roles in open-source authoring domains, and reach educational goals in on-line learning communities.
To understand these kinds of processes, we need to enrich our models with the types of human behavioral biases that come into play when people attempt to reach long-range goals — particularly, the tendency toward present bias, in which costs and benefits incurred in the present receive particular weight. We propose a graph-theoretic model for the process of planning with present bias, and show how this model directly produces a wide range of qualitative phenomena observed in the literature on present-biased behavior, including procrastination, abandonment of long-range tasks, and the benefits of reduced sets of choices. We then explore a set of analyses that quantify over the set of all graphs; among other results, we provide bounds on the cost incurred by a present-biased agent relative to the optimal plan, and we show that any instance in which this cost significantly exceeds the optimum must contain — in a precise graph-theoretic sense — a large "procrastination" structure.
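A minimal sketch of how planning with present bias can be simulated, assuming a Kleinberg-Oren style formulation in which the agent inflates the cost of the immediate edge by a bias factor b > 1 while evaluating the rest of the plan at its true shortest-path cost (the graph, costs, and function names below are illustrative, not taken from the talk):

```python
import heapq

def dist_to_target(graph, t):
    # True shortest-path distance from every node to the target t
    # (Dijkstra run on the reversed graph).
    rev = {}
    for u, edges in graph.items():
        for v, c in edges:
            rev.setdefault(v, []).append((u, c))
    dist = {t: 0.0}
    pq = [(0.0, t)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for u, c in rev.get(v, []):
            if d + c < dist.get(u, float("inf")):
                dist[u] = d + c
                heapq.heappush(pq, (d + c, u))
    return dist

def present_biased_cost(graph, s, t, b):
    # At each node the agent perceives the immediate edge cost inflated
    # by b but assumes it will follow a true shortest path afterwards;
    # it then actually pays the real cost of the chosen edge and repeats.
    dist = dist_to_target(graph, t)
    u, paid = s, 0.0
    while u != t:
        v, c = min(graph[u], key=lambda e: b * e[1] + dist[e[0]])
        paid += c
        u = v
    return paid

# Toy instance: doing the task now costs 4; "preparing" first costs 1,
# after which the task still costs 4.
G = {"s": [("t", 4.0), ("a", 1.0)], "a": [("t", 4.0)], "t": []}
print(present_biased_cost(G, "s", "t", 1.0))  # unbiased agent pays 4.0
print(present_biased_cost(G, "s", "t", 2.0))  # biased agent procrastinates and pays 5.0
```

The biased agent repeatedly defers the expensive step because its inflated present cost makes the detour look cheaper, which is exactly the qualitative procrastination phenomenon the model captures.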
This talk is based on joint work with Sigal Oren and Manish Raghavan.
Fall 2015, 15 October 2015, Huang 300, 4:15pm
Avi Wigderson (IAS)
Is the universe inherently deterministic or probabilistic? Perhaps more importantly — can we tell the difference between the two?
Humanity has pondered the meaning and utility of randomness for millennia. There is a remarkable variety of ways in which we utilize perfect coin tosses to our advantage: in statistics, cryptography, game theory, algorithms, gambling... Indeed, randomness seems indispensable! Which of these applications would survive if the universe had no randomness in it at all? Which of them would survive if only poor-quality randomness is available, e.g. randomness that arises from "unpredictable" phenomena like the weather or the stock market?
A computational theory of randomness, developed in the past three decades, reveals (perhaps counter-intuitively) that very little is lost in such deterministic or weakly random worlds. In the talk I'll explain the main ideas and results of this theory.
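As a concrete taste of making do with poor-quality randomness, here is von Neumann's classical trick for extracting perfectly unbiased bits from independent flips of a biased coin (a standard textbook example, included only as illustration; the theory the talk surveys goes far beyond it):

```python
def von_neumann_extract(bits):
    # Pair up consecutive flips of a biased but independent coin.
    # The outcomes 01 and 10 are equally likely, so emitting the first
    # bit of each unequal pair yields perfectly unbiased output bits.
    out = []
    for b1, b2 in zip(bits[0::2], bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

print(von_neumann_extract([0, 1, 1, 0, 1, 1, 0, 0]))  # [0, 1]
```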
The talk is aimed at a general scientific audience.
Spring 2015, 11 May 2015, Huang 300, 4:15pm
Richard J. Lipton (Georgia Tech)
Humanoid Robots, Digital Consciousness, Self-Replication: Myths Of Computing
Computers are all around us. They are used in everything from smartphones to jumbo jets. Yet many myths surround what computers do, what they will be able to do, and what they cannot do.
This talk is a general discussion of some of these myths, and is suitable for all. The audience is invited to come prepared to laugh, boo, yell, clap, and otherwise have fun.
Fall 2014, 2 December 2014, Huang 300, 4:15pm
Ravi Kannan (Microsoft Research India)
Topic Modeling: A Provable Algorithm
Topic Modeling is widely used. The general model inference problem is hard, and known provable algorithms need strong assumptions on the model. After a from-first-principles introduction, the talk will formulate a model with two natural assumptions: (i) each document has a dominant topic and (ii) each topic has dominant words. We provably solve the model fitting problem under these assumptions. The algorithm is based on three natural components: a thresholding step, Singular Value Decomposition, and clustering. The proof crucially draws upon recent results on the convergence of another widely used tool, Lloyd's k-means algorithm. We test both the performance of the algorithm and the validity of the assumptions on real document corpora, with success. The simple device of thresholding has other uses: we will see an exponential advantage for certain "planted" problems in terms of the signal-to-noise ratio.
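The three components can be sketched schematically as follows. This is a toy pipeline with invented data, threshold, and seeding; the actual algorithm and its threshold choices are more delicate:

```python
import numpy as np

def lloyd_kmeans(X, init_idx, iters=20):
    # Lloyd's algorithm on the columns of X, seeded with the given columns.
    C = X[:, init_idx].astype(float).copy()
    labels = np.zeros(X.shape[1], dtype=int)
    for _ in range(iters):
        d = ((X[:, :, None] - C[:, None, :]) ** 2).sum(axis=0)
        labels = d.argmin(axis=1)   # assign each column to its nearest center
        for j in range(C.shape[1]):
            if (labels == j).any():
                C[:, j] = X[:, labels == j].mean(axis=1)
    return labels

def threshold_svd_cluster(A, k, zeta, init_idx):
    B = np.where(A >= zeta, A, 0.0)          # (1) thresholding
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    P = (U[:, :k] * s[:k]) @ Vt[:k, :]       # (2) best rank-k approximation
    return lloyd_kmeans(P, init_idx)         # (3) Lloyd's k-means

# Toy word-document matrix: rows are words, columns are documents.
# Docs 0-2 are dominated by words 0-1, docs 3-5 by words 2-3;
# the stray 1s are noise that the threshold removes.
A = np.array([[5, 4, 5, 0, 1, 1],
              [4, 5, 5, 1, 0, 1],
              [1, 0, 1, 5, 4, 5],
              [0, 1, 1, 4, 5, 5]], dtype=float)
labels = threshold_svd_cluster(A, k=2, zeta=2.0, init_idx=[0, 3])
print(labels)  # documents 0-2 land in one topic cluster, 3-5 in the other
```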
Joint with T. Bansal and C. Bhattacharyya, Indian Institute of Science.
Fall 2013, 5 December 2013, Huang 300, 4:15pm
Johan Håstad (Royal Institute of Technology)
How Hard Is It To Find A Good Solution?
Suppose we are given an optimization problem where the quality of a proposed solution can be evaluated efficiently. The natural instinct is to try to find the best solution, but in some cases this leads to infeasible amounts of computation. Studying when this happens leads to the classical theory of NP-completeness, which was initiated in the 1970's and mostly completed before 1990. The most famous NP-complete problem is the Traveling Salesman Problem. The first part of the lecture will review these classical results. When it is too computationally expensive to find the best solution, one naturally turns to finding good solutions. This is often formalized as finding a solution which is within a small predetermined factor of optimal, and our understanding of these questions has seen substantial progress in the last decades. For some problems it is possible to find very good approximate solutions quite efficiently, while in other cases even very weak approximation remains infeasible. In the second part of the lecture we present highlighted results of both types.
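A classical illustration of "a solution within a small predetermined factor of optimal" is the factor-2 approximation for Vertex Cover via a maximal matching (a standard textbook example, included here only to make the notion concrete):

```python
def vertex_cover_2approx(edges):
    # Greedily build a maximal matching and take both endpoints of every
    # matched edge.  Any optimal cover must contain at least one endpoint
    # of each matched edge, so this cover has at most twice optimal size.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

path = [(0, 1), (1, 2), (2, 3)]
cover = vertex_cover_2approx(path)
print(cover)  # covers every edge; here the optimum is {1, 2}, so at most 4 vertices
```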
Spring 2013, 30 April 2013, Huang 300, 4:15pm
Umesh Vazirani (University of California, Berkeley)
Quantum Hamiltonian Complexity: Through the Computational Lens
Much as probabilistic thinking did starting in the early 80s, quantum computing is expanding the core questions of complexity theory in fundamental new directions. For example, here is a list of three basic questions about quantum mechanics that are at their heart questions about computational complexity:
1. Do "typical" quantum states that occur in Nature have succinct (polynomial-size) descriptions?
2. Can quantum systems at room temperature exhibit exponential complexity?
3. Is the scientific method sufficiently powerful to comprehend general quantum systems?
Each of these issues is best studied by recasting it, through the computational lens, as a question about computation. The resulting questions lie at the core of computational complexity theory.
The first asks about the structure of solutions to the quantum analog of SAT. The second asks whether there is a quantum analog of the PCP theorem. And the third can be formulated as a question about interactive proof systems with quantum polynomial time provers. I will briefly outline these connections and the state of the art on these questions.
Spring 2012, 19 April 2012, CIS Auditorium, 4:15 PM
Umesh Vazirani (University of California, Berkeley)
Certifiable Quantum Dice
A source of independent and uniform random bits is a basic resource in many computational tasks, such as cryptography, game-theoretic protocols, algorithms, and physical simulations. In the classical world it is impossible to construct a random number generator whose output can be certified to be a uniformly random n-bit string, since there seems to be no basis on which to reject any particular output in favor of any other. Quantum mechanics allows for a remarkable random number generator: its output is certifiably random, in the sense that if the output passes a simple statistical test, and a no-signaling condition is met between the two boxes in the randomness-generating device (based, say, on the speed-of-light limit imposed by special relativity), then even a quantum skeptic (cf. Einstein's famous quote "God does not play dice with the Universe") would be convinced that the output is truly random, independently of the validity of quantum mechanics!
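The statistical test in this line of work is a Bell-inequality (CHSH-type) test: no deterministic, non-communicating classical strategy can win the CHSH game more than 75% of the time, while quantum strategies can do better, and observing the higher winning rate is what certifies the randomness. A short enumeration verifies the classical bound (illustrative code, not from the talk):

```python
import itertools

def chsh_classical_max():
    # Alice outputs a(x), Bob outputs b(y); they win when a XOR b = x AND y.
    # Shared randomness cannot beat the best deterministic strategy, so
    # enumerating all 16 deterministic strategies gives the classical optimum.
    best = 0.0
    for a0, a1, b0, b1 in itertools.product([0, 1], repeat=4):
        wins = sum(((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)
    return best

print(chsh_classical_max())  # 0.75 -- quantum strategies reach about 0.85
```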
Based on joint work with Thomas Vidick.
Winter 2012, 8 March 2012, Huang 300, 4:15 PM
Salil Vadhan (Harvard)
Computational Entropy
Shannon's notion of entropy measures the amount of "randomness" in a process. However, to an algorithm with bounded resources, the amount of randomness can appear to be very different from the Shannon entropy. Indeed, various measures of "computational entropy" have been very useful in computational complexity and the foundations of cryptography. In this talk, I will describe two new measures of computational entropy ("next-bit pseudoentropy" and "inaccessible entropy") that have enabled much simpler and more efficient constructions of cryptographic primitives from one-way functions. In particular, I will discuss a construction of pseudorandom generators of seed length O(n^3) from a one-way function on n bits, improving the seed length of O(n^8) in the classic construction of Håstad, Impagliazzo, Levin, and Luby.
Joint works with Iftach Haitner, Thomas Holenstein, Omer Reingold, Hoeteck Wee, and Colin Jia Zheng.
Fall 2011, 12 December 2011, CIS Auditorium, 11:00 AM
Dan Spielman (Yale)
Spectral Sparsification of Graphs and Approximations of Matrices
Expander graphs can be viewed as sparse approximations of complete graphs, with Ramanujan expanders providing the best possible approximations.
We formalize this notion of approximation and ask how well a given graph can be approximated by a sparse graph. We prove that every graph can be approximated by a sparse graph almost as well as the complete graphs are approximated by the Ramanujan expanders: our approximations employ at most twice as many edges to achieve the same approximation factor.
We also present an efficient randomized algorithm for constructing sparse approximations that only uses a logarithmic factor more edges than optimal.
Our algorithms follow from the solution of a problem in linear algebra. Given any expression of a rank-n symmetric matrix A as a sum of rank-1 symmetric matrices, we show that A can be well approximated by a weighted sum of only O(n) of those rank-1 matrices.
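To make the approximation notion concrete: H approximates G within factor kappa if the quadratic forms of their Laplacians agree within a factor kappa on every vector (after rescaling H's edge weights). A toy numerical check, with hypothetical helper names and not the paper's algorithm:

```python
import numpy as np

def laplacian(n, edges):
    # Unweighted graph Laplacian L = D - A.
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def approximation_factor(LG, LH):
    # Ratio of the largest to the smallest nonzero generalized eigenvalue
    # of (LH, LG); after rescaling H, this is the approximation factor.
    vals = np.linalg.eigvals(np.linalg.pinv(LG) @ LH).real
    nz = vals[vals > 1e-9]
    return nz.max() / nz.min()

K4 = laplacian(4, [(i, j) for i in range(4) for j in range(i + 1, 4)])
C4 = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(approximation_factor(K4, C4))  # the 4-cycle is a factor-2 approximation of K4
```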
This is joint work with Joshua Batson, Nikhil Srivastava and Shang-Hua Teng.
Spring 2011, 5 May 2011, CIS Auditorium, 4:15 PM
David Karger (MIT)
Random Sampling and Cuts in Graphs: Three Combinatorial Proofs
I'll synthesize work showing that randomly sampling the edges of an undirected graph preserves cut values. I'll give three distinct proofs, based on random contraction, tree packing, and network reliability, that the values of cuts are tightly concentrated around their expectations when those expectations are sufficiently large. I'll give a combinatorial construction for sparsifying any n-vertex undirected graph down to O(n log n) edges with only small perturbations in cut value and show how it can be used in fast algorithms for approximate and exact maximum flow computations. While one can achieve slightly better sparsification using algebraic techniques, I believe these combinatorial methods offer useful insights and may yield more in the future. The talk is self-contained, requiring only elementary knowledge of combinatorics and probability.
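The basic sampling step can be sketched as follows: keep each edge with probability p and scale the kept weights by 1/p, so every cut's expected value is preserved; concentration then kicks in when cuts are large. An illustrative toy, with a hypothetical fixed seed for the averaging demo:

```python
import random

def cut_value(edges, S):
    # Total weight of edges crossing the cut (S, V \ S).
    return sum(w for u, v, w in edges if (u in S) != (v in S))

def sample_edges(edges, p, rng):
    # Keep each edge independently with probability p, reweighted by 1/p,
    # so the expected weight of every cut is unchanged.
    return [(u, v, w / p) for u, v, w in edges if rng.random() < p]

# K4 with unit weights; the cut {0, 1} has value 4.
K4 = [(i, j, 1.0) for i in range(4) for j in range(i + 1, 4)]
rng = random.Random(0)
trials = [cut_value(sample_edges(K4, 0.5, rng), {0, 1}) for _ in range(2000)]
avg = sum(trials) / len(trials)
print(avg)  # close to the true cut value 4.0
```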
Winter 2011, 3 March 2011, Huang Engineering Center, 3rd floor, Mackenzie Room, 4:15 PM
Moses Charikar (Princeton University)
Dimension Reduction in L_1
Given a set of n points in a normed space, how many dimensions are needed to represent all distances within a given distortion? This dimension-distortion tradeoff is well understood for the L_2 norm, where O(log n) dimensions suffice to preserve distances almost exactly. In sharp contrast, the situation for L_1 is far from understood, and there is a significant gap between upper and lower bounds. In this talk, I will survey what we currently know about dimension reduction in L_1 and present the first near-linear lower bounds for this question.
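For contrast, the L_2 statement cited above is the Johnson-Lindenstrauss lemma: a random Gaussian projection into O(log n / eps^2) dimensions preserves all pairwise L_2 distances up to a 1 +/- eps factor. A quick numerical illustration with toy parameters and a fixed seed:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 1000, 200, 10
points = np.eye(d)[:n]                    # n points in R^d (standard basis vectors)
G = rng.standard_normal((k, d)) / np.sqrt(k)
proj = points @ G.T                       # the same n points mapped into R^k

ratios = []
for i in range(n):
    for j in range(i + 1, n):
        orig = np.linalg.norm(points[i] - points[j])
        new = np.linalg.norm(proj[i] - proj[j])
        ratios.append(new / orig)
print(min(ratios), max(ratios))  # all pairwise distances preserved up to small distortion
```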