Title: Diffusion Asymptotics for Sequential Experiments

Abstract: We propose a new diffusion-asymptotic analysis for sequentially randomized experiments. Rather than taking the sample size n to infinity while keeping the problem parameters fixed, we let the mean signal level scale to the order 1/sqrt(n) so as to preserve the difficulty of the learning task as n grows. In this regime, we show that the behavior of a class of methods for sequential experimentation converges to a diffusion limit. This connection enables us to make sharp performance predictions and obtain new insights into the behavior of Thompson sampling. Our diffusion asymptotics also help resolve a discrepancy between the Theta(log(n)) regret predicted by fixed-parameter, large-sample asymptotics on the one hand, and the Theta(sqrt(n)) regret from worst-case, finite-sample analysis on the other, suggesting that this is an appropriate asymptotic regime for understanding practical large-scale sequential experiments. Joint work with Kuang Xu.

Speaker: Stefan Wager

Bio: Stefan Wager is an assistant professor of Operations, Information, and Technology at the Stanford Graduate School of Business, and an assistant professor of Statistics (by courtesy). His research lies at the intersection of causal inference, optimization, and statistical learning. He is particularly interested in developing new solutions to problems in statistics, economics, and decision making that leverage recent advances in machine learning.

When: Mon, May 24, 12pm

Where: Zoom: https://stanford.zoom.us/meeting/register/tJEpcOyopzwjGdFFJD1G5LooJcdMIDdD86Qm
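
As a rough illustration of the scaling regime described in the abstract (not code from the talk or the paper), the following Python sketch simulates Gaussian Thompson sampling in a two-armed bandit where the gap between the arm means shrinks as c/sqrt(n), so the learning problem stays hard as the horizon n grows. The constant c, the diffuse Gaussian prior, and the unit noise level are illustrative assumptions.

```python
# Minimal sketch, assuming a two-armed Gaussian bandit with arm means
# 0 and c/sqrt(n) (the diffusion scaling of the signal). All parameter
# choices here are hypothetical, for illustration only.
import numpy as np

def thompson_two_arm(n, c=1.0, noise_sd=1.0, seed=0):
    """Run Gaussian Thompson sampling for n steps with gap c / sqrt(n)."""
    rng = np.random.default_rng(seed)
    means = np.array([0.0, c / np.sqrt(n)])  # signal scaled to order 1/sqrt(n)
    counts = np.zeros(2)  # pulls per arm
    sums = np.zeros(2)    # total reward per arm
    regret = 0.0
    for _ in range(n):
        # Posterior for each arm under a diffuse Gaussian prior: an unpulled
        # arm gets mean 0 and standard deviation noise_sd.
        post_mean = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        post_sd = noise_sd / np.sqrt(np.maximum(counts, 1))
        # Thompson step: sample from each posterior, play the argmax.
        arm = int(np.argmax(rng.normal(post_mean, post_sd)))
        reward = rng.normal(means[arm], noise_sd)
        counts[arm] += 1
        sums[arm] += reward
        regret += means.max() - means[arm]
    return regret

# With the gap shrinking as 1/sqrt(n), cumulative regret is expected to grow
# on the order of sqrt(n) rather than log(n), consistent with the regime the
# abstract describes.
for n in [1_000, 10_000, 100_000]:
    print(n, thompson_two_arm(n))
```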