Computer and Information Science, University of Pennsylvania

Beginning at least as early as the 1950s, the long and still growing literature on no-regret learning establishes the following type of result: On any sequence of T trials in which the predictions of K "experts" are observed, it is possible to maintain a dynamically weighted prediction whose per-step regret to the best single expert in *hindsight* (that is, after the full sequence has been revealed) diminishes rapidly with T — typically at a rate of O(sqrt(log(K)/T)). It is a historically rich topic, having origins in statistics, game theory, and information theory, and enjoying active research in the modern machine learning community. This tutorial will attempt to describe simply some of the core ideas behind no-regret learning, and to provide some new perspectives and analytic techniques for understanding the strengths and weaknesses of no-regret algorithms. These will include some surprising empirical results on no-regret learning applied to the S&P 500, as well as recent theoretical results examining the trade-off between having small regret to the best expert and no regret to the average expert. The tutorial will be self-contained, and will assume no prior knowledge of any aspect of finance.
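The abstract does not specify which no-regret algorithm the tutorial covers, but the classic multiplicative-weights (Hedge) algorithm is a standard representative of the "dynamically weighted prediction" idea described above. The sketch below is illustrative only — the toy loss sequence and the learning-rate choice `eta = sqrt(2 ln K / T)` are assumptions, not details from the abstract:

```python
import math
import random

def hedge(expert_losses, eta):
    """Hedge / multiplicative-weights sketch: keep one weight per expert,
    incur the weighted-average loss each round, then exponentially
    down-weight each expert in proportion to its observed loss."""
    K = len(expert_losses[0])
    weights = [1.0] * K
    total_loss = 0.0
    for losses in expert_losses:  # each loss assumed to lie in [0, 1]
        norm = sum(weights)
        probs = [w / norm for w in weights]
        total_loss += sum(p * l for p, l in zip(probs, losses))
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss

# Toy sequence of T rounds with K experts, where expert 0 is consistently better.
random.seed(0)
T, K = 1000, 5
seq = [[random.random() * (0.2 if k == 0 else 1.0) for k in range(K)]
       for _ in range(T)]

eta = math.sqrt(2 * math.log(K) / T)  # a standard learning-rate choice
alg_loss = hedge(seq, eta)
best_loss = min(sum(seq[t][k] for t in range(T)) for k in range(K))
per_step_regret = (alg_loss - best_loss) / T
print(per_step_regret)  # small and shrinking with T, as the abstract describes
```

Running this shows the per-step regret landing near the theoretical O(sqrt(log(K)/T)) scale (about 0.03 for these parameters), illustrating the hindsight-regret guarantee informally stated in the abstract.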