We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an {\em arbitrary, unknown} input distribution. We give such {\em self-improving} algorithms for two problems: sorting a list of numbers and computing the Delaunay triangulation of a set of planar points. Each algorithm begins with a training phase during which it adjusts itself to the input distribution; this is followed by a stationary regime in which the algorithm settles into its optimized incarnation. In this setting, any algorithm must take expected time proportional to the input size $n$ plus the entropy $E$ of the output. Our algorithms achieve the optimal $O(n+E)$ expected running time in the stationary regime.
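To make the two-phase structure concrete, here is a minimal, hypothetical sketch (not the paper's actual construction) of a self-improving sorter: the training phase learns approximate pivots from sampled inputs, and the stationary phase buckets each element among those pivots so that, when the learned pivots match the distribution, buckets stay small and the work tracks the output entropy.

```python
import random
from bisect import bisect_left, insort

class SelfImprovingSorter:
    """Toy sketch of a self-improving sorter.

    Training phase: pool sampled inputs drawn from the (unknown)
    distribution and keep roughly n evenly spaced values as pivots.
    Stationary phase: route each element to the bucket determined by
    the learned pivots, keep each bucket sorted by insertion, and
    concatenate the buckets.
    """

    def __init__(self):
        self.pivots = None  # learned during training

    def train(self, training_inputs):
        # Pool and sort all training samples, then subsample pivots.
        pool = sorted(x for inp in training_inputs for x in inp)
        n = len(training_inputs[0])
        step = max(1, len(pool) // n)
        self.pivots = pool[::step]

    def sort(self, xs):
        if self.pivots is None:
            return sorted(xs)  # fallback before any training
        # Bucket i holds elements x with pivots[i-1] < x <= pivots[i],
        # so concatenating the buckets yields a sorted list.
        buckets = [[] for _ in range(len(self.pivots) + 1)]
        for x in xs:
            insort(buckets[bisect_left(self.pivots, x)], x)
        return [x for b in buckets for x in b]
```

When the input distribution is concentrated, most elements fall into a few predictable buckets, which is the intuition behind the entropy term $E$ in the running time; the paper's actual data structures are more refined than this sketch.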