A tunable measure for information leakage called maximal α-leakage is introduced. This measure quantifies the maximal gain of an adversary in refining a tilted version of its prior belief about any (potentially random) function of a dataset, given a disclosed dataset. The choice of α determines the specific adversarial action, ranging from refining a belief at α = 1 to guessing the most likely (maximum a posteriori) value at α = ∞; at these extremes the measure reduces to mutual information (MI) and maximal leakage (MaxL), respectively. For α between these extremes, the measure is shown to equal the Arimoto channel capacity of order α. Several properties of this measure are proven, including: (i) quasiconvexity in the mapping between the original and disclosed datasets; (ii) data processing inequalities; and (iii) a composition property.

Joint work with Jiachun Liao, Oliver Kosut, and Flavio du Pin Calmon.
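For reference, a minimal LaTeX sketch of the defining expressions, assuming the standard formulation of α-leakage and its maximal variant; the symbols $\mathcal{L}_\alpha$, $P_{\hat X \mid Y}$, and $I^{\mathrm{A}}_\alpha$ are notation introduced here for illustration, not taken from the abstract.

```latex
% Minimal sketch (not verbatim from the abstract): defining expressions
% for alpha-leakage and maximal alpha-leakage, assuming the standard
% formulation of these quantities.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% alpha-leakage: the adversary's multiplicative gain, under an
% (alpha-1)/alpha tilt, in refining its belief about X after seeing Y.
\[
\mathcal{L}_\alpha(X \to Y) \;=\;
  \frac{\alpha}{\alpha-1}\,
  \log \frac{\displaystyle \max_{P_{\hat X \mid Y}}
        \mathbb{E}\!\left[ P_{\hat X \mid Y}(X \mid Y)^{\frac{\alpha-1}{\alpha}} \right]}
       {\displaystyle \max_{P_{\hat X}}
        \mathbb{E}\!\left[ P_{\hat X}(X)^{\frac{\alpha-1}{\alpha}} \right]},
  \qquad \alpha \in (1, \infty).
\]

% Maximal alpha-leakage: worst case over all (possibly randomized)
% functions U of the original data, with U - X - Y a Markov chain.
\[
\mathcal{L}^{\max}_\alpha(X \to Y) \;=\;
  \sup_{U \,:\, U - X - Y} \mathcal{L}_\alpha(U \to Y).
\]

% For alpha in (1, infinity] this coincides with the Arimoto channel
% capacity of order alpha (the sup of Arimoto mutual information over
% input distributions); it recovers MI as alpha -> 1 and MaxL at
% alpha = infinity.
\[
\mathcal{L}^{\max}_\alpha(X \to Y) \;=\;
  \sup_{P_{\tilde X}} I^{\mathrm{A}}_\alpha(\tilde X; Y).
\]

\end{document}
```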