We analyze active learning algorithms, which receive the labels of examples only when they request them, and traditional passive (PAC) learning algorithms, which receive labels for all training examples, under log-concave and nearly log-concave distributions. We prove that active learning provides an exponential improvement over passive learning when learning homogeneous linear separators in these settings. For passive learning, we provide a computationally efficient algorithm with optimal sample complexity for such problems. We also provide new bounds for active and passive learning in the case where the data may not be linearly separable, both in the agnostic setting and under the Tsybakov low-noise condition.
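For reference, the two notions invoked above can be stated formally as follows; the notation is ours and does not appear in the abstract. A homogeneous (through-the-origin) linear separator with unit normal $w$ classifies points by
\[
  h_w(x) = \operatorname{sign}(w \cdot x), \qquad w \in \mathbb{R}^d,\ \|w\| = 1,
\]
and a distribution over $\mathbb{R}^d$ with density $f$ is log-concave if
\[
  f\bigl(\lambda x + (1-\lambda) y\bigr) \;\ge\; f(x)^{\lambda}\, f(y)^{1-\lambda}
  \quad \text{for all } x, y \in \mathbb{R}^d,\ \lambda \in [0,1],
\]
i.e., $\log f$ is concave; isotropic Gaussians and uniform distributions over convex bodies are standard examples.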