How do we draw valid conclusions from a given dataset?  This question is central to data analysis, yet it becomes much more difficult when earlier inferences guide later inquiry into the same dataset, a practice that often leads to accidental overfitting, p-hacking, and related forms of false discovery.  A recent line of work in the theory community (beginning with Dwork et al., 2015) has established mechanisms that provide low generalization error on adaptively chosen queries, yet significant gaps remain:  practitioners, for instance, routinely employ bootstrapping and related sampling approaches both to maintain validity and to speed up analysis, but prior to this work no theoretical analysis justified these techniques in the adaptive setting.  We show that such sampling techniques provably guarantee validity while significantly speeding up analysis.  We establish this for a wide class of queries, the low-sensitivity queries, and then apply the result to speed up adaptively chosen convex optimization queries.  We also give a method that yields statistically meaningful responses even when the mechanism is allowed to see only a constant number of samples from the data per query.
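To make the subsampling idea concrete, here is a minimal Python sketch. It illustrates the general technique rather than the paper's exact mechanism; the function name `subsample_answer` and its parameters are hypothetical. Each adaptively chosen query is answered on a fresh uniform random subsample of the data, so the mechanism both touches only a fraction of the records per query and limits how much any single record can influence an answer, which is the kind of stability underlying the generalization guarantees discussed above.

```python
import numpy as np

def subsample_answer(data, query, frac=0.1, rng=None):
    """Answer one adaptively chosen low-sensitivity query on a random subsample.

    Illustrative sketch: evaluating the query on a small uniform subsample
    cuts per-query computation from n points to roughly frac * n, and keeps
    any single record from having much effect on the returned value.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(data)
    m = max(1, int(frac * n))
    idx = rng.choice(n, size=m, replace=False)  # fresh subsample per query
    return query(data[idx])

# Adaptive usage: the second query is chosen after seeing the first answer.
data = np.random.default_rng(0).normal(size=(10_000, 5))
ans1 = subsample_answer(data, lambda s: s[:, 0].mean())
col = 1 if ans1 > 0 else 2                       # analyst adapts to ans1
ans2 = subsample_answer(data, lambda s: s[:, col].mean())
```

Drawing a fresh subsample for every query is the key design choice in this sketch: reusing one fixed subsample would let an adaptive analyst overfit to it just as easily as to the full dataset.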