We consider the problems of estimation, learning, and optimization over a large dataset of which only a subset consists of points drawn from the distribution of interest; we make no assumptions on the remaining points, which could be well-behaved, extremely biased, or adversarially generated. We investigate this question via the introduction of two new models for studying robust estimation, learning, and optimization. The first model, which we term the "semi-verified" model, assumes access to a second, much smaller (typically constant-sized) set of "verified" or trusted data points that have been drawn from the distribution of interest. The key question in this model is how a tiny but trusted dataset can allow for the accurate extraction of the information contained in the large but untrusted dataset. The second model, which we term "list-decodable learning", considers the task of returning a small list of proposed answers, with the guarantee that at least one of them is accurate. Underlying this model is the question of whether a significant fraction of adversarial datapoints can mask the structure of the "good" datapoints, or whether such points can only create additional false copies of that structure.
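To fix one concrete reading of the two models, the following formalization sketch may help; the symbols $\alpha$, $n$, $k$, $m$, $\epsilon$, the distribution $p$, and its parameter $\theta(p)$ are our own illustrative choices and are not fixed by the text above. In the semi-verified model, the input is an untrusted set $S$ of $n$ points, of which at least $\alpha n$ are drawn i.i.d. from a distribution $p$ with no assumptions on the rest, together with a verified set of $k \ll n$ (typically $k = O(1)$) points drawn i.i.d. from $p$; the goal is a single estimate $\hat{\theta}$ with $d(\hat{\theta}, \theta(p)) \le \epsilon$. In the list-decodable model, the verified set is absent, and the output requirement relaxes to a short list $\hat{\theta}_1, \ldots, \hat{\theta}_m$, with $m$ small (e.g., $m = O(1/\alpha)$), such that
\[
\min_{1 \le i \le m} d\bigl(\hat{\theta}_i, \theta(p)\bigr) \le \epsilon .
\]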