For most clustering problems, our true goal is to classify the points correctly, and commonly studied objectives such as k-median, k-means, and min-sum are really only proxies. That is, there is some unknown correct clustering (grouping proteins by their function, or grouping images by who appears in them), and the implicit hope is that approximately optimizing these objectives will in fact produce a clustering that is close pointwise to the correct answer. In this paper, we show that if we make this implicit assumption explicit (that is, if we assume that any c-approximation to the given clustering objective F is epsilon-close to the target), then we can produce clusterings that are O(epsilon)-close to the target, even for values of c for which obtaining a c-approximation is NP-hard. In particular, for the k-median and k-means objectives, we show that we can achieve this guarantee for any constant c > 1, and for the min-sum objective we can do this for any constant c > 2. Our results also highlight a difference between assuming that the optimal solution to, say, the k-median objective is epsilon-close to the target, and assuming that any approximately optimal solution is epsilon-close to the target. In the former case, the problem of finding a solution that is O(epsilon)-close to the target remains computationally hard, and yet in the latter case we have an efficient algorithm.
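The notion of "epsilon-close" above is pointwise: a clustering is epsilon-close to the target if, after matching up cluster names in the best possible way, at most an epsilon fraction of points are assigned differently. The sketch below (our own illustration, not code from the paper; the function name is hypothetical, and the brute-force search over label matchings is only practical for small k) makes this distance concrete.

```python
from itertools import permutations

def clustering_distance(labels_a, labels_b, k):
    """Fraction of points on which clustering A disagrees with clustering B,
    minimized over all k! ways of matching A's cluster names to B's.
    A clustering is epsilon-close to the target when this value is <= epsilon."""
    n = len(labels_a)
    best = n
    # Try every bijection between A's cluster names and B's cluster names.
    for perm in permutations(range(k)):
        mismatches = sum(1 for a, b in zip(labels_a, labels_b) if perm[a] != b)
        best = min(best, mismatches)
    return best / n

# Two clusterings of 8 points into k = 2 clusters: under the matching that
# swaps the two cluster names, they disagree on exactly one point.
target = [0, 0, 0, 0, 1, 1, 1, 1]
found  = [1, 1, 1, 0, 0, 0, 0, 0]
print(clustering_distance(target, found, 2))  # 0.125, i.e. 1 of 8 points
```

Note that the raw label sequences disagree almost everywhere; it is only after the optimal matching of cluster names that the distance is small, which is why the minimization over matchings is part of the definition.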