Faced with possible distribution shift, we wish to detect and quantify the shift, and to correct our classifiers, without seeing labels on the test set. Progress is possible if we assume either that inputs cause outputs (covariate shift), or that outputs cause inputs (label shift). Label shift problems abound in the wild, e.g., medical diagnosis, news categorization, image classification, and speech recognition. We introduce Black Box Shift Estimation (BBSE), a technique that exploits black-box predictors to produce consistent estimates of the test-set label distribution. For a fixed number of samples, better predictors give tighter estimates, but the method works even when predictors are biased, inaccurate, or uncalibrated, so long as their confusion matrices are not degenerate. First, we establish BBSE's consistency and bound its error. Next, we demonstrate BBSE's applicability to detecting shift. Finally, we operationalize the estimate to correct our models. Experiments demonstrate that BBSE accurately estimates the test-set label distribution and that shift correction significantly improves classifiers. Moreover, unlike kernel-based approaches, BBSE scales to large, high-dimensional datasets (such as images), and has no computational dependence on the data or ambient dimension.
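The estimation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's reference implementation: it assumes a held-out labeled validation set drawn from the source distribution, hard-label predictions from the black-box model, and an invertible empirical confusion matrix (the non-degeneracy condition mentioned above). The function name `bbse_weights` is our own choice.

```python
import numpy as np

def bbse_weights(val_y_true, val_y_pred, test_y_pred, num_classes):
    """Estimate importance weights w[k] = q(y=k) / p(y=k) under label shift.

    Solves the linear system C_hat @ w = mu_hat, where C_hat is the
    black-box predictor's joint confusion matrix on source validation data
    and mu_hat is the distribution of its predictions on the unlabeled test set.
    """
    n_val = len(val_y_true)
    # Joint confusion matrix: C[i, j] ~ P(f(x) = i, y = j) on the source.
    C = np.zeros((num_classes, num_classes))
    for yp, yt in zip(val_y_pred, val_y_true):
        C[yp, yt] += 1.0 / n_val
    # Distribution of black-box predictions on the test inputs.
    mu = np.bincount(test_y_pred, minlength=num_classes) / len(test_y_pred)
    # Requires C to be non-degenerate (invertible).
    w = np.linalg.solve(C, mu)
    # Weights are ratios of probabilities, so negative solutions are clipped.
    return np.clip(w, 0.0, None)
```

Multiplying these weights elementwise by the empirical source label distribution yields the estimated test-set label distribution, and the same weights can be used to reweight the training loss when retraining the corrected classifier.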