We consider the problem of feature-efficient prediction: a setting in which features have costs and the learner is limited by a budget constraint on the total cost of the features it can examine at test time. We focus on solving this problem with boosting, optimizing the choice of base learners during training and stopping the boosting process when the learner's budget runs out. We experimentally show that, in the case of random costs, our method improves upon a previous approach of Reyzin, which draws as many random samples as the budget allows from a trained AdaBoost ensemble. We also experimentally show that our method outperforms pruned decision trees, a natural budgeted classifier.
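As a rough illustration of this budget-aware boosting idea (not the paper's exact algorithm), the sketch below runs AdaBoost with decision stumps, charges each newly used feature's cost against a test-time budget, and halts boosting once the budget is exhausted; the names `budgeted_boost`, `feature_costs`, and `budget` are illustrative assumptions.

```python
# Minimal sketch, assuming unit decision stumps as base learners and a fixed
# per-feature cost vector; a feature's cost is charged only the first time it
# is used, since at test time it needs to be examined only once.
import numpy as np

def stump_predict(X, feat, thresh, sign):
    return sign * np.where(X[:, feat] <= thresh, 1, -1)

def best_stump(X, y, w):
    """Weighted-error-minimizing decision stump over all features and thresholds."""
    best = None
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (1, -1):
                err = np.sum(w[stump_predict(X, feat, thresh, sign) != y])
                if best is None or err < best[0]:
                    best = (err, feat, thresh, sign)
    return best

def budgeted_boost(X, y, feature_costs, budget, max_rounds=100):
    n = len(y)
    w = np.full(n, 1.0 / n)            # example weights, as in AdaBoost
    used, spent, ensemble = set(), 0.0, []
    for _ in range(max_rounds):
        err, feat, thresh, sign = best_stump(X, y, w)
        cost = 0.0 if feat in used else feature_costs[feat]
        if spent + cost > budget:      # feature budget exhausted: stop boosting
            break
        spent += cost
        used.add(feat)
        err = max(err, 1e-10)          # avoid division by zero for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, feat, thresh, sign)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()                   # reweight examples as in AdaBoost
        ensemble.append((alpha, feat, thresh, sign))
    return ensemble

def ensemble_predict(ensemble, X):
    score = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    return np.sign(score)
```

In this sketch the stump chosen each round is simply the one with the lowest weighted error; cost-aware selection of base learners, as described above, would additionally trade off a stump's error against the cost of any new feature it introduces.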