Conditional Gradient Descent methods are popular first-order methods for (smooth) constrained convex minimization. Relying on a linear programming oracle, these methods often outperform projected gradient descent whenever projections onto the feasible region are expensive. Unfortunately, even these methods may suffer from prohibitive running times if the linear programming oracle itself is expensive. In this talk, we will explore a general method to significantly speed up conditional gradient descent methods by replacing the linear programming oracle with a much easier and cheaper oracle, leading to real-world speedups of several orders of magnitude while maintaining identical theoretical convergence rates up to (small!) constant factors.
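
To make the idea concrete, the sketch below illustrates one common way such a cheaper oracle can be realized: cache previously returned vertices and only fall back to the full linear programming oracle when no cached vertex makes enough progress against a target value. This is a minimal illustration over the probability simplex, not the speaker's exact algorithm; the names `lp_oracle`, `lazy_lmo`, `phi`, and `K`, the step-size rule, and the progress test are all assumptions made for the example.

```python
# Hedged sketch: a Frank-Wolfe / conditional gradient loop whose linear
# minimization step first checks a cache of known vertices and only calls
# the (expensive) LP oracle when no cached vertex yields enough progress.
# All names and constants here are illustrative assumptions.
import numpy as np

def lp_oracle(grad):
    """Exact linear minimization over the probability simplex:
    argmin_{v in simplex} <grad, v> is attained at a coordinate vertex."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def lazy_lmo(grad, x, cache, phi, K=2.0):
    """Cheaper oracle: return any cached vertex v with progress
    <grad, x - v> >= phi / K; otherwise call the true LP oracle and either
    grow the cache or halve the progress target phi (hypothetical scheme)."""
    for v in cache:
        if grad @ (x - v) >= phi / K:
            return v, phi                    # cache hit: no LP call needed
    v = lp_oracle(grad)                      # expensive call, done rarely
    if grad @ (x - v) >= phi / K:
        cache.append(v)
        return v, phi
    return x, phi / 2.0                      # not enough progress: shrink phi

def lazified_frank_wolfe(grad_f, x0, steps=100):
    x, cache, phi = x0.copy(), [x0.copy()], 1.0
    for t in range(steps):
        g = grad_f(x)
        v, phi = lazy_lmo(g, x, cache, phi)
        gamma = 2.0 / (t + 2)                # standard step-size rule
        x = (1 - gamma) * x + gamma * v
    return x

# Example: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.1, 0.7, 0.2])
x_star = lazified_frank_wolfe(lambda x: 2 * (x - b), np.ones(3) / 3)
```

In this toy setting the LP oracle is trivially cheap, but the same loop structure applies when each oracle call is itself a costly optimization problem, which is where caching the returned vertices pays off.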