In many inference problems, resource constraints preclude using all available measurements. Active approaches therefore seek the subset of measurements that maximizes the expected information gain for an underlying inference task. Determining the optimal subset in such cases has exponential complexity. Here, we discuss performance guarantees for tractable greedy measurement selection algorithms in sequential estimation problems. We show that, for information-theoretic measures such as mutual information and Fisher information, the optimal choice can yield no better than twice the performance of the greedy choice, and that this bound is sharp. We also discuss additional online-computable bounds that are often tighter in practice.
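As a concrete illustration of the greedy selection rule discussed above, the following sketch considers the linear-Gaussian case, where the mutual information between the state and a set of scalar measurements has a closed form (half the log-ratio of prior to posterior determinants). The model setup and function names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def greedy_select(prior_cov, H_rows, noise_vars, k):
    """Greedily pick k scalar measurements y_i = h_i^T x + v_i that
    maximize mutual information I(x; y_S) for a jointly Gaussian model.

    In this setting the marginal MI gain of adding measurement i, given
    the current posterior covariance P, is 0.5 * log(1 + h_i^T P h_i / r_i),
    so each greedy step just picks the largest such gain.
    """
    P = prior_cov.copy()
    chosen = []
    for _ in range(k):
        gains = []
        for i, (h, r) in enumerate(zip(H_rows, noise_vars)):
            if i in chosen:
                gains.append(-np.inf)  # each measurement used at most once
            else:
                gains.append(0.5 * np.log(1.0 + h @ P @ h / r))
        best = int(np.argmax(gains))
        chosen.append(best)
        # Rank-one Kalman covariance update for the selected measurement.
        h, r = H_rows[best], noise_vars[best]
        Ph = P @ h
        P = P - np.outer(Ph, Ph) / (h @ P @ h + r)
    return chosen
```

With a unit prior covariance and one low-noise measurement that observes both state components, the greedy rule selects that measurement first, since its MI gain dominates; the exponential-complexity alternative the abstract mentions would instead score every size-k subset jointly.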