We present a new multi-scale navigation technique based on Bayesian experimental design, in which the criterion is to maximize mutual information. Rather than simply executing user navigation commands, the technique interprets user input to update its knowledge of the user's intended target; it then navigates to a new view that maximizes the mutual information expected from the user's subsequent input. Controlled experiments demonstrate that the technique is significantly faster than conventional "pan and zoom" navigation and requires fewer commands for distant targets, especially in non-uniform information spaces. We discuss possible applications to other human-computer interaction tasks.
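To make the navigation loop concrete, here is a minimal sketch in Python, not the paper's actual implementation: it assumes a hypothetical 1-D space of N candidate targets, a three-command input vocabulary (left / here / right), and an assumed command-error rate EPS. At each step it updates a Bayesian posterior over targets from the observed command, then moves to the view that maximizes the expected mutual information of the next command.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N candidate targets on a line; a "view" is a position.
N = 64
targets = np.arange(N)
COMMANDS = 3   # 0 = "target is left of view", 1 = "at view", 2 = "right of view"
EPS = 0.05     # assumed probability mass on erroneous user commands

def likelihood(view):
    """p(command | target, view) as an (N, COMMANDS) table for one view."""
    lik = np.full((N, COMMANDS), EPS / 2)
    lik[targets < view, 0] = 1 - EPS
    lik[targets == view, 1] = 1 - EPS
    lik[targets > view, 2] = 1 - EPS
    return lik   # each row already sums to 1

def expected_information_gain(prior, lik):
    """Mutual information I(target; command | view) in bits."""
    marginal = prior @ lik                      # p(command | view)
    ratio = lik / marginal                      # p(c | t, v) / p(c | v)
    return float(prior @ (lik * np.log2(ratio)).sum(axis=1))

def bayes_update(prior, lik_column):
    """Posterior over targets after observing one command (Bayes' rule)."""
    post = prior * lik_column
    return post / post.sum()

def best_view(prior):
    """Navigate to the view whose next command is most informative."""
    gains = [expected_information_gain(prior, likelihood(v)) for v in range(N)]
    return int(np.argmax(gains))

def navigate(true_target, max_steps=25):
    prior = np.full(N, 1.0 / N)                 # uniform initial belief
    for step in range(1, max_steps + 1):
        view = best_view(prior)
        lik = likelihood(view)
        cmd = rng.choice(COMMANDS, p=lik[true_target])   # simulated user input
        prior = bayes_update(prior, lik[:, cmd])
        if prior.max() > 0.99:                  # belief has converged on one target
            break
    return step, int(prior.argmax())

steps, guess = navigate(true_target=42)
print(f"identified target {guess} after {steps} commands")
```

In this toy model, with a uniform prior the information-maximizing view behaves roughly like a noise-tolerant binary search, which illustrates why the technique needs fewer commands for distant targets; a non-uniform prior would instead pull the chosen views toward regions of high target probability.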