In this work, a unified framework for joint state estimation and control design is proposed for a system modeled by a discrete-time, finite-state Markov chain whose sensing capabilities are adaptively allocated. The observation is a function of both the underlying state and an exogenous control input. The discrete state of the Markov chain is estimated by a recursive Kalman-like filter derived via an innovations approach. The control strategy, obtained via stochastic dynamic programming, is designed to optimize the filter’s performance. The resulting partially observable Markov decision process is nonlinear in the predicted state estimate, which complicates determination of the control policy. Applications such as body sensing and active classification of time-varying systems are considered.
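As a rough illustration of the two ingredients named above, the following minimal Python sketch shows a recursive Bayesian filter for a finite-state Markov chain whose observation likelihoods depend on a sensing action, together with a myopic (one-step, entropy-based) choice of that action standing in for the full stochastic dynamic program. All names, matrices, and numbers are hypothetical assumptions for illustration only, not the paper's actual equations or policy.

```python
import numpy as np

def filter_update(pi, A, B_u, y):
    """One recursion of the finite-state (HMM) filter.

    pi  : current posterior over states, shape (n,)
    A   : transition matrix, A[i, j] = P(x_{k+1}=j | x_k=i)
    B_u : observation likelihoods under sensing action u,
          B_u[j, y] = P(y_k = y | x_k = j, u)
    y   : observed symbol (integer index)
    """
    pred = A.T @ pi                      # predicted state distribution
    post = B_u[:, y] * pred              # correction by observation likelihood
    return post / post.sum()             # normalize to a probability vector

def myopic_action(pi, A, B_list):
    """Choose the sensing action minimizing expected one-step posterior entropy
    (a simple stand-in for the stochastic dynamic program)."""
    pred = A.T @ pi
    best_u, best_cost = None, np.inf
    for u, B_u in enumerate(B_list):
        cost = 0.0
        for y in range(B_u.shape[1]):
            p_y = B_u[:, y] @ pred       # probability of symbol y under action u
            if p_y <= 0:
                continue
            post = B_u[:, y] * pred / p_y
            cost += p_y * -np.sum(post * np.log(post + 1e-12))
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Hypothetical example: 2 states, 2 sensing actions, 2 observation symbols.
# Action 0 senses state 0 reliably; action 1 senses state 1 reliably.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B_list = [np.array([[0.95, 0.05], [0.50, 0.50]]),
          np.array([[0.50, 0.50], [0.05, 0.95]])]
pi = np.array([0.5, 0.5])
u = myopic_action(pi, A, B_list)
pi = filter_update(pi, A, B_list[u], y=0)
```

In this sketch the coupling between estimation and sensing control appears only through the action-dependent likelihood matrices; the paper's policy is instead computed over the full horizon, where the cost's nonlinearity in the predicted state estimate is what makes the dynamic program difficult.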