The modern complexity-based framework of convex optimization typically requires the gradient or Hessian of the objective function to be bounded. However, this condition does not always hold: exceptions include positron emission tomography, portfolio selection, and quantum state tomography (QST). While the first two applications can be addressed by Csiszár's alternating minimization algorithm, the corresponding quantum analogue for QST is lacking. In this talk, I will present almost condition-free convergence guarantees for mirror descent-type algorithms, requiring only that the objective function be first- or second-order continuously differentiable. Numerical results on real datasets show that entropic mirror descent with Armijo line search is empirically the fastest guaranteed-to-converge algorithm for QST.
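To give a flavor of the method, here is a minimal sketch of entropic mirror descent with Armijo backtracking in its classical form, minimizing a smooth convex function over the probability simplex (the QST version operates analogously on density matrices). The function names, step-size parameters, and the log-likelihood test objective below are illustrative assumptions, not the talk's actual implementation.

```python
import numpy as np

def entropic_md_armijo(f, grad, x0, iters=200, eta0=1.0, sigma=0.5, beta=0.5):
    """Entropic mirror descent on the probability simplex with Armijo line search.

    Illustrative sketch: f is the objective, grad its gradient, x0 a point on
    the simplex. sigma is the Armijo sufficient-decrease parameter, beta the
    backtracking shrink factor.
    """
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        eta = eta0
        while True:
            # Entropic (multiplicative) update; subtracting g.max() only
            # rescales by a constant, which normalization removes, but it
            # avoids overflow in the exponential.
            y = x * np.exp(-eta * (g - g.max()))
            y /= y.sum()
            # Armijo condition: accept the step once it achieves a fraction
            # sigma of the decrease predicted by the linearization.
            if f(y) <= f(x) + sigma * g.dot(y - x) or eta < 1e-12:
                break
            eta *= beta
        x = y
    return x
```

Note that a log-likelihood objective such as `f(x) = -sum(w * log(A @ x))` has an unbounded gradient near the simplex boundary, which is exactly the regime where the standard bounded-gradient assumptions fail but mirror descent with line search still applies.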