We present a novel algorithm for online convex optimization, which can be viewed as a robust version of Optimistic Mirror Descent, that achieves the optimal $O(\sqrt{T})$ regret without knowing the time horizon. We further design a simple signaling scheme that combines our algorithm with Optimistic Mirror Descent for computing the Nash Equilibrium of zero-sum games. Our bounds simultaneously achieve the best known rates for both honest and adversarial regret, and we resolve the open problem of removing the $\log T$ term in the convergence to the Nash Equilibrium.
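As background for the quantities named above (a standard formulation, not necessarily the authors' exact definitions; the symbols $f_t$, $x_t$, $\hat{x}_t$, $\mathcal{X}$, $R$, $D_R$, $\eta$, $g_t$, and $m_t$ are notational assumptions introduced here), the regret over $T$ rounds against the best fixed action, and the classical Optimistic Mirror Descent update with regularizer $R$, Bregman divergence $D_R$, step size $\eta$, loss gradient $g_t$, and optimistic prediction $m_t$ (e.g., $m_t = g_{t-1}$), can be written as
\[
  \mathrm{Reg}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
\]
\[
  x_t = \operatorname*{arg\,min}_{x \in \mathcal{X}} \bigl\{ \eta \langle m_t, x\rangle + D_R(x, \hat{x}_{t-1}) \bigr\},
  \qquad
  \hat{x}_t = \operatorname*{arg\,min}_{x \in \mathcal{X}} \bigl\{ \eta \langle g_t, x\rangle + D_R(x, \hat{x}_{t-1}) \bigr\}.
\]
The robust variant presented in this work modifies this template; the display above is only a sketch of the baseline it builds on.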