We analyze the redundancy incurred by imposing a strict end-to-end delay constraint in lossless source coding. Traditionally, this problem has been addressed by considering dictionary-based coding schemes and identifying the delay with the maximal phrase length; this restrictive approach yields a redundancy that decays only polynomially with the delay constraint. We show, however, that with general coding schemes that allow the encoder and decoder to sequentially emit bits and symbols as soon as they are resolved, the redundancy can be made to decay exponentially with the delay constraint. We characterize this fundamental tradeoff by deriving upper and lower bounds on the achievable redundancy-delay exponent as a function of the source.