Modern communication and computation systems are composed of large networks of physical devices. Although individually unreliable, these devices can collectively provide reliable service through information redundancy, by duplicating data or replicating computations. Judicious management of system resources under this approach can yield substantial performance improvements. Prime examples of this paradigm include content access from multiple caches in content delivery networks and master/slave computations on compute clusters. Many recent contributions in the area have established bounds on the latency performance of redundant implementations. Following a similar line of research, this work introduces new analytical bounds and approximation techniques for the latency-redundancy tradeoff, covering a range of system loads and two popular redundancy schemes.