Today, many real-world machine learning and data analytics problems are of a scale that requires distributed optimization; unlike in centralized computing, these systems are vulnerable to network and node failures. Recently, coding-theoretic ideas have been applied to mitigate node failures in such distributed computing networks. Relaxing the exact recovery requirement of such techniques, we propose a novel approach for adding redundancy to large-scale convex optimization problems, making solvers more robust against sudden and persistent loss of data. This is done by linearly encoding the data variables; all other aspects of the computation operate as usual. We show that under moderate amounts of redundancy, it is possible to recover a close approximation to the solution even when sizable fractions of nodes fail. In particular, we show that encoding with (equiangular) tight frames results in bounded objective error, and we explore some specific constructions. We also numerically demonstrate the performance of the proposed technique on three machine learning problems, namely ridge regression, binary support vector machine classification, and low-rank approximation, two of which use real-world datasets.
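To make the encoding step concrete, the following is a minimal sketch for a least-squares instance, used here only as a stand-in for the general convex problems considered; the symbols $A$, $b$, $w$, $S$, and the dimensions are illustrative and not tied to the paper's notation. Given data $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$, the rows are linearly encoded by a matrix $S \in \mathbb{R}^{p \times m}$ with $p > m$ whose columns form a tight frame, i.e., $S^\top S \propto I_m$, and the solver is run on the encoded data $(SA, Sb)$ as it would be on the original data:
\[
\min_{w \in \mathbb{R}^{n}} \; \left\| S A w - S b \right\|_2^2 .
\]
Because the encoded rows are redundant, losing the subset of rows of $(SA, Sb)$ held by failed nodes still leaves enough information to recover an approximation to the original solution.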