This talk considers the problem of recovering a nonnegative sparse signal x from highly corrupted linear measurements y = Ax + e, where e is an unknown error vector whose nonzero entries may be unbounded. Motivated by an observation from face recognition in computer vision, we show that for highly correlated (and possibly overcomplete) dictionaries A, any nonnegative, sufficiently sparse signal x can be recovered by solving an L1-minimization problem: min_{x,e} ||x||_1 + ||e||_1 subject to y = Ax + e. More precisely, if the fraction of corrupted entries is bounded away from one and the support of x grows sublinearly in the dimension of the observation, then asymptotically, the above L1-minimization succeeds for all signals x and almost all sign-and-support patterns of e. This result suggests that accurate recovery of sparse signals is possible and computationally feasible even with nearly 100% of the observations corrupted. We further show how these techniques extend to the problem of recovering a low-rank matrix from a single highly corrupted observation, by combining the nuclear-norm relaxation of the rank with the L1-norm relaxation of the error term. This yields an efficient and provably correct algorithm for this idealized "Robust Principal Component Analysis" problem.
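The L1-minimization above is a linear program once the error e is split into its positive and negative parts. The following sketch, with illustrative dimensions and a random Gaussian dictionary chosen for demonstration (the talk's theory concerns highly correlated dictionaries, which this toy A is not), solves min ||x||_1 + ||e||_1 subject to y = Ax + e with x >= 0 using SciPy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative sizes (not from the talk): m measurements, n dictionary atoms,
# k nonzeros in x, f grossly corrupted entries in y.
m, n, k, f = 100, 60, 5, 10

A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns

x_true = np.zeros(n)                           # nonnegative, sparse signal
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

e_true = np.zeros(m)                           # sparse but unbounded errors
e_true[rng.choice(m, f, replace=False)] = 10.0 * rng.standard_normal(f)

y = A @ x_true + e_true

# LP reformulation: write e = ep - em with ep, em >= 0, so that
# min 1'x + 1'ep + 1'em  s.t.  A x + ep - em = y,  x, ep, em >= 0
# has the same optimum as min ||x||_1 + ||e||_1 (x is already nonnegative).
c = np.ones(n + 2 * m)
A_eq = np.hstack([A, np.eye(m), -np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))

x_hat = res.x[:n]
e_hat = res.x[n:n + m] - res.x[n + m:]
print(np.linalg.norm(x_hat - x_true))          # near zero on exact recovery
```

Since the combined sparsity k + f is small relative to m here, the LP typically returns the true (x, e) pair exactly, up to solver tolerance.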
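For the low-rank extension, the convex program min ||L||_* + lam ||S||_1 subject to L + S = M can be solved by a standard augmented-Lagrangian / ADMM scheme that alternates singular-value thresholding and elementwise soft-thresholding. This is a minimal sketch of that generic scheme, not the talk's specific algorithm; the parameter choices (lam = 1/sqrt(max(m, n)), the mu heuristic) and the synthetic test matrix are illustrative assumptions:

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=1000):
    """Solve min ||L||_* + lam ||S||_1  s.t.  L + S = M  by ADMM."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # common theoretical choice
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()     # common step-size heuristic
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)                        # dual variable
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                           # primal residual
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S

# Synthetic test: rank-2 matrix plus ~5% gross sparse corruption.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))
S0 = np.zeros((100, 100))
mask = rng.random((100, 100)) < 0.05
S0[mask] = rng.uniform(-10.0, 10.0, mask.sum())

L_hat, S_hat = rpca(L0 + S0)
print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))  # small relative error
```

Each iteration applies the two proximal maps and a dual update; on this well-conditioned example the low-rank and sparse components separate to high accuracy.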