We propose a unified approximate message passing solution to bilinear matrix recovery, with applications to dictionary learning, low-rank matrix completion, and robust PCA. In particular, we consider recovering the matrices D, X, and S from Y = P(DX) + S + W, where DX is a low-rank matrix product, S models sparse disturbances, and W models dense disturbances. Here, P(.) is a known element-wise mask, and X is sparse in the dictionary-learning case. We dub our approach Bilinear Generalized Approximate Message Passing (BiG-AMP), after Rangan's generalized approximate message passing (GAMP). A salient feature of our approach is that it avoids computing matrix SVDs and is therefore computationally efficient. We show empirically that its performance compares favorably with that of state-of-the-art matrix-recovery methods.
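To make the observation model concrete, the following NumPy sketch generates synthetic data consistent with Y = P(DX) + S + W. All dimensions, the mask density, the outlier fraction, and the noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) problem sizes: D is m x r, X is r x n,
# so the product D @ X has rank at most r (low-rank).
m, r, n = 50, 5, 60
D = rng.standard_normal((m, r))
X = rng.standard_normal((r, n))  # would be sparse in the dictionary-learning case

# P: known element-wise mask (True where an entry is observed);
# the 0.8 observation rate is an arbitrary choice for this sketch.
P = rng.random((m, n)) < 0.8

# S: sparse disturbance, modeled here as a few large outliers.
S = np.zeros((m, n))
outliers = rng.random((m, n)) < 0.05
S[outliers] = 10 * rng.standard_normal(np.count_nonzero(outliers))

# W: dense disturbance, modeled here as small i.i.d. Gaussian noise.
W = 0.01 * rng.standard_normal((m, n))

# The observed matrix per the model in the abstract.
Y = P * (D @ X) + S + W
```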