Reed–Muller (RM) codes are now known to exhibit good performance under maximum-likelihood (ML) decoding owing to their highly symmetric structure. In this paper, we explore whether this code symmetry can also be exploited to achieve near-ML performance in practice. The idea is to consider RM and BCH codes and apply iterative decoding to a highly redundant parity-check (PC) matrix whose rows are exactly the minimum-weight dual codewords. As examples, we consider the peeling decoder for the binary erasure channel, linear-programming and belief-propagation (BP) decoding for the binary-input additive white Gaussian noise channel, and bit-flipping and BP decoding for the binary symmetric channel. For short block lengths, it is shown that near-ML performance can indeed be achieved in many cases. We also propose a method to tailor the PC matrix to the received observation by selecting only a small fraction of useful minimum-weight PCs before decoding begins. This both improves performance and significantly reduces complexity compared to using the full set of minimum-weight PCs. Finally, we also test whether the use of learned scaling parameters for BP decoding can improve performance.
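As a concrete illustration of the decoding principle described above, the following sketch builds a redundant PC matrix from all minimum-weight dual codewords and runs a peeling decoder over the binary erasure channel. It uses the small self-dual RM(1,3) code, i.e., the extended (8,4) Hamming code, whose dual is the code itself; the generator matrix and erasure pattern are illustrative assumptions, not the paper's exact experimental setup.

```python
import itertools
import numpy as np

# Generator matrix of RM(1,3), the extended (8,4) Hamming code (self-dual).
# Chosen here only as a small worked example.
G = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [0, 0, 1, 1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def all_codewords(G):
    """Enumerate all 2^k codewords by brute force (fine for this tiny code)."""
    k = G.shape[0]
    return [(np.array(b, dtype=np.uint8) @ G) % 2
            for b in itertools.product([0, 1], repeat=k)]

# Since RM(1,3) is self-dual, its dual codewords come from the same G.
# Keep only the minimum-weight nonzero ones as rows of the redundant PC matrix.
duals = [c for c in all_codewords(G) if c.any()]
wmin = min(int(c.sum()) for c in duals)
H = np.array([c for c in duals if int(c.sum()) == wmin])  # 14 weight-4 checks

def peel(y, erased, H):
    """Peeling decoder on the BEC: resolve any check touching exactly one erasure."""
    y, erased = y.copy(), erased.copy()
    progress = True
    while erased.any() and progress:
        progress = False
        for h in H:
            idx = np.flatnonzero(h & erased)
            if len(idx) == 1:  # this PC determines the single erased bit
                known = np.flatnonzero(h & ~erased)
                y[idx[0]] = y[known].sum() % 2  # parity of the known bits
                erased[idx[0]] = False
                progress = True
    return y, erased

# Usage: erase three positions of a codeword and recover them by peeling.
c = all_codewords(G)[5]
erased = np.zeros(8, dtype=bool)
erased[[0, 3, 6]] = True
y = c.copy()
y[erased] = 0
rec, still_erased = peel(y, erased, H)
```

With the full set of 14 minimum-weight checks, the peeling decoder has many more opportunities to find a check with a single erasure than with a non-redundant 4-row PC matrix, which is exactly the benefit of the redundant representation.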