We discuss systems that tolerate circuit faults in order to decrease energy consumption, while preserving a desired performance level without hardware compensation mechanisms. This approach is suitable for implementing algorithms that include a form of redundancy, which allows the system to reject hardware errors. We present results for two types of systems: decoders for LDPC codes, and inference engines based on deep learning. For LDPC decoders, we present an approach to accurately model the deviations introduced by timing violations in a state-of-the-art circuit architecture, and show that energy consumption can be reduced by allowing faults to occur in the circuit, without compromising performance or circuit area. For deep neural networks, we study faulty implementations of convolutional neural networks and multi-layer perceptrons using pessimistic deviation models. We examine how hyperparameters such as the number of layers affect resilience. We also identify the situations in which a faulty implementation can achieve the same performance as a reliable one, and quantify the required increase in network size.
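To illustrate what a pessimistic deviation model can look like for a neural network, the following is a minimal Python/NumPy sketch of fault injection into a multi-layer perceptron's forward pass. The function name `faulty_forward`, the fault rate `p_fault`, and the specific corruption (a fault replaces a value with an arbitrary number in the activation's range) are illustrative assumptions, not the exact models evaluated in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def faulty_forward(x, weights, biases, p_fault=1e-3):
    """Forward pass of a ReLU MLP in which each neuron output may be
    corrupted independently with probability p_fault. Pessimistic
    assumption: a fault destroys all information in the value,
    replacing it with an arbitrary in-range number rather than
    adding a small noise term."""
    for W, b in zip(weights, biases):
        x = np.maximum(W @ x + b, 0.0)           # ReLU layer
        faults = rng.random(x.shape) < p_fault   # iid fault locations
        corrupted = rng.uniform(0.0, x.max() + 1e-9, x.shape)
        x = np.where(faults, corrupted, x)
    return x

# Example: a hypothetical 3-layer MLP on a random input
dims = [16, 32, 32, 10]
weights = [rng.normal(0, 0.1, (dims[i + 1], dims[i])) for i in range(3)]
biases = [np.zeros(dims[i + 1]) for i in range(3)]
y = faulty_forward(rng.normal(size=16), weights, biases, p_fault=1e-2)
```

Under a model of this kind, sweeping `p_fault` and the network depth or width gives the sort of resilience comparison described above: one can measure how much larger a faulty network must be to match the accuracy of a reliable one.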