A wide range of new nano-scale technologies is being investigated for future computer memory systems. The main challenge is that in nano-scale systems both the storage elements and the logic gates are inherently unreliable. This is in contrast to state-of-the-art systems, where only the memory elements are unreliable, while the error-correction encoders and decoders are built from reliable logic gates. The set of problems that will be addressed can be condensed into the following question: given n memory cells and m universal logic gates that fail according to a known random mechanism, what is the optimal memory architecture that stores the maximum number of information bits for the longest period of time with arbitrarily low probability of error? This complex problem can be divided and reformulated in many ways, but, interestingly, even some of the most fundamental questions related to it remain unanswered. In this talk we will give an overview of problems related to designing highly reliable memories made of unreliable components and present some new results on the characterization of such memories in terms of their complexity and their ability to retain stored information.
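
As a rough, hedged formalization of the central question (the notation below is assumed for illustration and does not appear in the abstract: message length k, component-failure parameter p, retention time T, error tolerance \varepsilon, and architecture class \mathcal{A}(n,m)), one may ask for

\[
  k^{*}(n,m,p,T,\varepsilon) \;=\; \max\Bigl\{\, k \;:\; \exists\, A \in \mathcal{A}(n,m) \text{ such that } \Pr\bigl[\hat{\mathbf{u}}_{T} \neq \mathbf{u}\bigr] \le \varepsilon \,\Bigr\},
\]

where \mathcal{A}(n,m) denotes the memory architectures built from n storage cells and m universal gates, each component failing according to the given random mechanism with parameter p, \mathbf{u} \in \{0,1\}^{k} is the stored message, and \hat{\mathbf{u}}_{T} is the message recovered after T time steps.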