Computational approaches to problems in Natural Language Understanding and Information Extraction are often modeled as structured predictions – predictions that involve assigning values to sets of interdependent variables. Over the last few years, one of the most successful approaches to studying these problems has been Constrained Conditional Models (CCMs), an Integer Linear Programming formulation that augments probabilistic models with declarative constraints as a way to support such decisions. I will present research within this framework, concentrating on learning and inference issues; in particular, I will discuss recent results on Amortized Inference – learning to accelerate inference over the lifetime of the learning system.
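To make the two ideas above concrete, here is a minimal sketch, not the talk's actual system: a CCM-style constrained argmax over binary decision variables (solved by brute force for clarity, where real CCMs would call an ILP solver), together with a sufficient condition for amortized reuse in the spirit of this line of work – a cached solution remains optimal when every objective coefficient moves in the direction that favors the cached assignment. All function names and the toy constraint are hypothetical illustrations.

```python
from itertools import product

def ccm_argmax(scores, constraint):
    """Constrained argmax: the binary assignment y maximizing
    sum_i scores[i] * y[i] subject to a declarative constraint.
    Brute force for clarity; real CCMs solve this as an ILP."""
    best, best_val = None, float("-inf")
    for y in product((0, 1), repeat=len(scores)):
        if not constraint(y):
            continue
        val = sum(s * yi for s, yi in zip(scores, y))
        if val > best_val:
            best, best_val = y, val
    return best

def reusable(cached_scores, cached_y, new_scores):
    """Sufficient condition for amortized reuse (assuming the two
    instances share the same feasible set): the cached solution stays
    optimal if (2*y_i - 1) * (c_new_i - c_old_i) >= 0 for all i."""
    return all((2 * yi - 1) * (cn - co) >= 0
               for yi, cn, co in zip(cached_y, new_scores, cached_scores))

# Toy instance: three binary decisions with the declarative
# constraint "at most two variables may be on".
at_most_two = lambda y: sum(y) <= 2

old_scores = [2.0, 1.0, -0.5]
y_star = ccm_argmax(old_scores, at_most_two)    # solve once, cache

new_scores = [2.5, 1.2, -1.0]                   # a later instance
if reusable(old_scores, y_star, new_scores):
    y_new = y_star                              # reuse: no solver call
else:
    y_new = ccm_argmax(new_scores, at_most_two) # fall back to inference
print(y_star, y_new)                            # (1, 1, 0) (1, 1, 0)
```

The reuse check is cheap (linear in the number of variables), so when many test-time instances share the same constraint structure, the expensive inference step can often be skipped entirely – the intuition behind amortizing inference cost over the lifetime of the system.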