It is becoming increasingly common for humans and AI systems to operate in cooperative environments. In such environments, it is important that humans trust the AI to make rational decisions, since the AI is partially responsible for the success or failure of a task. Sometimes, whether through malfunction or misunderstanding, these systems behave in ways that run counter to what humans expect, which can lead humans to distrust them. One way to foster trust between humans and AI systems is to create “explainable” AI systems that can explain the reasoning behind their own behavior. In this talk, Brent Harrison will discuss how machine learning can be used to create an AI system that provides human-understandable rationalizations of its motives for performing certain actions.