Multidimensional (tensor) data appear in contemporary and emerging signal processing and machine-learning applications. One way to model such data is to impose structure that limits the number of free parameters. In representation learning, the goal is to identify, in a data-driven way, an efficient representation of observed data. Dictionary learning represents data via (sparse) linear combinations of atoms. One way of imposing structure in tensor dictionary learning is to assume a Kronecker structure on the dictionary, so that each mode of the tensor has its own factor dictionary. This talk will describe progress on learning such structured representations for tensors of arbitrary order.
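As a minimal illustration of the Kronecker structure mentioned above (not the talk's specific algorithm), consider a second-order tensor, i.e. a matrix X. With per-mode dictionaries D1 and D2 and a sparse coefficient matrix S, the mode-wise model X = D1 S D2ᵀ is equivalent to the vectorized model vec(X) = (D2 ⊗ D1) vec(S), where ⊗ is the Kronecker product and vec is column-major vectorization. All sizes below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 4x5 second-order tensor X, with per-mode
# dictionaries D1 (4x6 atoms for mode 1) and D2 (5x7 atoms for mode 2).
D1 = rng.standard_normal((4, 6))
D2 = rng.standard_normal((5, 7))

# Sparse coefficient matrix S: only a few active atom pairs.
S = np.zeros((6, 7))
S[1, 2] = 1.5
S[4, 0] = -0.7

# Mode-wise reconstruction: apply each factor dictionary along its mode.
X = D1 @ S @ D2.T

# Equivalent Kronecker form: vec(X) = (D2 kron D1) vec(S),
# using column-major (Fortran-order) vectorization.
vec_X = np.kron(D2, D1) @ S.flatten(order="F")

# The two formulations agree.
assert np.allclose(X.flatten(order="F"), vec_X)
```

The Kronecker form shows why the structure is efficient: instead of learning one large dictionary of size (4·5) × (6·7), one learns two small factor dictionaries, and the same idea extends mode-by-mode to tensors of arbitrary order.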