In this talk I'll show how algorithms for convex optimization can be used to derive lower bounds against general convex relaxations for constraint satisfaction problems. The technique relies on proving that standard stochastic convex optimization problems can be solved efficiently in the statistical query (SQ) model, an oracle model in which access to random samples is restricted to estimating expectations of bounded functions of a sample. It is known that many classic constraint satisfaction problems, including 3-SAT, have high complexity in the SQ model. These lower bounds, together with our SQ algorithms for stochastic convex optimization, imply that broad classes of convex relaxations cannot be used to solve constraint satisfaction problems with high SQ complexity. Based on joint work with Cristobal Guzman and Santosh Vempala (SODA 2017, https://arxiv.org/abs/1512.09170).
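To make the SQ model concrete, here is a minimal sketch (my own illustrative code, not from the paper) of a simulated SQ oracle: instead of handing the algorithm raw samples, it answers queries g with values in [0, 1] by an estimate of E[g(x)] accurate only up to a tolerance tau. The names `sq_oracle`, `dist_sampler`, and the choice of simulating the oracle by sample averaging plus rounding to a tau-grid are assumptions for illustration.

```python
import random

def sq_oracle(dist_sampler, tau, n_samples=10000):
    """Simulated statistical query oracle (illustrative sketch).

    dist_sampler: draws one example x from the unknown distribution.
    tau: tolerance; any answer within tau of E[g(x)] is valid.
    Returns a function that answers a query g: X -> [0, 1].
    """
    def answer(g):
        # Estimate the expectation by averaging over fresh samples.
        est = sum(g(dist_sampler()) for _ in range(n_samples)) / n_samples
        # Round to the tau-grid: the oracle is only required to be
        # tau-accurate, so the algorithm must tolerate this coarseness.
        return round(est / tau) * tau
    return answer

# Usage: an SQ algorithm estimating the bias of a coin with P(1) = 0.7,
# seeing only tolerance-limited expectations rather than the samples.
random.seed(0)
oracle = sq_oracle(lambda: 1 if random.random() < 0.7 else 0, tau=0.05)
print(oracle(lambda x: x))
```

The point of the model is exactly this restriction: any algorithm that only ever consumes tau-accurate expectations can be analyzed (and lower-bounded) independently of how the oracle is implemented.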