Title: Hardness results for agnostic learning

One of the central algorithmic tasks in learning theory is to learn an unknown classification rule (hypothesis) from a set of training examples so that future examples can be classified. The hypothesis output by the learner is required to belong to a certain "concept class" which is assumed to contain a good classification rule for the data. Under the promise that the concept class contains a perfect classifier that is correct on all training examples, the learning problem amounts to finding a hypothesis from the class that is consistent with the training data. For some concept classes, such as the class of monomials, decision lists, XOR functions, or halfspaces, this problem is solvable in polynomial time.

But what if there is no such consistent hypothesis? This is often the case in practice, where one does not have accurate background knowledge about the correct hypothesis that explains the data. Moreover, even if the original data is perfectly explained by a hypothesis from the assumed concept class, the actual training data could be noisy. In such a case, a natural goal, called "agnostic learning," is to output a hypothesis from the class that correctly classifies as many examples as possible.

A celebrated result due to Håstad (1997) showed that for the class of XOR functions, even a tiny amount of noise makes it NP-hard to learn an XOR function with any non-trivial agreement. Similar results were recently obtained for the class of monomials and halfspaces. In this talk, I will discuss these hardness results for agnostic learning, focusing mostly on the following tight hardness result for learning halfspaces: for arbitrary constants $\epsilon, \delta > 0$, given training data from the hypercube a $(1-\epsilon)$ fraction of which can be explained by a halfspace, it is NP-hard to find a halfspace that correctly labels a $(1/2+\delta)$ fraction of the examples.

Based on joint work with Prasad Raghavendra.
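
To make the contrast concrete, the noise-free case mentioned above (finding a halfspace consistent with all examples) reduces to linear programming feasibility and is therefore solvable in polynomial time. The following is a minimal illustrative sketch, assuming NumPy and SciPy are available; the helper name consistent_halfspace and the toy majority-function data are chosen here for illustration and are not part of the results discussed in the talk.

```python
import numpy as np
from scipy.optimize import linprog

def consistent_halfspace(X, y):
    """Look for (w, b) with y_i * (w . x_i + b) >= 1 for every example.

    If some halfspace classifies all examples correctly (strictly), then
    by rescaling such a (w, b) exists, so this is an LP feasibility
    problem. Returns (w, b) on success, or None otherwise.
    """
    m, n = X.shape
    # Variables z = [w_1, ..., w_n, b]; we only need feasibility, so the
    # objective is the zero vector.
    c = np.zeros(n + 1)
    # y_i * (w . x_i + b) >= 1   <=>   -y_i * [x_i, 1] @ z <= -1
    A_ub = -y[:, None] * np.hstack([X, np.ones((m, 1))])
    b_ub = -np.ones(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (n + 1), method="highs")
    if res.success:
        return res.x[:n], res.x[n]
    return None

# Toy usage: labels given by the majority function on {-1,+1}^3, which is
# itself a halfspace, so the LP is feasible and a separator is returned.
X = np.array([[s1, s2, s3] for s1 in (-1, 1)
                             for s2 in (-1, 1)
                             for s3 in (-1, 1)])
y = np.sign(X.sum(axis=1))
print(consistent_halfspace(X, y))
```

The hardness result above says that once such a consistent halfspace is only promised to exist for a $(1-\epsilon)$ fraction of the examples, no polynomial-time algorithm can even guarantee $(1/2+\delta)$ agreement unless P = NP.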