The privacy-utility tradeoff is formulated as the problem of finding the privacy mechanism (a random mapping) that minimizes the mutual information (a measure of privacy leakage) between the private features of the original dataset and its released version. The minimization is subject to a constraint on the average distortion cost, defined as a function f evaluated at the distortion d between the public features and the released version of the dataset. The asymptotically optimal leakage is derived for both general and stationary memoryless privacy mechanisms. It is shown that for convex cost functions there is no asymptotic loss in using stationary memoryless mechanisms. The proof techniques developed here for arbitrary cost functions are of independent interest.
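To make the two quantities in this formulation concrete, the following sketch evaluates a fixed privacy mechanism on a toy binary example: the leakage is the mutual information between the private feature and the release, and the utility constraint bounds the average distortion cost. The joint distribution, the flip-probability mechanism, the Hamming distortion d, and the identity cost f(d) = d are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy instance (illustrative, not from the paper): private feature X and
# public feature Y are correlated binary variables; the mechanism releases
# Z by flipping Y with probability eps.
p_xy = np.array([[0.4, 0.1],     # p_xy[x, y] = Pr(X=x, Y=y)
                 [0.1, 0.4]])

eps = 0.2                        # flip probability of the release mechanism
Q = np.array([[1 - eps, eps],    # Q[y, z] = Pr(Z=z | Y=y)
              [eps, 1 - eps]])

# Joint distribution of (X, Z): p(x, z) = sum_y p(x, y) Q(z | y)
p_xz = p_xy @ Q

def mutual_information(p):
    """I(A; B) in bits for a joint distribution p[a, b]."""
    pa = p.sum(axis=1, keepdims=True)
    pb = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / (pa @ pb)[mask])).sum())

# Privacy leakage: mutual information between private X and released Z.
leakage = mutual_information(p_xz)

# Utility constraint: average cost f(d(Y, Z)) with Hamming distortion
# d(y, z) = 1{y != z} and, for illustration, identity cost f(d) = d.
p_y = p_xy.sum(axis=0)
avg_distortion = sum(p_y[y] * Q[y, z] * (y != z)
                     for y in range(2) for z in range(2))

print(f"leakage I(X;Z)     = {leakage:.4f} bits")
print(f"average distortion = {avg_distortion:.4f}")
```

The optimization described in the abstract would search over mechanisms Q to minimize the leakage subject to the average-distortion constraint; by the data processing inequality, the leakage of any such Q cannot exceed I(X;Y).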