We consider a binary classification problem with a high-dimensional feature vector. Spam mail filters are a popular example of this setting. A Bayesian approach requires us to estimate the probability of a feature vector given the class of the object. Due to the dimensionality of the feature vector, this is an infeasible task. A useful approach is to split the feature space into several (conditionally) independent subspaces. This raises a new problem, namely how to find the ``best'' subdivision. In this presentation we consider a weighting approach that performs (asymptotically) as well as the best subdivision while retaining manageable complexity. Finally, we show that the same efficient computation structure can be used to find the maximum-likelihood model explicitly.
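
The sketch below is a minimal illustration of the weighting idea, not the construction presented here: it assumes a small, hand-picked list of candidate subdivisions of a binary feature space, models each block of a subdivision with a Krichevsky-Trofimov (add-1/2) smoothed joint distribution per class, and mixes the subdivisions with weights proportional to their marginal likelihood on the training data. All function names and the choice of smoothing are illustrative assumptions.

\begin{verbatim}
from collections import Counter
import numpy as np

def block_log_evidence(X_rows, block):
    """Log marginal likelihood of the features in `block` under one
    KT-smoothed (add-1/2) categorical model over the block's joint outcomes."""
    counts = Counter(tuple(row[i] for i in block) for row in X_rows)
    k = 2 ** len(block)                 # joint outcomes of the binary features
    n = sum(counts.values())
    logp = -sum(np.log(i + k / 2) for i in range(n))
    logp += sum(np.log(j + 0.5) for c in counts.values() for j in range(c))
    return logp

def fit_weights(X, y, partitions):
    """Posterior weight of each candidate subdivision, per class,
    assuming a uniform prior over the listed partitions."""
    weights = {}
    for c in (0, 1):
        Xc = X[y == c]
        log_ev = np.array([sum(block_log_evidence(Xc, b) for b in part)
                           for part in partitions])
        w = np.exp(log_ev - log_ev.max())
        weights[c] = w / w.sum()
    return weights

def class_conditional(x, Xc, partitions, w_c):
    """Mixture, over subdivisions, of product-form estimates of P(x | class)."""
    total = 0.0
    for weight, part in zip(w_c, partitions):
        logp = 0.0
        for block in part:
            counts = Counter(tuple(row[i] for i in block) for row in Xc)
            k = 2 ** len(block)
            n = sum(counts.values())
            key = tuple(x[i] for i in block)
            logp += np.log((counts.get(key, 0) + 0.5) / (n + k / 2))
        total += weight * np.exp(logp)
    return total

# Toy usage: 4 binary features, two candidate subdivisions ("all features
# independent" vs. features {0,1} and {2,3} modelled jointly per class).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))
y = (X[:, 0] ^ X[:, 1]).astype(int)     # class depends on features 0, 1 jointly
partitions = [[(0,), (1,), (2,), (3,)], [(0, 1), (2, 3)]]
w = fit_weights(X, y, partitions)
x_new = np.array([1, 0, 1, 1])
prior = np.bincount(y) / len(y)
scores = [prior[c] * class_conditional(x_new, X[y == c], partitions, w[c])
          for c in (0, 1)]
print("predicted class:", int(np.argmax(scores)))
\end{verbatim}

In this toy setting the second subdivision receives nearly all of the posterior weight for each class, since it captures the dependence between features 0 and 1; the mixture therefore behaves like the best subdivision without having to select it in advance.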