Naive Bayes classifier


A naive Bayes classifier is a simple probabilistic classifier based on applying Bayes' theorem (from Bayesian statistics) with strong (naive) independence assumptions. A more descriptive term for the underlying probability model would be "independent feature model".

In simple terms, a naive Bayes classifier assumes that the presence (or absence) of a particular feature of a class is unrelated to the presence (or absence) of any other feature. For example, a fruit may be considered to be an apple if it is red, round, and about 4 inches in diameter. Even if these features in fact depend on one another, a naive Bayes classifier considers all of these properties to contribute independently to the probability that this fruit is an apple.
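Stated formally, the independence assumption lets the class-conditional likelihood factor into a product of per-feature terms. A minimal statement of the model, with generic features x_1, ..., x_n and a class C (the notation here is illustrative, not taken from the article):

```latex
% Naive Bayes: features are assumed conditionally independent given the class
P(C \mid x_1, \dots, x_n)
  \propto P(C) \, P(x_1, \dots, x_n \mid C)
  = P(C) \prod_{i=1}^{n} P(x_i \mid C)
```

Classification then amounts to choosing the class C that maximizes this product.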

Depending on the precise nature of the probability model, naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood; in other words, one can work with the naive Bayes model without believing in Bayesian probability or using any Bayesian methods.
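As a concrete sketch of what maximum likelihood estimation looks like for one common variant, the Gaussian naive Bayes model, consider the following Python fragment. The function name fit_gaussian_nb and the small variance floor are illustrative assumptions, not something specified by the article:

```python
import numpy as np

def fit_gaussian_nb(X, y):
    """Maximum likelihood estimates for a Gaussian naive Bayes model.

    X: (n_samples, n_features) array of continuous features.
    y: (n_samples,) array of class labels.
    Returns the class labels, per-class priors, feature means, and
    feature variances. (Illustrative sketch, not a reference implementation.)
    """
    classes = np.unique(y)
    priors, means, variances = {}, {}, {}
    for c in classes:
        Xc = X[y == c]
        priors[c] = len(Xc) / len(X)          # MLE of P(C = c)
        means[c] = Xc.mean(axis=0)            # MLE of per-feature means
        variances[c] = Xc.var(axis=0) + 1e-9  # MLE of per-feature variances;
                                              # the tiny floor avoids zeros
    return classes, priors, means, variances
```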

In spite of their naive design and apparently over-simplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations. In 2004, an analysis of the Bayesian classification problem showed that there are some theoretical reasons for the apparently unreasonable efficacy of naive Bayes classifiers.[1] Still, a comprehensive comparison with other classification methods in 2006 showed that naive Bayes classification is outperformed by more recent approaches, such as boosted trees or random forests.[2]

An advantage of the naive Bayes classifier is that it requires only a small amount of training data to estimate the parameters (means and variances of the variables) necessary for classification. Because the variables are assumed to be independent, only the per-class variances of the variables need to be determined, not the entire covariance matrix.
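The point about variances can be made concrete in prediction: under the independence assumption, the class-conditional log-likelihood is a sum of one-dimensional Gaussian terms, each of which needs only a per-feature mean and variance. A sketch building on the hypothetical fit_gaussian_nb above:

```python
import numpy as np

def predict_gaussian_nb(X, classes, priors, means, variances):
    """Assign each row of X to the class with the highest posterior.

    Only per-feature means and variances are used -- no covariance
    matrix -- because features are assumed independent given the class.
    Usage (names hypothetical):
        params = fit_gaussian_nb(X_train, y_train)
        y_hat = predict_gaussian_nb(X_test, *params)
    """
    log_posteriors = []
    for c in classes:
        # log P(C = c) + sum_i log N(x_i; mean_ci, var_ci)
        log_prior = np.log(priors[c])
        log_lik = -0.5 * np.sum(
            np.log(2 * np.pi * variances[c])
            + (X - means[c]) ** 2 / variances[c],
            axis=1,
        )
        log_posteriors.append(log_prior + log_lik)
    return classes[np.argmax(np.column_stack(log_posteriors), axis=1)]
```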

