# Bayesian inference


Bayesian inference is a method of statistical inference in which evidence or observations are used to calculate the probability that a hypothesis is true, or to update its previously calculated probability. The term "Bayesian" comes from the method's use of Bayes' theorem in the calculation. Bayes' theorem was derived in several special cases by Thomas Bayes and later extended to the general theorem by other researchers.[1]
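In symbols (using notation not given in the text above, with $H$ denoting a hypothesis and $E$ the observed evidence), Bayes' theorem can be written as:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

Here $P(H)$ is the prior probability of the hypothesis, $P(E \mid H)$ is the likelihood of the evidence given the hypothesis, $P(E)$ is the overall probability of the evidence, and $P(H \mid E)$ is the posterior probability of the hypothesis after observing the evidence.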

In practical usage, "Bayesian inference" refers to the use of a prior probability over hypotheses to determine the probability of a particular hypothesis given some observed evidence. That is, the probability that a hypothesis is true given the evidence (the posterior probability of the hypothesis) combines the inherent plausibility of the hypothesis (its prior probability) with the compatibility of the observed evidence with the hypothesis (the likelihood of the evidence, in a technical sense). Bayesian inference contrasts with frequentist inference, which makes use only of the likelihood of the evidence (in the technical sense) and discounts the prior probability of the hypothesis. Most elementary undergraduate-level statistics courses teach frequentist inference rather than Bayesian inference.
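A minimal sketch of how a prior is combined with a likelihood to produce a posterior, using Bayes' theorem with the evidence expanded over the two cases $H$ and $\neg H$. The diagnostic-test scenario and all of its numbers are illustrative assumptions, not figures from the text.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior P(H|E) via Bayes' theorem.

    P(H|E) = P(E|H) P(H) / [ P(E|H) P(H) + P(E|~H) P(~H) ]
    where the denominator is the total probability of the evidence.
    """
    evidence = (p_evidence_given_h * prior
                + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / evidence

# Hypothetical example: a test detects a condition 99% of the time,
# has a 5% false-positive rate, and the condition's prior is 1%.
p = posterior(prior=0.01,
              p_evidence_given_h=0.99,
              p_evidence_given_not_h=0.05)
print(round(p, 3))  # → 0.167
```

Even with a fairly accurate test, the low prior keeps the posterior modest, which is exactly the effect a frequentist analysis ignoring the prior would miss.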