In calculus, the indefinite integral of a given function (i.e. the set of all antiderivatives of the function) is always written with a constant, the constant of integration. This constant expresses an ambiguity inherent in the construction of antiderivatives. If a function f(x) is defined on an interval and F(x) is an antiderivative of f(x), then the set of all antiderivatives of f(x) is given by the functions F(x) + C, where C is an arbitrary constant.
Origin of the constant
The derivative of any constant function is zero. Once one antiderivative F(x) has been found, adding or subtracting any constant C gives another antiderivative, because d/dx(F(x) + C) = F′(x) + 0 = f(x). The constant is a way of expressing that every function with at least one antiderivative has infinitely many of them.
For example, suppose one wants to find antiderivatives of cos(x). One such antiderivative is sin(x). Another one is sin(x) + 1. A third is sin(x) − π. Each of these has derivative cos(x), so they are all antiderivatives of cos(x).
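These three examples can be checked symbolically. A minimal sketch using SymPy (assuming SymPy is installed):

```python
import sympy as sp

x = sp.symbols('x')

# Three candidate antiderivatives of cos(x), differing only by a constant.
candidates = [sp.sin(x), sp.sin(x) + 1, sp.sin(x) - sp.pi]

# Differentiating each one recovers cos(x), so all three are antiderivatives.
derivatives = [sp.diff(f, x) for f in candidates]
print(derivatives)  # [cos(x), cos(x), cos(x)]
```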
It turns out that adding and subtracting constants is the only flexibility available in finding different antiderivatives of the same function on an interval. That is, all antiderivatives are the same up to a constant. To express this fact for cos(x), we write:

∫ cos(x) dx = sin(x) + C
Replacing C by a number will produce an antiderivative. By writing C instead of a number, however, a compact description of all the possible antiderivatives of cos(x) is obtained. C is called the constant of integration. It is easily determined that all of these functions are indeed antiderivatives of cos(x): d/dx(sin(x) + C) = cos(x) + 0 = cos(x).
Necessity of the constant
At first glance it may seem that the constant is unnecessary, since it can be set to zero. Furthermore, when evaluating definite integrals using the fundamental theorem of calculus, the constant will always cancel with itself.
However, trying to set the constant equal to zero does not always make sense. For example, 2sin(x)cos(x) can be integrated in two different ways:

∫ 2sin(x)cos(x) dx = sin²(x) + C  (substituting u = sin(x))
∫ 2sin(x)cos(x) dx = −cos²(x) + C  (substituting u = cos(x))
So setting C to zero can still leave a constant: sin²(x) and −cos²(x) differ by the constant 1, since sin²(x) = 1 − cos²(x). This means that, for a given function, there is no "simplest antiderivative". By ignoring the constant of integration, one can construct a proof that 1 = 0, which must obviously be invalid.
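A quick SymPy check (assuming SymPy is installed) confirms that both expressions are antiderivatives of 2sin(x)cos(x), yet differ by a nonzero constant:

```python
import sympy as sp

x = sp.symbols('x')
f = 2 * sp.sin(x) * sp.cos(x)

F1 = sp.sin(x)**2    # from substituting u = sin(x)
F2 = -sp.cos(x)**2   # from substituting u = cos(x)

# Both differentiate back to 2*sin(x)*cos(x)...
assert sp.simplify(sp.diff(F1, x) - f) == 0
assert sp.simplify(sp.diff(F2, x) - f) == 0

# ...yet they differ by the constant 1, not 0.
print(sp.simplify(F1 - F2))  # 1
```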
Another problem with setting C equal to zero is that sometimes one wants to find an antiderivative that has a given value at a given point (as in an initial value problem). For example, to obtain the antiderivative of cos(x) that has the value 100 at x = π, only one value of C will work: since sin(π) = 0, it must be that C = 100.
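This initial value problem can be solved symbolically as well. A sketch with SymPy (assuming SymPy is installed), imposing the condition F(π) = 100 on the general antiderivative:

```python
import sympy as sp

x, C = sp.symbols('x C')

# General antiderivative of cos(x).
F = sp.sin(x) + C

# Require F(pi) = 100 and solve for the constant of integration.
solution = sp.solve(sp.Eq(F.subs(x, sp.pi), 100), C)
print(solution)  # [100]
```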