In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation; it is indeed derived using the product rule.
The integration by parts formula states:

∫_a^b u(x) v′(x) dx = [u(x) v(x)]_a^b − ∫_a^b u′(x) v(x) dx = u(b) v(b) − u(a) v(a) − ∫_a^b u′(x) v(x) dx.
Or, letting u = u(x) and du = u′(x) dx, while v = v(x) and dv = v′(x) dx, the formula can be written more compactly:

∫ u dv = u v − ∫ v du.
The former expression is written as a definite integral and the latter is written as an indefinite integral. Applying the appropriate limits to the latter expression should yield the former, but the latter is not necessarily equivalent to the former.
The indefinite-integral form is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values x = a and x = b and applying the fundamental theorem of calculus gives the definite integral version:

∫_a^b u(x) v′(x) dx = u(b) v(b) − u(a) v(a) − ∫_a^b u′(x) v(x) dx.

The original integral ∫ u v′ dx contains the derivative v′; to apply the theorem, one must find v, the antiderivative of v′, then evaluate the resulting integral ∫ v u′ dx.
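As an illustrative sketch (not part of the original text), the definite version can be spot-checked with the SymPy library for one sample pair; the choices u(x) = x, v(x) = sin(x) on [0, π] and the tooling are assumptions made only for this example.

    import sympy as sp

    x = sp.symbols('x')
    u, v = x, sp.sin(x)          # sample choices, not prescribed by the formula
    a, b = 0, sp.pi

    lhs = sp.integrate(u * sp.diff(v, x), (x, a, b))        # integral of u*v' over [a, b]
    rhs = (u * v).subs(x, b) - (u * v).subs(x, a) - sp.integrate(sp.diff(u, x) * v, (x, a, b))
    assert sp.simplify(lhs - rhs) == 0                      # both sides evaluate to -2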
It is not necessary for u and v to be continuously differentiable. Integration by parts works if u is absolutely continuous and the function designated v′ is Lebesgue integrable (but not necessarily continuous).[3] (If v′ has a point of discontinuity then its antiderivative v may not have a derivative at that point.)
If the interval of integration is not compact, then it is not necessary for u to be absolutely continuous in the whole interval or for v′ to be Lebesgue integrable in the interval, as a couple of examples (in which u and v are continuous and continuously differentiable) will show. For instance, if

u(x) = e^x / x² and v′(x) = e^(−x),

then u
is not absolutely continuous on the interval [1, ∞), but nevertheless

∫_1^∞ u(x) v′(x) dx = [u(x) v(x)]_1^∞ − ∫_1^∞ u′(x) v(x) dx,
so long as [u(x) v(x)]_1^∞ is taken to mean the limit of u(L) v(L) − u(1) v(1) as L → ∞ and so long as the two terms on the right-hand side are finite. This is only true if we choose v(x) = −e^(−x). Similarly, if

u(x) = e^(−x) and v′(x) = sin(x)/x,

then v′
is not Lebesgue integrable on the interval [1, ∞), but nevertheless

∫_1^∞ u(x) v′(x) dx = [u(x) v(x)]_1^∞ − ∫_1^∞ u′(x) v(x) dx,
with the same interpretation.
One can also easily come up with similar examples in which u and v are not continuously differentiable.
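As a sketch (relying on the first example as reconstructed above, u(x) = e^x/x², v′(x) = e^(−x), v(x) = −e^(−x), together with SymPy as assumed tooling), the improper-interval identity can be checked directly:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    u  = sp.exp(x) / x**2          # not absolutely continuous on [1, oo)
    vp = sp.exp(-x)                # v'
    v  = -sp.exp(-x)               # the antiderivative that makes u*v vanish at infinity

    lhs      = sp.integrate(sp.simplify(u * vp), (x, 1, sp.oo))              # equals 1
    boundary = sp.limit(sp.simplify(u * v), x, sp.oo) - (u * v).subs(x, 1)   # [u v] from 1 to oo
    rhs      = boundary - sp.integrate(sp.simplify(sp.diff(u, x) * v), (x, 1, sp.oo))
    assert sp.simplify(lhs - rhs) == 0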
Further, if f is a function of bounded variation on the segment [a, b] and φ is differentiable on [a, b], then

∫_a^b f(x) φ′(x) dx = −∫_{−∞}^{∞} φ̃(x) d(χ_{[a,b]}(x) f̃(x)),

where d(χ_{[a,b]}(x) f̃(x)) denotes the signed measure corresponding to the function of bounded variation χ_{[a,b]}(x) f̃(x), and the functions f̃, φ̃ are extensions of f and φ to ℝ which are respectively of bounded variation and differentiable.[citation needed]
Consider a parametric curve (x, y) = (f(t), g(t)). Assuming that the curve is locally one-to-one and integrable, we can define

x(y) = f(g⁻¹(y)),  y(x) = g(f⁻¹(x)).
The area of the blue region is

A1 = ∫_{y1}^{y2} x(y) dy.
Similarly, the area of the red region is

A2 = ∫_{x1}^{x2} y(x) dx.
The total area A1 + A2 is equal to the area of the bigger rectangle, x2y2, minus the area of the smaller one, x1y1:

∫_{y1}^{y2} x(y) dy + ∫_{x1}^{x2} y(x) dx = x2y2 − x1y1.
Or, in terms of t,

∫_{t1}^{t2} x(t) dy(t) + ∫_{t1}^{t2} y(t) dx(t) = x(t2) y(t2) − x(t1) y(t1).

Or, in terms of indefinite integrals, this can be written as

∫ x dy + ∫ y dx = x y.

Rearranging:

∫ x dy = x y − ∫ y dx.

Thus integration by parts may be thought of as deriving the area of the blue region from the area of rectangles and that of the red region.
This visualization also explains why integration by parts may help find the integral of an inverse function f⁻¹(x) when the integral of the function f(x) is known. Indeed, the functions x(y) and y(x) are inverses, and the integral ∫ x dy may be calculated as above from knowing the integral ∫ y dx. In particular, this explains the use of integration by parts to integrate logarithm and inverse trigonometric functions. In fact, if f is a differentiable one-to-one function on an interval, then integration by parts can be used to derive a formula for the integral of f⁻¹ in terms of the integral of f. This is demonstrated in the article Integral of inverse functions.
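As a brief sketch (the choice of ln(x), the inverse of e^x, and the SymPy tooling are illustrative assumptions), the inverse-function viewpoint reduces here to the familiar by-parts computation ∫ ln(x) dx = x ln(x) − ∫ x · (1/x) dx:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    u, v = sp.log(x), x                                      # u = ln(x), dv = 1*dx, v = x
    by_parts = u * v - sp.integrate(sp.diff(u, x) * v, x)    # x*log(x) - x
    assert sp.simplify(by_parts - sp.integrate(sp.log(x), x)) == 0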
Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate this single function into a product of two functions u(x)v(x) such that the residual integral from the integration by parts formula is easier to evaluate than the single function. The following form is useful in illustrating the best strategy to take:

∫ u v dx = u ∫ v dx − ∫ (u′ ∫ v dx) dx.
On the right-hand side, u is differentiated and v is integrated; consequently it is useful to choose u as a function that simplifies when differentiated, or to choose v as a function that simplifies when integrated. As a simple example, consider:

∫ ln(x)/x² dx.
Since the derivative of ln(x) is 1/x, one makes ln(x) part u; since the antiderivative of 1/x² is −1/x, one makes 1/x² part v. The formula now yields:

∫ ln(x)/x² dx = −ln(x)/x − ∫ (1/x)(−1/x) dx.
The antiderivative of −1/x² can be found with the power rule and is 1/x, so

∫ ln(x)/x² dx = −ln(x)/x − 1/x + C.
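A quick SymPy check of this worked example (the library and the derivative-comparison test are illustrative choices, not part of the original text):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    u, v = sp.log(x), -1/x                                   # u = ln(x); v is an antiderivative of 1/x**2
    by_parts = u * v - sp.integrate(sp.diff(u, x) * v, x)    # -log(x)/x - 1/x
    assert sp.simplify(sp.diff(by_parts, x) - sp.log(x) / x**2) == 0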
Alternatively, one may choose u and v such that the product u′ (∫ v dx) simplifies due to cancellation. For example, suppose one wishes to integrate:

∫ sec²(x) ln(|sin(x)|) dx.
If we choose u(x) = ln(|sin(x)|) and v(x) = sec²(x), then u differentiates to 1/tan(x) using the chain rule and v integrates to tan(x); so the formula gives:

∫ sec²(x) ln(|sin(x)|) dx = tan(x) ln(|sin(x)|) − ∫ tan(x) · (1/tan(x)) dx.
The integrand simplifies to 1, so the antiderivative is x. Finding a simplifying combination frequently involves experimentation.
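A minimal SymPy sketch of the cancellation, restricting to 0 < x < π/2 so the absolute value can be dropped (this restriction and the tooling are assumptions made for the illustration):

    import sympy as sp

    x = sp.symbols('x')
    u, v = sp.log(sp.sin(x)), sp.tan(x)            # valid on 0 < x < pi/2, where |sin(x)| = sin(x)
    residual = sp.simplify(sp.diff(u, x) * v)      # u' * v cancels to 1
    assert residual == 1
    antiderivative = u * v - x                     # tan(x)*log(sin(x)) - x
    assert sp.simplify(sp.diff(antiderivative, x) - sp.sec(x)**2 * u) == 0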
In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis, it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below.
Two other well-known examples are when integration by parts is applied to a function expressed as a product of 1 and itself: ∫ ln(x) dx and ∫ arctan(x) dx. This works if the derivative of the function is known, and the integral of this derivative times x is also known.
A common rule of thumb, the LIATE rule, is to choose as u the function that comes first in the following list: Logarithmic functions (e.g. ln(x)), Inverse trigonometric functions (e.g. arctan(x)), Algebraic functions (e.g. x³), Trigonometric functions (e.g. cos(x)), Exponential functions (e.g. e^x). The function which is to be dv is whichever comes last in the list. The reason is that functions lower on the list generally have simpler antiderivatives than the functions above them. The rule is sometimes written as "DETAIL", where D stands for dv and the top of the list is the function chosen to be dv. An alternative to this rule is the ILATE rule, where inverse trigonometric functions come before logarithmic functions.
To demonstrate the LIATE rule, consider the integral

∫ x cos(x) dx.
Following the LIATE rule, u = x and dv = cos(x) dx, hence du = dx and v = sin(x), which makes the integral become

x sin(x) − ∫ sin(x) dx,

which equals

x sin(x) + cos(x) + C.
In general, one tries to choose u and dv such that du is simpler than u and dv is easy to integrate. If instead cos(x) were chosen as u and x dx as dv, we would have the integral

(x²/2) cos(x) + ∫ (x²/2) sin(x) dx,
which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere.
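The contrast can be seen concretely in a short SymPy sketch (an illustrative aside; the symbolic checks are not from the original text):

    import sympy as sp

    x = sp.symbols('x')

    # LIATE choice for the integral of x*cos(x): u = x (algebraic), dv = cos(x) dx, v = sin(x)
    u, v = x, sp.sin(x)
    print(u * v - sp.integrate(sp.diff(u, x) * v, x))   # x*sin(x) + cos(x)

    # Reversed choice: u = cos(x), dv = x dx, v = x**2/2
    u, v = sp.cos(x), x**2 / 2
    print(sp.diff(u, x) * v)   # -x**2*sin(x)/2: the residual integrand is harder than the original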
Although LIATE is a useful rule of thumb, there are exceptions to it. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate

∫ x³ e^(x²) dx,
one would set

u = x²,  dv = x e^(x²) dx,
so that

du = 2x dx,  v = e^(x²)/2.
Then

∫ x³ e^(x²) dx = ∫ (x²)(x e^(x²)) dx = ∫ u dv = u v − ∫ v du = (x² e^(x²))/2 − ∫ x e^(x²) dx.
Finally, this results in

∫ x³ e^(x²) dx = (e^(x²)(x² − 1))/2 + C.
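A SymPy spot-check of this non-obvious split (the library call and the derivative comparison are illustrative assumptions):

    import sympy as sp

    x = sp.symbols('x')
    u, v = x**2, sp.exp(x**2) / 2                            # split as u = x**2, dv = x*exp(x**2) dx
    by_parts = u * v - sp.integrate(sp.diff(u, x) * v, x)    # equals exp(x**2)*(x**2 - 1)/2
    assert sp.simplify(sp.diff(by_parts, x) - x**3 * sp.exp(x**2)) == 0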
Integration by parts is often used as a tool to prove theorems in mathematical analysis.
If f is a k-times continuously differentiable function and all derivatives up to the kth one decay to zero at infinity, then its Fourier transform satisfies

(ℱf^(k))(ξ) = (2πiξ)^k (ℱf)(ξ),

where f^(k) is the kth derivative of f. (This is proved by integrating by parts k times; the boundary terms vanish because the derivatives decay at infinity.)
The above result tells us about the decay of the Fourier transform, since it follows that if f and f^(k) are integrable then

|ℱf(ξ)| ≤ I(f) / (1 + |2πξ|^k),  where I(f) = ∫ (|f(x)| + |f^(k)(x)|) dx.
In other words, if f satisfies these conditions then its Fourier transform decays at infinity at least as quickly as 1/|ξ|^k. In particular, if k ≥ 2 then the Fourier transform is integrable.
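A numerical spot-check of the k = 1 case with NumPy, for the rapidly decaying example f(x) = e^(−x²) and the e^(−2πixξ) transform convention (the example function, the grid, and the tolerance are all assumptions made for this sketch):

    import numpy as np

    x  = np.linspace(-15.0, 15.0, 200_001)
    f  = np.exp(-x**2)
    fp = -2.0 * x * f                                  # f'(x)

    def ft(g, xi):
        # plain Riemann sum approximating the integral of g(x)*exp(-2*pi*i*x*xi)
        return np.sum(g * np.exp(-2j * np.pi * x * xi)) * (x[1] - x[0])

    for xi in (0.5, 1.0, 2.0):
        # the transform of f' equals (2*pi*i*xi) times the transform of f,
        # so |F f(xi)| is bounded by a constant times 1/|xi|
        assert abs(ft(fp, xi) - 2j * np.pi * xi * ft(f, xi)) < 1e-8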
One use of integration by parts in operator theory is that it shows that −∆ (where ∆ is the Laplace operator) is a positive operator on L² (see Lp space). If f is smooth and compactly supported then, using integration by parts, we have

⟨−∆f, f⟩_{L²} = −∫ f″(x) f(x)* dx = ∫ f′(x) f′(x)* dx = ∫ |f′(x)|² dx ≥ 0,

where f(x)* denotes the complex conjugate of f(x); the boundary term vanishes because f has compact support.
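A SymPy sketch of the underlying identity in one variable, with a real, rapidly decaying function standing in for a compactly supported one (that substitution and the tooling are assumptions of the sketch):

    import sympy as sp

    x = sp.symbols('x', real=True)
    f = sp.exp(-x**2)                                               # stands in for a test function
    lhs = sp.integrate(-sp.diff(f, x, 2) * f, (x, -sp.oo, sp.oo))   # <-f'', f> for real f
    rhs = sp.integrate(sp.diff(f, x)**2, (x, -sp.oo, sp.oo))        # integral of |f'|**2
    assert sp.simplify(lhs - rhs) == 0
    assert rhs > 0                                                  # the quadratic form is positive here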
Considering a second derivative of v in the integral on the LHS of the formula for partial integration suggests a repeated application to the integral on the RHS:

∫ u v″ dx = u v′ − ∫ u′ v′ dx = u v′ − (u′ v − ∫ u″ v dx).
Extending this concept of repeated partial integration to derivatives of degree n leads to

∫ u v^(n) dx = u v^(n−1) − u′ v^(n−2) + u″ v^(n−3) − ⋯ + (−1)^(n−1) u^(n−1) v + (−1)^n ∫ u^(n) v dx.
This concept may be useful when the successive integrals of v^(n) are readily available (e.g., plain exponentials or sine and cosine, as in Laplace or Fourier transforms), and when the nth derivative of u vanishes (e.g., as a polynomial function with degree n − 1). The latter condition stops the repeating of partial integration, because the RHS integral vanishes.
In the course of the above repetition of partial integrations the integrals ∫ u v^(n) dx and ∫ u′ v^(n−1) dx and, more generally, ∫ u^(m) v^(n−m) dx for 1 ≤ m ≤ n get related. This may be interpreted as arbitrarily "shifting" derivatives between u and v within the integrand, and proves useful, too (see Rodrigues' formula).
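The degree-n formula can be spot-checked symbolically for a particular n; the choices n = 3, u = x³, v = sin(x) and the SymPy tooling below are assumptions of this sketch:

    import sympy as sp

    x = sp.symbols('x')
    u, v, n = x**3, sp.sin(x), 3

    lhs = sp.integrate(u * sp.diff(v, x, n), x)
    rhs = sum((-1)**k * sp.diff(u, x, k) * sp.diff(v, x, n - 1 - k) for k in range(n)) \
          + (-1)**n * sp.integrate(sp.diff(u, x, n) * v, x)
    # both sides are antiderivatives of u*v''' and so differ by at most a constant
    assert sp.simplify(sp.diff(lhs - rhs, x)) == 0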
The essential process of the above formula can be summarized in a table; the resulting method is called "tabular integration"[5] and was featured in the film Stand and Deliver (1988).[6]
For example, consider the integral

∫ x³ cos(x) dx
and take

u = x³,  v^(4) = cos(x).
Begin to list in column A the function u = x³ and its subsequent derivatives u^(i) until zero is reached. Then list in column B the function v^(4) = cos(x) and its subsequent integrals v^(4−i) until the size of column B is the same as that of column A. The result is as follows:
# i    Sign    A: derivatives u^(i)    B: integrals v^(n−i)
0      +       x³                      cos(x)
1      −       3x²                     sin(x)
2      +       6x                      −cos(x)
3      −       6                       −sin(x)
4      +       0                       cos(x)
The product of the entries in row i of columns A and B together with the respective sign give the relevant integrals in step i in the course of repeated integration by parts. Step i = 0 yields the original integral. For the complete result in step i > 0, the ith integral must be added to all the previous products (0 ≤ j < i) of the jth entry of column A and the (j + 1)st entry of column B (i.e., multiply the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, etc.) with the given jth sign. This process comes to a natural halt when the product which yields the integral is zero (i = 4 in the example). The complete result is the following (with the alternating signs in each term):

(+1)(x³)(sin(x)) + (−1)(3x²)(−cos(x)) + (+1)(6x)(−sin(x)) + (−1)(6)(cos(x)) + (+1) ∫ 0 · cos(x) dx.
This yields

∫ x³ cos(x) dx = x³ sin(x) + 3x² cos(x) − 6x sin(x) − 6 cos(x) + C.
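The tabular procedure is mechanical enough to script; below is a small, hypothetical SymPy helper (its name and structure are not from the original text) that pairs successive derivatives of u with successive antiderivatives of dv under alternating signs, reproducing the example above:

    import sympy as sp

    def tabular(u, dv, x):
        """Antiderivative of u*dv, assuming some derivative of u eventually vanishes."""
        result, sign = sp.Integer(0), 1
        v = sp.integrate(dv, x)              # first entry of column B
        while u != 0:
            result += sign * u * v           # pair row j of column A with row j+1 of column B
            u, v, sign = sp.diff(u, x), sp.integrate(v, x), -sign
        return result

    x = sp.symbols('x')
    F = tabular(x**3, sp.cos(x), x)          # x**3*sin(x) + 3*x**2*cos(x) - 6*x*sin(x) - 6*cos(x)
    assert sp.simplify(sp.diff(F, x) - x**3 * sp.cos(x)) == 0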
Repeated partial integration also turns out to be useful when, in the course of respectively differentiating and integrating the functions u^(i) and v^(n−i), their product results in a multiple of the original integrand. In this case the repetition may also be terminated with this index i. This can happen, expectably, with exponentials and trigonometric functions. As an example, consider

∫ e^x cos(x) dx.
# i    Sign    A: derivatives u^(i)    B: integrals v^(n−i)
0      +       e^x                     cos(x)
1      −       e^x                     sin(x)
2      +       e^x                     −cos(x)
In this case the product of the terms in columns A and B with the appropriate sign for index i = 2 yields the negative of the original integrand (compare rows i = 0 and i = 2):

∫ e^x cos(x) dx = e^x sin(x) + e^x cos(x) − ∫ e^x cos(x) dx.
Observing that the integral on the RHS can have its own constant of integration C′, and bringing the abstract integral to the other side, gives

2 ∫ e^x cos(x) dx = e^x (sin(x) + cos(x)) + C′,

and finally

∫ e^x cos(x) dx = (1/2) e^x (sin(x) + cos(x)) + C,

where C = C′/2.
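A one-line SymPy verification of the closed form obtained above (the derivative check itself is an illustrative aside):

    import sympy as sp

    x = sp.symbols('x')
    candidate = sp.exp(x) * (sp.sin(x) + sp.cos(x)) / 2
    assert sp.simplify(sp.diff(candidate, x) - sp.exp(x) * sp.cos(x)) == 0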
Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function u and vector-valued function (vector field) V.[7]
Suppose Ω is an open bounded subset of ℝⁿ with a piecewise smooth boundary Γ. Applying the product rule for divergence, ∇·(uV) = u ∇·V + ∇u · V, together with the divergence theorem gives

∫_Γ u V · n̂ dΓ = ∫_Ω ∇·(u V) dΩ = ∫_Ω u ∇·V dΩ + ∫_Ω ∇u · V dΩ,

where n̂ is the outward unit normal vector to the boundary, integrated with respect to its standard Riemannian volume form dΓ. Rearranging gives:

∫_Ω u ∇·V dΩ = ∫_Γ u V · n̂ dΓ − ∫_Ω ∇u · V dΩ,
or in other words

∫_Ω u div(V) dΩ = ∫_Γ u V · n̂ dΓ − ∫_Ω grad(u) · V dΩ.

The regularity requirements of the theorem can be relaxed. For instance, the boundary Γ need only be Lipschitz continuous, and the functions u, v need only lie in the Sobolev space H¹(Ω).
Consider the continuously differentiable vector fields U = u₁e₁ + ⋯ + uₙeₙ and v e₁, …, v eₙ, where eᵢ is the i-th standard basis vector for i = 1, …, n. Now apply the above integration by parts to each uᵢ times the vector field v eᵢ:

∫_Ω uᵢ (∂v/∂xᵢ) dΩ = ∫_Γ uᵢ v (eᵢ · n̂) dΓ − ∫_Ω (∂uᵢ/∂xᵢ) v dΩ.
Summing over i gives a new integration by parts formula:

∫_Ω ∇v · U dΩ = ∫_Γ v U · n̂ dΓ − ∫_Ω v ∇·U dΩ.
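The divergence-theorem identities above can be checked exactly on a simple domain; the unit square, the polynomial fields, and the SymPy tooling below are assumptions chosen only for this sketch:

    import sympy as sp

    x, y = sp.symbols('x y')
    u = x**2 * y                      # scalar field on the unit square [0, 1] x [0, 1]
    V = (x * y, x + y**2)             # vector field (v1, v2)

    grad_u = (sp.diff(u, x), sp.diff(u, y))
    div_V  = sp.diff(V[0], x) + sp.diff(V[1], y)
    area   = lambda f: sp.integrate(f, (x, 0, 1), (y, 0, 1))

    # boundary term: outward normals are (1,0) at x=1, (-1,0) at x=0, (0,1) at y=1, (0,-1) at y=0
    bnd = (sp.integrate((u * V[0]).subs(x, 1), (y, 0, 1))
           - sp.integrate((u * V[0]).subs(x, 0), (y, 0, 1))
           + sp.integrate((u * V[1]).subs(y, 1), (x, 0, 1))
           - sp.integrate((u * V[1]).subs(y, 0), (x, 0, 1)))

    # check: integral of u*div(V) equals the boundary term minus the integral of grad(u)·V
    assert sp.simplify(area(u * div_V) - (bnd - area(grad_u[0] * V[0] + grad_u[1] * V[1]))) == 0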
Hoffmann, Laurence D.; Bradley, Gerald L. (2004). Calculus for Business, Economics, and the Social and Life Sciences (8th ed.). pp. 450–464. ISBN 0-07-242432-X.
Willard, Stephen (1976). Calculus and its Applications. Boston: Prindle, Weber & Schmidt. pp. 193–214. ISBN 0-87150-203-8.
Washington, Allyn J. (1966). Technical Calculus with Analytic Geometry. Reading: Addison-Wesley. pp. 218–245. ISBN 0-8465-8603-7.