The Theory of Riemann Integration

4.1 The Concept of Integration.

The reader is doubtless familiar with the idea of integration as the operation inverse to that of differentiation and is equally well aware that the integral (in this sense) of a given elementary function is not always expressible in terms of elementary functions. In order therefore to give a definition of the integral of a function which shall be always available, even though it is not practicable to obtain a function of which the given function is the differential coefficient, we have recourse to the result that the integral[1] of \(f(x)\) between the limits \(a\) and \(b\) is the area bounded by the curve \(y =f(x)\), the axis of \(x\) and the ordinates \(x = a\), \(x = b\). We proceed to frame a formal definition of integration with this idea as the starting-point.

[1]Defined as the (elementary) function whose differential coefficient is \(f(x)\). ↩

4.11 Upper and lower integrals.[2]

[2]The following procedure for establishing existence theorems concerning integrals is based on that given by Goursat, Cours d’ Analyse, i. Ch. iv. The concepts of upper and lower integrals are due to Darboux, Ann. de l’Ecole norm. sup. (2) iv. (1875), p. 64. ↩

Let \(f(x)\) be a bounded function of \(x\) in the range \([a, b]\). Divide the interval at the points \(x_1, x_2 , \dots , x_{n-1}\) \((a \leq x_1 \leq x_2 \leq \cdots \leq x_{n-1} \leq b)\). Let \(U\), \(L\) be the bounds of \(f(x)\) in the range \([a, b]\), and let \(U_r,\) \(L_r\) be the bounds of \(f(x)\) in the range \([x_{r-1}, x_r]\), where \(x_0 = a\), \(x_n = b\).

Consider the sums[3] \[S_n=U_1(x_1-a)+U_2(x_2-x_1)+ \cdots + U_n(b-x_{n-1}),\] \[s_n=L_1(x_1-a)+L_2(x_2-x_1)+ \cdots + L_n(b-x_{n-1}).\] Then \[U\:\! (b-a) \geq S_n \geq s_n \geq L\:\! (b-a).\]

[3]The reader will find a figure of great assistance in following the argument of this section. \(S_n\) and \(s_n\) represent the sums of the areas of a number of rectangles which are respectively greater and less than the area bounded by \(y=f(x)\), \(x=a\), \(x = b\) and \(y = 0\), if this area be assumed to exist. ↩
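The sums \(S_n\) and \(s_n\) are, in modern terminology, the upper and lower Darboux sums. As an editorial illustration (not in the original text), the following Python sketch computes them for \(f(x) = x^2\) on \([0, 1]\); the bounds \(U_r\), \(L_r\) are estimated by dense sampling, which is exact here because \(x^2\) is monotone on each sub-interval.

```python
def darboux_sums(f, points, samples=200):
    """Upper sum S_n and lower sum s_n for the subdivision `points`
    (x_0 = a < x_1 < ... < x_n = b).  The bounds U_r, L_r on each
    sub-interval are estimated by dense sampling; this is exact for
    the monotone illustrative function used below."""
    S, s = 0.0, 0.0
    for left, right in zip(points, points[1:]):
        vals = [f(left + (right - left) * k / samples) for k in range(samples + 1)]
        S += (right - left) * max(vals)
        s += (right - left) * min(vals)
    return S, s

f = lambda x: x * x
n = 10
pts = [r / n for r in range(n + 1)]        # equal subdivision of [0, 1]
S_n, s_n = darboux_sums(f, pts)
U, L, a, b = 1.0, 0.0, 0.0, 1.0            # bounds of f on [0, 1]
assert U * (b - a) >= S_n >= s_n >= L * (b - a)
```

For this subdivision \(S_n = 0.385\) and \(s_n = 0.285\), bracketing the true area \(\frac{1}{3}\).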

For a given \(n\), \(S_n\) and \(s_n\) are bounded functions of \(x_1, x_2 , \dots , x_{n-1}\). Let their lower and upper bounds[4] respectively be \(\underline{S}_n\), \(\overline{s}_n\), so that \(\underline{S}_n\), \(\overline{s}_n\) depend only on \(n\) and on the form of \(f(x)\), and not on the particular way of dividing the interval into \(n\) parts.

[4]The bounds of a function of \(n\) variables are defined in just the same manner as the bounds of a function of a single variable (§3.62).  ↩

Let the lower and upper bounds of these functions of \(n\) be \(S\), \(s\). Then \[\underline{S}_n \geq S, \quad \overline{s}_n \leq s.\] We proceed to shew that \(s\) is at most equal to \(S\); i.e. \(S \geq s\).

Let the intervals \([a, x_1], [x_1, x_2], \dots \) be divided into smaller intervals by new points of subdivision, and let \[a, y_1, y_2, \dots , y_{k-1}, y_k (= x_1), y_{k+1}, \dots , y_{l-1}, y_l (= x_2), y_{l+1}, \dots , y_{m-1}, b \] be the end points of the smaller intervals; let \(U_r'\), \(L_r '\) be the bounds of \(f(x)\) in the interval \([y_{r-1}, y_r ]\).

Let \[T_m = \sum_{r=1}^m (y_r - y_{r-1}) U_r ', \quad t_m = \sum_{r=1}^m (y_r - y_{r-1}) L_r '.\]

Since \(U_1 ', U_2 ', \dots , U_k '\) do not exceed \(U_1\), it follows without difficulty that \(S_n \geq T_m \geq t_m \geq s_n\).
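The effect of refinement may be checked numerically. In the editorial sketch below, a subdivision of \([0, 1]\) by points \(x_r\) is refined by new points \(y_r\) (every \(x_r\) retained), and the four sums for \(f(x) = x^2\) are seen to satisfy \(S_n \geq T_m \geq t_m \geq s_n\); the per-interval bounds are again estimated by dense sampling, exact for this monotone function.

```python
def sums(f, points, samples=200):
    """Upper and lower Darboux sums over the subdivision `points`;
    per-interval bounds are estimated by dense sampling."""
    S, s = 0.0, 0.0
    for left, right in zip(points, points[1:]):
        vals = [f(left + (right - left) * k / samples) for k in range(samples + 1)]
        S += (right - left) * max(vals)
        s += (right - left) * min(vals)
    return S, s

f = lambda x: x * x
coarse = [0.0, 0.5, 1.0]                  # the points x_r
fine = [0.0, 0.25, 0.5, 0.75, 1.0]        # the points y_r: every x_r retained
S_n, s_n = sums(f, coarse)
T_m, t_m = sums(f, fine)
assert S_n >= T_m >= t_m >= s_n           # refinement tightens both sums
```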

Now consider the subdivision of \([a, b]\) into intervals by the points \(x_1, x_2, \dots , x_{n-1}\) and also the subdivision by a different set of points \(x_1', x_2', \dots , x_{n'-1}'\). Let \(S_{n'} ' ,\, s_{n'} '\) be the sums for the second kind of subdivision which correspond to the sums \(S_n ,\, s_n\) for the first kind of subdivision. Take all the points \(x_1, x_2, \dots , x_{n-1};\;x_1', x_2', \dots , x_{n'-1}'\) as the points \(y_1 , y_2 , \dots , y_m\).

Then \[\begin{align*} &S_n \geq T_m \geq t_m \geq s_n ,&\\ \\ \text{ and }\qquad&S_{n'} ' \geq T_m \geq t_m \geq s_{n'} ' . \end{align*}\]

Hence every expression of the type \(S_n\) exceeds (or at least equals) every expression of the type \(s'_{n'}\); and therefore \(S\) cannot be less than \(s\).

[For if \(S < s\) and \(s - S = 2\eta\) we could find an \(S_n\) and an \(s'_{n'}\) such that \(S_n - S < \eta\), \(s - s'_{n'} < \eta\) and so \(s'_{n'} > S_n\), which is impossible.]

The bound \(S\) is called the upper integral of \(f(x)\), and is written \( \displaystyle \overset{\;_{\Large—}}{\int_{a}^{b}}\! f(x) \,dx\); \(s\) is called the lower integral, and written \( \displaystyle \underset{\!\!\!\!\!^{\Large—}}{\int_{a}^{b}}\! f(x) \,dx\).

If \(S = s\), their common value is called the integral of \(f(x)\) taken between the limits[5] of integration \(a\) and \(b\).

[5]‘Extreme values’ would be a more appropriate term but ‘limits’ has the sanction of custom. ‘Termini’ has been suggested by Lamb, Infinitesimal Calculus (1897), p. 209. ↩

The integral is written \(\displaystyle \int_a^b \! f(x)\, dx\).

We define \(\displaystyle \int_b^a \! f(x) \,dx\), when \(a < b\), to mean \(\displaystyle -\!\int_a^b \! f(x) \,dx\).

Example 1. Shew that \(\displaystyle\int_a^b \{f(x) + \phi(x)\}\,dx=\int_a^b f(x)\, dx + \int_a^b \phi(x)\,dx\).

Example 2. By means of example 1, define the integral of a continuous complex function of a real variable.

4.12 Riemann’s Condition of Integrability.[6]

[6]Riemann (Ges. Math. Werke, p. 239) bases his definition of an integral on the limit of the sum occurring in §4.13; but it is then difficult to prove the uniqueness of the limit. A more general definition of integration (which is of very great importance in the modern theory of Functions of Real Variables) has been given by Lebesgue, Annali di Mat. (3) vii. (1902), pp. 231–359. See also his Leçons sur l’intégration (Paris, 1904). ↩

A function is said to be ‘integrable in the sense of Riemann’ if (with the notation of §4.11) \(S_n\) and \(s_n\) have a common limit (called the Riemann integral of the function) when the number of intervals \([x_{r-1}, x_r]\) tends to infinity in such a way that the length of the longest of them tends to zero.

The necessary and sufficient condition that a bounded function should be integrable is that \(S_n - s_n\) should tend to zero when the number of intervals \([x_{r-1}, x_r]\) tends to infinity in such a way that the length of the longest tends to zero.

The condition is obviously necessary, for if \(S_n\) and \(s_n\) have a common limit, \(S_n - s_n \rightarrow 0\) as \(n \rightarrow \infty\). And it is sufficient; for, since \(S_n \geq S \geq s \geq s_n\), it follows that if \(\lim \,(S_n - s_n) = 0\), then \[\lim S_n = \lim s_n = S = s.\]

Note. A continuous function \(f(x)\) is ‘integrable’. For, given \(\epsilon\), we can find a \(\delta\) such that \(\left|\,f(x') - f(x'') \,\right| < \left. \epsilon \middle/ (b-a) \right.\) whenever \(\left|\,x'-x''\,\right| < \delta\). Take all the intervals \([x_{s-1}, x_s]\) less than \(\delta\), and then \(U_s - L_s < \left. \epsilon\middle/(b - a)\right.\) and so \(S_n - s_n < \epsilon\); therefore \(S_n - s_n \rightarrow 0\) under the circumstances specified in the condition of integrability.
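The Note may be illustrated numerically: for a continuous function the difference \(S_n - s_n\) shrinks as the subdivision is made finer. In this editorial sketch (bounds once more estimated by sampling), doubling the number of equal sub-intervals for \(\sin x\) on \([0, \pi]\) roughly halves the gap.

```python
import math

def gap(f, a, b, n, samples=100):
    """S_n - s_n for n equal sub-intervals of [a, b], with per-interval
    bounds estimated by dense sampling."""
    total = 0.0
    for r in range(n):
        left = a + (b - a) * r / n
        right = a + (b - a) * (r + 1) / n
        vals = [f(left + (right - left) * k / samples) for k in range(samples + 1)]
        total += (right - left) * (max(vals) - min(vals))
    return total

gaps = [gap(math.sin, 0.0, math.pi, n) for n in (4, 8, 16, 32)]
assert all(g1 > g2 for g1, g2 in zip(gaps, gaps[1:]))   # gap decreases steadily
```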

Corollary. If \(S_n\) and \(s_n\) have the same limit \(S\) for one mode of subdivision of \([a, b]\) into intervals of the specified kind, the limits of \(S_n\) and of \(s_n\) for any other such mode of subdivision are both \(S\).

Example 1. The product of two integrable functions is an integrable function.

Example 2. A function which is continuous except at a finite number of ordinary discontinuities is integrable.

[If \(f(x)\) have an ordinary discontinuity at \(c\), enclose \(c\) in an interval of length \(\delta_1\); given \(\epsilon\), we can find \(\delta\) so that \(\left|\, f(x') -f (x) \,\right| < \epsilon\) when \(\left|\, x' - x \,\right| < \delta\) and \(x\), \(x'\) are not in this interval.

Then \(S_n-s_n \leq \epsilon (b-a-\delta_1)+k\delta_1\), where \(k\) is the greatest value of \(\left|\,f(x')-f(x)\,\right|\), when \(x\), \(x'\) lie in the interval.

When \(\delta_1 \rightarrow 0, \; k \rightarrow \left | \,f(c + 0)- f(c-0) \,\right|\), and hence \(\lim\limits_{n \rightarrow \infty} (S_n -s_n ) = 0\).]

Example 3. A function with limited total fluctuation and a finite number of ordinary discontinuities[7] is integrable. (See §3.64 example 2.)

[7]Editor’s Note: There is no need to assume a finite number of discontinuities. Any function with limited total fluctuation is Riemann integrable. ↩

4.13 A general theorem on integration.

Let \(f(x)\) be integrable, and let \(\epsilon\) be any positive number. Then it is possible to choose \(\delta\) so that \[\left|\,\sum_{p=1}^n (x_p - x_{p-1}) f(x'_{p-1}) - \int_a^b \! f(x)\, dx \,\right| < \epsilon,\] provided that \[x_p - x_{p-1} \leq \delta, \quad x_{p-1} \leq x'_{p-1} \leq x_p.\]

To prove the theorem we observe that, given \(\epsilon\), we can choose the length of the longest interval, \(\delta\), so small that \(S_n - s_n < \epsilon\).

Also \[S_n \geq \sum_{p=1}^n (x_p - x_{p-1}) f(x'_{p-1}) \geq s_n,\] \[S_n \geq \int_a^b \! f(x) \,dx \geq s_n.\] Therefore \[ \begin{align*} \left|\,\sum_{p=1}^n (x_p - x_{p-1}) f(x'_{p-1}) - \int_a^b \! f(x)\, dx \,\right| &\leq S_n-s_n \\ \\ \\ & < \epsilon. \end{align*} \]
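The theorem asserts that, once the mesh is fine enough, every Riemann sum lies within \(\epsilon\) of the integral, whatever points \(x'_{p-1}\) be chosen in the sub-intervals. An editorial numerical sketch, with the points chosen at random:

```python
import math, random

random.seed(1)
f = math.exp
a, b, n = 0.0, 1.0, 2000
exact = math.e - 1.0                     # ∫_0^1 e^x dx in closed form
pts = [a + (b - a) * p / n for p in range(n + 1)]
# Riemann sum with an arbitrary (here random) tag x'_{p-1} in each sub-interval.
riemann = sum(
    (pts[p] - pts[p - 1]) * f(random.uniform(pts[p - 1], pts[p]))
    for p in range(1, n + 1)
)
assert abs(riemann - exact) < 1e-2
```

The discrepancy is at most \(S_n - s_n\), which for this mesh is below \(10^{-3}\), so the assertion holds for any choice of tags.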

As an example[8] of the evaluation of a definite integral directly from the theorem of this section consider \(\displaystyle \int_0^X \! \frac{dx}{(1-x^2)^{\frac{1}{2}}}\), where \(X < 1\).
[8]Netto, Zeitschrift für Math. und Phys. xii.(1895), p. 184. ↩

Take \(\delta= \displaystyle \frac{1}{p} \arcsin X\) and let \(x_s = \sin s\delta\), \((0 < s\delta < \frac{1}{2}\pi)\), so that \[\begin{align*} x_{s+1}-x_s &= 2 \sin \textstyle\frac{1}{2} \delta \, \cos(s+\textstyle\frac{1}{2}) \delta < \delta ;\\ \text{ also let } \quad\qquad\qquad x'_s &= \sin (s+\textstyle\frac{1}{2})\delta. \end{align*}\]

Then \[ \begin{align*} \sum_{s=1}^p \frac{x_s-x_{s-1}}{(1-x'^{2}_{s-1})^{\frac{1}{2}}} &= \sum_{s=1}^p \frac{\sin s\delta - \sin (s-1)\delta}{\cos (s- \textstyle\frac{1}{2})\delta } \\ &=2p\sin\textstyle\frac{1}{2}\delta\\ &=\arcsin X \left\{ \sin\textstyle\frac{1}{2}\delta \big/ (\textstyle\frac{1}{2} \delta)\right\}. \end{align*} \]

By taking \(p\) sufficiently large we can make \[\left|\, \int_0^X \! \frac{dx}{(1-x^2)^{\frac{1}{2}}} - \sum_{s=1}^p \frac{x_s-x_{s-1}}{(1-x'^{2}_{s-1})^{\frac{1}{2}}} \, \right|\] arbitrarily small.

We can also make \[\arcsin X \, \left\{ \frac{\sin\frac{1}{2}\delta}{\frac{1}{2} \delta}-1 \right\}\] arbitrarily small.

That is, given an arbitrary positive number \(\epsilon\), we can make \[\left|\, \int_0^X \! \frac{dx}{(1-x^2)^{\frac{1}{2}}} - \arcsin X \,\right | < \epsilon\] by taking \(p\) sufficiently large. But the expression now under consideration does not depend on \(p\); and therefore it must be zero; for if not we could take \(\epsilon\) to be less than it, and we should have a contradiction.

That is to say \[\int_0^X \! \frac{dx}{(1-x^2)^{\frac{1}{2}}} = \arcsin X.\]
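Netto's subdivision is easily reproduced numerically. In this editorial sketch the points \(x_s = \sin s\delta\) and intermediate points \(x_s' = \sin (s+\frac12)\delta\) make every term of the sum collapse to \(2\sin\frac12\delta\), and the total approaches \(\arcsin X\).

```python
import math

X, p = 0.8, 1000
delta = math.asin(X) / p
xs = [math.sin(s * delta) for s in range(p + 1)]          # x_s = sin sδ, so x_p = X
tags = [math.sin((s + 0.5) * delta) for s in range(p)]    # x'_s = sin (s+1/2)δ
total = sum((xs[s + 1] - xs[s]) / math.sqrt(1.0 - tags[s] ** 2)
            for s in range(p))
# Every term equals 2 sin(δ/2), so the sum telescopes exactly:
assert abs(total - 2 * p * math.sin(delta / 2)) < 1e-9
# and 2p sin(δ/2) = arcsin X · {sin(δ/2)/(δ/2)} → arcsin X as p → ∞:
assert abs(total - math.asin(X)) < 1e-6
```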

Example 1. Shew that \[\lim_{n \rightarrow \infty} \frac{1+ \cos\frac{x}{n} +\cos\frac{2x}{n}+ \cdots + \cos\frac{(n-1)x}{n} } {n} = \frac{\sin x}{x}.\]

Example 2. If \(f(x)\) has ordinary discontinuities at the points \(a_1, a_2 , \dots , a_{\kappa}\), then \[\int_a^b \! f(x) \, dx=\lim \left\{\int_a^{a_1-\delta_1}\! +\int_{a_1+\epsilon_1}^{a_2-\delta_2}\! + \cdots + \int_{a_{\kappa}+\epsilon_{\kappa}}^b \! f(x) \, dx\right\}\] where the limit is taken by making \(\delta_1, \delta_2, \dots ,\delta_{\kappa}\), \(\epsilon_1, \epsilon_2, \dots , \epsilon_{\kappa}\) tend to \(+0\) independently.

Example 3. If \(f(x)\) is integrable when \(a_1 \leq x \leq b_1\) and if, when \(a_1 \leq a < b \leq b_1 \), we write \[\int_a^b \! f(x)\,dx =\phi(a,b),\] and if \(f(b + 0)\) exists, then \[\lim_{\delta \,\rightarrow \,+0}\frac{\phi(a,b+\delta)-\phi(a,b)}{\delta} =f(b+0).\]

Deduce that, if \(f(x)\) is continuous at \(a\) and \(b\), \[\frac{d}{da} \int_a^b \! f(x)\,dx =-f(a), \quad \frac{d}{db} \int_a^b \! f(x)\,dx =f(b).\]

Example 4. Prove by differentiation[9] that, if \(\phi(x)\) is a continuous function of \(x\) and \(\displaystyle \frac{dx}{dt}\) a continuous function of \(t\), then \[\int_{x_0}^{x_1} \! \phi(x) \, dx =\int_{t_0}^{t_1} \! \phi(x) \frac{dx}{dt} dt.\]
[9]Editor’s Note: The wording of this example and the next suggests that Whittaker and Watson expect the reader to use the first fundamental theorem of calculus. However, it is possibly more enlightening to prove them using the theorem at the beginning of the section together with the modified Heine-Borel theorem. ↩

Example 5. If \(f' (x)\) and \(\phi'(x)\) are continuous when \(a \leq x \leq b\), shew from example 3 that \[\int_a^b \! f'(x) \phi(x) \, dx + \int_a^b \!\phi'(x)f(x)\, dx = f(b)\phi(b)-f(a)\phi(a).\]

Example 6. If \(f(x)\) is integrable in the range \([a, c]\) and \(a \leq b \leq c\), shew that \(\displaystyle\int_a^b \! f(x)\,dx\) is a continuous function of \(b\).

4.14 Mean Value Theorems.

The two following general theorems are frequently useful.

(I) Let \(U\) and \(L\) be the upper and lower bounds of the integrable function \(f(x)\) in the range \([a, b]\).

Then from the definition of an integral it is obvious that \[\int_a^b \!\left\{U-f(x)\right\} dx, \quad \int_a^b \!\left\{f(x)-L\right\} dx \] are not negative; and so \[U(b-a) \geq \int_a^b \! f(x)\,dx \geq L(b-a).\]

This is known as the First Mean Value Theorem.

If \(f(x)\) is continuous we can find a number \(\xi\) such that \(a \leq \xi \leq b\) and such that \(f(\xi)\) has any given value lying between \(U\) and \(L\) (§3.63). Therefore we can find \(\xi\) such that \[\int_a^b \! f(x) \, dx = (b-a)f(\xi).\]
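For a continuous integrand the point \(\xi\) can be located numerically. The editorial sketch below does so for \(f(x) = x^2\) on \([0, 1]\): since \(\int_0^1 x^2\,dx = \frac13\) and \(f\) is increasing, bisection finds the \(\xi\) with \(f(\xi) = \frac13\), namely \(\xi = 1/\sqrt{3}\).

```python
import math

f = lambda x: x * x
a, b = 0.0, 1.0
integral = 1.0 / 3.0                 # ∫_0^1 x^2 dx, known in closed form
mean = integral / (b - a)

# f is increasing on [0, 1], so bisection locates ξ with f(ξ) = mean.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < mean:
        lo = mid
    else:
        hi = mid
xi = (lo + hi) / 2
assert abs(f(xi) * (b - a) - integral) < 1e-12   # ∫ f = (b - a) f(ξ)
assert abs(xi - 1 / math.sqrt(3)) < 1e-12
```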

If \(F(x)\) has a continuous differential coefficient \(F' (x)\) in the range \([a, b]\), we have, on writing \(F' (x)\) for \(f(x)\), \[F(b)-F(a) = (b-a)F'(\xi)\] for some value of \(\xi\) such that \(a \leq \xi \leq b\).[10]

[10]Editor’s Note: Alternatively, let \(F(x)= \int_c^x f(\zeta)\, d\zeta+K\) for \(a \leq c \leq b\). Then \(\int_a^b f(x) \, dx= F(b)-F(a)\) and, from §4.13 example 3, \(F'(x) = f(x)\). We would still need the first fundamental theorem of calculus to show that any continuously differentiable function \(F(x)\) can be written as \(\int_c^x f(\zeta)\, d\zeta + K\) for some \(f\). ↩

Example. If \(f(x)\) is continuous and \(\phi(x) \geq 0\), shew that \(\xi\) can be found such that \[\int_a^b \! f(x)\phi(x) \, dx = f(\xi)\!\int_a^b \!\phi(x)\,dx.\]

(II) Let \(f(x)\) and \(\phi(x)\) be integrable in the range \([a, b]\) and let \(\phi(x)\) be a positive decreasing function of \(x\). Then Bonnet’s[11] form of the Second Mean Value Theorem is that a number \(\xi\) exists such that \(a \leq \xi \leq b\), and \[\int_a^b \! f(x)\phi(x) \, dx = \phi(a) \!\int_a^{\xi} \! f(x)\,dx.\]

[11]Journal de Math. xiv. (1849), p. 249. The proof given is a modified form of an investigation due to Hölder, Gött. Nach. (1889), pp. 38–47. ↩

For, with the notation of §§4.1–4.13, consider the sum \[S= \sum_{s=1}^p (x_s-x_{s-1})\,f(x_{s-1})\,\phi(x_{s-1}) .\] Writing \((x_s -x_{s-1} )\,f(x_{s -1} ) = a_{s-1}\), \(\phi(x_{s-1}) = \phi_{s-1}\), \(a_0 + a_1 + \dots + a_s = b_s\), we have \[S= \sum_{s=1}^{p-1} b_{s-1} (\phi_{s-1}-\phi_s) + b_{p-1}\phi_{p-1}.\]
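The rearrangement just performed is Abel's partial summation, and may be verified directly. An editorial sketch with arbitrary numbers, where \(b_s = a_0 + a_1 + \cdots + a_s\):

```python
a_vals = [0.3, -1.2, 2.0, 0.7, -0.4]        # the a_{s-1}
phi = [5.0, 4.0, 2.5, 1.0, 0.2]             # a decreasing sequence φ_{s-1}
p = len(a_vals)
b = [sum(a_vals[: s + 1]) for s in range(p)]  # partial sums b_s = a_0 + ... + a_s

direct = sum(a_vals[s] * phi[s] for s in range(p))            # S as first written
abel = (sum(b[s - 1] * (phi[s - 1] - phi[s]) for s in range(1, p))
        + b[p - 1] * phi[p - 1])                               # rearranged form
assert abs(direct - abel) < 1e-12
```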

Each term in the summation is increased by writing \(\overline b\) for \(b_{s-1}\) and decreased by writing \(\underline b\) for \(b_{s-1}\), if \(\overline b\), \(\underline b\) be the greatest and least of \(b_0\), \(b_1, \dots , b_{p-1}\); and so \(\underline{b}\phi_0 \leq S \leq \overline{b}\phi_0\). Therefore \(S\) lies between the greatest and least of the sums \(\phi(x_0)\sum\limits_{s=1}^m (x_s-x_{s-1})\,f(x_{s-1})\) where \(m=1, 2, 3, \dots ,p\). But, given \(\epsilon\), we can find \(\delta\) such that, when \(x_{s} - x_{s-1} < \delta\), \[\left |\, \sum _{s=1}^p (x_s-x_{s-1}) f(x_{s-1})\phi(x_{s-1}) - \int_{x_0}^{x_p} \! f(x) \phi(x)\, dx \, \right| < \epsilon, \] \[\left|\, \phi(x_0) \sum_{s=1}^m(x_s-x_{s-1})f(x_{s-1}) - \phi(x_0) \!\int_{x_0}^{x_m} \! f(x) \, dx \, \right| < \epsilon,\] and so, writing \(a\), \(b\) for \(x_0\), \(x_p\), we find that \(\displaystyle \int_a^b \! f(x) \phi(x)\, dx\) lies between the upper and lower bounds of[12] \(\displaystyle\phi(a)\! \int_{a}^{\xi_1}\! f(x) \, dx\pm 2\epsilon\), where \(\xi_1\) may take all values between \(a\) and \(b\). Let \(U\) and \(L\) be the upper and lower bounds of \(\displaystyle\phi(a) \!\int_{a}^{\xi_1}\! f(x) \, dx\).

[12]By §4.13 example 6, since \(f(x)\) is bounded, \(\int_a^{\xi_1}f(x)\,dx\) is a continuous function of \(\xi_1\). ↩

Then \(U+2\epsilon \geq \displaystyle\int_{a}^{b}\! f(x)\phi(x) \, dx \geq L- 2\epsilon\) for all positive values of \(\epsilon\); therefore \[U\geq \int_{a}^{b}\! f(x)\phi(x) \, dx \geq L .\]

Since \(\displaystyle\phi(a)\! \int_{a}^{ \xi_1}\! f(x) \, dx\) qua[13] function of \(\xi_1\) takes all values between its upper and lower bounds, there is some value \(\xi\), say, of \(\xi_1\) for which it is equal to \(\displaystyle\int_{a}^{b}\! f(x)\phi(x) \, dx\). This proves the Second Mean Value Theorem.

[13]Editor’s Note: In this usage, the word qua means treated as a…  ↩
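Bonnet's theorem admits a direct numerical check. In this editorial sketch \(f(x) = \cos x\) and \(\phi(x) = e^{-x}\) (positive and decreasing) on \([0, 2]\); both integrals are evaluated in closed form, and since \(\phi(a)\int_0^{\xi}\cos x\,dx = \sin\xi\), the point \(\xi\) may be written down explicitly.

```python
import math

a, b = 0.0, 2.0
# ∫_0^2 e^{-x} cos x dx = [e^{-x}(sin x - cos x)/2]_0^2, in closed form:
target = (math.exp(-2.0) * (math.sin(2.0) - math.cos(2.0)) + 1.0) / 2.0
phi_a = 1.0                                  # φ(a) = e^0

# Bonnet: target = φ(a) ∫_0^ξ cos x dx = sin ξ, whence ξ = arcsin(target).
xi = math.asin(target / phi_a)
assert a <= xi <= b
assert abs(phi_a * math.sin(xi) - target) < 1e-12
```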

Example. By writing \(\left|\, \phi(x)-\phi(b)\, \right |\) in place of \(\phi(x)\) in Bonnet’s form of the mean value theorem, shew that if \(\phi(x)\) is a monotonic function, then a number \(\xi\) exists such that \(a \leq \xi \leq b\) and \[\int_a^b \! f(x)\phi(x)\, dx = \phi(a) \!\int_a^{\xi}\! f(x) \, dx + \phi(b) \!\int_{\xi}^{b}\! f(x) \, dx. \] (Du Bois Reymond.)

4.2 Differentiation of integrals containing a parameter.

The equation[14] \(\displaystyle \frac{d}{d\boldsymbol{\alpha}}\int_a^b \! f(x, \boldsymbol{\alpha}) \, dx = \int_a^b \!\frac{\partial f}{\partial \boldsymbol{\alpha}} \, dx\) is true if \(f(x, \boldsymbol{\alpha})\) possesses a Riemann integral with respect to \(x\) and \(f_{\boldsymbol{\alpha}}\, ( = \frac{\partial f}{\partial \boldsymbol{\alpha}})\) is a continuous function of both[15] of the variables \(x\) and \(\boldsymbol{\alpha}\).

[14]This formula was given by Leibniz, without specifying the restrictions laid on \(f(x, \boldsymbol{\alpha})\). ↩
[15]\(\phi(x, y)\) is defined to be a continuous function of both variables if, given \(\epsilon\), we can find \(\delta\) such that \(\left| \,\phi(x', y') - \phi(x, y)\, \right| < \epsilon\) whenever \(\{(x' - x)^2 + (y' - y)^2 \}^{\frac{1}{2}} < \delta\). It can be shewn by §3.6 that if \(\phi(x, y)\) is a continuous function of both variables at all points of a closed region in a Cartesian diagram, it is uniformly continuous throughout the region (the proof is almost identical with that of §3.61). It should be noticed that, if \(\phi(x, y)\) is a continuous function of each variable, it is not necessarily a continuous function of both; as an example take \[\phi(x,y)=\frac{(x+y)^2}{x^2+y^2}, \quad \phi(0,0) = 1;\] this is a continuous function of \(x\) and of \(y\) at \((0, 0)\), but not of both \(x\) and \(y\). ↩

For \[\frac{d}{d\boldsymbol{\alpha}}\int_a^b \! f(x,\boldsymbol{\alpha})\,dx = \lim_{h \rightarrow 0} \int_a^b \frac{f(x,\boldsymbol{\alpha}+h)-f(x, \boldsymbol{\alpha})}{h} dx\] if this limit exists. But, by the first mean value theorem, since \(f_{\boldsymbol{\alpha}}\) is a continuous function of \(\boldsymbol{\alpha}\), the second integrand is \(f_{\boldsymbol{\alpha}} (x, \boldsymbol{\alpha} + \theta h)\), where \(0 \leq \theta \leq 1\).

[16]It is obvious that it would have been sufficient to assume that \(f_{\boldsymbol{\alpha}}\) had a Riemann integral and was a continuous function of \(\boldsymbol{\alpha}\) (the continuity being uniform with respect to \(x\)), instead of assuming that \(f_{\boldsymbol{\alpha}}\) was a continuous function of both variables. This is actually done by Hobson, Functions of a Real Variable, p. 599. ↩

But, for any given \(\epsilon\), a number \(\delta\) independent of \(x\) exists (since the continuity of \(f_{\boldsymbol{\alpha}}\) is uniform[16] with respect to the variable \(x\)) such that \[\left|\, f_{\boldsymbol{\alpha}}(x, \boldsymbol{\alpha}') - f_{\boldsymbol{\alpha}}(x, \boldsymbol{\alpha}) \,\right| < \left. \epsilon \middle/ (b-a)\right. ,\] whenever \(\left|\, \boldsymbol{\alpha}' - \boldsymbol{\alpha} \,\right| < \delta\).

Taking \(\left|\, h \,\right| < \delta\) we see that \(\left| \,\theta h \,\right| < \delta\), and so whenever \(\left|\, h \,\right| < \delta\), \[ \begin{align*} \left|\,\int_a^b \frac{f(x,\boldsymbol{\alpha}+h)-f(x, \boldsymbol{\alpha})}{h} dx - \int_a^b \! f_{\boldsymbol{\alpha}}(x, \boldsymbol{\alpha})\,dx \,\right| &\leq \int_a^b \!\left|\, f_{\boldsymbol{\alpha}}(x, \boldsymbol{\alpha}+\theta h) - f_{\boldsymbol{\alpha}}(x, \boldsymbol{\alpha})\,\right|\, dx \\ & < \epsilon. \end{align*} \]

Therefore by the definition of a limit of a function (§3.2), \[\lim_{h \rightarrow 0} \int_a^b \frac{f(x,\boldsymbol{\alpha}+h)-f(x, \boldsymbol{\alpha})}{h} dx\] exists and is equal to \(\displaystyle \int_a^b \! f_{\boldsymbol{\alpha}} \,dx\).
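Differentiation under the integral sign may be checked numerically. In this editorial sketch \(f(x, \boldsymbol{\alpha}) = \sin \boldsymbol{\alpha}x\) on \([0, 1]\): a central difference in \(\boldsymbol{\alpha}\) of \(\int_0^1 \sin \boldsymbol{\alpha}x \, dx\) is compared with \(\int_0^1 x \cos \boldsymbol{\alpha}x \, dx\), both taken in closed form.

```python
import math

def I(alpha):
    """∫_0^1 sin(αx) dx = (1 - cos α)/α, in closed form."""
    return (1.0 - math.cos(alpha)) / alpha

def dI(alpha):
    """∫_0^1 x cos(αx) dx = sin α/α - (1 - cos α)/α², in closed form."""
    return math.sin(alpha) / alpha - (1.0 - math.cos(alpha)) / alpha ** 2

alpha, h = 1.3, 1e-6
central = (I(alpha + h) - I(alpha - h)) / (2 * h)   # numerical d/dα ∫ f dx
assert abs(central - dI(alpha)) < 1e-8              # equals ∫ f_α dx
```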

Example 1. If \(a, b\) be not constants but functions of \(\boldsymbol{\alpha}\) with continuous differential coefficients, shew that \[\frac{d}{d\boldsymbol{\alpha}}\int_a^b \! f(x, \boldsymbol{\alpha})\, dx = f(b, \boldsymbol{\alpha})\frac{db}{d\boldsymbol{\alpha}}-f(a, \boldsymbol{\alpha})\frac{da}{d\boldsymbol{\alpha}}+\int_a^b \! \frac{\partial f}{\partial \boldsymbol{\alpha}} dx.\]

Example 2. If \(f(x, \boldsymbol{\alpha})\) is a continuous function of both variables, \(\displaystyle \int_a^b \! f(x, \boldsymbol{\alpha})\,dx\) is a continuous function of \(\boldsymbol{\alpha}\).