8.4 Methods of ‘summing’ series.
We have seen that it is possible to obtain a development of the form \[ f(x) = \sum_{m=0}^{n} A_mx^{-m} + R_n(x), \] where \(R_n (x) \rightarrow \infty\) as \(n \rightarrow\infty\), and the series \(\sum\limits_{m=0}^{\infty} A_m x^{-m}\) does not converge.
We now consider what meaning, if any, can be attached to the ‘sum’ of a non-convergent series. That is to say, given the numbers \(a_0\), \(a_1\), \(a_2, \dots\), we wish to formulate definite rules by which we can obtain from them a number \(S\) such that \(S = \sum_{n=0}^{\infty} a_n\) if \(\sum_{n=0}^{\infty} a_n\) converges, and such that \(S\) exists when this series does not converge.
8.41 Borel’s method of summation.[1]
We have seen (§7.81) that \[ \sum_{n=0}^{\infty} a_n z^n = \! \int_0^{\infty} \! e^{-t} \phi(tz) \, d t , \] where \(\phi(tz) = \sum\limits_{n=0}^{\infty} \dfrac{a_n t^nz^n}{n!} \), the equation certainly being true inside the circle of convergence of \(\sum\limits_{n=0}^{\infty} a_n z^n\) . If the integral exists at points \(z\) outside this circle, we define the ‘Borel sum’ of \(\sum\limits_{n=0}^{\infty} a_n z^n\) to mean the integral.
Thus, whenever \(\mathfrak{Re}(z) < 1\), the ‘Borel sum’ of the series \(\sum\limits_{n=0}^{\infty} z^n\) is \[ \int_0^\infty \! e^{-t}e^{tz} \, d t = (1-z)^{-1}. \] If the ‘Borel sum’ exists we say that the series is ‘summable \((B)\)’.
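This value of the Borel integral is easily verified numerically. The sketch below (in modern notation; the truncation point \(T\) and step count are illustrative choices) approximates \(\int_0^\infty e^{-t}e^{tz}\,dt\) by the trapezoidal rule at \(z=-2\), a point at which \(\sum z^n\) diverges, and recovers \((1-z)^{-1}=\frac{1}{3}\).

```python
import math

def borel_sum_geometric(z, T=60.0, steps=200_000):
    """Trapezoidal approximation to the Borel integral
    ∫_0^T e^{-t} e^{tz} dt  for the series Σ z^n, with real z < 1
    (so that the integrand e^{(z-1)t} decays)."""
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp((z - 1.0) * t)
    return total * h

# At z = -2 the series 1 + z + z^2 + ... diverges, yet the Borel
# integral exists and equals 1/(1 - z) = 1/3.
print(borel_sum_geometric(-2.0))
```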
8.42 Euler’s method of summation.[2]
A method, practically due to Euler, is suggested by the theorem of §3.71; the ‘sum’ of \(\sum\limits_{n=0}^{\infty} a_n\) may be defined as \(\lim\limits_{x \rightarrow 1-0}\;\! \sum\limits_{n=0}^{\infty} a_n x^n\), when this limit exists.
Thus the ‘sum’ of the series \(1 - 1 + 1 - 1 + \cdots\) would be \[ \lim_{x \rightarrow 1-0} (1 - x + x^2 - \cdots ) = \lim_{x \rightarrow 1-0} (1 + x)^{-1} = \frac{1}{2}. \]
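This limit may be watched numerically: for \(0<x<1\) the power series converges to \((1+x)^{-1}\), and its values approach \(\frac{1}{2}\) as \(x \rightarrow 1-0\). A minimal sketch (the tolerance is an illustrative choice):

```python
def abel_value(x, tol=1e-12):
    """Sum the convergent series 1 - x + x^2 - ...  (= 1/(1+x))
    for 0 < x < 1, stopping when the terms fall below tol."""
    s, term = 0.0, 1.0
    while abs(term) > tol:
        s += term
        term *= -x
    return s

# As x -> 1-0 the values tend to 1/2, the 'sum' of 1 - 1 + 1 - ...
for x in (0.9, 0.99, 0.999):
    print(x, abel_value(x))
```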
8.43 Cesàro’s method of summation.[3]
Let \(s_n = a_1 + a_2 + \cdots + a_n\); then if \(S =\lim\limits_{n \rightarrow \infty} \dfrac{1}{n}(s_1 + s_2 + \cdots + s_n )\) exists, we say that \(\sum_{n=1}^{\infty} a_n\) is ‘summable \((C\:\! 1)\)’, and that its sum \((C\:\! 1)\) is \(S\). It is necessary to establish the ‘condition of consistency’,[4] namely that \(S = \sum_{n=1}^{\infty} a_n\) when this series is convergent.
To obtain the required result, let \(\sum\limits_{m=1}^{\infty} a_m = s\), \(\sum\limits_{m=1}^{n} s_m = nS_n\); then we have to prove that \(S_n \rightarrow s\).
Given \(\epsilon\), we can choose \(n\) such that \(\left| \, \sum\limits_{m=n+1}^{n+p} a_m \, \right| < \epsilon \) for all values of \(p\), and so that \(\left| \, s - s_n \, \right| < \epsilon\).
Then, if \(\nu > n\), we have \[ \begin{align*} S_{\nu}= a_1 &+ a_2 \left(1 - \frac{1}{\nu} \right) + \cdots + a_n \left(1 - \frac{n-1}{\nu} \right) \\ &+ a_{n+1} \left(1 - \frac{n}{\nu} \right)+ \cdots + a_{\nu} \left(1 - \frac{\nu-1}{\nu} \right) . \end{align*} \] Since \(1\), \(1 - \nu^{-1}\), \(1 - 2\nu^{-1}, \dots\) is a positive decreasing sequence, it follows from Abel’s inequality (§2.301) that \[ \left| \, \:\! a_{n+1} \left(1 - \frac{n}{\nu} \right) + a_{n+2} \left(1 - \frac{n+1}{\nu} \right) + \cdots + a_{\nu} \left(1 - \frac{\nu-1}{\nu} \right) \, \right| < \left( 1 - \frac{n}{\nu} \right) \epsilon . \] Therefore \[ \left| \, \:\! S_{\nu} - \left\{ a_1 + a_2 \left(1 - \frac{1}{\nu} \right) + \cdots + a_n \left(1 - \frac{n-1}{\nu} \right) \right\} \, \right | < \left( 1 - \frac{n}{\nu} \right) \epsilon . \]
Making \(\nu \rightarrow\infty \), we see that, if \(S\) be any one of the limit points (§2.21) of \(S_{\nu}\), then \[ \left| \, S- \sum_{m=1}^n a_m \, \right| \leq \epsilon . \] Therefore, since \(\left|\, s - s_n \,\right| \leq \epsilon \;\!\! \), we have \[ \left| \, S-s \, \right| \leq 2 \epsilon . \] This inequality being true for every positive value of \(\epsilon\) we infer, as in §2.21, that \(S = s\); that is to say \(S_{\nu}\) has the unique limit \(s\); this is the theorem which had to be proved.
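The arithmetic means occurring in this definition are easily computed; a minimal sketch (the cut-off \(N\) is an arbitrary illustrative choice) applied to \(1-1+1-\cdots\) gives the value \(\frac{1}{2}\) found by Euler’s method above.

```python
def cesaro_c1(a, N):
    """Return (s_1 + s_2 + ... + s_N)/N, where s_n = a(1) + ... + a(n)."""
    s = 0.0        # running partial sum s_n
    total = 0.0    # running sum of the partial sums
    for n in range(1, N + 1):
        s += a(n)
        total += s
    return total / N

# 1 - 1 + 1 - 1 + ... is summable (C 1) with sum 1/2: the partial
# sums alternate 1, 0, 1, 0, ..., whose means tend to 1/2.
print(cesaro_c1(lambda n: 1.0 if n % 2 == 1 else -1.0, 1000))
```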
Example 1. Frame a definition of ‘uniform summability \((C\:\! 1)\) of a series of variable terms.’
Example 2. If \(b_{n,\;\!\nu} \geq b_{n+1,\;\!\nu} \geq 0\) when \(n< \nu\) and if, when \(n\) is fixed, \(\lim\limits_{\nu \rightarrow \infty} b_{n,\;\!\nu} = 1\), and if \(\sum\limits_{m=1}^{\infty} a_m =s\), then \( \displaystyle \lim_{\nu \rightarrow \infty} \left\{ \sum_{n=1}^{\nu} a_n b_{n,\;\!\nu} \right\} =s.\)
8.431 Cesàro’s general method of summation.
A series \(\sum\limits_{n=1}^{\infty} a_n\) is said to be ‘summable \((C\:\! r)\)’ if \(\lim\limits_{\nu \rightarrow \infty} \sum\limits_{n=1}^{\nu} a_n b_{n,\;\!\nu}\) exists, where \[ \begin{align*} b_{0,\;\!\nu} &=1 \\ b_{n,\;\!\nu} &= \left\{ \left( 1 + \frac{r}{\nu+1-n} \right) \left( 1 + \frac{r}{\nu+2-n} \right) \cdots \left( 1 + \frac{r}{\nu-1} \right) \right\}^{-1} \! . \end{align*} \] It follows from §8.43 example 2 that the ‘condition of consistency’ is satisfied; in fact it can be proved[5] that if a series is summable \((C\:\! r')\) it is also summable \((C\:\! r)\) when \(r > r' \!\); the condition of consistency is the particular case of this result when \(r' = 0\).
8.44 The method of summation of Riesz.[6]
A more extended method of ‘summing’ a series than the preceding is by means of \[ \lim_{\nu \rightarrow \infty} \;\! \sum_{n=1}^{\nu} \left( 1- \frac{\lambda_n}{\lambda_\nu} \right)^{\:\!\! r} a_n , \] in which \(\lambda_n\) is any real function of \(n\) which tends to infinity with \(n\). A series for which this limit exists is said to be ‘summable \((R\:\! r)\) with sum-function \(\lambda_n\)’.
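With the simplest illustrative choices \(\lambda_n = n\) and \(r = 1\), these Riesz means applied to \(1-1+1-\cdots\) again tend to \(\frac{1}{2}\), agreeing with the methods above; a sketch:

```python
def riesz_mean(a, lam, nu, r=1):
    """Riesz mean: sum of (1 - lam(n)/lam(nu))^r * a(n) over n = 1..nu."""
    L = lam(nu)
    return sum((1.0 - lam(n) / L) ** r * a(n) for n in range(1, nu + 1))

# With lam(n) = n and r = 1, the means of 1 - 1 + 1 - ... tend to 1/2.
print(riesz_mean(lambda n: 1.0 if n % 2 == 1 else -1.0, lambda n: n, 1000))
```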
8.5 Hardy’s convergence theorem.[7]
Let \(\sum\limits_{n=1}^\infty a_n \) be a series which is summable \((C\:\! 1)\). Then if \[ a_n = O\left( 1 \middle/ n \right), \] the series \(\sum\limits_{n=1}^\infty a_n \) converges.
Let \(s_n = a_1 + a_2 + \cdots + a_n\); then, since \(\sum\limits_{n=1}^\infty a_n \) is summable \((C\:\! 1)\), we have \[ s_1 + s_2 + \cdots + s_n = n\left\{ s + o(1) \right\}, \] where \(s\) is the sum \((C\:\! 1)\) of \(\sum\limits_{n=1}^\infty a_n \).
Let \[ s_m - s = t_m, \quad (\, m = 1, \, 2, \cdots , \:\! n\,), \] and let \[ t_1 + t_2 + \cdots + t_n = \sigma_n . \]
With this notation, it is sufficient to shew that, if \(\left|\, a_n \,\right| < Kn^{-1}\), where \(K\) is independent of \(n\), and if \(\sigma_ n = n\cdot o (1)\), then \(t_n \rightarrow 0\) as \(n \rightarrow \infty\).
Suppose first that \(a_1\), \(a_2, \dots\) are real. Then, if \(t_n\) does not tend to zero, there is some positive number \(h\) such that there are an unlimited number of the numbers \(t_n\) which satisfy either (i) \(t_n > h\) or (ii) \(t_n < -h\). We shall shew that either of these hypotheses implies a contradiction. Take the former,[8] and choose \(n\) so that \(t_n > h\).
Then, when \(r = 0\), \(1\), \(2\), \(\dots\), \[ \left|\, a_{n+r} \,\right| < \left. \vphantom{z} K \middle/ n \right. . \] Now plot the points \(P_r\) whose coordinates are \((r, t_{\:\! n+r})\) in a Cartesian diagram. Since \(t_{\:\! n+r+1}-t_{\:\! n+r} = a_{n+r+1}\), the slope of the line \(P_r P_{r+1}\) is greater than \(-\left. K \middle/ n \right.\). Let \(\theta= \arctan (\left. K \middle/ n \right.)\).
Therefore the points \(P_0\), \(P_1\), \(P_2, \dots\) lie above the line \(y = h - x \tan\theta\). Let \(P_k\) be the last of the points \(P_0\), \(P_1, \dots\), which lie on the left of \(x=h \cot\theta\), so that \(k \leq h \cot\theta\).
Draw rectangles as shewn in the figure. The area of these rectangles exceeds the area of the triangle bounded by \(y = h - x \tan \theta\) and the axes; that is to say \[\begin{align*} \sigma_{n+k} - \sigma_{n-1} &= t_n + t_{n+1} + \cdots + t_{n+k} \\ &> \frac{1}{2} h^2 \cot \theta = \frac{1}{2} h^2 K^{-1} n. \end{align*}\]
But \[\begin{align*} \left|\, \sigma_{n+k} - \sigma_{n-1} \,\right| & \leq \left|\, \sigma_{n+k} \,\right| + \left|\, \sigma_{n-1} \,\right| \\ &= (n + k)\cdot o(1) + (n-1)\cdot o(1) \\ &= n\cdot o(1), \end{align*}\] since \(k \leq hnK^{-1}\), and \(h\), \(K\) are independent of \(n\).
Therefore, for a set of values of \(n\) tending to infinity, \[ \frac{1}{2} h^2 K^{-1} < n \cdot o(1), \] which is impossible since \(\frac{1}{2} h^2 K^{-1}\) is not \(o(1)\) as \(n \rightarrow \infty\).
This is the contradiction obtained on the hypothesis that \(\varlimsup t_n \geq h > 0\); therefore \(\varlimsup t_n \leq 0\). Similarly, by taking the corresponding case in which \(t_n \leq - h\), we arrive at the result \(\varliminf t_n \geq 0\). Therefore since \(\varlimsup t_n \geq \varliminf t_n \), we have \[ \varlimsup t_n = \varliminf t_n = 0, \] and so \[ t_n \rightarrow 0. \] That is to say \(s_n \rightarrow s\), and so \(\sum\limits_{n=1}^\infty a_n \) is convergent and its sum is \(s\).
If \(a_n\) be complex, we consider \(\mathfrak{Re} (a_n)\) and \(\mathfrak{Im} (a_n)\) separately, and find that \(\sum\limits_{n=1}^\infty \mathfrak{Re} (a_n) \) and \(\sum\limits_{n=1}^\infty \mathfrak{Im} (a_n) \) converge by the theorem just proved, and so \(\sum\limits_{n=1}^\infty a_n \) converges.
The reader will see in Chapter ix that this result is of great importance in the modern theory of Fourier series.
Corollary. If \(a_{n}(\xi)\) be a function of \(\xi\) such that \(\sum\limits_{n=1}^\infty a_{n}(\xi) \) is uniformly summable \((C\:\! 1)\) throughout a domain of values of \(\xi\), and if \(\left|\, a_{n}(\xi) \,\right| < Kn^{-1}\), where \(K\) is independent of \(\xi\), then \(\sum\limits_{n=1}^\infty a_{n}(\xi) \) converges uniformly throughout the domain.
For, retaining the notation of the preceding section, if \(t_{n}(\xi)\) does not tend to zero uniformly, we can find a positive number \(h\) independent of \(n\) and \(\xi\) such that an infinite sequence of values of \(n\) can be found for which \(t_{n}(\xi_{n}) > h\) or \(t_{n}(\xi_{n}) < -h\) for some point \(\xi_{n}\) of the domain;[9] the value of \(\xi_n\) depends on the value of \(n\) under consideration.
We then find, as in the original theorem, \[ \frac{1}{2} h^2 K^{-1} < n \cdot o(1), \] for a set of values of \(n\) tending to infinity. The contradiction implied in the inequality shews that \(h\) does not exist,[10] and so \(t_{n}(\xi) \rightarrow 0\) uniformly.
References.
- H. Poincaré, Acta Mathematica, viii. (1886), pp. 295–344.
- E. Borel, Leçons sur les Séries Divergentes (Paris, 1901).
- T. J. I’A. Bromwich, Theory of Infinite Series (1908), Ch. xi.
- E. W. Barnes, Phil. Trans. of the Royal Society, 206, a (1906), pp. 249–297.
- G. H. Hardy and J. E. Littlewood, Proc. London Math. Soc. (2), xi. (1913), pp. 1–16.[11]
- G. N. Watson, Phil. Trans. of the Royal Society, 211, a (1912), pp. 279–313.
- S. Chapman, Proc. London Math. Soc. (2), ix. (1911), pp. 369–409.[12]
Miscellaneous Examples.
Shew that \[ \int_0^{\infty} \! \frac{e^{-xt}}{1+t^2} \, d t \sim \frac{1}{x} - \frac{2!}{x^3} + \frac{4!}{x^5} - \cdots \] when \(x\) is real and positive.
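The characteristic behaviour of this asymptotic series can be checked numerically: for fixed \(x\) the partial sums at first approach the integral and then diverge, the error after \(n\) terms being bounded by the first omitted term \((2n)!/x^{2n+1}\). A sketch (the quadrature limits are illustrative choices):

```python
import math

def integral(x, T=6.0, steps=600_000):
    """Trapezoidal approximation to  ∫_0^T e^{-xt}/(1+t^2) dt."""
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-x * t) / (1.0 + t * t)
    return total * h

def partial_sum(x, n):
    """First n terms of the asymptotic series 1/x - 2!/x^3 + 4!/x^5 - ..."""
    return sum((-1) ** k * math.factorial(2 * k) / x ** (2 * k + 1)
               for k in range(n))

# For x = 10 the error after 4 terms is below the 5th term 8!/x^9,
# while taking many more terms makes the approximation worse.
x = 10.0
print(integral(x), partial_sum(x, 4))
```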
Discuss the representation of the function \[ f(x) = \int_{-\infty}^0 \phi(t) \:\! e^{tx} \, d t \] (where \(x\) is supposed real and positive, and \(\phi\) is a function subject to certain general conditions) by means of the series \[ f(x) = \frac{\phi(0)}{x} - \frac{\phi'(0)}{x^2} + \frac{\phi''(0)}{x^3} - \cdots . \] Shew that in certain cases (e.g. \(\phi(t) = e^{at}\)) the series is absolutely convergent, and represents \(f(x)\) for large positive values of \(x\) but that in certain other cases the series is the asymptotic expansion of \(f(x)\).
Shew that \[ e^z z^{-a} \! \int_{z}^{\infty} \! e^{-x} x^{a-1} d x \sim \frac{1}{z} + \frac{a-1}{z^2} + \frac{(a-1)(a-2)}{z^3} + \cdots \] for large positive values of \(z\). (Legendre, Exercices de Calc. Int. (1811), p. 340.)

Shew that if, when \(x > 0\), \[ f(x) = \! \int_{0}^{\infty} \! \left\{ \log u + \log \left( \frac{1}{1-e^{-u}} \right) \right\} e^{-xu} \frac{du}{u} , \] then \[ f(x) \sim \frac{1}{2x} - \frac{B_1}{2^2 x^2} + \frac{B_2}{4^2 x^4} - \frac{B_3}{6^2 x^6} + \cdots . \] Shew also that \(f(x)\) can be expanded into an absolutely convergent series of the form \[ f(x) = \sum_{k=1}^{\infty} \frac{c_k}{(x+1)(x+2) \cdots (x+k)} . \] (Schlömilch.)
Shew that if the series \(1+0+0-1+0+1+0+0-1+ \cdots\), in which two zeros precede each \(-1\) and one zero precedes each \(+1\), be ‘summed’ by Cesàro’s method, its sum is \(\frac{3}{5}\). (Euler, Borel.)

Shew that the series \(1 - 2! + 4! - \cdots\) cannot be summed by Borel’s method, but the series \(1+0 - 2!+0 + 4!+0 - \cdots\) can be so summed.
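These last two examples admit a numerical sketch. The partial sums of \(1+0+0-1+0+\cdots\) cycle through \(1, 1, 1, 0, 0\), so their arithmetic means tend to \(\frac{3}{5}\); and for \(1+0-2!+0+4!+0-\cdots\) the associated function is \(\sum (-1)^n t^{2n}\), equal to \((1+t^2)^{-1}\) for \(|t|<1\) and interpreted by analytic continuation beyond, so the Borel integral \(\int_0^\infty e^{-t}(1+t^2)^{-1}dt\) is finite. (The truncation \(T\) and step counts below are illustrative choices.)

```python
import math

def cesaro_c1_pattern(pattern, periods):
    """(C 1) mean of the series whose terms repeat the given pattern."""
    s = total = 0.0
    for _ in range(periods):
        for a in pattern:
            s += a        # partial sum s_n
            total += s    # accumulate s_1 + s_2 + ...
    return total / (periods * len(pattern))

# Partial sums of 1, 0, 0, -1, 0 repeat 1, 1, 1, 0, 0; mean -> 3/5.
print(cesaro_c1_pattern([1.0, 0.0, 0.0, -1.0, 0.0], 1000))

def borel_integral(T=60.0, steps=200_000):
    """Trapezoidal value of  ∫_0^T e^{-t}/(1+t^2) dt,  the Borel 'sum'
    of 1 + 0 - 2! + 0 + 4! + 0 - ... (with phi continued analytically)."""
    h = T / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-t) / (1.0 + t * t)
    return total * h

# The divergent series nevertheless has a finite Borel integral.
print(borel_integral())
```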