This is a work in progress. It is the online version of a series of talks given at Seminar in Summer 2025.
Content warning: This talk is intended for people who have completed an introductory calculus course and are comfortable with a fair amount of algebra.
In first year calculus, we learn a variety of techniques for integrating functions. We learn that we can integrate things such as: \[ \int xe^x dx = xe^x - e^x + C. \]
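A claim like this can be spot-checked numerically (a sanity check, not a proof): differentiate the proposed antiderivative with a central difference quotient and compare against the integrand at a few sample points.

```python
# Numerical spot check (not a proof): the derivative of the claimed
# antiderivative F(x) = x e^x - e^x should match f(x) = x e^x.
import math

def F(x):
    return x * math.exp(x) - math.exp(x)

def f(x):
    return x * math.exp(x)

h = 1e-6  # step size for the central difference quotient
for x in [-2.0, -0.5, 0.0, 1.0, 2.5]:
    approx = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(approx - f(x)) < 1e-4, (x, approx, f(x))
print("F'(x) agrees with f(x) at all sample points")
```

Of course, checking a handful of points only catches mistakes; it cannot certify an antiderivative.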
However, there are some integrals which resist all our integration techniques. For example, it seems to be impossible to integrate: \[ \int \frac{e^x}{x} dx. \] Why is it that one integral is routine and the other is impossible? What distinguishes these two integrals from one another? It is the purpose of this note to explore this issue.
The appearance of intractable integration problems, especially those related to elliptic integrals, led mathematicians in the nineteenth century to develop a theory of integration in finite terms. The first mathematician to prove significant general results in this direction was Liouville, between 1834 and 1841. After this initial period, the theory was sporadically developed in England and Ukraine, but it was mostly dormant.
In 1948, Joseph Ritt published Integration in Finite Terms: Liouville’s Theory of Elementary Methods. The first purely algebraic treatment of the problem was given by Rosenlicht in 1968. We will follow this approach to the problem, borrowing heavily from the articles below.
Rosenlicht, Maxwell (1972). “Integration in Finite Terms”. American Mathematical Monthly. 79 (9): 963–972. doi:10.2307/2318066. JSTOR 2318066.
Siu, Man-Keung (1996). “Integration in finite terms: From Liouville’s work to the calculus classroom of today”. In Vita Mathematica: Historical Research and Integration with Teaching, edited by R. Calinger, MAA, pp. 321–330.
We begin with a simple example to get a taste for the theory. Once we know the sort of thing that we’re trying to prove, we’ll develop the theory more fully. Throughout this note, we will work with complex valued functions. This will ensure that all our polynomials have roots. At this point it will suffice to say that “elementary” means simple, or expressible in finite terms, without paying too much attention to what that means precisely.
Baby Liouville. Suppose that $f(z)$ and $g(z)$ are rational functions in $\mathbb{C}(z)$. If $\displaystyle \int f(z) e^{g(z)}dz $ is elementary then it has the form $a(z) e^{g(z)}$ for some rational function $a(z)$. That is, $a(z) = P(z)/Q(z)$ where $P$ and $Q$ are polynomials with no common root and $Q \neq 0$.
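To see what the theorem predicts in a concrete case, differentiate the claimed form: $(ae^g)' = (a' + ag')e^g$, so $a$ must satisfy $a' + ag' = f$. The sketch below checks this condition, using my own toy choice $f(z) = z$ and $g(z) = z$, for which $\int z e^z dz = (z-1)e^z$, so $a(z) = z - 1$. Polynomials are represented as coefficient lists.

```python
# Check (not a proof) that a(z) = z - 1 satisfies a' + a*g' = f for
# f(z) = z, g(z) = z, i.e. that (z-1)e^z is an antiderivative of z e^z.
# Polynomials are lists of coefficients [c0, c1, ...] for c0 + c1*z + ...

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def add(p, q):
    n = max(len(p), len(q))
    return [a + b for a, b in zip(p + [0] * (n - len(p)), q + [0] * (n - len(q)))]

def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

f = [0, 1]   # f(z) = z
g = [0, 1]   # g(z) = z
a = [-1, 1]  # candidate a(z) = z - 1

lhs = trim(add(deriv(a), mul(a, deriv(g))))  # a' + a*g'
assert lhs == trim(f)
print("a(z) = z - 1 works: an antiderivative of z e^z is (z - 1) e^z")
```

The same condition $a' + ag' = f$ is the engine behind the non-elementary example that follows.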
If we accept this theorem without proof then we can explain why \[ \int \frac{e^z}{z} dz \] is not elementary. Suppose that it were; then it would have the form promised by Baby Liouville. We would have: \[ \int \frac{e^z}{z} dz = \frac{P(z)}{Q(z)} e^{z} + C. \] First notice that $Q$ cannot be constant. If it were, the anti-derivative would have the form $P(z) e^z + C$ for some polynomial $P(z)$, and it would be impossible to obtain any negative power of $z$ in the resulting derivative; one can check this by writing out the polynomial $P(z)$ explicitly and taking derivatives. So, we’ll take $Q(z)$ to be non-constant, with $P(z)$ and $Q(z)$ having no common root. Our goal is to derive some sort of algebraic contradiction. Taking derivatives of both sides gives:
\[ \frac{e^z}{z} = \frac{P^{\prime}Q - PQ^{\prime}}{Q^2} e^z + \frac{P}{Q}e^z + 0. \] We want to get back to a fact about polynomials. If we cancel the $e^z$ term throughout and multiply through by $zQ^2$ we get: \[ Q^2 = (P^{\prime}Q - PQ^{\prime})z + PQz \Longleftrightarrow Q(Q - zP^{\prime} - zP) = -zPQ^{\prime}. \] We will work with the version on the right hand side. Let’s highlight this equation with a star. \[ (\star) \quad \quad Q(Q - zP^{\prime} - zP) = -zPQ^{\prime} \] We know that $P$ and $Q$ have no roots in common and $Q \neq 0$. Let’s work with those facts. Suppose that $Q$ has a root $\zeta \in \mathbb{C}$ of multiplicity $m$. We now consider two cases: $\zeta \neq 0$ and $\zeta = 0$.
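Before diving into the case analysis, the polynomial identity $Q^2 = (P'Q - PQ')z + PQz$ can be probed by brute force. The sketch below (a sanity check under my own small search bounds, not a proof) searches for integer-coefficient polynomials $P$, $Q$ of low degree satisfying it, and finds none.

```python
# Brute-force sanity check: search for small integer-coefficient
# polynomials P, Q (as coefficient lists [c0, c1, ...]) satisfying
#   Q^2 = (P'Q - PQ')*z + P*Q*z,
# the identity an elementary antiderivative (P/Q)e^z of e^z/z would force.
from itertools import product

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def add(p, q):
    n = max(len(p), len(q))
    return [a + b for a, b in zip(p + [0] * (n - len(p)), q + [0] * (n - len(q)))]

def sub(p, q):
    return add(p, [-c for c in q])

def times_z(p):
    return [0] + p  # multiply by z: shift every coefficient up one degree

def is_zero(p):
    return all(c == 0 for c in p)

solutions = []
coeffs = range(-2, 3)
for P in product(coeffs, repeat=3):        # deg P <= 2
    for Q in product(coeffs, repeat=4):    # deg Q <= 3
        P, Q = list(P), list(Q)
        if is_zero(P) or is_zero(Q):
            continue
        lhs = mul(Q, Q)
        rhs = add(times_z(sub(mul(deriv(P), Q), mul(P, deriv(Q)))),
                  times_z(mul(P, Q)))
        if is_zero(sub(lhs, rhs)):
            solutions.append((P, Q))

assert solutions == []
print("no nonzero P, Q in the search range satisfy the identity")
```

The search bounds are arbitrary; the point of the cases below is to rule out *all* $P$ and $Q$ at once.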
If $\zeta \neq 0$ then the left hand side of $(\star)$ has $\zeta$ as a root of multiplicity at least $m$ because of the factor $Q$. On the right hand side of $(\star)$, the root has multiplicity exactly $m - 1$: since $P(\zeta) \neq 0$ and $\zeta \neq 0$, the factor $zP$ contributes no copies of the root, while $Q^{\prime}$ has $\zeta$ as a root of multiplicity exactly $m - 1$. This gives a contradiction: the root $\zeta$ has different multiplicities on each side of the equation.
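Here is the multiplicity count in action for one pair of polynomials of my own choosing: $Q(z) = (z-1)^2$, so $\zeta = 1$ is a root of multiplicity $m = 2$, and $P(z) = z + 2$, which shares no root with $Q$. We count how many times $(z - \zeta)$ divides each side of $Q(Q - zP' - zP) = -zPQ'$ by repeated synthetic division.

```python
# Count the multiplicity of zeta = 1 on both sides of
#   Q*(Q - z*P' - z*P) = -z*P*Q'
# for the sample choices Q(z) = (z-1)^2 and P(z) = z + 2.
# Polynomials are coefficient lists [c0, c1, ...].

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def add(p, q):
    n = max(len(p), len(q))
    return [a + b for a, b in zip(p + [0] * (n - len(p)), q + [0] * (n - len(q)))]

def sub(p, q):
    return add(p, [-c for c in q])

def synth_div(p, r):
    """Divide p by (z - r); return (quotient, remainder)."""
    out, acc = [], 0
    for c in reversed(p):   # Horner's scheme from the top coefficient down
        acc = acc * r + c
        out.append(acc)
    rem = out.pop()         # final accumulator is p(r)
    out.reverse()
    return out, rem

def multiplicity(p, r):
    """Number of times (z - r) divides p."""
    m = 0
    while True:
        q, rem = synth_div(p, r)
        if rem != 0 or not q:
            return m
        m, p = m + 1, q

P = [2, 1]      # z + 2
Q = [1, -2, 1]  # (z - 1)^2, so zeta = 1 has multiplicity m = 2
lhs = mul(Q, sub(Q, [0] + add(deriv(P), P)))  # Q * (Q - z*(P' + P))
rhs = [-c for c in mul([0] + P, deriv(Q))]    # -z*P*Q'

assert multiplicity(lhs, 1) == 2  # at least m copies of the root (here exactly m)
assert multiplicity(rhs, 1) == 1  # exactly m - 1 copies
print("multiplicities differ: 2 on the left, 1 on the right")
```

This pair does not actually satisfy $(\star)$, of course; the mismatch in multiplicities is precisely why no pair with a nonzero root of $Q$ can.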
If $\zeta = 0$ then we need to do a bit more work. We can factor $Q = z^mR$ where $R(0) \neq 0$ and $P$ and $R$ have no roots in common. Replacing $Q$ by $z^mR$ in $(\star)$ and dividing through by $z^m$ gives: \[ (\star \star) \quad \quad R(z^m R - zP^{\prime} - zP + mP) = -z PR^{\prime} \] This time we get a contradiction by evaluating both sides at $z = 0$. The right hand side vanishes there. On the left hand side, every term except $mP$ carries a factor of $z$ (recall $m \geq 1$), so the left hand side evaluates to $mP(0)R(0)$. But $0$ is a root of neither $P$ nor $R$, so $mP(0)R(0) \neq 0$, a contradiction. (The multiplicity argument from the first case would not quite suffice here: $R$ might be constant and have no roots at all.)
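The evaluation at $z = 0$ can likewise be checked on a concrete (again hypothetical) example. With $m = 2$, $R(z) = z + 1$, and $P(z) = z + 3$, the two sides of $R(z^m R - zP' - zP + mP) = -zPR'$ disagree at the origin:

```python
# Evaluate both sides of the (star star) identity at z = 0 for sample
# choices m = 2, R(z) = z + 1, P(z) = z + 3. The left side should give
# m*P(0)*R(0) != 0 while the right side gives 0.

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def add(p, q):
    n = max(len(p), len(q))
    return [a + b for a, b in zip(p + [0] * (n - len(p)), q + [0] * (n - len(q)))]

def sub(p, q):
    return add(p, [-c for c in q])

m = 2
R = [1, 1]  # z + 1, so R(0) = 1
P = [3, 1]  # z + 3, so P(0) = 3

zmR = [0] * m + R  # z^m * R
inner = add(sub(sub(zmR, [0] + deriv(P)), [0] + P), [m * c for c in P])
lhs = mul(R, inner)                        # R * (z^m R - z P' - z P + m P)
rhs = [-c for c in mul([0] + P, deriv(R))]  # -z * P * R'

assert lhs[0] == m * P[0] * R[0] != 0  # constant term: m*P(0)*R(0) = 6
assert rhs[0] == 0                     # right side vanishes at z = 0
print("left side at 0 is", lhs[0], "but right side at 0 is", rhs[0])
```

The constant term of a coefficient list is exactly its value at $0$, which is what makes this check one line per side.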
Now that we’ve seen a baby version of Liouville’s theorem in action, we proceed to develop the general theory.
Published: Apr 22, 2025 @ 09:51.
Last Modified: Apr 23, 2025 @ 14:34.
Thanks for reading! If you have any comments or questions about the content, please let me know. Anyone can contact me by email.