# Anna Johnson Pell Wheeler

Anna Johnson Pell Wheeler (May 5, 1883 – March 26, 1966) was an American mathematician. She is best known for her early work on linear algebra in infinite dimensions, which later became part of functional analysis.[1]

## Biography

Anna Johnson was born on 5 May 1883 to Swedish immigrant parents in Hawarden, Iowa, in the United States. When she was nine, her family moved to Akron, Iowa, where she was enrolled in a private school. In 1903 she graduated from the University of South Dakota and began graduate work at the University of Iowa. Her thesis, titled The extension of Galois theory to linear differential equations, earned her a master's degree in 1904. She obtained a second graduate degree one year later from Radcliffe College, where she took courses from Maxime Bôcher and William Fogg Osgood.[1][2][3]

In 1905 she won an Alice Freeman Palmer Fellowship from Wellesley College to spend a year at the University of Göttingen, where she studied under David Hilbert, Felix Klein, Hermann Minkowski, and Karl Schwarzschild. As she worked toward a doctorate, her relationship with Alexander Pell, a former professor from the University of South Dakota, intensified. He traveled to Göttingen and they were married in July 1907.[2][3] This trip posed a significant threat to Pell's life, since he was a former Russian double agent whose real name was Sergey Degayev.[2]

After the wedding, the Pells returned to Vermillion, South Dakota, where she taught classes in the theory of functions and differential equations. By 1908 she was back in Göttingen, working on her dissertation; an argument with Hilbert, however, made its completion impossible. She moved with her husband to Chicago, where she worked with E. H. Moore to finish her dissertation, Biorthogonal Systems of Functions with Applications to the Theory of Integral Equations, and received a Ph.D. in 1909.[2][3][4]

She began looking for a teaching position, but found hostility in every mathematics department. She wrote to a friend: "I had hoped for a position in one of the good univ. like Wisc., Ill. etc., but there is such an objection to women that they prefer a man even if he is inferior both in training and research".[5] In 1911 her husband suffered a stroke, and after teaching his classes at the Armour Institute for the remainder of the semester, she accepted a position at Mount Holyoke College. She taught there for seven years.[3][5]

In 1917, her last year at Mount Holyoke College, she published (together with R. L. Gordon) a paper on Sturm's theorem.[6] In it she solved a problem that had eluded J. J. Sylvester (1853) and E. B. Van Vleck (1899).[7] The paper (along with her theorem) was forgotten for almost 100 years, until it was rediscovered by Akritas, Malaschonok and Vigklas.

In 1918 she became an associate professor at Bryn Mawr College in Pennsylvania. Although Alexander continued his research, he never taught again, and died in 1921. Three years later she became the head of the Bryn Mawr mathematics department, and became a full professor in 1925. In the same year she married a colleague named Arthur Wheeler, who soon went to Princeton University. She moved with him, commuting to Bryn Mawr, teaching part-time, and becoming active in Princeton's mathematics society. In 1927 she became the first woman to present a lecture at the American Mathematical Society Colloquium.[8] After Wheeler died in 1932, she returned to Bryn Mawr and taught full-time.[3][5]

Wheeler was instrumental in bringing German mathematician Emmy Noether to Bryn Mawr in 1933, after the latter's expulsion from the University of Göttingen by the Nazi government. The two women worked together happily for two years, until Noether died suddenly after an operation in 1935. Wheeler continued teaching at Bryn Mawr until she retired in 1948.[3] She died in 1966 after suffering a stroke.[5] Her doctoral students included Dorothy Maharam and Marion Cameron Gray.

## A Theorem on Polynomial Remainder Sequences (PRS's)

To appreciate the theorem that Anna Johnson Pell Wheeler proved together with Ruth L. Gordon (about whom no information is available), we need to define four polynomial remainder sequences, or prs's for short, shown in Figure 1 below, for any pair of polynomials ${\displaystyle f,g}$ of degrees ${\displaystyle n,m}$, respectively, with ${\displaystyle n\geq m}$. The polynomials ${\displaystyle f,g}$ are always the first two polynomials in every prs.

The first prs is obtained by applying the Euclidean algorithm to the polynomials ${\displaystyle f,g}$. The polynomial remainder sequence obtained this way is called the Euclidean prs.

The second prs is obtained by applying the modified Euclidean algorithm. The modification consists in negating, at each iteration of the Euclidean algorithm, the polynomial remainder and using the negated polynomial in the next iteration. The modified Euclidean algorithm is of great importance because, when applied to ${\displaystyle f,g}$, where ${\displaystyle g=f'}$ is the derivative of ${\displaystyle f}$, we obtain Sturm's theorem and the Sturm sequence of ${\displaystyle f}$, which can be used to isolate its real roots by bisection. To agree with the spirit of the Pell-Gordon article, the polynomial remainder sequence obtained this way is called the modified Euclidean prs.
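The two constructions above can be sketched in a few lines of plain Python. This is only an illustration: the helper names `poly_rem`, `euclid_prs` and `sturm_prs` are our own, polynomials are represented as coefficient lists in descending order of powers, and exact rational arithmetic is used throughout.

```python
from fractions import Fraction

def poly_rem(f, g):
    """Remainder of f divided by g; polynomials are lists of
    coefficients in descending order of powers."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while len(f) >= len(g):
        q = f[0] / g[0]              # quotient of the leading terms
        for i, gc in enumerate(g):
            f[i] -= q * gc           # cancel the leading term of f
        f.pop(0)                     # leading coefficient is now zero
    while len(f) > 1 and f[0] == 0:  # strip leading zeros of the remainder
        f.pop(0)
    return f if f else [Fraction(0)]

def euclid_prs(f, g):
    """Euclidean prs: f, g, then each successive remainder."""
    prs = [f, g]
    while len(prs[-1]) > 1:
        r = poly_rem(prs[-2], prs[-1])
        if all(c == 0 for c in r):
            break
        prs.append(r)
    return prs

def sturm_prs(f, g):
    """Modified Euclidean (Sturmian) prs: negate each remainder
    before it is used in the next iteration."""
    prs = [f, g]
    while len(prs[-1]) > 1:
        r = [-c for c in poly_rem(prs[-2], prs[-1])]
        if all(c == 0 for c in r):
            break
        prs.append(r)
    return prs

# Sturm sequence of f = x^3 - 3x - 1 with g = f' = 3x^2 - 3:
# the result is [f, f', 2x + 1, 9/4]
seq = sturm_prs([1, 0, -3, -1], [3, 0, -3])
```

For this f the sign sequence of the Sturm prs is +, +, +, + at plus infinity and -, +, -, + at minus infinity, so Sturm's theorem reports three real roots.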

The two prs's defined above are closely related to two matrices introduced by James Joseph Sylvester in 1840 and 1853.[9][10] The different forms of these two matrices can be seen in Sylvester matrix. We call ${\displaystyle sylvester1(f,g,x)}$ the Sylvester matrix of 1840, of dimensions ${\displaystyle (n+m)}$ × ${\displaystyle (n+m)}$, and ${\displaystyle sylvester2(f,g,x)}$ the Sylvester matrix of 1853, of dimensions ${\displaystyle 2n}$ × ${\displaystyle 2n}$. Recall that the determinant of ${\displaystyle sylvester1(f,g,x)}$ is the resultant of ${\displaystyle f,g}$. By analogy, and to agree with the title of the Pell-Gordon article, the determinant of ${\displaystyle sylvester2(f,g,x)}$ is called the modified resultant of ${\displaystyle f,g}$.

The resultant of ${\displaystyle f,g}$ may differ from the corresponding modified resultant in sign and by a constant factor. Determinants of submatrices of ${\displaystyle sylvester1(f,g,x)}$ are called subresultants and likewise determinants of submatrices of ${\displaystyle sylvester2(f,g,x)}$ are called modified subresultants.
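As an illustration, here is a minimal sketch that builds the 1840 matrix for numeric coefficients and recovers the resultant as its determinant. The helper names `sylvester1` and `det` are our own, and the row ordering shown is one common convention; other conventions permute the rows and can flip the sign of the determinant.

```python
from fractions import Fraction

def sylvester1(f, g):
    """Sylvester matrix of 1840 for f (degree n) and g (degree m):
    m shifted copies of f's coefficient row followed by n shifted
    copies of g's, giving an (n+m) x (n+m) matrix."""
    n, m = len(f) - 1, len(g) - 1
    size = n + m
    rows = [[0] * i + f + [0] * (size - n - 1 - i) for i in range(m)]
    rows += [[0] * i + g + [0] * (size - m - 1 - i) for i in range(n)]
    return rows

def det(mat):
    """Determinant by Gaussian elimination over the rationals."""
    a = [[Fraction(x) for x in row] for row in mat]
    n, sign, d = len(a), 1, Fraction(1)
    for j in range(n):
        piv = next((i for i in range(j, n) if a[i][j] != 0), None)
        if piv is None:
            return Fraction(0)       # singular matrix
        if piv != j:
            a[j], a[piv] = a[piv], a[j]
            sign = -sign             # row swap flips the sign
        d *= a[j][j]
        for i in range(j + 1, n):
            q = a[i][j] / a[j][j]
            for k in range(j, n):
                a[i][k] -= q * a[j][k]
    return sign * d

# resultant of f = x^2 - 5x + 6 (roots 2 and 3) and g = 2x - 5:
# lc(f)^deg(g) * g(2) * g(3) = (-1) * 1 = -1
res = det(sylvester1([1, -5, 6], [2, -5]))
```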

For the polynomials ${\displaystyle f,g}$ we can now define — by a process known to Sylvester and others in the 19th century — two additional prs's, which can be obtained from ${\displaystyle sylvester1(f,g,x)}$ and ${\displaystyle sylvester2(f,g,x)}$.

The third prs is called the subresultant prs of ${\displaystyle f,g}$; the coefficients of its remainder polynomials are all determinants of appropriately selected submatrices of ${\displaystyle sylvester1(f,g,x)}$.

Likewise, the fourth prs is called the modified subresultant prs of ${\displaystyle f,g}$; the coefficients of its remainder polynomials are all determinants of appropriately selected submatrices of ${\displaystyle sylvester2(f,g,x)}$.

It is worth noting that the subresultant and modified subresultant prs's of ${\displaystyle f,g}$ can be computed more efficiently by employing their Bezout matrix, which has dimensions ${\displaystyle n}$ × ${\displaystyle n}$.

Definition 1 below divides the prs's into two important categories.

### Definition 1

A polynomial remainder sequence of two polynomials ${\displaystyle f,g}$ is called complete if the degree difference between any two consecutive polynomials is 1; otherwise, it is called incomplete.


In each prs, the signs of the leading coefficients of the polynomial remainders play an important role and so we have the following:

### Definition 2

The sign sequence of a polynomial remainder sequence is the sequence of signs of the leading coefficients of its polynomials.
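Both definitions are straightforward to express in code. A small sketch with our own helper names, representing each polynomial in a prs as a coefficient list in descending order:

```python
from fractions import Fraction

def degree_sequence(prs):
    """Degrees of the polynomials in a prs given as coefficient
    lists in descending order."""
    return [len(p) - 1 for p in prs]

def is_complete(prs):
    """Definition 1: the prs is complete when every degree drop
    between consecutive polynomials equals 1."""
    degs = degree_sequence(prs)
    return all(d1 - d2 == 1 for d1, d2 in zip(degs, degs[1:]))

def sign_sequence(prs):
    """Definition 2: signs of the leading coefficients."""
    return [1 if p[0] > 0 else -1 for p in prs]

# the Sturm sequence of x^3 - 3x - 1: degrees 3, 2, 1, 0, so it is complete
sturm = [[1, 0, -3, -1], [3, 0, -3], [2, 1], [Fraction(9, 4)]]
```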


Regarding complete prs's, James Joseph Sylvester observed in 1853 that the sign sequences of the subresultant prs of ${\displaystyle f,g}$ and of the corresponding Euclidean prs are identical. The same is true of the sign sequences of the modified subresultant prs of ${\displaystyle f,g}$ and of the corresponding modified Euclidean (Sturmian) prs.

Additionally, James Joseph Sylvester observed that the integer coefficients of the polynomial remainders in a subresultant prs are the smallest possible that can be obtained without computing the greatest common divisor of the coefficients and without resorting to rationals. The same is true of the integer coefficients of the polynomial remainders in a modified subresultant prs, provided that the leading coefficient of ${\displaystyle f}$ is 1.

The main point to observe is that, for complete prs's, the Euclidean and modified Euclidean prs of ${\displaystyle f,g}$ can be computed in such a way that they become identical with the subresultant and modified subresultant prs, respectively, of ${\displaystyle f,g}$.

In contrast, regarding incomplete prs's, James Joseph Sylvester observed in his 1853 article that the sign sequences of the subresultant prs of ${\displaystyle f,g}$ and of the corresponding Euclidean prs may differ. The same is true of the sign sequences of the modified subresultant prs of ${\displaystyle f,g}$ and of the corresponding modified Euclidean (Sturmian) prs.

In other words, matters become considerably more complicated in the case of incomplete prs's, and Sylvester himself could not see how to compute the modified Euclidean prs from the modified subresultant prs. Sylvester was in good company, because Van Vleck, a renowned mathematician of the late 19th and early 20th centuries, was also unable to solve the problem.[11]

Therefore, in the case of incomplete prs's, there was a serious problem in computing the Euclidean and modified Euclidean prs of ${\displaystyle f,g}$ from the subresultant and modified subresultant prs, respectively, of ${\displaystyle f,g}$, and vice versa.

The answer to the problem came from Pell and Gordon in 1917. Their theorem, together with an observation made by Sylvester in 1853 and proved by Akritas and Malaschonok in 2015, establishes a one-to-one correspondence between the modified subresultant prs of ${\displaystyle f,g}$, on one hand, and the corresponding Euclidean and modified Euclidean prs's on the other (see the arrows labelled PG – 1917 and SAM in Figure 1).[12][13] This one-to-one correspondence unequivocally refutes the claim that Euclidean prs's are "non signed" sequences whose polynomial signs can be changed arbitrarily. In this context, see also http://planetmath.org/sturmstheorem, where the reader is cautioned that some computer algebra systems may normalize the remainders from the Euclidean algorithm, which changes their signs.

The theorem is stated below.

### Theorem 1 (Pell-Gordon, 1917)

Let

${\displaystyle f=a_{0}x^{n}+a_{1}x^{n-1}+\cdots +a_{n}}$

and

${\displaystyle g=b_{0}x^{n}+b_{1}x^{n-1}+\cdots +b_{n}}$

be two polynomials of the n-th degree. Modify the process of finding the highest common factor of ${\displaystyle f}$ and ${\displaystyle g}$ by taking at each stage the negative of the remainder. Let the i-th modified remainder be

${\displaystyle R^{(i)}=r_{0}^{(i)}x^{m_{i}}+r_{1}^{(i)}x^{m_{i}-1}+\cdots +r_{m_{i}}^{(i)}}$

where ${\displaystyle (m_{i}+1)}$ is the degree of the preceding remainder, where the first ${\displaystyle (p_{i}-1)}$ coefficients of ${\displaystyle R^{(i)}}$ are zero, and where the ${\displaystyle p_{i}}$-th coefficient ${\displaystyle \rho _{i}=r_{p_{i}-1}^{(i)}}$ is different from zero. Then for ${\displaystyle k=0,1,\ldots ,m_{i}}$ the coefficients ${\displaystyle r_{k}^{(i)}}$ are given by

${\displaystyle r_{k}^{(i)}={\frac {(-1)^{v_{i-1}}(-1)^{v_{i-2}}\cdots (-1)^{v_{1}}(-1)^{u_{i-1}}}{\rho _{i-1}^{p_{i-1}+1}\,\rho _{i-2}^{p_{i-2}+p_{i-1}}\cdots \rho _{1}^{p_{1}+p_{2}}\,\rho _{0}^{p_{1}}}}\cdot Det(i,k)}$

(it is understood that ${\displaystyle \rho _{0}=b_{0}}$, ${\displaystyle p_{0}=0}$, and that ${\displaystyle a_{i}=b_{i}=0}$ for ${\displaystyle i>n}$),

where

${\displaystyle v_{j}=1+2+\cdots +p_{j}}$,
${\displaystyle u_{j}=p_{1}+p_{2}+\cdots +p_{j}}$

and

${\displaystyle Det(i,k)=\left|{\begin{matrix}a_{0}&a_{1}&a_{2}&\cdots &.&.&\cdots &a_{2u_{i}-1}&a_{2u_{i}+k}\\b_{0}&b_{1}&b_{2}&\cdots &.&.&\cdots &b_{2u_{i}-1}&b_{2u_{i}+k}\\0&a_{0}&a_{1}&\cdots &.&.&\cdots &a_{2u_{i}-2}&a_{2u_{i}-1+k}\\0&b_{0}&b_{1}&\cdots &.&.&\cdots &b_{2u_{i}-2}&b_{2u_{i}-1+k}\\.&.&.&\cdots &.&.&\cdots &.&.\\0&0&0&\cdots &a_{0}&a_{1}&\cdots &a_{u_{i}-1}&a_{u_{i}+k}\\0&0&0&\cdots &b_{0}&b_{1}&\cdots &b_{u_{i}-1}&b_{u_{i}+k}\\\end{matrix}}\right|}$

The proof of this theorem is by structural induction on the polynomials of the prs.

The freely available, Python-based computer algebra system SymPy (version 1.0 or higher) includes the module `subresultants_qq_zz.py`.[14]

Using Theorem 1, or pg for short, four functions have been developed in the module `subresultants_qq_zz.py` in order to compute the Euclidean and modified Euclidean prs of two polynomials ${\displaystyle f,g}$, as well as their subresultant and modified subresultant prs; these functions are `euclid_pg(f, g, x)`, `sturm_pg(f, g, x)`, `subresultants_pg(f, g, x)`, and `modified_subresultants_pg(f, g, x)`. However, they all perform their operations in ${\displaystyle Q[x]}$, as a result of which they are slower than the equivalent functions of the form `function_name_amv(f, g, x)`, which perform all their operations in ${\displaystyle Z[x]}$.
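A brief usage sketch of these functions (the example polynomial is our own; SymPy 1.0 or higher ships the module with the library):

```python
from sympy import symbols, Poly, degree
from sympy.polys.subresultants_qq_zz import sturm_pg, euclid_pg

x = symbols('x')
f = x**3 - 3*x - 1
g = f.diff(x)                    # 3*x**2 - 3

# modified Euclidean (Sturmian) prs computed via the Pell-Gordon theorem
sturm_seq = sturm_pg(f, g, x)
# Euclidean prs computed via the Pell-Gordon theorem
euclid_seq = euclid_pg(f, g, x)

# both sequences run through degrees 3, 2, 1, 0 (a complete prs);
# the Sturm sequence of this f has all leading coefficients positive
```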

Moreover, Theorem 1 led Akritas, Malaschonok and Vigklas to the discovery of another theorem, call it amv, which establishes a one-to-one correspondence between subresultant prs's, on one hand, and Euclidean and modified Euclidean prs's on the other (see the arrows labelled AMV – 2015 in Figure 1).[15] This one-to-one correspondence refutes, a second time, the claim that Euclidean prs's are "non signed" sequences whose polynomial signs can be changed arbitrarily.

Using Theorem amv, four additional functions have been developed in the module `subresultants_qq_zz.py` in order to compute the Euclidean and modified Euclidean prs of two polynomials ${\displaystyle f,g}$, as well as their subresultant and modified subresultant prs; these functions are `euclid_amv(f, g, x)`, `sturm_amv(f, g, x)`, `subresultants_amv(f, g, x)`, and `modified_subresultants_amv(f, g, x)`.

The complete picture is given by Figure 1.

Figure 1: The framework for Theorem 1 (arrow labelled PG – 1917). The double ended arrows indicate one-to-one correspondences that exist between the coefficients of the polynomials in the respective nodes. The labels indicate those who first established the correspondences and when. The dashed arrow labeled DG – 2004 is due to Diaz–Toca and Gonzalez–Vega.[16]

## References

1. Louise S. Grinstein and Paul J. Campbell, "Anna Johnson Pell Wheeler: Her Life and Work". Historia Mathematica 9 (1982), 37–53.
2. Kimberling, Clark. "Emmy Noether and Her Influence". In: James W. Brewer and Martha K. Smith (eds.), Emmy Noether: A Tribute to Her Life and Work. New York: Marcel Dekker, Inc., 1981, pp. 3–61. ISBN 0-8247-1550-0.
3. O'Connor, J.J. and E.F. Robertson. "Anna Johnson Pell Wheeler". The MacTutor History of Mathematics archive, January 1997. Retrieved 10 April 2008.
4. Kimberling gives the year of her Ph.D. as 1910.
5. Riddle, Larry. "Anna Johnson Pell Wheeler". Biographies of Women Mathematicians, 23 May 2007. Retrieved 10 April 2008.
6. A. J. Pell and R. L. Gordon: "The Modified Remainders Obtained in Finding the Highest Common Factor of Two Polynomials". Annals of Mathematics, Second Series, Vol. 18, No. 4 (1917), 188–193.
7. Akritas, A.G., Malaschonok, G.I., Vigklas, P.S.: "On a Theorem by Van Vleck Regarding Sturm Sequences". Serdica Journal of Computing 7(4) (2013), 101–134.
8. "Prizes, Awards, and Honors for Women Mathematicians". agnesscott.edu. Retrieved 25 January 2014.
9. Sylvester, J.J.: "A method of determining by mere inspection the derivatives from two equations of any degree". Philosophical Magazine 16 (1840), 132–135.
10. Sylvester, J.J.: "On the Theory of Syzygetic Relations of Two Rational Integral Functions, Comprising an Application to the Theory of Sturm's Functions, and that of the Greatest Algebraical Common Measure". Philosophical Transactions 143 (1853), 407–548.
11. Akritas, A.G., Malaschonok, G.I., Vigklas, P.S.: "Sturm Sequences and Modified Subresultant Polynomial Remainder Sequences". Serdica Journal of Computing 8(1) (2014), 29–46.
12. Sylvester, J.J.: "On a remarkable modification of Sturm's theorem". Philosophical Magazine and Journal of Science V, Fourth Series (January–June 1853), 446–456. http://books.google.gr/books?hl=el&id=3Ov22-gFMnEC&q=sylvester#v=onepage&q&f=false
13. Akritas, A.G., Malaschonok, G.I., Vigklas, P.S.: "On the Remainders Obtained in Finding the Greatest Common Divisor of Two Polynomials". Serdica Journal of Computing 9(2) (2015), 123–138.
14. For earlier versions of SymPy, the module can be downloaded from https://github.com/sympy/sympy/blob/master/sympy/polys/subresultants_qq_zz.py. The module can also be `load`-ed or `attach`-ed in a session of Sage, the other freely available, Python-based computer algebra system.
15. ^
16. Diaz-Toca, G.M., L. Gonzalez-Vega: "Various New Expressions for Subresultants and Their Applications". Applicable Algebra in Engineering, Communication and Computing 15 (2004), 233–266.