I've read that the original approach to demonstrating the efficacy of calculus left something to be desired among mathematicians. What exactly was the problem? Does the later accepted definition illuminate anything about the original problem or avoid it?
-
You can see Paolo Mancosu, The metaphysics of the calculus: A foundational debate in the Paris Academy of Sciences, 1700–1706 (1989) for an example of early debate on foundations and rigour. – Mauro ALLEGRANZA Jan 10 '16 at 21:55
2 Answers
Problems were abundant. There were no rigorous definitions of limits, convergence, or even of functions and real numbers. And without definitions there could be no real, rigorous proofs. None of this was achieved until the 19th century. Newton's and Leibniz's results were correct, but they were not proved to the standards of rigor that had existed in mathematics since the times of Euclid and Archimedes, to say nothing of later standards.
And mathematicians of the 17th and 18th centuries understood this. In many arguments they had to rely on intuition, much as modern physicists frequently do when they use mathematical tools that are not fully justified.
As calculus developed further, this lack of rigor led to paradoxes and controversies. Only in the 19th century, starting with Gauss, Abel, and Cauchy, and later Weierstrass, Dedekind, and Cantor, was a satisfactory and rigorous foundation of calculus established.
EDIT. Here are two examples from 18th- and early 19th-century calculus: $$x=2\sum_{n=1}^\infty\frac{(-1)^{n-1}}{n}\sin nx.$$ This can be verified numerically by plugging in some values, and even experimentally. But taken as an identity for all $x$ it makes no sense, because the right-hand side is periodic while the left-hand side is not (the equality in fact holds only for $-\pi<x<\pi$).
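As a concrete illustration, here is a minimal numerical check in Python (my own sketch, not part of the original answer; the cutoff of 20000 terms is an arbitrary choice): the partial sums track $x$ inside $(-\pi,\pi)$ but repeat periodically outside it.

```python
import math

def partial_sum(x, n_terms=20000):
    """Partial sum of 2 * sum_{n>=1} (-1)**(n-1) * sin(n*x) / n."""
    return 2 * sum((-1) ** (n - 1) * math.sin(n * x) / n
                   for n in range(1, n_terms + 1))

# Inside (-pi, pi) the series reproduces x:
print(partial_sum(1.0))                  # approximately 1.0
# Outside, it gives the 2*pi-periodic sawtooth value, not x itself:
print(partial_sum(1.0 + 2 * math.pi))    # still approximately 1.0, not 7.28
```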
The second example is from Euler. Multiplying term by term, we obtain $$(1-x)(1+x+x^2+x^3+\ldots)=1.$$ Putting $x=2$, we obtain $$1+2+4+8+16+\ldots=-1.$$ This can be justified from the modern point of view, but at the time of Euler it led, of course, to controversies.
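One modern reading (an illustrative possibility, not necessarily the justification the answer has in mind) interprets the series in the 2-adic numbers $\mathbb{Q}_2$, where $|2^n|_2=2^{-n}\to 0$, so the geometric series genuinely converges: $$1+2+4+8+\ldots=\sum_{n=0}^\infty 2^n=\frac{1}{1-2}=-1\quad\text{in }\mathbb{Q}_2.$$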
Today the situation is very similar. Physicists discover new mathematical results using methods which make no sense to pure mathematicians. Mathematicians treat these results as conjectures and sometimes prove them with their rigorous methods. As in the 17th and 18th centuries, the process is very beneficial to both physics and mathematics. Archimedes, too, used physical reasoning to discover new mathematical results, and he always made it very clear what was really proved and what was merely "discovered".
-
Back then, using physics to justify math was common. Nowadays we don't allow it. But some are arguing that we should... – Gerald Edgar Jan 10 '16 at 13:58
-
Today the situation is exactly the same. Physicists make new discoveries in mathematics by using arguments which make no sense from the mathematician's point of view. Then mathematicians sometimes prove them with rigorous methods. – Alexandre Eremenko Jan 10 '16 at 15:42
-
@AlexandreEremenko, it would be helpful if you could clarify what you mean by rigor and whether this refers to procedure or ontology. – Mikhail Katz Jan 12 '16 at 15:57
-
@katz: "rigor" is understood in its usual meaning in mathematics. It has changed somewhat since the time of Euclid and Archimedes, but for the purposes of this discussion not much. Most mathematicians understand what constitutes a rigorous proof. (We have axioms and rules of deduction, and a theorem is what can be deduced from the axioms.) – Alexandre Eremenko Jan 12 '16 at 21:14
-
@AlexandreEremenko, what I am wondering about is the comparison with other fields. In physics, for example, they don't have a magic date when they consider themselves to have reached ultimate understanding. The idea that something of that order happened in mathematics in 1870 would not be acceptable to historians of science, for example. Now I am a professional mathematician (as you are) with over 60 publications in refereed journals, and I can certainly appreciate the value of rigor (though I have made my share of mistakes :-),... – Mikhail Katz Jan 13 '16 at 03:16
-
...but it seems to me that the historical narrative of mathematics as a triumphant march toward the radiant future of the Cantor-Dedekind-Weierstrass developments, as claimed by many, is a reductive account. We need a more meaningful account of the history of mathematics. I elaborated on this in my article "Burgessian critique", see http://dx.doi.org/10.1007/s10699-011-9223-1; let me know if you need a pdf. I suggest Ian Hacking's recent book, where he elaborates on the distinction between the butterfly model for the development of a science and the Latin model. There are some good insights. – Mikhail Katz Jan 13 '16 at 03:18
-
I disagree with those people you criticize. If there was a magic date, then it was probably in the 6th century BC. – Alexandre Eremenko Jan 13 '16 at 04:44
-
@AlexandreEremenko, great, the point I was trying to make is that, as you seem to agree, mathematics develops much like physics through stages of better understanding of the underlying issues. The related point is that the concept of "rigor" is too vague to be useful in discussing historical developments. For example, Gauss is generally considered to be rigorous, yet he did not have any of the definitions we have today. Newton, Leibniz, and Euler often seem more unrigorous to us than they really were because they used a different language and certainly did not provide set-theoretic definitions – Mikhail Katz Jan 13 '16 at 09:04
-
...But the concept of "limit", for example, was already in Newton: he just called it the "ultimate ratio" and emphasized that it is not a ratio, unlike "prime ratios". Had he used the "ult" notation in place of "lim", his concept would be at bottom indistinguishable from ours at the procedural level. The kind of criticism one hears of Euler is outrageous. – Mikhail Katz Jan 13 '16 at 09:04
When speaking about rigor in a historical context, one must be careful not to apply modern habits of thought to historical developments where they are inappropriate. For example, one must keep in mind that the set-theoretic foundations we take for granted today were not available before the second half of the 19th century. More specifically, modern punctiform continua (i.e., continua made up of points) were certainly not the "foundational" background before, say, 1870.
This does not mean, however, that the work of earlier authors cannot be discussed meaningfully, or that it must be dismissed as unrigorous. For example, the work of Gauss is generally considered rigorous. The key distinction that helps us avoid a too-easy dismissal of historical mathematicians as unrigorous is the distinction between procedure and ontology.
This distinction was dealt with by authors ranging from Benacerraf to Quine to Wartofsky, but it suffices to say that "procedure" refers to the actual inferential moves as they appear in the work of those mathematicians, whereas "ontology" refers to the justification of the entities, such as numbers, points, and functions, used by those authors. The set-theoretic ontology commonly taken for granted today was not there in the work of Gauss and others, but this should not prevent a scholar from analyzing their procedures, which often turn out to be rigorous to a satisfactory degree.
Thus, Cauchy used infinitesimals in much the way they would be used today, and his procedural definition of the continuity of a function is essentially indistinguishable from a modern one (an infinitesimal increment assigned to the variable always produces an infinitesimal change in the function), even though the ontological foundation that would be expected today was not there.
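Rendered in the notation of a modern infinitesimal framework (Robinson's, say, used here only as a rough rendering and not as Cauchy's own symbolism), the procedural content is: $$f\ \text{is continuous at}\ x\quad\Longleftrightarrow\quad f(x+\alpha)\approx f(x)\ \text{for every infinitesimal}\ \alpha,$$ where $\approx$ means that the difference is itself infinitesimal.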
To answer your question about Leibniz more specifically: his infinitesimal procedures were more soundly founded than George Berkeley's widely repeated critique suggests, since that critique overlooks the procedure/ontology distinction.
-
An interesting perspective! ...and one that I think should be more widely known. +1 – Danu Jan 13 '16 at 16:09