Are there examples of originally widely accepted proofs that were later discovered to be wrong by attempting to formalize them using a proof assistant (e.g. Coq, Agda, Lean, Isabelle, HOL, Metamath, Mizar)?
-
I have not seen such a proof, but Hilbert's program was widely believed to be achievable before Gödel announced his first and second incompleteness theorems. – XL _At_Here_There Jan 21 '18 at 04:04
-
The proofs formalized so far number in the hundreds, and the vast majority of them are elementary (from the nineteenth century). So, I think what you described is very unlikely to happen. This may change dramatically in the future though. – Alex Gavrilov Jan 21 '18 at 04:26
-
Related: https://cstheory.stackexchange.com/questions/37299/has-a-proof-checker-bug-ever-invalidated-a-major-proof – Carlo Beenakker Jan 21 '18 at 08:04
-
Proofs can be checked by formalization, but not shown to be wrong. At worst, the (human) formalizer can give up and admit not knowing how to construct the formalized proof. On the other hand, a proof can be shown to be wrong by giving a counterexample (not just for the main result, but also for any part of the proof or sub-argument). This is in any case how proofs are scrutinized and does not require formalization. – Igor Khavkine Jan 21 '18 at 09:12
-
@Igor Khavkine. Yes, a good point. I forgot to mention this. – Alex Gavrilov Jan 21 '18 at 09:38
-
The Coq-combi project proves the Littlewood-Richardson rule imitating the "Plactic Monoid" chapter of Lothaire's Algebraic Combinatorics on Words. But the latter chapter is known to contain minor gaps and inaccuracies, so they must have been fixed somehow in Coq-combi. Not sure if it counts... – darij grinberg Jan 21 '18 at 17:25
-
@Igor Khavkine: How is giving a counterexample for a sub-argument of a proof different from demonstrating that the proof is wrong? And why do counterexamples need formalization any less than proofs? They might be inconsistent. – Bernhard Stadler Jan 22 '18 at 18:11
-
I understand the point about the inability of formalization to, strictly speaking, show that an informal proof is wrong. I guess the main motivation for my question was to find examples where going through the process of formalization helped to find errors in the informal proof that would have been overlooked had everything stayed in the informal setting, even if the actual counterexample was not automatically given by the proof assistant. I think the case of the Flyspeck project that @TimothyChow mentions in his answer is a good example of this. – Mei Zhang Jan 24 '18 at 02:31
-
I adjusted the wording of the question to reflect the above discussion. – Todd Trimble Oct 31 '18 at 12:04
6 Answers
Since this question was asked in January there have been some developments. I would like to argue that the scenario raised in the question has now actually happened. Indeed, when formalising Vladimir Shchur's work on bounds for the Morse Lemma for Gromov-hyperbolic spaces, Sébastien Gouëzel found an inequality that had been transposed at some point, causing the proof (which had been published in 2013 in J. Funct. Anal., a good journal) to collapse. Gouëzel then worked with Shchur to find a new and correct (and in places far more complex) argument, which they wrote up as a joint paper.
http://www.math.sciences.univ-nantes.fr/~gouezel/articles/morse_lemma.pdf
The details are in the introduction. Anyone who reads it will see that this is not a "mistake" in the literature in the weak sense defined by Manuel Eberl's very clear answer -- this was an actual error which was discovered because of a formalization process.
-
I have marked this as the accepted answer now. For future reference, since this answer references the previous accepted answer, I'll add a note here to clarify that the previously accepted answer was the one written by Manuel Eberl. – Mei Zhang Oct 26 '18 at 05:46
-
I edited my answer to remove the reference to "the accepted answer" and now reference Eberl's answer explicitly; hopefully this lessens the confusion even more. – Kevin Buzzard Oct 31 '18 at 11:51
-
The Gouëzel-Shchur paper has now been published in the Journal of Functional Analysis 277 (2019), 1258-1268. – Kevin Buzzard Jul 24 '19 at 19:51
First of all, to explain my perspective: I'm a PhD student working in the formalisation of mathematics with Isabelle/HOL, and I've been working with that system for about 7 years. I was introduced to it in a lecture when I was an undergraduate and I got hooked immediately. I do think it is useful, but I don't do it because of that. I do it because it's fun.
Anyway, your question is a bit tricky to answer because it depends a lot on what you mean by a ‘wrong proof’ and by ‘shown wrong by formalizing them’. A lot of the time, it's something of a grey area.
Normally, one needs a very thorough understanding of the paper proof in order to formalise it, and one has to think of a way of formalising the argument. Conceptual problems with the paper proof often become apparent already at this stage, before the theorem prover as such is even involved.
Secondly, of course, if you formalise something like the Prime Number Theorem or Cauchy's Integral Theorem, you're probably not going to find out that it's all wrong and everything collapses. But you might well find problems in particular proofs in textbooks.
I do find a lot of ‘mistakes’ in proofs, including textbook proofs and published papers. Most of these mistakes are easily fixed and most mathematicians would likely brush them off as insignificant. Some take me a few days to figure out. Some actually require changing definitions, adding assumptions, or altering the statement of the theorem.
Most ‘mistakes’ are something like this:
surprisingly non-trivial arguments being declared as trivial/left to the reader
going very quickly and vaguely over part of the proof that is perceived as uninteresting and thereby missing a subtle flaw that would have become apparent if one had done it more thoroughly
missing cases that have probably been overlooked by the author
arithmetic mistakes (my favourite being multiplying an inequality by a constant without checking that the constant is non-negative; see the sketch after this list)
missing assumptions that are still implicitly used
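As a concrete illustration of the arithmetic slip mentioned above, here is a minimal sketch. It is written in Lean 4 with Mathlib purely for illustration (the formalisations discussed in this answer are in Isabelle/HOL, and `mul_le_mul_of_nonneg_left` is the Mathlib lemma name as I recall it): the library forces you to discharge the non-negativity side condition, and a one-line counterexample shows why the unguarded step is unsound.

```lean
import Mathlib

-- Multiplying an inequality by a constant is only valid when the
-- constant is non-negative; the library lemma makes the side
-- condition explicit.
example (a b c : ℝ) (hab : a ≤ b) (hc : 0 ≤ c) : c * a ≤ c * b :=
  mul_le_mul_of_nonneg_left hab hc

-- Dropping the side condition is unsound: with c = -1, a = 0, b = 1
-- the "multiplied" inequality would read -1 * 0 ≤ -1 * 1, i.e. 0 ≤ -1.
example : ¬ ((-1 : ℤ) * 0 ≤ (-1) * 1) := by norm_num
```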
Let me give you a few examples (I won't mention the exact authors; my intent is not to shame anybody for making these mistakes, only to show that this is quite common):
I recently had a case where a theorem from a widely-used textbook from the 70s was simply plain wrong, which I realised when I wanted to find out how to formalise it. I am not an expert in that field, but apparently the people who do work in that field know that this is wrong.
One of the first algorithms (working on non-deterministic automata) that I formalised apparently assumed that the automaton is total (i.e. it has an outgoing transition for every letter in the alphabet from every state). In my opinion, that should have absolutely been mentioned in the paper, but one could possibly argue that that was just implicit in their idea of an automaton.
A colleague of mine found a problem with the proof of a lemma used in the correctness proof for some complicated automata algorithm that had been used in state-of-the-art software for years. Somebody else later showed that not only does the lemma not hold, but the algorithm itself is incorrect. As far as I know, nobody has been able to fix the algorithm yet.
In one instance, I had formalised a kind of program transformation technique from a published paper. The authors then extended that paper to a more detailed journal version and also added some new transformation rules. One of them dealt with multiplication by a constant, but they did not realise that multiplication by 0 is a special case that makes their rule unsound. (A hypothetical illustration of this kind of slip is sketched after this list.)
I worked on formalising a new result that had just been published in a journal and found out that one part of the proof, which the authors didn't explain in much detail due to page limits, had a subtle problem that only became apparent when I had already formalised everything in Isabelle and got stuck at this part. The authors immediately admitted that this was a problem that could not be fixed in any apparent way except by adding an additional, somewhat technical assumption to the entire argument. However, they later managed to prove a stronger result that subsumes that result, but the proof of that was much more involved. (More details on this at the end of this answer.)
I don't remember the exact details about the Kepler conjecture that somebody mentioned before, but off the top of my head, I seem to recall that several smaller problems were found in the original program code, and Nipkow found one problem that actually caused Hales to revise a part of the proof completely.
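To illustrate what the fourth example is about without naming the paper, here is a purely hypothetical stand-in, again in Lean 4 (chosen for illustration only; this is not the authors' actual rule, nor my Isabelle formalisation of it): a "cancel the constant" transformation that is sound for every non-zero constant but breaks down at 0.

```lean
-- Hypothetical stand-in for a transformation rule involving
-- multiplication by a constant: cancelling the constant on both sides,
--   c * x = c * y  ⟹  x = y.
-- At c = 0 the premise collapses to 0 = 0, so the "rule" would let us
-- derive 0 = 1; it is unsound without a c ≠ 0 side condition.
example : ¬ (∀ c x y : Int, c * x = c * y → x = y) := by
  intro h
  exact absurd (h 0 0 1 (by decide)) (by decide)
```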
As a theorem proving guy, my reaction to this is ‘This shows that formalising stuff in a theorem prover is worthwhile’. I am aware that mathematicians often have a different perspective. It is not an uncommon view to say that the so-called ‘mistakes’ I mentioned above are insignificant; that someone would have found them eventually even without a theorem prover; that the theorems were still correct (in some sense) and it was only the proofs that had some minor problems.
However, I disagree with that. I want my proofs to be as rigorous as possible. I want to be sure I didn't miss any assumptions. And I think that things like Kepler's conjecture show that there are instances where it is just infeasible for humans to check the correctness of a proof with a reasonable amount of certainty.
EDIT: As requested, some more details on point 5.
The paper in question is The impossibility of extending random dictatorship to weak preferences. They also published a corrigendum. The purpose of that paper is to show that no Social Decision Scheme (SDS) for at least 4 agents and alternatives exists that is an extension of Random Dictatorship (RD) and fulfils four nice properties.
It works by first showing that no such SDS exists for exactly 4 agents and 4 alternatives, and then showing that an SDS for more than 4 agents/alternatives can be turned into one for exactly 4/4 while preserving the nice properties, so that none can exist for more than 4 either. Typically, in this kind of proof, the base case is the difficult one and lifting it to a larger number of agents/alternatives is pretty trivial. However, in this case, the property "the SDS is an extension of RD" does not survive the lifting to more agents/alternatives, which completely breaks that step. I myself only noticed that when I had already typed most of the proof into Isabelle and it just didn't go through.
The proof for the base case here was based on considering 12 particular preference profiles and, as you can see, was relatively short. The authors then later found a proof of the same statement without the RD-extension assumption, but that one needed 47 preference profiles and was much longer. I formalised that proof in Isabelle without any problems (see my BSc thesis).
-
Excellent answer! Your example #5 is interesting enough that I wonder if you could write to the authors and ask if they mind having their identities revealed here. The purpose of course is not to shame them (we have all made mistakes) but to provide some concrete documentation about the value of formalization. From this point of view, #5 is much more interesting than #1 to #4 in my opinion (and of course #6 is already documented). – Timothy Chow Jan 24 '18 at 16:07
-
+1. You should probably inform the authors about all of the errors you found, at least when the research is reasonably new. (Though #4 might be a matter of mismatched definitions of $\mathbb{N}$.) – darij grinberg Jan 24 '18 at 16:36
-
I usually do. Often, I never get any sort of reply, but in many cases the original authors are very helpful. #4 was something with real numbers and indeed a mistake. I think the authors were aware that 0 was a special case, but they thought it would still work out in the end. – Manuel Eberl Jan 24 '18 at 16:41
-
Awesome answer, thanks for sharing your experience and knowledge. – Filippo Alberto Edoardo Jan 27 '18 at 13:57
-
"where a theorem from a widely-used textbook from the 70s was simply plain wrong": what happens then? It seems serious that a widely used textbook contains an error, which is much more likely to be passed from generation to generation than a research paper, I think. – Z. M Apr 27 '22 at 10:46
-
Good question! I don't know. The author isn't alive anymore so I doubt it will be fixed. – Manuel Eberl Apr 27 '22 at 15:42
This question was raised on the Foundations of Mathematics mailing list back in 2014, and the short answer is no, there are no examples of this. [EDIT: Although this may have been true at the time I originally wrote this, it is no longer true, as other answers amply demonstrate. But I think that this answer is worth leaving here nonetheless.]
The longer answer is that the process of formalizing any non-trivial mathematical argument is likely to reveal some minor gaps, such as degenerate cases for which the main argument doesn't quite work as stated. If you're sufficiently pedantic, then you might claim that in such cases, the original proof is "wrong," but I suspect that this is not the sort of thing you're asking for.
The Flyspeck project did turn up one gap in the original proof of the Kepler conjecture that was large enough that the authors felt the need to write a few pages of human explanation about it. There is also an interesting paper by Fleuriot and Paulson where they undertook to formalize Newton's Propositio Kepleriana with Isabelle using nonstandard analysis to implement Newton's use of infinitesimals. There was one step where Fleuriot and Paulson could not find a plausible way to imitate Newton's reasoning exactly and found themselves having to use a different argument. Again, it is debatable whether this means that Newton's proof was "wrong."
-
Yeah, despite what some zealous proponents of proof assistants like to claim, human beings are pretty good at checking proofs to the extent of ignoring degenerate cases. And when something slips through the cracks, it is either not getting used all that much, or counterexamples and "reaction papers" will pop up. – Asaf Karagila Jan 21 '18 at 19:47
-
@AsafKaragila : As we know from another MO question, there are cases where a wrong result is widely accepted for some time and even used by others, but so far we lack convincing examples where a proof assistant plays an essential role in the discovery of the error. This may change in the future if proof assistants become sufficiently easy to use (but by the time that day arrives, it may no longer be easy to decide whether to attribute the bug report to the human, or the proof assistant, or both working together). – Timothy Chow Jan 21 '18 at 20:12
-
Yeah, of course. That's what I meant. Something is accepted, once a lot of people start using it, it becomes susceptible to more rigorous checking, reproofs, new proofs, rewrites, and more consequences. When there is a significant problem (gaps, mistakes, whatever), these tend to surface. – Asaf Karagila Jan 21 '18 at 20:14
-
@AsafKaragila What do you mean by "Something is accepted, once a lot of people start using it, ..."? Does it mean that a result is not accepted if nearly no one is using it? I agree that it is true that using something is a powerful mechanism for finding errors (and a good motivation to recheck it), but ... I just wanted to reuse an old answer, decided to add the input file, but now see what happened after the Edit 22 Jan 2018 update... "cannot save me from mis..." – Thomas Klimpel Jan 22 '18 at 12:40
-
"If you're sufficiently pedantic, then you might claim that in such cases, the original proof is "wrong," but I suspect that this is not the sort of thing you're asking for." No, that's not being pedantic. That's just knowing what 'wrong' means. If a proof doesn't cover every case it claims to cover, then it's not a proof of what it claims to be. It's wrong. It might be a proof that non-X, non-Y, non-Z somethings have property P, but it's not a proof that all somethings have property P. – Jan 23 '18 at 21:12
-
@MilesRout Exactly. It seems some people understood "wrong proof of X" as "the proof claimed that statement X was true, but it turned out X was false". Those would be very interesting cases, but even without going there, I think instances where formalization helps to catch gaps in certain proofs are already interesting, and I'd consider those "wrong proofs" for the same reason you just mentioned. – Mei Zhang Jan 24 '18 at 01:54
-
@MeiZhang : Thanks for clarifying. With this definition of "wrong proof" the Flyspeck example certainly counts. But let me point out that in this sense, most published proofs are wrong, or at least it's not clear whether they're wrong. E.g., it is common for parts of a proof to be "left to the reader." Is this a "gap"? If I pick up a random math journal not in my field, I will probably not understand any of the proofs. Are these "gaps"? Or just my lack of training? Human proofs always assume the reader will do a lot of work to fill in missing details. It's not clearcut what a real "gap" is. – Timothy Chow Jan 24 '18 at 02:36
-
Generally, people call something a "gap" only if they judge the process of filling in the missing details to need a "new idea" or to be much harder than the proof seems to suggest. But these are subjective judgment calls. (Of course if the gap can't be filled at all then that's a serious problem, but that's a "major gap" not a "minor gap" and we're back to having no examples involving proof assistants in an essential way.) If a proof does not contain an outright false statement, and the gaps can all be filled with a modest amount of work, then most people hesitate to call it "wrong." – Timothy Chow Jan 24 '18 at 02:50
-
I guess the main point of informal proofs is to convince other humans that a certain statement is true. Would it be fair to say that informal proofs can't even be wrong, because it's a somewhat subjective matter? In the case of a formal proof, we can always check it against a proof assistant. Even if the typechecker has bugs, we at least have an objective way to determine if the proof is correct with respect to a specific trusted implementation. Maybe the distinction between correct and incorrect becomes a bit too philosophical at that point... comments on this distinction would be appreciated. – Mei Zhang Jan 24 '18 at 02:54
-
@MeiZhang : Just because there's subjectivity involved doesn't mean that all informal proofs are "not even wrong." If there is a statement that is essential to the proof and that is false, and the community can't see how to fix it easily, then the proof is wrong. Similarly if there is a gap in the argument that the community can't see how to fill easily, then the proof has a major gap. This definition of "wrong" is intersubjective and maybe you don't consider it objective. But even with a proof assistant, you have to trust what other humans are telling you about the hardware and software. – Timothy Chow Jan 24 '18 at 03:38
-
I'm aware that I'm trusting things developed by others such as software and hardware, but we have a sensible notion of when these things have bugs and are therefore wrong and should be fixed. To me this notion is a bit more unclear when we talk about informal math. Maybe at this point I am actually being too pedantic, but let me share this video which I think is relevant to the discussion. In particular, the talk between 43:24 and 44:25 which touches on the "not even wrong" argument: https://youtu.be/psSyM1zp82k?t=43m24s – Mei Zhang Jan 24 '18 at 03:56
-
@MilesRout, you write: "No, that's not being pedantic. That's just knowing what 'wrong' means." I notice from your profile that you identify yourself as a software developer, not a mathematician. With respect, you are wrong about the mathematician's notion of "wrong". The distinction between missing pedantic details and a genuinely wrong argument is central to our ability to make advances in mathematics -- finding the crux of an argument and concentrating on it is how we know where to focus our energy to find (or check) a proof. – HJRW Jan 24 '18 at 09:25
-
This is why I use the terms "Proof sketch" or "Proof idea" in doing write-ups. I think professionals need to make more explicit the fact that they are communicating ideas for recipes and not recipes themselves (unless they are providing a mechanically checkable proof). This would help people outside the group understand and appreciate (or be dismayed at) how the professionals "do their thing". Also, creating a class of "mechanics" who can test and fix a proof sounds like a good thing. Gerhard "'Let's Look Under The Hood...'" Paseman, 2018.01.24. – Gerhard Paseman Jan 24 '18 at 16:14
-
@GerhardPaseman : This is easily accomplished. Simply do a search and replace in all mathematical journals: "Proof -> Proof sketch." – Timothy Chow Jan 24 '18 at 16:24
-
OK @Timothy. I will give you my (offline) collection of Westzynthius and Kanold papers. Let me know when you want to start. Gerhard "Prepares For Trip To NRLF" Paseman, 2018.01.24. – Gerhard Paseman Jan 24 '18 at 16:29
-
@HJRW I'm working as a software developer. I have a mathematics background. I can absolutely assure you that if you claim to have proved something but have missed an edge case then you are simply, utterly and totally wrong whether you identify as a mathematician or not. Mathematics is not about being 'pretty sure' or 'right most of the time'. If you only have proved something for a particular case, then you should state that up front and not claim that you've proved more than you have. – Jan 24 '18 at 22:22
-
@MilesRout : The old joke "Do as I say, not as I do" reminds us that in many areas of life, we do not always live up to the standards that we officially preach. Yes, mathematicians will officially agree with the principle you state here. But in the real world, you'll often see in a published proof, "The other case is similar; we omit the details." Usually, the authors have checked the other case and convinced themselves it's fine, and usually they're right. But sometimes, the other case turns out to be trickier than expected. It's still "similar" but the authors may have overlooked a nuance. – Timothy Chow Jan 24 '18 at 22:45
-
Is the proof "wrong" in this case? Since the printed page contains so few details, it's hard to say. Maybe they caught the nuance and thought it wasn't worth mentioning, or maybe they didn't. For a real-life example, consider Perelman's papers on geometrization. By your criteria, Perelman's papers were wrong, wrong, wrong. It took years of serious effort by top experts to fill in all the gaps. There were a few actually false statements. But the consensus today is to say that Perelman's papers were correct and he gets full credit for the proof, because all the crucial ideas were there. – Timothy Chow Jan 24 '18 at 22:49
-
Now I happen to believe that in Perelman's case, the experts who worked so hard to flesh out the proof in full detail should get somewhat more credit than they do, because if they don't get credit then it leads to a situation where people lack incentive to do the "dirty work" of writing out full details. But I cite it as an example where the mathematical community simply does not behave the way you are describing. It refuses to say that Perelman's proof was wrong. The Chinese mathematicians who published one of the fleshed-out versions even got into trouble for seeming to claim partial credit. – Timothy Chow Jan 24 '18 at 22:55
-
@TimothyChow I'm not claiming that every statement that one claims to have proved needs an airtight, totally formal proof. On the contrary. But it still needs a proof, surely anyone can agree with that? If you're going to claim you proved something, you need to actually prove it. 'The other case is similar' is a proof, if the other case actually is similar. – Jan 25 '18 at 06:20
-
@TimothyChow The Perelman case is interesting, because the mathematical community seems fine with saying that Wiles' proof of the modularity theorem was wrong, and he fixed it, but not that Perelman's proof was wrong and some Chinese mathematicians fixed it. That sounds like more of a case of some cultural bias than anything else. Especially because if you compare to the Wiles case, he was almost doing the same thing they were: filling in the last gap, that many people were pretty sure was true, to prove something. – Jan 25 '18 at 06:21
-
@MilesRout, if you miss an edge case for which the result is false, or even hard to prove, then of course, your proof is wrong (or at least your result is missing a hypothesis). If you omit an edge case because it’s well known and easy to deal with, then the proof isn’t wrong. Your tone in these comments epitomises many mathematicians’ frustration with the theorem-proving community: you claim to want to assist mathematicians, but show little understanding of how mathematics is actually done. Failing to appreciate the difference between Wiles and Perelman is a good example... – HJRW Jan 25 '18 at 17:55
-
... Wiles’s first proof was wrong, because it contained a gap that he didn’t know how to fill, and neither did anyone else at the time. Perelman’s manuscript is regarded as (essentially) correct because it contained all the new ideas that the mathematical community needed to complete the proof of the geometrization conjecture. If your notion of “wrong” doesn’t distinguish between these, then you are missing the point about modern research mathematics. – HJRW Jan 25 '18 at 18:00
-
@HJRW I never have claimed that not explicitly detailing an easy case makes a proof wrong. – Jan 25 '18 at 21:36
-
@MilesRout, you claimed exactly that. "I can absolutely assure you that if you claim to have proved something but have missed an edge case then you are simply, utterly and totally wrong whether you identify as a mathematician or not." And if one traces back further, the "edge case[s]" under discussion are "degenerate cases for which the main argument doesn't quite work as stated" -- in other words, edge cases which require a minor (i.e. easy) variation of the main argument. – HJRW Jan 25 '18 at 21:48
-
Missing an edge case and saying 'this case is easy' are two very different things. – Jan 25 '18 at 23:04
-
@MilesRout, we're discussing the situation where an easy edge case is omitted. When you're reading the paper, you can't tell if it was omitted because the authors actually missed it, or because it was easy. So no, in practice they're not "two very different things". – HJRW Jan 27 '18 at 12:34
-
@MilesRout, to take a step back for a moment, your position here looks to a mathematician a lot like a power-grab on the part of the theorem-proving community. You implied that proofs that require minor corrections to be formalized are "wrong, wrong, wrong". This suggested to me that if a mathematician proves a result, and a theorem-prover makes some minor corrections when formalizing the result, then you think the mathematician was "wrong" and the theorem-prover gave the correct proof. I hope the two communities can work together much better than that. – HJRW Jan 27 '18 at 12:50
-
You can tell that it was omitted because it was easy when they explicitly state 'this case is easy'. If they don't, then it's pretty clear they missed it, and claim 'oh yeah that one is easy, that's why I left it out' after the fact to CYA. – Jan 28 '18 at 20:35
Not an example of a result shown to be wrong by using a proof assistant, but an inconsistency was discovered in the premises of Gödel's ontological argument by a higher-order theorem prover: https://www.ijcai.org/Proceedings/16/Papers/137.pdf
-
Some context may be helpful here. Gödel did not publish his own proof, but shared it with Scott, who then publicized it. Scott's version is perfectly fine, but Gödel's original unpublished version was missing a clause. The conventional wisdom is that Gödel knew the clause was needed but inadvertently omitted it in the draft he showed Scott. In any case, it has long been known that the clause is needed for the argument to go through. But what had not been noticed until the formalization effort is that omitting the clause actually leads to a contradiction. – Timothy Chow Aug 16 '23 at 15:44
I learned of the following stellar example from Lawrence Paulson.
Anthony Bordg, Yijun He, and Hanna Lachnitt have been involved in an ongoing effort to formalize quantum algorithms and results in quantum information theory in Isabelle/HOL. You can read about their efforts here.
In the course of their project, they naturally found themselves examining one of the seminal papers in the subject, Quantum games and quantum strategies, by J. Eisert, M. Wilkens, and M. Lewenstein. As of this writing, Google Scholar claims that this paper has nearly a thousand citations. Bordg, He, and Lachnitt found that there was a fundamental and unfixable error in one of the main results of the paper. They explain the details in an arXiv preprint.
Although it may also not fully count as an example discovered via a proof assistant involving fully formalized mathematics, the classification of maximal subgroups of all finite classical groups of dimension $\le 12$ [Bray, John N.; Holt, Derek F.; Roney-Dougal, Colva M., The maximal subgroups of the low-dimensional finite classical groups, London Mathematical Society Lecture Note Series 407. Cambridge: Cambridge University Press (ISBN 978-0-521-13860-4/pbk; 978-1-139-19257-6/ebook). xiv, 438 p. (2013). ZBL1303.20053] involved large pieces of Magma code, which made it possible to correct errors and fill gaps in the earlier theoretical results of [Kleidman, Peter; Liebeck, Martin, The subgroup structure of the finite classical groups, London Mathematical Society Lecture Note Series 129. Cambridge etc.: Cambridge University Press. x (1990). ZBL0697.20004].