I thought I had a grasp on this. Do Gödel's theorems apply just to math, to logic too, or to even more, and what does their applicability entail? If they apply to math, do they apply to physics? Similarly with Tarski: can we describe and prove truths in informal languages like English? If so, how, given that logic is a formal language (or a family of formal languages)?
The incompleteness theorem applies to any system that can prove a certain amount of arithmetic in a certain way, and is consonant with the halting problem (see answers to this Computer Science SE question). A knowledge-theoretic result in this connection is Fitch's paradox of knowability. – Kristian Berry Dec 06 '23 at 22:20
@Sayetsu Please give a short introduction and state explicitly the two theorems of Gödel resp. Tarski. – Jo Wehler Dec 06 '23 at 23:03
No, they just show that "every truth" cannot be captured by a single first-order recursively axiomatized system. Natural languages are not first order, let alone axiomatized, and most of our proving in science is not deductive anyway. It uses inferences to the best explanation of evidence based on informal heuristics and cognitive values. – Conifold Dec 06 '23 at 23:17
@Jo Wehler Quite complicated. I did my best with these: Tarski: https://www.youtube.com/watch?v=7uLYQ8nXJFM&list=PLz0n_SjOttTeAWHg3rgjzYFs1b0z1mRlv Godel: https://www.youtube.com/watch?v=I4pQbo5MQOs – Sayetsu Dec 06 '23 at 23:43
@Conifold: Thanks! – Sayetsu Dec 06 '23 at 23:46
@Conifold What about the Laws of Thought? Those are axioms, and logic is a formal language, right? So, are deductive logical proofs possible? Edit: Symbolic logic or something. Poorly versed in terminology there. – Sayetsu Dec 06 '23 at 23:47
@Sayetsu the incompleteness theorems are theorems, so they're deductively proved. They don't rule out all, but some, proofs (relatively speaking). – Kristian Berry Dec 06 '23 at 23:49
@Conifold I watched a playlist (linked above) on Tarski's theory of truth. His theorem seemed to prove a formal language (logic) can't make truth claims about itself. Do I have that right? – Sayetsu Dec 06 '23 at 23:52
"Laws of thought" is what they historically called simple inference rules formalized in formal logic. But thought and reason are much broader than that. For example, no formal rules will get you from planetary motions and free falling bodies to the law of universal gravitation. Tarski's theorem says that certain first-order languages cannot make truth statements about their own sentences without becoming inconsistent. However, nothing obliges us to use a single language (Tarski uses an infinite hierarchy of them), restrict to first-order languages or meet his other conditions. – Conifold Dec 06 '23 at 23:57
@Conifold I thought there were three axioms considered most basic: the laws of identity, noncontradiction, and the excluded middle. Is that wrong? – Sayetsu Dec 07 '23 at 01:54
The term 'laws of thought' is what George Boole called the three theorems you mentioned (non-contradiction, excluded middle, identity). They don't have any special status in modern logic, so calling them laws is rather arbitrary and calling them laws of thought is potentially misleading since they are not really about how people think. They are theorems of classical logic with identity. Some non-classical logics do not have all of them as theorems. – Bumble Dec 07 '23 at 04:51
See the post What are the philosophical implications of Gödel's First Incompleteness Theorem? – Mauro ALLEGRANZA Dec 07 '23 at 07:13
@Bumble News to me! Thanks a bunch! – Sayetsu Dec 07 '23 at 12:18
@ Mauro Allegranza Or do I thank you? Not sure how all of this site works yet. – Sayetsu Dec 07 '23 at 12:20
1 Answer
Descartes applied his Evil Demon to mathematics. He wonders whether a God of deception may have "brought it about that I too go wrong every time I add two and three or count the sides of a square, or in some even simpler matter, if that is imaginable".
It is imaginable. It can even be real. There are people who suffer from dyscalculia. I remember a description of an extreme case where someone could not count the number of cars in the car-park. There was one car. The demon could make us blind to an integer between 0 and 1.
I had similar doubts when I was reading Russell and Whitehead's 'Principia Mathematica' in my lunch hours, a few hundred pages in, around where they show that multiplication of integers is commutative. They have an origin integer O, which turns out to be zero. They extend the set of integers by adding a unique successor of O, which they can call OS. They can repeat that, adding a new number to the set that is the successor of the last number that had no successor. They can then, by induction, generate the mathematics of an unbounded set of integers.
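The successor construction above can be sketched in a few lines of code. This is only a toy illustration, not Principia's actual notation: the names `Zero`, `succ`, `add`, and `mul` are my own, and each number is just a nest of tuples, so "evaluating" an expression really does come down to counting layers of S:

```python
# Toy Peano-style integers: Zero is the origin "O", and each successor
# wraps the previous number in one more tuple layer (one more "S").

Zero = ()

def succ(n):
    """Successor: add one more layer of 'S'."""
    return (n,)

def add(m, n):
    """Addition by recursion on the first argument."""
    if m == Zero:
        return n
    return succ(add(m[0], n))

def mul(m, n):
    """Multiplication defined from repeated addition."""
    if m == Zero:
        return Zero
    return add(n, mul(m[0], n))

def to_int(n):
    """Count the layers of 'S' -- which is, of course, just counting."""
    count = 0
    while n != Zero:
        n = n[0]
        count += 1
    return count

two = succ(succ(Zero))
three = succ(two)
assert to_int(mul(two, three)) == to_int(mul(three, two)) == 6
```

The final assertion checks commutativity only for one pair of numbers; establishing it for all integers needs the induction step, which is exactly where the "counting trips around the loop" worry comes in.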
I was not worried about Descartes' demon or dyscalculia. I was more worried that they had listed a set of laws for the integers, but this did little more than translate the properties of integers they already knew about into lexical rules. By applying these rules they were, in effect, counting the number of letters 'S' following the 'O', if they stuck with the original syntax. Or, worse, they were not counting anything obvious, but the method of generating the set of integers by induction was effectively counting the number of times we had been around that logic loop. Russell was worried about this sort of thing too.
You can try and isolate the most primitive basis for the integers. Conway manages to do that using sets in a novel way with Surreal Numbers. In the end we have to admit that there are several ways of getting the same mathematics, and none of them are absolute.
If I know anything, I am pretty sure we haven't missed an integer between 0 and 1, and the properties of the integers hold. If I don't know that, then we have defined the meaning of the verb 'to know' so tightly that it becomes useless. Descartes' Evil Demon could still be at work, but this assumption is not a helpful one.
The incompleteness theorem is a different thing. Within any formal system strong enough to do arithmetic, there will be statements that are true but whose proofs do not lie within that system. Things like the Goldbach Conjecture may be true for the set of natural numbers, but we have not yet proved it.