30

In teaching, we sometimes necessarily oversimplify concepts. Terry Pratchett famously referred to this as “lies-to-children”:

A lie-to-children is a statement that is false, but which nevertheless leads the child’s mind towards a more accurate explanation, one that the child will only be able to appreciate if it has been primed with the lie.

In online discussions about lies to children, most examples I have come across are in the natural sciences, most frequently physics and chemistry. There are some examples on the Wikipedia page as well. Does anyone know of any good examples in mathematics and statistics, particularly at the undergraduate level or higher?

I don’t mean examples where we as teachers simplify and give hand-waving arguments because the level required for a rigorous proof is far too high. What I have in mind are explanations that are completely and utterly incorrect, but somehow still helpful.

Wrzlprmft
  • 2,548
  • 16
  • 33
Joel Ottar
  • 409
  • 3
  • 7
  • 4
    Some people say that it's not correct to say "infinitely close". But I disagree (with reference to non-standard analysis). I'll keep trying to think of something. Closest is saying square root of a negative is undefined, maybe? – Sue VanHattum Jan 26 '23 at 22:13
  • 2
    I think you would have more luck targeting the question at a younger age range than undergrads. You might get omissions in the intro calc sequence, and the applications problems will have a lot of gross simplifications, but those simplifications are not of the math. – Adam Jan 27 '23 at 01:56
  • 1
    While searching for good examples myself, I predict that there will be many answers suggesting things that are only lies, i.e., bad didactics or plain wrong, such as “learning long division is directly important for real-life application” or “1.5 is closer to 1 than to 2 and that’s why we round down”. – Wrzlprmft Jan 27 '23 at 08:23
  • What about the idea of imagining an infinitely tall triangle, such that the sides are functionally parallel when calculating interference patterns in the double-slit experiment? – Dugan Jan 27 '23 at 16:05
  • 1
    Would you include ‘lies’ that are never stated, but merely implied by usage and example? I'm thinking particularly of the misapprehension that almost all real numbers are rational (which covers the answers to most problems, most polynomial coefficients, and most numbers you meet — especially for younger students). – gidds Jan 27 '23 at 18:46
  • I thought maths and stats were pretty-much the only subjects immune to such lies… Go into chemistry - certainly physics - and you will meet those lies but wait a while…

    Those lies crop up at roughly A-level, before which most people stopped studying those subjects.

    – Robbie Goodwin Jan 27 '23 at 21:24
  • I think it may be harder to have a lie-to-children in maths than in natural sciences, since maths is about the manipulations of precisely defined imaginary objects following precisely defined rules. In cases where children are taught something different to what mathematicians do I think it's often that objects and rules are different, not just the results. – bdsl Jan 28 '23 at 11:36
  • Earth rotates around its axis exactly once every 24 hours. – gnasher729 Jan 28 '23 at 15:27
  • In my experience teaching mathematics, the core issue is that instead of claiming that some result is absolutely true - I do say that it is an approximation to the logic or only true in context. Eg, you really seriously cannot subtract a larger number from a smaller if the theory you are teaching is that of the natural numbers. To go to negative numbers is to step outside that context. That is a point that I try to make strongly to my students. – Ponder Stibbons Jan 29 '23 at 05:30
  • Are you asking about fabrications that were artificially created, or do you also accept lies by omission or technically-correct-but-misleading truths? For example, the sun technically does revolve around the Earth (since any two orbiting bodies exert forces on each other). That is technically not incorrect, but severely misleading. Do you consider us calling that statement wrong (due to the misleading) as a lie to children? – Flater Jan 29 '23 at 23:22
  • Christy Graybeal wrote an article called "Mathematical Lies We Tell Students" in the journal Teaching Children Mathematics back in 2014. The link to the journal is here: https://pubs.nctm.org/view/journals/tcm/21/4/article-p197.xml. Unfortunately the article itself is paywalled and I don't have access to it, but I have submitted an ILL request for it and will post a summary when it arrives. – mweiss Jun 01 '23 at 00:16

20 Answers

41

Young children, 5-8 years old, are taught to subtract the smaller number from the bigger number. They are told that you can't subtract a bigger number from a smaller number. This lie has its advantages and helps cement the order of the numbers in a subtraction sentence. Then, when students learn about negative numbers, they can easily unlearn the previous "lie" that you can't subtract a bigger number from a smaller number.

Of course, there is often a bright 7-year-old who has been taught about negative numbers at home and who objects to not being able to subtract bigger numbers from smaller numbers.
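
To make the two stages concrete, here is a minimal Python sketch (the function names are invented for illustration): subtraction on the natural numbers is a partial operation, and the later "unlearning" simply removes the restriction by moving to the integers.

```python
def subtract_naturals(a: int, b: int) -> int:
    """Subtraction as first taught: only defined when a >= b."""
    if a < b:
        raise ValueError("can't subtract a bigger number from a smaller one (yet)")
    return a - b

def subtract_integers(a: int, b: int) -> int:
    """Subtraction after negative numbers are introduced: always defined."""
    return a - b

print(subtract_naturals(7, 3))   # 4
print(subtract_integers(3, 7))   # -4
# subtract_naturals(3, 7) would raise ValueError
```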

Amy B
  • 8,017
  • 1
  • 27
  • 51
  • 7
    A similar thing with division, before fractions and/or decimals are introduced. – OrangeDog Jan 27 '23 at 11:39
  • @OrangeDog Yes, but there, students have trouble unlearning that you can't divide 3 by 5. – Amy B Jan 27 '23 at 11:43
  • 9
    Not a lie, just a statement in which not all terms are fully defined. It is certainly true that there is no natural number that you can add to a given natural number to obtain one smaller than that. – Carsten S Jan 27 '23 at 12:17
  • 4
    @OrangeDog At least with division, teaching that tends to come along with the concept of "and 3 remainder". So teaching fractions adds the extra layer of what else you can do with the remainder. – Graham Jan 27 '23 at 13:15
  • 17
    "This lie has its advantages and helps cement the order of the numbers in a subtraction sentence." This sounds to me like "this completely misleads the children but it's good because it helps them get the right answers on standardized tests without having to think too hard about making those standardized tests any good" – JounceCracklePop Jan 27 '23 at 18:19
  • 1
    What children are learning about is simply the metric space $\mathbb{N}$ with the distance function $d(x,y) = |x-y|$. – leftaroundabout Jan 27 '23 at 18:49
  • 4
    @leftaroundabout: And this is of course precisely what we would like children in elementary school to learn... :-) Reminds me a bit of Arnold who once complained: "To the question 'what is 2 + 3' a French primary school pupil replied: '3 + 2, since addition is commutative'." – Jochen Glueck Jan 27 '23 at 19:07
  • 3
    The follow on lie when they find out about negative numbers is "OK, sure, you CAN subtract the big number from the small number, but you absolutely can't take a square root of the result" – chessofnerd Jan 27 '23 at 21:33
  • @chessofnerd sure you can. A number is just something you can cram into our algebra without breaking it. Which is why a matrix is not a number. Neither is infinity. Nor, for that matter, a zero. – candied_orange Jan 27 '23 at 21:37
  • @leftaroundabout Are they not learning about the natural number space where subtraction is defined as a partial function from N to N? – bdsl Jan 28 '23 at 11:38
  • @JochenGlueck that is outright hateful. "Mathematics is the part of physics where experiments are cheap" - This Arnold doesn't understand mathematics very well, does he – bytepusher Jan 28 '23 at 20:25
  • 4
    @bytepusher: Erm... "This Arnold" was a highly renowned mathematician who was working in mathematical analysis, dynamical systems, and mathematical physics; he wrote several hundred research articles: link to his zbMATH profile. I'm not sure what precisely you find "outright hateful". Arnold's little rant that I linked? Well, one does certainly not need to agree with what he says (I personally disagree with a lot of his claims in this speech) and yes, the speech is polemic and, on occasions, quite sarcastic. But hateful? – Jochen Glueck Jan 28 '23 at 21:24
  • 2
    I don't think this is a lie. If I have five rocks, I can't subtract six rocks, it doesn't work! – Wowfunhappy Jan 29 '23 at 00:25
  • @JochenGlueck fair, I went overboard there. But it does seem to me he's too focussed on just one aspect of mathematics, personally, I see it very differently and have had the opposite experience with a number of students that don't have such an obsession with geometry. And declaring mathematics as a part of physics? Pah – bytepusher Jan 29 '23 at 03:05
  • 1
    The bright kid can have a negative slice of cake, made from antimatter; occupy him with antimatter physics. – bandybabboon Jan 29 '23 at 06:36
  • @Wowfunhappy If you have 5 rocks, you can indeed subtract six rocks. Imagine that you are playing a game where you have to give someone 6 rocks but only have 5. You would give 5 and then owe 1 (and have -1 rocks). As soon as you got 1 rock you would "pay them back". It's true you can't take 6 rocks from 5, but subtraction doesn't only mean physically taking; it can also mean leaving yourself with a negative balance. – Amy B Feb 04 '23 at 19:06
  • 1
    @AmyB Well, but now you made the rocks into a currency. Subtraction doesn't only mean physically taking, but it sometimes does! You have to know the context of the problem, and we teach some contexts before others. I still wouldn't consider that a "lie", personally. – Wowfunhappy Feb 04 '23 at 23:18
  • 1
    If I was teaching a class that you can't subtract bigger numbers from smaller numbers and using a grocery store as the context, and a precocious child asked "but what if I borrow the difference from someone else and offer to pay them back later", I would say something to the effect of "Yes, you could do that if someone was willing to lend you money. When you get older, you'll learn how to do math problems like that. But for right now, there's no one to borrow money from. So, can you subtract..." – Wowfunhappy Feb 04 '23 at 23:21
32

We usually teach:

$$\int\frac1xdx=\ln{|x|}+c$$

Whereas it should be:

$$\int\frac1xdx = \begin{cases} \ln{x}+c_1 & x>0 \\ \ln{(-x)}+c_2 & x<0 \end{cases}$$

Why don't we teach the correct version? Using the "wrong" version usually leads to the right final answer, whereas using the correct version would probably overburden struggling calculus students.
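
One way to see the difference in practice is a small sympy sketch (a minimal check, assuming sympy is available): a function that uses genuinely different constants on the two components of the domain is a perfectly good antiderivative of $1/x$, yet it is not $\ln|x|+c$ for any single $c$.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# An antiderivative of 1/x that uses *different* constants on the two
# components of the domain (x > 0 and x < 0):
F = sp.Piecewise((sp.log(x) + 1, x > 0), (sp.log(-x) + 5, x < 0))

# Differentiating recovers 1/x on each component, so F is a valid
# antiderivative even though it is not ln|x| + c for any single c.
print(sp.simplify(sp.diff(F, x)))
```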

Dan
  • 851
  • 4
  • 10
  • 4
    I got through four years of mathematics at university without noticing this! To be fair, once we got to complex integrals, I realised pretty quickly that the "log |x| + c" solution isn't exactly right; but it's funny that if we never leave the real numbers then it's wrong in a different and incompatible way. (I say "incompatible" because in the complex numbers you don't get to choose c1 and c2 totally independently.) – kaya3 Jan 27 '23 at 12:27
  • 8
    Even on the reals, the second version isn't exactly "the correct one". The correct version says that the function $x\mapsto 1/x$ is not integrable on any domain that includes $0$. What you actually can do is obtain one antiderivative that works restricted to the positive and one that works restricted to the negative reals, but writing them both in a single equation is more sloppy than the simple version that excludes the negative numbers entirely. – leftaroundabout Jan 27 '23 at 19:04
  • 10
    @leftaroundabout The second version isn't sloppy: if a function $f \colon U \to \mathbb{R}$ happens to be defined on an open subset $U \subseteq \mathbb{R}$ (possibly disconnected), then $\int f(x) dx$ can (should?) be interpreted as the set of all functions $F \colon U \to \mathbb{R}$ satisfying $F'=f$ in $U$, and this is what we get. – Michał Miśkiewicz Jan 27 '23 at 19:16
  • Saw this example one on Trefor Bazett's YouTube recently! – ruferd Jan 27 '23 at 19:33
  • 4
    The fancy way of saying this is that the 0th de Rham cohomology group of $\mathbb{R} \setminus \{0\}$ is given by the locally constant functions. The "+C" should really stand for a locally constant function in each instance, not just a constant. – Steven Gubkin Jan 27 '23 at 20:23
  • I've never seen any teacher teach this, and if I had I wouldn't have called it a lie, but rather a sloppy mistake. – Stef Jan 27 '23 at 20:31
  • 1
    @Stef The result "ln |x| + c" certainly appeared in the textbooks I learned from at school. I can't say for sure whether my actual teachers would have known that this result was not correct, but it's beyond my doubt that the authors of those textbooks would have known. (The authors were professors; I got to meet one of them, and joked to my teacher that I should have gotten him to sign a book.) – kaya3 Jan 28 '23 at 11:31
  • @kaya3 Perhaps the authors should have known, but made a sloppy mistake? A lie-to-children is something that should make things easier to understand and be a good approximation until one knows more, like ignoring air friction when teaching classical mechanics. I don't see how giving a purposefully wrong answer for the antiderivative of 1/x could make any sense. It really looks like just a sloppy mistake. – Stef Jan 29 '23 at 11:12
  • 4
    @Stef It avoids the need to introduce an exception to the rule that the integral of a function is "some other function plus c", and in all of the exercises in that textbook that require an integral of 1/x, the integral is over an interval that doesn't contain the pole anyway, so there would be no need to write a solution with two different c variables just to ignore one of them. At that level, the goal of a problem is often not "describe the family of all functions whose derivative is this", but rather "find the area under this curve". – kaya3 Jan 29 '23 at 11:37
  • @leftaroundabout Just to second what Michal said above: framing an indefinite integral as waiting for integration limits to be filled in is a convenient and fine model for functions that are integrable and possess an antiderivative, but pathological cases exist where a function isn't integrable on an interval while its indefinite integral exists on that interval (I gave an example here); – ryang Jan 31 '23 at 10:28
  • in the above example though, saying that $\frac1x$ has an indefinite integral (set of antiderivatives) on $\mathbb R\setminus\{0\}$ is correct (as is distinguishing between its independent arbitrary constants $C_1$ and $C_2$) and not the same as saying that $\frac1x$ is integrable on $\mathbb R\setminus\{0\},$ which is incorrect. – ryang Jan 31 '23 at 10:46
  • @ryang yeah... I'm convinced-ish. But it does require a proper awareness that the Fundamental Theorem is not just a syntactic triviality. And that's the crux: students just aren't aware of the subtleties, and any single formula risks being misused in this case. The safe and honest version would be to avoid the problem by giving only two antiderivatives for each of the connected parts of the domain. Of course this necessitates a proper explicit treatment of function domains instead of the awkward “reals minus undefined points” story, which would anyway be a good idea IMO. – leftaroundabout Jan 31 '23 at 11:06
  • @leftaroundabout Doesn't the correct version above do just that? Anyway, while its display of multiple integration constants is just of technical interest, pedagogically, I like it for reminding the candidate/student that $\frac1x$ has a break at $0$ (and to be cautious about attempting to integrate it on say $[0,5]$...notwithstanding the fact that $\int_0^5\frac1{\sqrt x}\,\mathrm dx$ does converge). – ryang Jan 31 '23 at 12:36
  • @ryang no, the version I would consider correct starts by defining two functions, $f_- : \mathbb{R}^{<0}\to\mathbb{R},\ f_-(x) = 1/x$ and $f_+ : \mathbb{R}^{>0}\to\mathbb{R},\ f_+(x) = 1/x$. Then you get $\int\!\mathrm{d}x\ f_-(x) = \{x\mapsto\ln(-x) + c \mid c\in\mathbb{R}\}$ and $\int\!\mathrm{d}x\ f_+(x) = \{x\mapsto\ln(x) + c \mid c\in\mathbb{R}\}$. The explicit set comprehension and quantification should IMO also always be used, and would allow (later on) teaching much better how the concept can be generalized to functions that aren't integrable and what's meant in that case. – leftaroundabout Jan 31 '23 at 12:50
  • @leftaroundabout I wrote a response here, hopefully not controversial. – ryang Feb 03 '23 at 06:11
  • The first "version" is not "incorrect". It depends on what precisely the statement even means. See https://matheducators.stackexchange.com/questions/2338 –  May 25 '23 at 02:56
21

The idea that “a number” means “this decimal expansion”, rather than the expansion being a way of representing a number that has some more set-theoretic definition. It's the de facto truth for everyone in the world who isn't going into rigorous mathematics, but thinking of the decimal as “the number” and other properties as things that are true about it is the root of the classic confusion around $0.999... = 1$.

When I was introduced to the concepts at the undergraduate level, we couldn't even get through the basic definitions without imposing $0.999... = 1.000...$ as part of how the representation is constructed, so that numbers map (almost) uniquely in both directions.

This relates closely to Patrick Stevens' answer – a straightforward understanding of rational and real numbers works intuitively, but the formalism has to go the other way from the intuition.
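
For reference, once a decimal expansion is defined as the limit of its partial sums, the identity behind the classic confusion is a routine geometric-series computation (independent of which construction of $\mathbb{R}$ one uses):

$$0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^k} \;=\; \frac{9/10}{1-1/10} \;=\; 1.$$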

Wrzlprmft
  • 2,548
  • 16
  • 33
LizWeir
  • 319
  • 1
  • 4
  • I have a background in mathematics but not education, so I'd be interested in input from people with early-years education experience on how rational numbers and their equivalent representations as fractions and decimals are taught - at this point I very much don't remember my first introduction! – LizWeir Jan 27 '23 at 09:14
  • 2
    One could also formalize real numbers as infinite decimal expansions, with the equivalence with 9s imposed. I think I actually saw a writeup of this a few years ago, and proving that you have a totally ordered complete field based on this definition was not as horrible as you might imagine. – Steven Gubkin Jan 27 '23 at 12:32
  • 1
    That makes sense, although if we were going to do that I'd definitely want to see an argument that the equivalence was a natural thing to impose. (I suppose the algebraic argument based on 9.999... = 10 * 0.999... does supply that.) – LizWeir Jan 27 '23 at 13:13
  • https://www.dpmms.cam.ac.uk/~wtg10/decimals.html – Steven Gubkin Jan 27 '23 at 13:55
  • https://scholarworks.umt.edu/cgi/viewcontent.cgi?article=1511&context=tme – Steven Gubkin Jan 27 '23 at 13:56
  • 2
    @LizWeir non-integral quantities are taught primarily as fractions initially. This has a number of advantages: it allows the introduction of concepts like factors, etc and matches natural language more closely. (Children will already be eating "three quarters" of a cake, not "0.75 of a cake"). At this point other representations for integers like tallymarks and roman numerals are often taught. This is much maligned but they are culturally marginally important, and from a mathematical perspective help break the "a number is merely this sign" mistake. – Dannie Jan 27 '23 at 14:04
  • 1
    Decimals are initially treated as merely the awkward language spoken by calculators and computers, and usually alongside ten times-tables and basic SI prefixes so that "moving the dot" helps the kids understand the structure of them and their practical use in trades, crafts, etc. – Dannie Jan 27 '23 at 14:05
  • 4
    I'll note that 0.999...=1 is not true under all number systems. Some students are (intuitively) working in a different mathematical system (which allows infinitesimals) than the standard one. Mathematical systems with infinitesimals can have similar power/rigor/coherency to the "standard" system. Nonstandard Student Conceptions About Infinitesimals discusses this from the perspective of educators. It's approachable, despite being over 100 pages. – Brian Jan 27 '23 at 14:16
  • The set-theoretic definition of numbers is one possible formalisation, certainly not "the only true definition and anyone who says differently is a liar". – Stef Jan 27 '23 at 19:56
  • I don't really get this answer. If a number is not its decimal expansion, then what is it? I don't get the gist of the answer. – Adam Rubinson Jan 31 '23 at 20:37
  • Also, what is the "the classic confusion around $0.999... = 1$." ? – Adam Rubinson Jan 31 '23 at 20:48
  • @AdamRubinson: Some students believe that $0.999... = 1-ε$, where $1-ε≠1$ . They may not phrase it that way, though. See link in my previous comment for a more exhaustive investigation of how students get it wrong. – Brian Feb 02 '23 at 20:05
  • Yeah but that's not due to a lie-to-children, as children are not told that $1 > 0.999...$. They are told that $1=0.999...$. So this is more of a mistake on an individual level: students "making up their own rules", rather than something that is taught (which it isn't). – Adam Rubinson Feb 02 '23 at 22:23
  • In my experience (in the UK, possibly teaching has changed since), students are in a position to construct the decimal 0.999... well before they're told it's equal to 1 - I don't remember encountering the latter fact in a formal education environment until the degree level. Left to their own devices it's natural for a student to assume that two different decimal expansions must be two different numbers. I was meaning that more as an example of a consequence of the 'lie', though - the 'lie' itself is more or less the omission of the 1 = 0.999... detail when first introducing decimal numbers. – LizWeir Feb 03 '23 at 09:29
  • I'll chew on whether I can make the answer clearer. The root of what I'm getting at is that decimal expansions aren't a straightforward one-to-one mapping to 'numbers' and back - the symbols 1/2 and 2/4 are different ways to write the same number, so more than one way-to-write-a-fraction can be the same number, and the same's true with decimals but in a way that's elided until quite a lot later in a mathematical education. Exactly clarifying what we mean by 'number' (and indeed 'is') is a question of mathematical philosophy that I'm not sure I'm equipped to answer, though. – LizWeir Feb 03 '23 at 09:34
  • @AdamRubinson: This is veering off-topic, but if you look at the standard "how to compare decimal numbers" presentation, it could be problematic (i.e.: compare sequential decimal places, and when a pair doesn't match you know one is lesser than the other). Not mentioning the exceptional repeating-9's case could count as a concealment/lie that leads to the intuition that $1 \neq 0.999...$. – Daniel R. Collins May 24 '23 at 01:52
  • When I was younger, I was asked to calculate $1-0.999\ldots.$ I quickly realised that this was $0.000...1,$ and since the zeros never stop recurring, you never "reach $1$", so this must be equal to $0.00000\ldots = 0.$ So, the difference between $1$ and $0.999...$ is $0$ and therefore they are in fact equal in value, which is all that matters. – Adam Rubinson May 24 '23 at 21:38
18

The average

where the lie-to-children is the word "the".

Ask anyone what "the average" of a set of values is, and immediately you'll be told the arithmetic mean. That's how it's taught initially, and that's what everyone falls back to by default.

A little further on in primary school, you're taught that actually there are three kinds of averages - mean, median and mode. But only three.

Study actual statistics though, and you get onto least-squares, standard deviation and other ways to know how confident you are about your "average", fitting to polynomials or other functions, Bayesian statistics, and so on.
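
A small Python illustration of why "the average" is underdetermined (the income figures below are made up for the example):

```python
import statistics

incomes = [20_000, 22_000, 25_000, 27_000, 30_000, 30_000, 1_000_000]

print(statistics.mean(incomes))    # ~164857, dragged up by the single outlier
print(statistics.median(incomes))  # 27000, the "middle" value
print(statistics.mode(incomes))    # 30000, the most common value
```

Each of these is "an average"; which one is the honest summary depends on the question being asked.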

Graham
  • 399
  • 2
  • 5
  • 19
    "Most people have an above-average number of arms." – Eric Duminil Jan 27 '23 at 12:51
  • @EricDuminil Which becomes an issue if you want to quote a number of significant digits on a value, yes. :) – Graham Jan 27 '23 at 13:13
  • 15
    I disagree. I have never heard anyone say "the average" to mean anything other than the mean. – Daron Jan 27 '23 at 13:23
  • 1
    @Daron Which is what I said right at the start. The mean is *an* average, not *the* average. And there are many, many other averaging techniques. – Graham Jan 27 '23 at 15:12
  • 10
    @Daron But which mean? Arithmetic, geometric, harmonic, something else? The term ‘mean’ has the same issue to a large extent, it almost always means the arithmetic mean, but it’s still technically ambiguous. – Austin Hemmelgarn Jan 27 '23 at 16:00
  • 11
    When you use standard spreadsheet software, you need to be comfortable with the fact that the AVERAGE() function returns the arithmetic mean and not anything else. – Daniel R. Collins Jan 27 '23 at 16:30
  • 23
    @Graham If no other context is given, "The average" almost always means the mean. This is how language works. I would not call the existence of the median and mode a lie to children. It is just a change of convention. – Daron Jan 27 '23 at 17:53
  • An even bigger issue with many averages is the complete lack of questioning of the sample population the value was calculated from. Was the sample population representative of the whole population? Maybe or maybe not, but that is rarely discussed. – hlovdal Jan 27 '23 at 18:08
  • 2
    I think it's exactly the other way around. "Mean" is the word that's ambiguous, with "arithmetic mean" meaning "average". It's true that some confused reporters say "average" when they mean "median", but I've never heard "mode" ever confused with the average. – JounceCracklePop Jan 27 '23 at 18:32
  • Another issue with this appears when we move from finitely many values to infinitely many. Then, one has to specify the distribution before we can consider even the arithmetic mean. – Michał Miśkiewicz Jan 27 '23 at 19:12
  • 1
    @Daron The question isn't about language or common usage, it's about teaching. Are you disputing my assertion that children are first taught "this is the average", then taught "that was actually an average, here are two more", and then taught more averaging when they learn statistics? I can look for a primary school syllabus if you want me to substantiate that claim, but I suspect most people can remember enough of their school days that I don't have to. – Graham Jan 27 '23 at 19:16
  • 2
    unfortunately, I don't find this lie to children to be particularly helpful considering median and mode are far easier to calculate for a young child, and many adults never make it past the arithmetic mean as THE average. As we know, arithmetic means are often extremely misleading in popular statistical reporting where extreme outliers exist, for example average income, average home price, etc. which is why they are usually reported as medians. – BlackThorn Jan 27 '23 at 19:38
  • @Graham I am disputing the claim that referring to the mean of a bunch of heights as "the average height" is incorrect usage. In my experience, it is used by many adults who presumably do not know about the mean and mode, but also often used by technical people who do know about the mean and mode. – Daron Jan 27 '23 at 20:50
  • 1
    Oh poo this is simply arguing semantics. The average has an exact meaning for most people. For others it's a known junk term. The correct catch all term is "a measure of center". Use that to avoid being clear about which you mean. – candied_orange Jan 27 '23 at 20:54
  • 1
    This just supports my hypothesis that about half the population has a below-average understanding of statistics. – Robert Columbia Jan 28 '23 at 15:46
  • @RobertColumbia There is no understanding or misunderstanding going on here. It is a question of semantics. – Daron Jan 28 '23 at 19:01
  • For anyone interested, there's a historical account of this issue in Eisenhart's The Development of the Concept of the Best Mean of a Set of Measurements from Antiquity to Present Day (Annual Meeting of the American Statistical Association, 1971). – Michał Miśkiewicz Feb 01 '23 at 02:34
18

Whether or not the derivative $\frac{dy}{dx}$ is a fraction. Similarly, what, exactly, are $dy$ and $dx$?

This actually goes through several iterations of lies:

  1. We first hammer it into Calc I students that $\frac{dy}{dx}$ is not a fraction, but instead the limit of fractions and that we write it as a fraction for intuition. We teach that $dx$ should be thought of as "an infinitesimal change in $x$" and sometimes even mention that this isn't strictly accurate, but does mirror the intuition of Newton, Leibniz, and others during the historical development of infinitesimal calculus.

  2. Somewhere in a differential equations or physics course, we teach students to manipulate $\frac{dy}{dx}$ as a fraction. But only in special circumstances, and with the caveat that it's not really a fraction; it just plays one on TV. All of these fraction manipulations are simply algebraic shorthand for things like change of variables in integration when solving a separable differential equation. (A small check of this appears after this list.)

  3. Somewhere in a differential geometry/calculus on manifolds course we introduce the idea that $\frac{dy}{dx}$ is, in fact, a fraction relating vectors in tangent spaces. And simultaneously introduce the idea that a derivative is a linear mapping between tangent spaces and not a fraction. And that $dx^i$ is simply a map that returns the $i$-th coordinate of a vector in a tangent space.
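
A small check of stage 2 above (a sketch, assuming sympy is available): for a separable equation, the informal "multiply both sides by $dx$" manipulation lands on the same answer that a symbolic solver reaches by a rigorous change of variables.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = x*y.  Treating dy/dx "as a fraction" gives dy/y = x dx,
# hence ln|y| = x**2/2 + c, i.e. y = C*exp(x**2/2).
ode = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(ode, y(x)))   # Eq(y(x), C1*exp(x**2/2))
```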

erfink
  • 1,129
  • 7
  • 15
15

"Random variable."

...because, as we all know, a random variable is neither random nor a variable. It is a real-valued function. But if we tried to introduce the concept, the feeling, of a stochastic, uncertain, incompletely known environment using such deterministic terminology as "real-valued function", we would certainly fail. Hence, random variable it is.
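
To spell out what the terminology hides, here is a minimal Python sketch (the names are invented for illustration): the "random variable" is literally a function on a sample space, and all the randomness lives in the probability measure.

```python
from fractions import Fraction

# Sample space for two fair coin flips, with a probability measure on it.
omega = ["HH", "HT", "TH", "TT"]
prob = {w: Fraction(1, 4) for w in omega}

# The "random variable" X = number of heads is just a function omega -> R.
def X(w: str) -> int:
    return w.count("H")

# Its distribution is the pushforward of the measure through X.
distribution = {}
for w in omega:
    distribution[X(w)] = distribution.get(X(w), Fraction(0)) + prob[w]

print(distribution)   # {2: Fraction(1, 4), 1: Fraction(1, 2), 0: Fraction(1, 4)}
```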

Alecos Papadopoulos
  • 1,542
  • 8
  • 16
  • 2
    Is a random variable really a fiction? It may be that they don't exist in the world of digital computers that we currently normally use, but that's not to say they don't exist. – DJClayworth Jan 27 '23 at 15:28
  • 11
    I've never been fond of this somewhat-snarky take. Random variables are the mathematical model for things that appear random in the world, and are used as such in applications all the time. – Daniel R. Collins Jan 27 '23 at 16:35
  • 14
    -1 for two reasons: (i) This is not an example of a lie-to-children (or students), since it's not something that is "corrected" later on. On the contrary, random variable is standard terminology which is used all the time by mathematicians working in probability theory. (ii) I feel obliged to say that I don't find the reasoning in your second paragraph convincing. As @DanielR.Collins pointed out, random variables are commonly interpreted as quantities that are "variables" in a model and that are random. – Jochen Glueck Jan 27 '23 at 18:12
  • 1
    Random variables are subsets of "real-valued functions", assuming you're only meaning that the codomain is real (the domain need not be). But not all "real-valued functions" are random variables, so it's incorrect to equate the two. And variables can hold functions! It is a variable (holding a function), and it is formalizing some notion of "randomness". "Random variable" is more correct. – JounceCracklePop Jan 27 '23 at 18:27
15

I was introduced to the real numbers as "all the points on a [two-sided infinitely long] line".

  • At best this is a circular definition. It's certainly very sloppy, and those words could be used to describe many different objects.
  • The real-world intuition is incorrect. By the time you're looking at a small enough scale for it to matter whether you've got $\mathbb{R}$ or $\mathbb{Q}$, space has stopped behaving like $\mathbb{R}^3$.

But it's intuitively obvious what it means. Eventually you'll be introduced to "Dedekind- or Cauchy-completion of the rationals" to formalise what it means for something to be a "point on a line" in the intended sense.

Patrick Stevens
  • 413
  • 2
  • 10
  • Well, but for actually writing down 5 or 6 geometric definitions that come immediately from intuition this is completely rigorous...see Bell's Smooth Infinitesimal Analysis, for example, where he builds out almost all of calculus and the basics of differential geometry for an (inequivalent!) axiomatization of the reals using Euclidean geometry alone. Every function is continuous, and infinitely differentiable---made possible by a constructive ambient logic preventing definition of point discontinuities. The theory has topos models, and so is as consistent as ZF(C?). – Duncan W Jan 28 '23 at 03:19
  • Cute! Although to convince me that this is a counterexample, you'll have to show me a high school teacher who avoids using excluded middle in their arguments involving the reals (or else proves that the instances of LEM they use are intuitionistically true). – Patrick Stevens Jan 28 '23 at 08:10
  • 1
    "I was introduced to the real numbers as "all the points on a [two-sided infinitely long] line". " What do you mean by "two-sided line"? – Adam Rubinson Jan 31 '23 at 20:40
  • 1
    A line rather than a ray. – Patrick Stevens Feb 01 '23 at 18:33
14


$\LARGE\mathbb R$

Students are introduced to real numbers long before they are ready for the formal definition. At second level they are primed for dealing with fractions and non-fractions, and told there are numbers like $\pi$ where the decimal expansion goes on forever.

The formal definition is in terms of completion of a metric space or Dedekind cuts. I was in my early twenties when I first encountered it.

Interesting fact: The metric space definition is already circular if you insist on defining a metric as a function $d: X \times X \to \mathbb R$. This assumes $\mathbb R$ is defined and cannot be used to create the definition. Your homework is to repair this problem.

Daron
  • 240
  • 1
  • 4
  • 1
    Define $\mathbb R$ using something other than Cauchy sequences, and then feel free to use Cauchy completion to define other complete metric spaces? – Dave Jan 27 '23 at 16:06
  • 5
    Initially the (usual) metric $|\cdot| : \mathbb{Q}\times\mathbb{Q} \to \mathbb{Q}$ can be used to complete $\mathbb{Q}$ into $\mathbb{R}$. – Aeryk Jan 27 '23 at 17:39
  • @Aeryk Ten points! – Daron Jan 27 '23 at 17:51
  • @Dave That also works but requires back-and-forthing. Nine points! – Daron Jan 27 '23 at 17:52
  • 1
    You can define the real numbers using Cauchy sequences without circularity - just swap out "for all real epsilon > 0" with "for all rational epsilon > 0". The only reason the definition of Cauchy sequences depends on the real numbers is because you're using a definition of Cauchy sequences of real numbers. – kaya3 Jan 28 '23 at 09:10
  • Isn't the "infinite decimal" definition just a specific Cauchy sequence of rational numbers? – Solomon Ucko Jan 28 '23 at 22:35
  • @SolomonUcko It is indeed. – Daron Jan 29 '23 at 10:43
12

I was told in high school that Euclidean geometry can be derived from the five postulates written by Euclid, but this is not the case. Several of Euclid's proofs have holes in them, and one can create models of the five axioms that do not satisfy those results (e.g., in $\mathbb{Q}^2$ a line can pass closer to the center of a circle than the length of its radius and still not intersect the circle).

A rigorous foundation of Euclidean geometry was given by Hilbert in 1899; it requires 20 axioms, which exclude the above models and make all the classical theorems valid.
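
A concrete instance of the gap (one standard example): in $\mathbb{Q}^2$ the line $y=x$ passes straight through the centre of the unit circle $x^2+y^2=1$, yet the two never meet, since any intersection point would have to satisfy

$$2x^2=1, \qquad x=\pm\tfrac{1}{\sqrt{2}}\notin\mathbb{Q}.$$

Euclid's postulates do not exclude this model; Hilbert's continuity axioms do.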

Rad80
  • 223
  • 1
  • 4
  • 7
    Are these lies told to children specifically? I feel like these are just inherited lies that had been told to adults for a very long time. (And when I put it that way, I feel uncomfortable calling them lies, because certainly Euclid did not set out to deceive anyone about the various models of his axioms.) – Misha Lavrov Jan 27 '23 at 17:54
  • 3
    Euclid made an honest mistake, assuming that the text was not corrupted by copying over the centuries. That his 5 axioms are enough is taught in high schools, at least it was in Italy a few years ago. – Rad80 Jan 27 '23 at 21:51
9

A lot of things around the limitation and construction of number spaces come to mind such as:

  • You cannot divide 5 by 2 (see the sketch below).
  • You cannot take the square root of a negative number.
  • You must not divide by zero.

  • Real numbers hold more reality than imaginary numbers.

  • Imaginary numbers hold some reality because they have applications in electrical engineering, quantum physics, …

  • Mathematicians invented complex numbers because they wanted to take the square root of negative numbers.

    (There may be some truth to this one. It’s not a good motivation though.)

  • We use complex numbers to be able to solve more equations.

  • Infinity is not a valid result and you cannot perform computations with it.

Whereas the truth is more like:

  • All number spaces (and infinity) are artificial constructs.
  • “Advanced” number spaces (and infinity) allow us to treat certain problems more efficiently or avoid irrelevant pathologies.
  • Taking the square root of a real number (and similar) is most often not something we do for its own sake, but something that comes up in applications and is either a way to a useful answer or not.
  • The restrictions of number spaces reflect restrictions of real-life or inner-mathematical applications and it’s our duty to apply them as appropriate.
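
To illustrate the first few "lies" in the list above, a minimal Python sketch: each statement is true or false depending on which number space you have chosen to work in.

```python
import cmath
from fractions import Fraction

# "You cannot divide 5 by 2" -- true for integer division, false for rationals:
print(5 // 2)            # 2 (the remainder is simply discarded)
print(Fraction(5, 2))    # 5/2

# "You cannot take the square root of a negative number" -- true in the reals
# (math.sqrt(-4) raises ValueError), false in the complex numbers:
print(cmath.sqrt(-4))    # 2j
```
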
Wrzlprmft
  • 2,548
  • 16
  • 33
  • "The restrictions of number spaces reflect restrictions of real-life applications and it’s our duty to apply them as appropriate." Maybe if you're an applied mathematician… – wizzwizz4 Jan 27 '23 at 17:04
  • @wizzwizz4: Now that you are saying this, I might as well add inner-mathematical applications to the list. That only leaves those few people who investigate the particular consequences of certain restrictions for their own sake. – Wrzlprmft Jan 27 '23 at 18:16
  • Well, I still cannot take the square root of a negative number. As soon as I extend square roots to more than just nonnegative real numbers, suddenly every number except zero has two square roots, so it's no longer possible to take *the* square root of a number. – Stef Jan 27 '23 at 20:34
  • 3
    @Stef: That’s a problem you also have when taking the square root of positive numbers. For example, it is only by convention that $\sqrt{4} = 2$ and not $\sqrt{4} = -2$. We can introduce similar conventions to make the square roots of negative numbers (and complex numbers) unique. – Wrzlprmft Jan 27 '23 at 20:57
  • Sure, you can come up with conventions for anything and everything if you want to. But that doesn't make "lies" all statements made by other teachers who are not using your conventions. – Stef Jan 27 '23 at 21:07
  • I also cannot divide by zero, and imaginary numbers are not real numbers, and infinity is not a real number, so I don't quite understand your other points either. – Stef Jan 27 '23 at 21:09
  • @Stef: Sure, you can come up with conventions for anything and everything if you want to. But that doesn't make "lies" all statements made by other teachers who are not using your conventions. – I didn’t call the conventions that make the square root a function “lies”. Such conventions are required for square roots of positive as well as negative numbers and you can introduce them right away with the square root. It’s a different “problem” altogether. – Wrzlprmft Jan 27 '23 at 22:56
  • @Stef: I also cannot divide by zero, – Hence “you must not”, which suggests that dividing by zero is something you choose to do rather than something that comes up in problems and has to be dealt with. — … imaginary numbers are not real numbers, and infinity is not a real number, so I don't quite understand your other points either. – I have been using the term real in the colloquial sense here, which was not a good idea given the context. See my edit. – Wrzlprmft Jan 27 '23 at 23:03
8

My biggest pet peeve:

"A vector is a quantity with both magnitude and direction"

One is required to say this to pick up marks on an A-level physics exam, say, despite it being very, very wrong and arguably placing emphasis on the wrong concepts. I am not yet at university, but I imagine there is a large culture shock when students are first exposed to linear algebra. That said, introducing any other notion seems to be very counterproductive to the high school education, and with regard to what one actually needs to be able to do, the "magnitude-and-direction" lie is successful.

Many examples of this can be found in high school mathematics curricula. Unrigorous notions or incorrect definitions of the derivative, even of "increasing/decreasing function", integral, etc. are all lies that tend to cause students some difficulty unlearning in the first years of university but are essential for teaching the applications of high school maths to the typical high schooler.
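
For reference, the definition that eventually replaces it: a vector is simply an element of a vector space, i.e. of a set equipped with an addition and a scalar multiplication satisfying the usual axioms. The space of all real polynomials is a vector space, for example, yet a polynomial has no intrinsic "magnitude" or "direction" until extra structure (a norm or an inner product) is chosen.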

FShrike
  • 478
  • 3
  • 12
  • 1
    What about "A vector is a quantity with both magnitude and direction" is not true? Maybe not all vectors have magnitude and direction, but the ones being discussed at A Level do, so in the context it is presented it makes sense to me... – Adam Rubinson Jan 31 '23 at 20:53
  • 1
    @AdamRubinson It's not at all the definition of a vector. It's a really annoying pseudo-definition... but it is a reasonable thing to teach at A level, as you say, which is precisely why I deem it a suitable response to this post. – FShrike Jan 31 '23 at 20:56
  • Yeah, that's true: definitionally, a vector is a more abstract object. – Adam Rubinson Jan 31 '23 at 20:57
6

I think sometimes when introducing $\pi$ to children they are told that it's exactly $3.1415$ or some other arbitrary number of decimals, or even that it equals $22/7$.
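
If one later wants to correct the $22/7$ version with something more than "it's only an approximation", there is a classical worked equation (a standard exercise, quoted here for reference):

$$0 \;<\; \int_0^1 \frac{x^4(1-x)^4}{1+x^2}\,dx \;=\; \frac{22}{7}-\pi,$$

so $22/7$ overshoots $\pi$ by about $0.00126$.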

Ivo
  • 227
  • 1
  • 4
  • 14
    I wouldn’t classify this as a lie to children as specified in the question. Of course it’s said to children and it’s a lie, but I am very skeptical that this is a didactic benefit to this as in “leads the child's mind towards a more accurate explanation, one that the child will only be able to appreciate if it has been primed with the lie”. – Wrzlprmft Jan 27 '23 at 08:18
  • 8
    Never heard of this – OrangeDog Jan 27 '23 at 11:39
  • 4
    I don't recall ever being told this is the exact value, it was always explained as an approximation. – Barmar Jan 27 '23 at 15:40
  • 1
    Even if this qualifies, it’s not exactly something that matters at all. The margin of error involved in just using 3.1415 instead of 3.1416 (correctly rounded), or 3.14159, or 3.141592654, or some higher number of digits is so small that it just doesn’t matter even in many scientific applications. Even 22/7 is not all that horrible when dealing with small circles and low precision (it’s only about 0.04% larger than π, so it’s still useful in real life because it’s predictably a tiny bit high). – Austin Hemmelgarn Jan 27 '23 at 16:18
  • 2
    Some teachers teach badly – but, unfortunately, I am aware of this being taught. @AustinHemmelgarn It matters when you're working with (e.g.) Taylor series representations of trigonometric functions, since thinking pi is rational will lead to contradictions. – wizzwizz4 Jan 27 '23 at 17:07
  • @wizzwizz4 Yes, it does matter for mathematicians and for some scientists, but for the average person, not so much (most people never need to do things like computing the surface area or volume of a sphere, let alone the type of math where π being transcendental significantly matters). I will admit I was a bit more dismissive though in my above comment than I probably should have been. – Austin Hemmelgarn Jan 27 '23 at 17:36
  • I've definitely seen this in undergraduates I teach. – Daniel R. Collins Jan 27 '23 at 22:13
  • 1
    Indeed it's true. When I was first told about $\pi$, it was presented as $\pi = \frac{22}{7}$. It wasn't until two years later that I realised otherwise. – An_Elephant Jan 28 '23 at 15:37
6

We can define irrational numbers to be those numbers which are not rational, and then define the real numbers to be the union of the rationals and irrationals.

The problem with this definition is that it first requires us to have defined the set of real numbers.

Here is a related discussion:

Why does the widespread erroneous definition of "irrational number" persist without being taught?

If we try to define irrational numbers as "numbers that are not rational" then we unwittingly capture numbers like $\sqrt{-1}$, quaternions, $\aleph_0$, surreal numbers, etc.

user52817
  • 10,688
  • 17
  • 46
2

Anyone who was taught about Venn Diagrams was probably told that a set is a collection of objects.

This is, of course, false. But it is a handy way to think about sets, and even when you get around to a more formal notion of sets, you'll still think of them as collections of objects while reasoning about them.

  • In what way is this false? As far as I understand the axioms of set theory, a set is more or less a formalization of a "collection of objects" (but my knowledge of set theory is quite limited). – Michał Miśkiewicz Jan 30 '23 at 05:29
  • Sets are not defined objects in set theory. Sets can relate to each other in two ways: equality and elementhood. While the axioms do define equality between sets, the axioms don't define elementhood. Sets being collections of objects is just a particular way to think/talk about the elementhood relation. But that's not in any way related to what sets are. Formally, sets are variables which satisfy some formulas in the language of set theory. There are various set existence axioms; if a formula can be derived from one of those axioms, the variables which satisfy it are sets. – Michael Carey Jan 30 '23 at 07:01
  • There are many interpretations of "elementhood" that are consistent with the axioms. Instead of collection of objects, you can think of it as less-than, or subset, or, stronger than, etc... the formula you build which are satisfied by variables, assign intuitive meaning to elementhood. – Michael Carey Jan 30 '23 at 07:17
  • Sets aren't, at their core, a formalization of collections of objects. They are variables which satisfy properties which can be defined with well-formed formulas (the technical term for formulas in the language of set theory). So, to be an element of a set is to satisfy some property. As sets are defined by what elements they have, that's what sets are: a formalization of what it means to define something in terms of the properties that it has. – Michael Carey Jan 30 '23 at 07:45
  • 2
    @MichałMiśkiewicz Thinking of any collection of sets as a set leads to contradictions, like Russell's paradox (Consider the collection of all the sets that do not contain themselves. If this is a set, called R, then it is either in or not in R. Both of these lead to a contradiction). An illustration of this is the barber paradox (Define the barber as a person who shaves those and only those who do not shave themselves. Does a barber shave himself?). – Ferenc Beleznay Mar 04 '23 at 09:12
1

As OP asks for examples "particularly at the undergraduate level or higher", here is a meta-mathematical "lie-to-children" that undergraduates tend to learn (often probably implicitly) when they major in mathematics:

Validating a mathematical result means checking how every single step in the argument follows logically from the previous steps.

The skill to check how every single step of a proof follows from the previous steps is obviously something that one needs to learn well. So I don't think one can avoid that the students get, at some point, the impression that this consecutive validation of steps is the essence of checking a mathematical result. However, when more experienced mathematicians read a result and its proof, many of them approach it quite differently than by reading every single step in the proof in detail.

Consequences of this (maybe unavoidable) "lie-to-children" can be observed on many occasions (for instance, on several Stackexchange sites):

  • Some (many?) people seem to be under the impression that a mathematical theory is essentially a logical house of cards that collapses once one removes a single piece. Experience shows that this is not the case, though: mistakes in research papers do occur on a regular basis and at varying degrees of severity, yet mathematics (or subfields of it) is far from collapsing. Several reasons for this are discussed in the answers to this MathOverflow post.

  • Many PhD students believe that the "canonical" way to referee a mathematical research paper is to check every single step in all the proofs. However, this is hardly what one does in practice. In this post I described how checking proofs during peer review tends to work in practice.

Jochen Glueck
  • 2,195
  • 6
  • 21
0

Independence in probability theory is not independence in the usual sense, because it does not take into account causation.

Addition 1: The notion of independence is well-known: en.wikipedia.org/wiki/Independence_(probability_theory). For example, events $A$ and $B$ are called independent if $\mathbb{P}(AB)=\mathbb{P}(A)\mathbb{P}(B)$. As we can see, this is not at all the same as causal independence or something like that, because there is not a word about causes in this definition. And if somebody speaks about independence, for example "Student John's grade in physical education does not depend on the size of Napoleon's army", it doesn't look like he means that $\mathbb{P}(AB) = \mathbb{P}(A)\mathbb{P}(B)$ for the corresponding events.
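
A small Python check of the point (a minimal sketch): the two events below are defined on the same single die roll, so they are hardly "independent" in any everyday or causal sense, yet they satisfy the product definition exactly.

```python
from fractions import Fraction

outcomes = range(1, 7)   # one roll of a fair die
P = lambda event: Fraction(sum(1 for w in outcomes if event(w)), 6)

A = lambda w: w % 2 == 0   # "the roll is even"
B = lambda w: w <= 4       # "the roll is at most 4"

print(P(A), P(B), P(lambda w: A(w) and B(w)))      # 1/2 2/3 1/3
print(P(lambda w: A(w) and B(w)) == P(A) * P(B))   # True: independent by the product definition
```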

Addition 2: This is not the first time someone has put a dislike while failing to formulate a single objection against the obviously correct thesis.

Botnakov N.
  • 179
  • 5
  • 3
    What do you consider independence in the usual sense? I wouldn’t consider two causally connected events independent in any sense that I am aware of. – Wrzlprmft Jan 27 '23 at 14:15
  • @Wrzlprmft What I'm saying is that it is a standard error to misunderstand the difference between the concept of independence in probability theory and the concept of causal independence, and that the confusion of these concepts is a "lie to children". – Botnakov N. Jan 27 '23 at 16:58
  • 3
    You call this an "obviously correct thesis", but I don't see it. Can you explain this in the Bayesian formalism? – wizzwizz4 Jan 27 '23 at 17:05
  • 2
    @wizzwizz4 Independence is determined within the Kolmogorov axioms. And the Bayesian approach is defined within the framework of Kolmogorov's axioms. The notion of independence is well-known en.wikipedia.org/wiki/Independence_(probability_theory). For example, events $A$ and $B$ are called independent if $\mathbf{P}(AB) = \mathbf{P}(A)\mathbf{P}(B)$. As you can see, this is not at all the same as causal independence, because there is not a word about causes in this definition. That's why I call the obvious thesis obvious. – Botnakov N. Jan 27 '23 at 17:15
  • 2
    @BotnakovN. If you included that comment in your answer, I would still disagree with it (because I don't think that independence "in the usual sense" is causal independence) but I would think it's a fine answer because I would know what you mean. Right now, your answer is incomprehensible without that comment. (I did not downvote, though, to be clear.) – Misha Lavrov Jan 27 '23 at 17:51
  • @MishaLavrov When smb. speaks about independence, for example "Student John's grade in physical education does not depend on the size of Napoleon's army", it doesn't look like he means that $\mathbf{P}(AB) =\mathbf{P}(A)\mathbf{P}(B)$ for the corresponding events. By the way, what do you mean by "independence in usual sense" if you don't mean causal independence? – Botnakov N. Jan 27 '23 at 18:59
  • 1
    @BotnakovN. I would be hard-pressed to give a precise definition of causal independence, so it can't possibly be what I mean! (It's not just "one did not cause the other", because we want to include cases where a common cause affects both - on the other hand, there are surely events that precede both student John and Napoleon, they just "don't seem relevant", but what exactly does that cash out to?) My intuitive notion of independence is more like "If you tell me how big Napoleon's army was, it doesn't tell me anything new about John's grade". – Misha Lavrov Jan 27 '23 at 19:24
  • @MishaLavrov So it's something like "informational independence" instead of "causal independence". I understand you. It is interesting, since it already comes down to pure philosophy:) – Botnakov N. Jan 27 '23 at 19:32
  • 1
    This isn't a "lie to children", it's just a word that has a technical meaning in mathematics that is not the same as its everyday meaning or its meaning in some other technical contexts. Mathematics is full of those, e.g. in topology "open" is defined in a way which is not mutually exclusive with "closed". Essentially you are arguing that the definition of independence is itself incorrect; but the definition of independence we teach to students is indeed the mathematical definition of independence that is used throughout all of probability, and it does not have to be unlearned. – kaya3 Jan 28 '23 at 08:53
  • @kaya3 When we say "open set" or "closed set" students are unlikely to think that the set is open/closed in the same sense that a door is open or a toilet is closed. Students understand that "open set" is an abstract term. The opposite is true with independence. It's not just about matching words. – Botnakov N. Jan 28 '23 at 10:05
  • @kaya3 Let's compare: when we say that the outcome of a die roll is independent of the outcome of a coin toss, we are referring to some kind of "independence in usual sense" (causal independence or something like that), not an artificial and abstract concept. But then we use an abstract notion while solving problems. "Lying to children" occurs when the teacher, for the sake of simplicity of exposition, ignores the difference between independence in the usual sense and independence in probability theory. – Botnakov N. Jan 28 '23 at 10:06
  • That's not what a "lie to children" means, though. A "lie to children" is a simplification or outright falsehood which is taught to develop some understanding, and later has to be unlearned to reach a higher level of understanding. The teacher who teaches the definition that A and B are independent if and only if P(A and B) = P(A) * P(B), is not teaching a different definition to the one that will be used later, and the student will not have to unlearn that definition at a later stage of their education. – kaya3 Jan 28 '23 at 11:21
  • Definition: a "lie to children" is "a simplification or outright falsehood which is taught to develop some understanding, and later has to be unlearned to reach a higher level of understanding."

    Situation: teacher, for the sake of simplicity of exposition, ignores the difference between independence in the usual sense and independence in probability theory.

    We have a) simplification b) confusion of notions while taking examples from the real world (such as coins, Brownian motion and so on) - hence there's a mistake

    – Botnakov N. Jan 28 '23 at 12:05
  • @kaya3

    c) it's for understanding d) later students understand that there was a mistake. Thus it's a lie to children by your definition.

    Maybe you suppose that students (after being given a definition of independence) immediately 1) understand that independence in probability is almost independence in the usual sense 2) realize that nevertheless it is not independence in the usual sense 3) all (almost all) students understand it very fast and can explain it.

    Practice shows that this is not the real situation. Of course, maybe your students are very clever and the situation is real for them.

    – Botnakov N. Jan 28 '23 at 12:05
  • 5
    The definition via P(A and B) = P(A) * P(B) is not a simplification - it is the full definition of "independence" used in probability theory. There is no later stage of education where a student will learn that independence in probability theory is not really defined that way. It is really defined that way. You just dispute that "independence" is a suitable word for the concept, so your dispute is with the terminology used in the field, not with how it is taught. – kaya3 Jan 28 '23 at 12:18
  • @kaya3 Suppose that a student John see how somebody draws a playing dice (which is very symmetrical) twice. Will John suppose that these die rolls are independent? Will John suppose that $P(AB) = P(A)P(B)$? Practice says: "Yes". Because lots of times John solved problems about dices on probability theory and lots of times John supposed that $P(AB) = P(A)P(B)$. But in fact "independence in probability theory" differs from "independence in everyday life". Hence John makes a mistake. – Botnakov N. Jan 28 '23 at 19:07
  • @kaya3 Do you see this mistake or do you reject it? If you see the mistake then it's obvious that John's teacher "lied to children" because after probability lessons John still makes this mistake. The only way not to agree with these facts is to reject that John makes a mistake, but it's not a good way. – Botnakov N. Jan 28 '23 at 19:08
  • I cannot make any sense of your proposed scenario involving John, because I don't know what it means to "see how somebody draws a playing dice" unless you mean John observes an artist create a picture of a dice. What events are you talking about? Is there a reason you keep writing P(AB) instead of P(A and B) or P(A intersect B)? – kaya3 Jan 28 '23 at 23:06
  • @kaya3 Yes, there was a strange wrong word in the previous comment. Suppose that student John see how somebody rolls a playing dice (which is very symmetrical) twice. Do you think that John suppose that these die rolls are independent?

    Writing P(AB) instead of P(A and B) is a standard notation, every book on probability theory uses such notation; see, e.g., Feller, An Introduction to Probability Theory and Its Applications, vol. 1.

    – Botnakov N. Jan 30 '23 at 21:34
  • In that case, it would be very reasonable for John to suppose that the two dice rolls are independent, and if John is wrong about this then the problem is that he has made an assumption that wasn't given to him in the problem statement, not that he has used an incorrect definition of independence. (He hasn't.) – kaya3 Jan 31 '23 at 09:43
0

1-2-3-4-5-6-7-8-9-10... Well, not if you are using binary...

The angles on a triangle add up to 180 degrees... Only if the triangle is on a flat surface.

Multiplication is repeated addition. Have you ever multiplied, say, (-3)*(-4) and wondered about how you were adding up a negative number of times? No?

Most of elementary mathematics when you get to abstract algebra, really. Adding two positive numbers always gives a positive, larger number? Nope, sorry, we are in Z7 today so 4+4=1. (Or we are actually adding time--if it is currently 10pm and your homework is due in 11 hours, 10+11=9am.)
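
A two-line Python illustration of the modular-arithmetic point:

```python
# "Adding two positive numbers always gives a bigger positive number" -- not
# once we work modulo 7, or modulo 12 as on a clock face:
print((4 + 4) % 7)      # 1
print((10 + 11) % 12)   # 9  (10 pm plus 11 hours is 9 am)
```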

Subtraction is adding a negative number. Unless you are subtracting a negative number, oops.

IMO math is absolutely crammed full of lies to children, to the point where any elementary math teacher who uses the words "always", "never", "every", etc., is probably automatically fibbing.

user3067860
  • 246
  • 1
  • 6
  • 5
    I claim these aren't lies, but either unusual notations or generalisations. "Well, not if you are using binary" - that is, if the symbols "10" do not in fact mean the natural number ten. "We are in Z7 today", so the symbol "4" does not refer to the natural number four. "The angles on a triangle add up to 180 degrees", unless the triangle isn't drawn on a plane. "Multiplication of natural numbers is repeated addition". None of your statements are false, they're just misleadingly or sloppily phrased. – Patrick Stevens Jan 27 '23 at 23:48
  • I also thought about "Multiplication is repeated addition" for this question, but there's one big problem : it's not a lie at all. If you post a link to the corresponding question, you might as well post a link to the accepted answer, which basically refutes Devlin's rant. Math is full of examples which are correct for a class of numbers, and need to be extended for superset. It doesn't mean it's a lie to describe the example with a simpler version for subsets. – Eric Duminil Jan 28 '23 at 21:20
0

$$\frac{1}{0} = \infty$$

I was told this by my math teacher in grade 8 or 9. All I knew at that time about infinity was that it is a very large number; so large that no one can ever write it.

I don't expect it to be very common though.
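
For what it's worth, the statement can later be repaired rather than simply retracted. In $\mathbb{R}$ the two one-sided limits disagree,

$$\lim_{x\to 0^+}\frac{1}{x}=+\infty, \qquad \lim_{x\to 0^-}\frac{1}{x}=-\infty,$$

so $\frac{1}{0}$ is left undefined; only after adjoining a single point $\infty$ (as on the projective line, or the Riemann sphere $\mathbb{C}\cup\{\infty\}$) does $\frac{1}{0}=\infty$ become a consistent convention.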

An_Elephant
  • 149
  • 3
0

Students are often taught the chain rule as a trivial cancellation law. In reality, there are intricacies within the chain rule.
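
One such intricacy, sketched briefly: the "cancellation" picture writes

$$\frac{\Delta y}{\Delta x}=\frac{\Delta y}{\Delta u}\cdot\frac{\Delta u}{\Delta x}$$

and takes limits, but this breaks down whenever $\Delta u=0$ for arbitrarily small $\Delta x$ (for example $u(x)=x^2\sin(1/x)$ with $u(0)=0$, near $x=0$). The standard proof avoids the division altogether and yields $(f\circ g)'(x)=f'(g(x))\,g'(x)$ without ever treating the derivatives as fractions.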

Vivaan Daga
  • 257
  • 2
  • 9
  • 1
    Welcome to the site. Can you be more specific? Showing an example of how the chain rule is taught and how it should be taught would clarify details for the reader. – Amy B Jan 30 '23 at 09:58
0

In the US, UK, and former UK colonies, at certain primary-school grades, you can't leave your final answer as an improper fraction. You'll lose marks for answering

$$\frac{3}{4}+\frac{1}{2}=\frac{5}{4}.$$

Students are taught that there is something "improper" and wrong about leaving the above as your final answer.

For full marks, you must answer

$$\frac{3}{4}+\frac{1}{2}=\frac{5}{4}=1\frac{1}{4}.$$

Some grade levels later, students discover this "rule" was nonsense.


This seems to be mainly a US/UK thing: See What is the rationale for distinguishing between proper and improper fractions?.

  • This feels like just a lie (or rather bad convention) to me rather than a “lie to children”. Most of the answers to the linked question go along the lines of: It is good to know this exists or to avoid weird representations (like 22/7), but that’s no reason to enforce this. – Wrzlprmft May 25 '23 at 06:43
  • @Wrzlprmft: See the linked discussion: It seems the main reason for teaching students how to convert improper fractions into mixed numbers is that the latter commonly appear in everyday life (at least in the US and UK). –  May 25 '23 at 06:54
  • Not only the US/UK thing. – Rusty Core May 30 '23 at 00:42