69

I can't resist asking this companion question to Gowers's. There, Tim Dokchitser suggested the idea of Grothendieck topologies as a fundamentally new insight. But Gowers's original motivation was to probe the boundary between a human's way of thinking and a computer's. I argued, therefore, that Grothendieck topologies might be more natural to computers, in some sense, than to humans. It seems Grothendieck always encouraged people to think of an object in terms of the category that surrounds it, rather than its internal structure. That is, even the most lovable mathematical structure might be represented simply as a symbol $A$, with its special properties encoded in arrows $A\rightarrow B$ and $C\rightarrow A$: in short, a grand combinatorial network. I'm tempted to say that the idea of a Grothendieck topology is something of an obvious corollary of this framework. It's not something I've devoted much thought to, but it seems this is exactly the kind of reasoning more agreeable to a computer than to a woolly, touchy-feely thinker like me.
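
To make the "grand combinatorial network" concrete, here is a minimal sketch (the class and method names are my own invention, purely illustrative): an object is an opaque symbol, and everything we "know" about it lives in the arrows around it.

```python
# A toy model of the relative point of view: objects carry no internal
# structure; their properties are encoded in the surrounding arrows.

class Category:
    def __init__(self):
        self.objects = set()
        self.arrows = set()          # (name, source, target) triples

    def add_arrow(self, name, source, target):
        self.objects.update({source, target})
        self.arrows.add((name, source, target))

    def arrows_into(self, obj):
        return {a for a in self.arrows if a[2] == obj}

    def arrows_out_of(self, obj):
        return {a for a in self.arrows if a[1] == obj}

# "A" is just a symbol; all we can say about it is read off the network.
C = Category()
C.add_arrow("f", "C", "A")
C.add_arrow("g", "A", "B")
print(C.arrows_into("A"))        # {('f', 'C', 'A')}
print(C.arrows_out_of("A"))      # {('g', 'A', 'B')}
```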

So the actual question is, what other mathematical insights do you know that might come more naturally to a computer than to a human? I won't try here to define computers and humans, for lack of competence. I don't think having a deep knowledge of computers is really a prerequisite for the question or for an answer. But it would be nice if your examples were connected to substantial mathematics.

I see that this question is subjective (but not argumentative in intent), so if you wish to close it on those grounds, that's fine.

Added, 11 December: Being a faulty human, I had an inexplicable attachment to the past tense. But, being weak-willed on top of it all, I am bowing to peer pressure and changing the title.

Minhyong Kim
  • 13,471
  • 45
    +1: the title made me laugh. – Alberto García-Raboso Dec 10 '10 at 03:46
  • 8
    After having a conversation in the hall today about whether or not Grothendieck spent time computing things, I thought the title meant something else... – Dan Ramras Dec 10 '10 at 04:03
  • 4
    Certainly Russell and Whitehead were "computers": it seems everything in their Principia could be proved by computers in the 80s in a couple of minutes. – Chandan Singh Dalawat Dec 10 '10 at 07:21
  • 24
    Submit candidate to Turing test. – Georges Elencwajg Dec 10 '10 at 10:08
  • Sorry if this question is not appropriate, but what does "typically computer-like thinking" mean to someone who does not have a working knowledge of the principles of microprocessor design? – Tim van Beek Dec 10 '10 at 12:39
  • I would submit that all good mathematical proofs and trains of thought are "like a computer"; my point being that a proof must be clear, precise, and lacking ambiguity, with steps that follow from one to the next, and connecting links that show why this must be the path that is followed. It is finding the proof that is the artistry in mathematics. Following a proof presented by someone else is like being a "computer" (in the steady-path sense, not the "calculating" numerical sense) in not jumping to conclusions or taking anything for granted. – sleepless in beantown Dec 10 '10 at 13:07
  • Tim van Beek: I suppose I'm applying the usual 'computer is software' prejudice. I agree it could be a very limiting view. – Minhyong Kim Dec 10 '10 at 14:05
  • 3
    @Alberto: Of course, Grothendieck is on the record as a fervent computer-hater, so I doubt this would make him laugh... (which makes the title even funnier to me). – Thierry Zell Dec 10 '10 at 14:17
  • 2
    I suggest to change the title to "Is Grothendieck a computer?" (instead of "Was Grothendieck a computer?"). – Pierre-Yves Gaillard Dec 11 '10 at 11:22
  • 8
    I realize the title is a bit of a joke. But perhaps there is something to it. I have wondered, could an ordinary human possibly produce so many thousands of pages of output? – Donu Arapura Dec 11 '10 at 14:37
  • @Donu-Arapura, it's a bit like the hundreds upon hundreds of books that were typed and produced by Isaac Asimov, another entry in the "really an ordinary human" category (or perhaps there's a reason that Asimov wrote so many stories about robots and "positronic brains"). – sleepless in beantown Dec 11 '10 at 16:18
  • If yes, only someone like him could have programmed a computer like him... – Buschi Sergio Jul 06 '12 at 13:07

7 Answers

34

A simplicial set is surely an idea which would be more natural to a computer. Breaking a shape up into simplices is still something a human would do, because simplices are contractible geometric objects whose gluings one can explicitly describe. But to pass from this to finite strings with face and degeneracy maps, and then to base your theory on that, is pure computer-thought... and, like any good computer idea, extremely pretty.
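
To make "finite strings with face and degeneracy maps" concrete, here is a minimal sketch (my own toy encoding, nothing canonical): simplices as nondecreasing tuples of vertex labels, face maps deleting an entry, degeneracies repeating one.

```python
# Simplices as finite nondecreasing tuples; the theory is then generated
# by two purely combinatorial operations on strings.

def face(simplex, i):
    """d_i: delete the i-th entry, lowering dimension by one."""
    return simplex[:i] + simplex[i + 1:]

def degeneracy(simplex, i):
    """s_i: repeat the i-th entry, raising dimension by one."""
    return simplex[:i + 1] + simplex[i:]

sigma = (0, 1, 2)                 # a 2-simplex with vertices 0, 1, 2
print(face(sigma, 1))             # (0, 2): the edge opposite vertex 1
print(degeneracy(sigma, 0))       # (0, 0, 1, 2): a degenerate 3-simplex

# One simplicial identity, d_i d_j = d_{j-1} d_i for i < j (here i=0, j=2):
assert face(face(sigma, 2), 0) == face(face(sigma, 0), 1)
```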

  • 2
    The insight behind simplicial sets is, I think, a bit deeper than that. The motivation you've given is for simplicial complexes. It's completely non-obvious from a homotopy perspective that simplicial sets should be able to model spaces so well up to homotopy. – Harry Gindi Dec 10 '10 at 03:55
  • 2
  • As I understand it, a simplicial complex is in fact, in some sense, how objects are encoded in computer-aided design. Furthermore, it might be argued that it is simplicial complexes that are fundamental, and that the move to simplicial sets might have been made by any old computer, once it was required to consider morphisms. The proof that this is a good model for spaces is, I agree, far more involved. – Minhyong Kim Dec 10 '10 at 04:59
  • 1
  • @Harry- That's what I meant! Simplicial complexes are still a human idea... but simplicial sets are the computer idea, and are extremely pretty. @Minhyong- I would argue that simplicial sets go far beyond simplicial complexes (despite the existence of geometric realization), and are therefore "more fundamental". Braids form a simplicial set (face map = deleting a strand, degeneracy = cabling), but I have no idea how they might form a meaningful simplicial complex. – Daniel Moskovich Dec 10 '10 at 12:55
  • 7
    I don't think I had ever heard simplicial sets called extremely pretty before... – Mariano Suárez-Álvarez Dec 10 '10 at 14:05
  • @Mariano: We've had plenty of conversations on IRC. I've never said anything like that to you? I'm surprised! – Harry Gindi Dec 10 '10 at 15:49
  • 1
  • I don't understand the computer/human distinction here, and why simplicial sets are natural to computers. Anyway, dear Daniel, who are the computeroids most associated to the simplicial set idea through the ages? – Gil Kalai Dec 10 '10 at 21:02
  • @Gil Kalai: The computeroid closest to home might be Farjoun! I was thinking Bousfield, Kan, Curtis, May, and Quillen, although I'm probably overlooking important people. Speaking of which, another example of an idea more natural to computers would be abstraction of the polynomial Hirsch conjecture. I think a "computer idea" means, among other things, "to strip a geometric/topological problem of all its geometry, and to transform it into a combinatorial problem about sets, boxes, and arrows". A computer would never think geometrically, but only in terms of arrays, pointers, and data sets! – Daniel Moskovich Dec 11 '10 at 22:47
  • Why are simplicial sets more "computery" than abstract simplicial complexes? – Omar Antolín-Camarena Jun 27 '13 at 14:31
10

I think the human/computer dichotomy you set up should be extended to a human/mathematician/computer trichotomy, just because a substantial portion of "mathematical maturity" is about learning to think like a computer, in your sense.

Anyway, I've just put that in place to try to shore up my example. It seems that humans read "let's say we have $X$ and $Y$..." and automatically take the extra step of assuming $X$ and $Y$ are unequal. Computers wouldn't bother to take this extra step. Mathematicians, or at least I myself, split into the $X = Y$ and $X \neq Y$ cases but try to obviate that split when writing down a proof.
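
A toy illustration of the point (my own example): mechanical enumeration of pairs includes the diagonal $X = Y$ automatically, while a human reader often silently discards it.

```python
from itertools import product

S = [1, 2, 3]

# The "computer" reading of "let x and y be elements": all ordered pairs,
# diagonal included.
all_pairs = list(product(S, repeat=2))

# The tacit human reading: quietly assume the two elements are distinct.
distinct_pairs = [(x, y) for x, y in all_pairs if x != y]

print(len(all_pairs), len(distinct_pairs))   # 9 6
```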

Allen Knutson
  • 27,645
8

I disagree with the premise of this question.

Conventional computers follow a program written by a human. I think, for example, Daniel Moskovich's answer about simplicial sets is something that a human programming a computer (or a computer scientist) would think of when trying to program a computer.

Formalisms like these are things that we humans think of when programming a computer. Hence we have a tendency to think of them as "more mechanical", or "more like a computer", etc.; but I think it's a mistake to think that this is something a computer "would come up with" on its own. Really it's us humans who come up with them, just when we are thinking in terms of computing.

There are computers which can be thought of as actually "thinking" in a way similar to humans (as opposed to just following a program), e.g. IBM's Watson computer. They need some large data set to learn from, though (just like we do), and if this large data set is all of the mathematics created by humans, then I think the mathematics produced by the computer would look a lot like things "a human would think of"!

John Pardon
  • 18,326
5

Grothendieck seems to be still alive. So should not the question be: Is Grothendieck a computer? (Ask him, good luck!) Or perhaps: How did he morph from a computer to ... whatever it may be?

3

I suppose asymptotics for certain functions (e.g., the Prime Number Theorem), or any sort of conjecture based on large empirical evidence, would count, but that's probably not what you mean.
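
For concreteness, a minimal sketch of this kind of empirical probing (the naive sieve is my own choice of implementation): watch $\pi(x)/(x/\log x)$ drift toward 1.

```python
from math import log

def prime_sieve(n):
    """Sieve of Eratosthenes: sieve[k] is True iff k is prime."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sieve

sieve = prime_sieve(10 ** 6)
for x in [10 ** k for k in range(2, 7)]:
    pi_x = sum(sieve[: x + 1])          # pi(x), the number of primes <= x
    print(f"x = {x:>8}   pi(x) / (x/log x) = {pi_x / (x / log(x)):.4f}")
```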

Perhaps more interesting is the following. In high school/college, I was briefly interested in automated theorem proving and read about this (I don't remember the source and may be getting the details wrong--perhaps someone can help). Around the '60s or '70s, someone wrote an AI program that used numerical evidence to have a computer "deduce" many theorems/conjectures in number theory. They showed their answers to Knuth, and he marked the ones he thought were mathematically interesting. At least one thing that stood out was several interesting "results" on highly composite numbers, which Ramanujan famously studied as well.
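
A sketch of the kind of search such a program might have run (my own reconstruction, not the original program): flag the numbers with more divisors than every smaller number, which is exactly the defining property of highly composite numbers.

```python
def num_divisors(n):
    """Count divisors of n by pairing d with n // d up to sqrt(n)."""
    return sum(2 - (d * d == n) for d in range(1, int(n ** 0.5) + 1) if n % d == 0)

record = 0
for n in range(1, 10_000):
    d = num_divisors(n)
    if d > record:                 # n beats every smaller number
        record = d
        print(n, d)                # prints 1, 2, 4, 6, 12, 24, 36, 48, 60, ...
```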

Kimball
  • 5,709
1

This paper on homotopy and set theory seems to take this question seriously: if you restrict yourself to posetal categories and try to do model categories in a brute-force, naive way, you arrive at definitions of some set-theoretic invariants... So maybe we can say that Shelah is a computer. ;)

o a
  • 468
0

I think that many conjectures from number theory (which I think count as insights) might be more obvious to a computer than to a human being, since a computer would have access to a huge amount of empirical data from which to discover patterns and obtain estimates as to the probability that something is true or plausible. This is a very effective way to discover theorems in number theory. The method of discovery is along the lines described by Polya in his books on plausible reasoning in relation to Euler's discoveries.
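
A minimal sketch of such empirical conjecture-testing (my own example): check Goldbach's conjecture--every even $n > 2$ is a sum of two primes--over a small range, the kind of evidence a machine could accumulate at enormous scale.

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Search for a counterexample to Goldbach's conjecture below a small bound.
for n in range(4, 10_000, 2):
    if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
        print("counterexample:", n)
        break
else:
    print("no counterexample below 10000")
```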

To obtain the same level of confidence, humans need insight and proof, which in number theory are often really hard to obtain.

There are some mathematicians, like Ramanujan, Euler, and Gauss, who had similar abilities, but this is quite rare.

Also, mathematical results that are accessible to humans must be true for a reason, i.e., there must be a reasonably short deductive route to them from known theorems. Work of Chaitin and others suggests that some theorems are not true for any reason, i.e., they are not amenable to any deduction from a set of axioms of less complexity than themselves. On the borderline there must be profound mathematical results that are close to being empirically true in that sense. You would imagine that computers might have a better chance of understanding and perceiving these results, since, given their massive processing power, they might be able to reason more effectively from a much wider empirical vantage point.

Ivan Meir
  • 4,782