9

I really want to say rejecting a line of reasoning because ChatGPT created it would be an ad machina argument.

(Note, I'm interested in the case where the rejection is made without any consideration of the validity, or lack thereof, of the rejected material.)

Futilitarian
  • 4,352
  • 1
  • 8
  • 41
BCS
  • 201
  • 1
  • 5
  • Not the point of the question, but FWIW, the motivation for the question was a comment implicitly discarding someone's point with "Are you ChatGPT?" (I happen to have good reason to believe the rejected point came from a human.) – BCS Aug 31 '23 at 05:14
  • 6
    It is not, technically, ad hominem, but similar in nature. There is already a classification it falls under, the genetic fallacy: "a fallacy of irrelevance in which arguments or information are dismissed or validated based solely on their source of origin rather than their content." As always with informal fallacies, they may well be pragmatically justified in many contexts. We do not have time to consider the validity of every output of dubious origins. Dismissals on bad rep are common practice, and ChatGPT is known to be bad at math, for example. – Conifold Aug 31 '23 at 10:54
  • 1
    It depends a little whether by "rejecting a line of reasoning" you mean (a) deciding that the line of reasoning is likely to be flawed, (b) deciding that the conclusions are likely to be flawed, or (c) deciding that it's not worth wasting your time on. – Michael Kay Sep 01 '23 at 07:42
  • *ad machinam (https://en.wiktionary.org/wiki/machina#Declension) – adam.baker Sep 01 '23 at 10:47
  • How did this argument go about machine generating an ad hominem fallacy? And why would "attacking" be an argument? If your title was ChatGPT generated, we would not be able to meaningfully decide which meaning the title had. We can only do that if we assign a mind to you, and guess what you would be likely to intend us to understand. There is no way to interpret the language in a ChatGPT text and know that the interpretation is correct. In fact, we know that no interpretation of the language *can* be correct, because there is no actual communicative intention to decode (1/2) – Araucaria - Not here any more. Sep 01 '23 at 19:33
  • And thus there can be no correlation between the interpretation and the intention to make the interpretation correct. (2/2) – Araucaria - Not here any more. Sep 02 '23 at 09:26
  • @Araucaria-Nothereanymore. I find the assertion that the meaning of a text only follows from intent to be interesting. I'm of the school that language use has specific meaning irrespective of the speaker's intent. (If I were to say "@$#&@ you" and then honestly say I intended that to mean "live long and prosper", I don't think many people would agree that it did.) I'll grant things like context can have a big say, but the intent isn't what creates the meaning IMHO. (1/2) – BCS Sep 02 '23 at 14:38
  • @Araucaria-Nothereanymore. I know of one specific case where the meaning of randomly generated chunks of language was of great interest: they were in a programming language and part of a fuzz test. Nobody cares what the meaning ended up being but only that the compiler correctly extracted that meaning. (2/2) – BCS Sep 02 '23 at 14:39
  • @BCS Perhaps see this question here and my answer to it. Maybe also my answer below. I'd be interested to know if you still believe that language has specific meaning outside of a speaker's intent, or whether those answers make it more difficult to do so. – Araucaria - Not here any more. Sep 02 '23 at 16:10
  • @Araucaria-Nothereanymore. I do still believe that. Your linked answers seem to me to tie intent, truth and meaning together too much. A sentence can have meaning, even without any intent and when that meaning is trivially falsifiable. The meaning can even be at odds with the intent of the speaker. -- Or maybe we are disagreeing about the meaning of meaning? If so, then the thing you are talking about seems rather uninteresting to me, and most of the interesting bits are better described by other words. – BCS Sep 03 '23 at 06:36
  • Here's another way of putting it. No sentence can have meaning unless it has referents, that is, things actually being referred to. It needs referents for its subject and objects and other arguments of the verb, and for the time indicated by the tenses used and the intended times which can be understood from non-tensed verbs, and so on and so forth. And when you talk to someone you use your shared knowledge, and your knowledge about what they know about you, to help you guide them towards a successful identification of the referents (1/3). – Araucaria - Not here any more. Sep 03 '23 at 13:53
  • So, let's say you want to talk about your daughter. If you're speaking to someone who knows your daughter well, you'll use her first name because that will help the person identify who you mean. And they'll know that you know your daughter's name and that she's the kind of person you might talk about using her first name. But if you're talking to someone who doesn't know your daughter's name (or even maybe that you have a daughter), you'll say "my daughter" or maybe "my daughter [insert first name here]". You're using the context to guide the listener to the right referent (2/3) – Araucaria - Not here any more. Sep 03 '23 at 13:54
  • Note that the meaning involved is not "Mary did X" or "My daughter did X". The meaning is that the referent of the word "Mary" or "my daughter", the actual person involved, did X. The term guiding you to which person is meant is trivial (that's why you almost never remember the words someone says, just the information they tell you). So there must be an actual referent, an actual person or animal or chair or building, behind each referring expression. But in a ChatGPT text, there isn't one. There simply is no referent behind any of the referring expressions. (3/3) – Araucaria - Not here any more. Sep 03 '23 at 14:01
  • @Araucaria-Nothereanymore. Nothing you have said has changed my opinion in any way. As a counterexample, I asked ChatGPT "Write a 27 line C++ program" and it gave me something that has a meaning (even if it's useless) in the sense of "meaning" that I'm interested in talking about, as clearly evidenced by the fact that it compiles and runs, producing the same results regardless of who "sees" (compiles) it: https://gcc.godbolt.org/z/6Pxo3a7r3 --- What makes natural language different? – BCS Sep 03 '23 at 23:32

12 Answers

39

It is fallacious to make the formal argument that a conclusion is false because of its provenance.

It is not fallacious to dismiss an argument out of hand because it comes from a source which is well known to almost always make specious, dishonest, or nonsensical arguments. It's not even an argument, but it is a good idea.

It is not fallacious to make the argument: "Because this argument comes from a source which is well known to almost always make specious, dishonest, or nonsensical arguments, it is probably specious, dishonest, or nonsensical."

It is not fallacious to make the argument: "I cannot always tell when an argument is specious, dishonest, or nonsensical. This argument comes from a source which is well known to almost always make specious, dishonest, or nonsensical arguments. Therefore even though I cannot tell that it is specious, dishonest, or nonsensical, there is a significant probability that it might be."

It is not fallacious to dislike and discourage the making of specious, dishonest, or nonsensical arguments, whether or not they're plagiarized. It's not even an argument, but it is a good idea.

It is not fallacious to dislike and discourage plagiarism. It's not even an argument, but it is a good idea.

g s
  • 5,767
  • 2
  • 6
  • 24
  • Good points; however, I'd class most of that as some form of "considering the validity of the argument". Also, about half the question is whether the correct term changes in this case vs. the normal ad hominem case. – BCS Aug 31 '23 at 05:11
  • 4
    tl;dr; heuristically assessing an argument as not worth considering based on its source can be reasonable, even if that isn't grounds for invalidating the same argument? - my question is about trying to do the second. – BCS Aug 31 '23 at 05:20
  • 1
    Well, suppose a source produces 99% logically flawed arguments, 1% valid and novel arguments about important topics. Because of that 1%, this is a very useful source. We should not dismiss it just because 99% of what it says is wrong. We should not trust it, but it could be very rewarding to put in the effort to "pan for gold" and find that 1% useful and novel arguments. – causative Aug 31 '23 at 06:03
  • 1
    (However, ChatGPT probably doesn't produce 1% valid and novel arguments. But if a hypothetical source did, the fact that the hypothetical source also produces 99% fallacious arguments should not deter us from studying it.) – causative Aug 31 '23 at 06:05
  • More relevantly, brainstorming (among humans) is good, and brainstorming produces mostly bad ideas, that we need to think about to find a few good ones. And what seems like a bad idea initially may turn out to be good, when considered in more depth. Coming up with bad ideas and talking about them is indispensable to eventually finding some good ideas. – causative Aug 31 '23 at 06:19
  • @BCS I think that is a reasonable tldr. As regards terminology, I care not at all for definitions, so long as they are either explicit or in common use. I would suggest "Fallacy of Origins" for a name, because naming things in a dead language is pretentious and obfuscatory. – g s Aug 31 '23 at 06:49
  • 3
    @causative I agree that novel valid arguments about important topics are rare and valuable, although I would be surprised to learn that ChatGPT has generated even one such argument, given that its purpose is to produce text indistinguishable from that generated by brainless sophomores. – g s Aug 31 '23 at 06:52
  • @BCS Fallacy of Origins is just a renaming of the Genetic Fallacy, so Futilitarian's answer is correct if that's what you were looking for. – g s Aug 31 '23 at 07:01
  • You may want to change "a conclusion is false" to "an argument is unsound", otherwise there would also be an implied fallacy fallacy. – NotThatGuy Aug 31 '23 at 07:56
  • 10
    @causative "Because of that 1%, this is a very useful source. We should not dismiss it just because 99% of what it says is wrong" I would say it depends on how much effort it takes to sift out the 1% from the crud. One's time may be better spent tryng to find the 1% yourself. – TripeHound Aug 31 '23 at 13:58
  • @TripeHound True. But reading 100 arguments to produce one valid and novel argument on an important subject seems like a good use of time to me. Valid and novel arguments on important subjects are normally very hard to come by. – causative Aug 31 '23 at 14:17
  • 2
    Refusing to argue with a ChatGPT instance someone set up is not rejecting the argument because of its source, but because this is not good faith debate. – Simon Richter Sep 01 '23 at 01:36
  • @causative But when we get back to the OP's question, it can never be novel because (by definition of how ChatGPT works) it's merely repackaging an existing concept. – Graham Sep 01 '23 at 10:49
  • @Graham ChatGPT is capable of writing some fairly creative stuff. It's a serious problem to stop ChatGPT from fabricating false information that sounds plausible but was never in its data set. If it can fabricate lies, it can also be original. Human artists also repackage existing concepts. You know the saying, "good artists copy, great artists steal." The main weakness of ChatGPT is that it is bad at checking what it says for consistency and factuality, not that it is bad at coming up with novel stuff. – causative Sep 01 '23 at 14:00
12

See genetic fallacy.

In brief:

This fallacy avoids the argument by shifting focus onto something's or someone's origins. It's similar to an ad hominem fallacy in that it leverages existing negative perceptions to make someone's argument look bad, without actually presenting a case for why the argument itself lacks merit.

Futilitarian
  • 4,352
  • 1
  • 8
  • 41
  • *Which* fallacy avoids the argument? And which argument is the text saying it avoids? And why are someone's origins similar to an ad hominem fallacy? And why does someone's subject or direct object look bad? And which someone is it whose argument is always made to look bad? I'm kidding. We know the answer to those questions because we can tell what the writer was thinking and how they would expect us to interpret it and someone and argument and so forth. We can see how they'd use the context and our understanding of it to successfully decode their intention. – Araucaria - Not here any more. Sep 01 '23 at 19:25
  • 1
    When someone reads a ChatGPT text, they are making all kinds of decisions about what the 'speaker' intended a term to mean. But there is no way of determining what any of the terms mean if there was no intention of any speaker for them to mean anything. So, because there is no actual argument in any ChatGPT text, it is valid to dismiss a perceived "argument" in a ChatGPT text. That's because there is no argument there. – Araucaria - Not here any more. Sep 01 '23 at 19:29
  • @Araucaria-Nothereanymore. You're correct in your critique - the citation has awkward phrasing in the context of textbots - but the identification of the genetic fallacy is still correct. Arguments are internally complete logical objects that connect premises and conclusions. To make a fallacious argument in reference to another argument, whether it's a real argument or a machine-generated squiggle picture that happens to look like an argument, the referenced argument has to be packaged into a premise. – g s Sep 02 '23 at 14:42
  • Premises can be true or false without changing the fallacy or correctness of the rest of the argument. It doesn't matter therefore whether the unstated premise "the following squiggle picture whose merits I shall consider is an argument or conclusion created by an intentional agent" is true or false. – g s Sep 02 '23 at 14:44
  • Also consider that most of the time unless you're the one typing in the query into the chatbot, the squiggle picture has been relayed to you by an intentional agent, at which point it's reasonable to infer that the agent means the plain meaning of the generated text. – g s Sep 02 '23 at 14:46
  • @gs I think the point is made more clearly in my post - where I try to show why we cannot have what may first look like premises or a conclusion from them in a ChatGPT text. I think one can reject a duck as an argument on the basis that it came out of a duck's egg, because this shows it is not an argument but a duck. Not every argument that rejects something on the basis of its origin is a fallacy, even if a genetic fallacy rejects something on the basis of its origin. – Araucaria - Not here any more. Sep 02 '23 at 15:01
12

Are we really dealing with an argument if the text is generated using a method that does not involve any kind of reasoning or understanding about the subject?

ChatGPT only generates sequences of lexical symbols (that is, letters and punctuation) based on a very complex set of rules that have been tuned to generate new sequences that have the same features as those in the input corpus. Even if ChatGPT does it so well that it mostly looks like what a human, sometimes even an expert, could have written, it is still nothing but a sequence of lexical symbols.
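
To make the "symbol sequence generator" point concrete, here is a toy sketch (my illustration, not how ChatGPT is actually built; the corpus, names, and sample output are invented). Even a word-bigram model reproduces surface features of its input while having no representation of what any word refers to:

```python
import random
from collections import defaultdict

# Toy illustration only: a word-bigram "model" of a tiny corpus.
# It records which word follows which, then emits new sequences
# with the same local statistics, and nothing else.
corpus = ("the man punched john then he punched him "
          "harder so they were both violent").split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, max_words=12):
    word, output = start, [start]
    for _ in range(max_words):
        if word not in follows:   # no observed successor: stop
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the man punched him harder so they were both violent"
```

ChatGPT's rules are vastly more elaborate, but the contrast drawn here still holds: the program manipulates symbol statistics, not referents.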

Without any actual reasoning or understanding, everything in the response is completely, totally superficial: it only looks like a proper response, but it is not. That is why the responses are so often nonsense in some way or another.

I argue that such text does not provide an argument. ChatGPT is just a very advanced symbol sequence generator even if the output looks like an argument. It looks like that because the whole point of ChatGPT is to mimic what people have written.

It is more likely that assuming that such text contains an argument worth attacking is the actual fallacy here.

Jani Miettinen
  • 329
  • 1
  • 6
  • "ChatGPT only generates sequences of lexical symbols (that is, letters and punctuation) based on very complex set of rules that have been tuned to generate new sequences that have same features as those in the input corpus." One could argue that this is precisely what humans do when they write an argument... – Carl-Fredrik Nyberg Brodda Sep 01 '23 at 05:08
  • Is a computer generated proof an argument? Assume the proof is generated via software specifically written to solve an open question. I'd say that it is, and I'd further say that there is no bright line between that and ChatGPT blindly stumbling on a good logical construction. – BCS Sep 01 '23 at 06:14
  • 1
    @Carl-FredrikNybergBrodda That is potentially true. However, the ruleset in that case is more complex by several orders of magnitude, and the processing has the possibility to cover all domains and aspects of mind, and that includes the absolutely crucial one that makes the difference: this one involves at least some knowledge of the world. Chat bots we know of only operate on sequences of symbols, and there is nothing more. – Jani Miettinen Sep 01 '23 at 08:53
  • 3
    @BCS If the software is specifically written to solve an open question on a subject, then it absolutely must (1) possess at least some domain knowledge on the subject, and (2) be capable of doing reasoning. But these chat bots basically only know what typed text should look like. There is not a single bit of domain knowledge, and without it, they cannot do reasoning either. They all are as dumb as a rock, literally. – Jani Miettinen Sep 01 '23 at 09:12
  • 1
    The only "knowledge" these chat bots can have (even in theory) is basically limited to what written text looks like. There is nothing else. The amount of knowledge on features of written text is so immensely vast, however, that the output can fool people easily. But the output is typically nonsense specifically due to total, absolute lack of any knowledge on anything that is of substance. – Jani Miettinen Sep 01 '23 at 09:23
  • 3
    We can probably deny that ChatGPT is arguing. But that's a different consideration from whether some chunk of prose it produces contains an argument. – John Bollinger Sep 01 '23 at 15:53
  • @JohnBollinger That is a good point, and it clears out some common misunderstandings about how ChatGPT functions. – Jani Miettinen Sep 01 '23 at 17:51
  • @JohnBollinger My answer below sets out exactly why we cannot say that any given ChatGPT "prose" contains an argument. Or rather, it tries to. – Araucaria - Not here any more. Sep 01 '23 at 19:09
  • @JaniMiettinen +1 I think you said much more succinctly a large part of what I was trying to set out in my answer. I missed it somehow before I set about writing mine ... – Araucaria - Not here any more. Sep 01 '23 at 19:10
  • 1
    It is possible that, due to some emergent phenomena, something akin to domain knowledge can get spontaneously extracted from the words... but only if the network is complex enough. Its capacity must definitely exceed something that is just enough to generate the text, and I seriously doubt there is computing capacity enough for that yet. However, until the AI can test its ideas against reality, that would be coincidental because the same process can generate also total nonsense. And nonsense is more likely to appear there. – Jani Miettinen Sep 02 '23 at 21:40
  • So, there are serious limits on computing capacity today, and AI is unable to test any of its ideas against reality. So, if it happens to generate something that is a valid argument, then it is purely by coincidence. It has no capacity to output anything else. That is why dismissing its output is not a fallacy; presenting its output as an argument is. You need at least to verify it, and if it passes verification, then you have done some studying too, and then you can write a nice argument based on what ChatGPT generated. But... studying on the internet... – Jani Miettinen Sep 02 '23 at 21:48
  • In the end, using ChatGPT is not too different from reading tea leaves. It just sometimes happens that you see something in the tea leaves that coincidentally also happens in the future. Would that make the results of reading tea leaves a valid argument? – Jani Miettinen Sep 02 '23 at 21:56
  • You can, however, take action on what you see there, and check if there is something to it. If it happens to confirm the output, then it is no longer just something that is potentially random. Now there is an argument; the tea leaves or ChatGPT only led you to it. And, typically, people appreciate that the one who presents the argument also makes the effort to confirm its validity. – Jani Miettinen Sep 02 '23 at 21:59
  • 1
    Of course you can argue that its output is an argument, it is just not validated yet, but you can also say the same thing about tea leaves. – Jani Miettinen Sep 02 '23 at 22:02
  • Ps. I don't intend to underappreciate either reading tea leaves or ChatGPT and its kin. – Jani Miettinen Sep 02 '23 at 22:06
8

The problem with the philosophical ideal of judging every argument on its merits is that a human lifetime is not long enough to do it, by very many orders of magnitude. Like it or not, you will reject (i.e., ignore) the vast majority of human output based not on intrinsic merit but other criteria.

Any time you start to read someone's writings, decide they are mentally ill, and leave the rest unread, you're rejecting their other arguments based only on (what you believe to be) a property of the person who made them. Any time you tell someone that that person's work is not worth their time, you are for practical purposes arguing ad hominem. I find it impossible to see that as wrong, because it's unavoidable.

Nothing in that argument changes if you replace a human writer like Gene Ray with ChatGPT. I've read enough of ChatGPT's writings to form the opinion that nothing it writes is worth reading (except as humor, or as a window into its "mind"). I would tell others the same, and it's up to them to decide whether they trust my opinion.

benrg
  • 1,208
  • 6
  • 11
4

I do not dismiss ChatGPT arguments because I know them to be invalid. I decide to not bother to engage with them because experience has shown them to usually not be worth the effort. It is an argument from efficiency.

And while ad hominem is a recognized fallacy, there is another, less recognized fallacy: that any being in the universe has unlimited time and/or energy to faff around with the firehose of nonsense some people or programs can generate. Maybe call it "the fallacy of omnipotence"?

I understand that ignoring the time and energy required to deal with an argument is convenient when modeling thought, and lets us get at other interesting questions, but in reality nobody sane lives that way. And any philosophy that outright refuses to deal with that fact will in some sense be fundamentally unhuman.

JonathanZ
  • 480
  • 4
  • 9
3

It's obviously a fallacy to dismiss an argument without reason just because you distrust the source. Dismissing the argument of a known liar might be a good heuristic in terms of where not to waste your time, but it's ultimately a fallacy, as they are still capable of telling the truth and might even do so, however rarely.

The more interesting question with regards to AI-generated text is rather who the author is and what the argument is. Because so far the AI doesn't do logic, if there are follow-up questions you're almost inevitably required to steelman the argument.

haxor789
  • 5,843
  • 7
  • 28
  • 2
    E.g. the "boy who cried wolf" did eventually tell the truth. – Barmar Aug 31 '23 at 15:08
  • 1
    People who lead sects try to avoid lying as much as possible, and they are very good at reasoning and being argumentative, so if you decide to not dismiss their claims on the grounds that they are a sect leader, and spend too much time listening to their claims as a result, then you end up signing all your money and children over to them. – Stef Sep 01 '23 at 07:56
  • @Stef I'm not saying there can't be good reason to dismiss an argument based on its source; it's nonetheless a fallacy to assume it's false just because the source is not trustworthy. And sure, a bad-faith actor can use rhetoric rather than reasoning, and could exploit psychological biases and weaknesses of the human body such as a limited attention span. You could technically perform DDoS and buffer-overflow attacks on a human being, but that's not an application of logic but an attempt to bypass it. – haxor789 Sep 01 '23 at 10:02
0

Yes, either an ad hominem or the genetic fallacy that Futilitarian pointed out, depending on your reasoning for ignoring the argument. If you argue against the virtues of a chat bot (e.g. it hallucinates) then I would say it is ad hominem.

To understand this it's best to think about what exactly ChatGPT is doing: it's been trained on lots of writing, including arguments, to learn how to respond to inputs. A very simple form of this would be if I got a bunch of experts (or not-so-experts) to write a bunch of arguments to question X, then put them in a hat, mixed them around and drew one of them as my response. If you argued that you could ignore the arguments because I drew them from a hat, then you would be committing a fallacy (especially if we only used experts).

We can make this closer to ChatGPT by splitting arguments into parts and labelling each part with what kinds of parts can follow it (e.g. "all As have B" can be followed by parts of the type "X is A" or "B implies C") and with what it requires in order to follow something else (e.g. it requires A to be defined). Then if I randomly draw an argument part that assumes nothing (or only that I'm trying to answer a particular type of question), I can follow it by collecting all the parts that could come next, that is, everything that would follow the first thing I drew from my hat, and drawing one of those. Do this a number of times and I have an argument that maybe none of the experts proposed, but built out of valid parts that experts have considered. Again, ignoring this argument because it came from drawing answers from a hat would lead to a fallacy.
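
A rough sketch of that chained hat-drawing process (the fragments, labels, and function below are invented purely for illustration; this is an analogy for the process described above, not how ChatGPT is actually implemented):

```python
import random

# Hypothetical argument fragments, each labelled with what must
# already be established before it can be drawn ("requires") and
# what it establishes for later draws ("provides").
fragments = [
    {"text": "All As have property B.",     "requires": set(),      "provides": {"A", "B"}},
    {"text": "X is an A.",                  "requires": {"A"},      "provides": {"X"}},
    {"text": "B implies C.",                "requires": {"B"},      "provides": {"C"}},
    {"text": "Therefore X has property B.", "requires": {"X", "B"}, "provides": set()},
    {"text": "Therefore X has property C.", "requires": {"X", "C"}, "provides": set()},
]

def draw_argument(max_steps=4):
    established, argument = set(), []
    for _ in range(max_steps):
        # Collect every part whose prerequisites are already on the table ...
        candidates = [f for f in fragments
                      if f["requires"] <= established and f["text"] not in argument]
        if not candidates:
            break
        part = random.choice(candidates)   # ... and draw one from the hat.
        argument.append(part["text"])
        established |= part["provides"]
    return " ".join(argument)

print(draw_argument())
# e.g. "All As have property B. X is an A. Therefore X has property B."
```

Each fragment is something an expert might have written, yet the chain that comes out may be one that no single expert proposed, which is the point being made here.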

If you are still thinking that this is a random process with no understanding, consider that it is entirely possible for a human to make a valid argument by repeating what they have heard experts say, sometimes mixing up similar arguments from different experts. This means you cannot immediately ignore randomly generated arguments because they came from a machine. If you say you ignore all arguments made from ignorance, then you forget that even experts are not free of ignorance (they just have a lot less of it than the average person).

Overall, if you have an argument generated by ChatGPT, it is valid if someone can read that argument and champion it as true, and it is invalid if someone can read that argument and say why it is wrong. The main reason for automatically ignoring it is the opportunity cost of validating that it didn't just produce convincing garbage, but that is a different question from the one you asked.

N A McMahon
  • 101
  • 1
0

That ChatGPT created it? I suppose not, actually, though obviously it depends on context (it can be cheating). But because ChatGPT agrees with it? Absolutely; 100% convinced we should not hand over our critical faculties to robots.

0

Short answer

No, it is not an ad hominem attack to dismiss "an argument" generated by ChatGPT. This is because only a little investigation will show that any "text" generated by ChatGPT cannot contain any arguments at all. Ever. To think otherwise is to massively misunderstand what language is and how it works. Asking this question is like asking:

Is dismissing an argument because it's machine generated an ad hominem fallacy if the argument is not an argument but a bonobo monkey?


Full answer:

It is a mistake of epic proportions to assign an argument to something which contains no argument. Take, for example, a set of symbols which has been generated to look like human speech.

It would be absolutely fallacious, immoral in fact, to say of such a text that it contained an argument which is either valid or sound. This is because what you would be looking at is not evidence of the intent of a speaker to influence a listener or reader's stock of assumptions in some way. This will not be immediately clear without an example.

Let's consider the following piece of supposedly machine generated text:

The man punched John. Then he punched him - harder. So they were both violent.

We cannot consider this argument to be either sound or valid. The reason is that it means nothing. It means nothing because there was nobody at the creating end of this argument who had an intention to influence a listener. There is no creator who is using a combination of context and shared knowledge to make sure that the intended interpretation of their utterance influences the listener's stock of assumptions in the intended way.

For a start, who is the referent of the word John meant to be? John Lennon, my friend John from the pub? Your uncle? We don't know. You can assign a referent to the word John if you wish. But you cannot argue the case, or make out that your assignment is correct. You might say, "Oh, but we can say it refers to some man called John". But, erm, no you can't. It could easily be retorted that it refers to an orangutan, or a tiger, and there's nothing we can meaningfully say or do to counteract this.

I can hear someone say: "Well, we can nonetheless say that the argument is valid, even if we don't know who the word John refers to and whether they actually hit anyone." But, erm, no we can't. Not nearly. The reason is that we cannot know who the words he and him are meant to represent. That's because they're not meant to represent anything or anyone. And that's because there was no communicator meaning them to be interpreted in any particular way. You could interpret he to mean John, and the word him to mean the man. But, so what? Someone else can say that the assignment should be the other way round. You could argue the point, but it is a meaningless argument. You cannot clarify who the pronouns refer to. There was nobody intending them to refer to anyone. Furthermore, I have a female friend called John. And so I could easily say that in this case neither he nor him refers to John. And I could further argue that therefore the argument is not valid, because I can argue that the word both refers to John and the man, and John has not been shown to be violent. But that would be ridiculous drivel too!

As nobody can definitively assign any referents to any of the seeming referring expressions in the 'text' that was generated, one cannot truthfully say that it has any meaning, or that it is valid or invalid, sound or not sound. To assign such a meaning to it would not only be idiotic, it would be mendacious, and any assessment of the text as representing a valid argument would be equally fallacious.

The words that speakers use to communicate are only meaningful because of both the communicative intention of the speaker and the successful interpretation of that intention by the listener. The language in any given utterance is normally hugely underdetermined, as well as being multiply semantically and syntactically ambiguous. Language itself is not a code produced by an Enigma machine for which a listener has a key. It is just an archeological trail which, given the context and shared knowledge, can lead a listener to a correct interpretation of the speaker's intentions.

No speaker, no communication. No communication, no argument. You wouldn't assign validity to a camel, or soundness to a cloud. You can't assign an argument to a jumble of ChatGPT-generated symbols designed to look like an artefact of human communication.

  • This supposes a definition of "argument" different from the one the OP seems to be using. If I am presented with a sequence of symbols that I would recognize as a logical argument if it were prepared as one by a human, then it is perfectly reasonable to evaluate it as an argument and to consider whether to accept the conclusion of that argument-in-form or how to refute it, subject to whatever definitions I attribute to its terms, whether I know that it was in fact prepared by a human or not. That does not change if you insist that we should not actually call it an argument. – John Bollinger Sep 01 '23 at 20:04
  • @JohnBollinger Not really. What's happening is that you are used to interpreting the mental and contextual situation of the speaker - even if you don't know them. You can imagine away, but all it takes is a disagreement about what an imaginary speaker would have meant had the imaginary speaker said anything and the whole exercise disappears in a puff of nonsense. You can make up an argument, by taking inspiration from the fictional argument that you have decided your imaginary speaker might have intended the jumble of squiggles to mean, and present that as your argument or an existing argument – Araucaria - Not here any more. Sep 01 '23 at 20:19
  • @JohnBollinger But you'll have to take responsibility for the (semantic) interpretation that people have of the argument that you presented yourself, at which point it is you making the argument. There is no argument in the squiggles that you got your inspiration from. And presenting the argument as existing in the squiggles would be positively misleading. And suppose people don't want to play the game where they pretend/imagine that the squiggles were generated by a speaker who had a communicative intention? – Araucaria - Not here any more. Sep 01 '23 at 20:22
  • Whether the argument is in the words themselves or inspired in my mind by the words, or whether I merely connect the words with an argument in some abstract space of logical arguments does not matter. Whether the same words represent multiple distinct arguments also does not matter. That I can read the words and perceive an argument is sufficient for me to ask the question of whether the argument I perceive is sound, and if not, how it can be refuted. This is the exercise the OP is asking about. There is an argument, even if I brought it with me. – John Bollinger Sep 01 '23 at 21:05
  • @JohnBollinger It might be fallacious to say that an argument that you present to me is fallacious because it was inspired in your Monty Python amusement centre by a Bot. I can have a meaningful interpretation of what you mean, and whether that's correct or not can be verified by asking you. However, that's not an argument in a ChatGPT text. It's a John-generated argument. And it's the fact you exist that makes it a meaningful exercise to say that what you intended to convey was valid or sound or whatever. – Araucaria - Not here any more. Sep 01 '23 at 21:21
0

Is attacking an argument because it is machine generated an ad hominem fallacy?

No. People use calculators to show that 7 plus 5 equals 12, but no one believes that equation is true because a calculator said so. The fallacy in that example would be an appeal to authority. Here, the source of truth is not the calculator, but rather decades of human thought, starting with Bertrand Russell's proof that 1 plus 1 equals 2.
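
For concreteness, here is a sketch (mine, greatly abbreviated) of the kind of derivation that human-built foundation supplies, using the Peano-style definition of addition, n + 0 = n and n + S(m) = S(n + m):

7 + 5 = 7 + S(4) = S(7 + 4)
      = S(S(7 + 3)) = S(S(S(7 + 2)))
      = S(S(S(S(7 + 1)))) = S(S(S(S(S(7 + 0)))))
      = S(S(S(S(S(7))))) = 12

The calculator merely reproduces an answer whose truth rests on definitions like these, which is why neither citing the calculator nor distrusting it settles whether the equation holds.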

Similarly, someone cannot argue that maybe, just maybe, 7 plus 5 does not equal 12 simply because the source of the answer is a calculator. Such an argument would simply be the inversion of the appeal to authority, and just as fallacious.

The "machine-generated" problem is just the calculator problem writ large. The proponent cannot say they are right because artificial intelligence wrote the argument, as that would be a 21st-century version of the appeal to authority.

The suspicions surrounding artificial intelligence arise from its track record, not from something inherent in the technology. Examples include instances where the machine just made stuff up. Such instances provide a legitimate basis for challenging a machine-generated argument.

Mark Andrews
  • 6,240
  • 5
  • 22
  • 40
0

I have just asked a chatbot, Falcon. I tried 3 times and got 3 different answers: no it isn't; yes it is; no it isn't an ad hominem fallacy, but it is an ad hominem attack. All three answers were supported (not necessarily strongly) by reasons.

I think the OP's question turns on the exact definition of ad hominem. For example:

The argument attacks a position by appealing to the despicable qualities, moral turpitude, and over-all lowness and meanness of a person who holds the position.

I think it is fair to say that the Chatbot argues inconsistently. I don't think that it is really an ad hominem to point out that a person has a track record of lying and arguing both sides of the question on different days. If a clever lawyer argues the case two different ways, we may know that they can't both be right, even if we can't spot the precise fallacy. Maybe this is what Falcon "meant" the third time: an attack, but not a fallacy?

Simon Crase
  • 622
  • 1
  • 7
-1

To add to the collection of answers I'd say ... technically ... an ad hominem (attempts to) discredit an argument by attacking the person (literal translation), which in 9 outta 10 cases takes the form of maligning a person's moral character, e.g. that the arguer has a police record, that the arguer beats his dog, that the arguer is adulterous, etc. That is to say, "the argument is no good because the arguer is bad".

I don't know if an ad machina fits into this mold perfectly. Can a machine be bad??

Agent Smith
  • 3,642
  • 9
  • 30