9

Does the use of AI make someone more intelligent? I think I remember this coming up in the SEP, with respect to whether access to the internet means I "know" (or something similar) everything that appears there, perhaps even before I access it, as if I had a memory of having learned it. I cannot find anything of that sort now, though, so thanks to anyone who can.

I suppose that if we completely fuse with the AI then we are as intelligent as it is. Given that this is a superhuman kind of intelligence, I wonder whether the human spark we've passed to it would matter, and whether it would be closer to noise within the machine than a power-up to either.


Is Caesar Octavius as powerful as Rome?

Augustus's famous last words were, "Have I played the part well? Then applaud as I exit" ("Acta est fabula, plaudite")—referring to the play-acting and regal authority that he had put on as emperor.

Perhaps Caesar is as powerful as the Rome he controls, but you wouldn't say he was as intelligent as Rome, nor even the most intelligent man in Rome.

user66697
  • 639
  • 11
  • 10
    "Artificial intelligence" is a buzz phrase coined by early computer scientist John McCarthy. If he called it "statistical munging," it would not sound so impressive. AI is not "intelligent." Reliance on it is a sign of stupidity, not intelligence. – user4894 Mar 25 '24 at 18:50
  • well i agree @user4894 – user66697 Mar 25 '24 at 18:50
  • @user4894 and if we go by modern tools like chatgpt and the like, calling it turbocharged predictive text makes it even less intelligent – Ja Da Mar 26 '24 at 08:11
  • i have found myself using LLMs @JaDa for brainstorming, kinda throwing paint at the screen until i see something that piques my interest, then trying to work out something from it. in time, you can maybe use it for checking stuff, and i look forward to that. for now LLMs are so unreliable, and that really comes through when you ask one e.g. for references. the last philosophical claim i asked it to cite pointed me toward the entirely wrong subject area (molecular biology) – user66697 Mar 26 '24 at 08:46
  • 8
    Depends on whether you use it intelligently. For example if you rely on AI such as copilot to write code for you when you are learning how to program, then your use of AI will be self-defeating (it would be like going to the gym and watching other people pump iron and wondering why your muscles aren't growing). AI is a tool, it is how you use it that defines its value. – Dikran Marsupial Mar 26 '24 at 12:26
  • @user4894 it is also an inappropriate term for LLMs or DNNs in general as there is no intelligence (reasoning) there, just a store of knowledge. This is demonstrated by some of the mathematical proofs that it gives, which are obviously wrong, but are similar to proofs of similar statements where the error does not arise. – Dikran Marsupial Mar 26 '24 at 12:29
  • 1
    Does access to a calculator make you more intelligent? Does access to the internet (and thus heaps of information) make you more intelligent? – Tvde1 Mar 26 '24 at 16:35
  • 3
    Sooner or later these AI engines will start to consume their own output, at which point everything will go to hell, if it hasn't already. – user207421 Mar 26 '24 at 22:38
  • Garbage in, more garbage in, more garbage in... – Scott Rowe Mar 27 '24 at 02:55
  • You (and the "AI") also have access to all the incomplete, outdated, misinformed, purposely misleading, and otherwise UNintelligent pieces of information on the internet.... – frIT Mar 27 '24 at 09:24
  • @user207421 human society has been doing that at an ever accelerating rate, especially since the introduction of the WWW ... ah, I see your point! ;o) – Dikran Marsupial Mar 27 '24 at 10:25

6 Answers

13

This depends on your definition of Intelligence

There are two competing (categories of) ways to define intelligence: Internalism and Externalism. Neither is "more correct" than the other - it's a matter of semantics.

Internalism

Under Internalism, intelligence is defined as occurring entirely within the brain. Someone who has learned to take notes and refer back to them is no better (and may actually be worse) than someone who takes bad notes or no notes at all, because the content of their notes is considered to be input from the senses. They aren't remembering anything - they're rediscovering it from physical cues.

I am not well versed in internalist theory, so I can't provide any references to this. I find it to be a very intuitive definition of intelligence, though.

Externalism

Under Externalism (and especially under the Extended Mind Thesis), intelligence is a phenomenon that arises from a system containing more than just a brain. A system containing a person, a pen, and a piece of paper is better at remembering information than a system with just the person, therefore the former system can be said to have better memory.

While this definition of intelligence seems tortured and unnatural, I think it is extremely pragmatic. If I want to get better at remembering things, the best way to do that is to carry a notebook - which is a solution that an Internalist worldview rules out.

Applications to Artificial Intelligence

Under internalism, the only effect AI has on intelligence is that many skills are learned incompletely, because the brain can't perform them without also receiving input from the AI (for example, asking it for reminders about how to do things). Under this definition, AI almost certainly makes its users less proficient at many tasks.

Under externalism, the system of a person, a computer, and an AI program might be more intelligent than the system containing only the person and the computer. When faced with a task, the person could ask the AI for suggestions. The larger system (including the AI) will outperform the smaller system (without the AI) on many tasks.

Might AI still make people stupider, even under externalism?

Yes, but not for all tasks. AI is much better at seeming competent than at being competent, and in nearly every field neither the human nor the AI can accurately gauge the AI's competencies. Thus, the human+AI system will often let the AI attempt every hard problem, with the human performing cursory checks at most. For problems where the human's competency exceeds the AI's, this results in the human+AI system performing worse than the human alone.

This phenomenon is sometimes referred to as The Jagged Frontier.

Might AI still make people smarter, even under Internalism?

(Based on a discussion in the comments; credit to Dubu for this argument).

Cognitive tools can also be used in an educational setting - for example, students work out math problems on paper, or write essays for their instructors to evaluate. Even under an internalist definition, using cognitive tools in this way often leads to these students becoming more capable - a student who writes an essay on a topic (on paper, with a computer) will develop knowledge and the ability to reason about that topic (even without paper, without computer). Similarly, a student who uses the internet to research a topic will learn something about that topic which they can take with them even when they are not seated at an internet-connected computer.

It is likely that some AI technology (either existing LLMs or a future cognitive tool) will end up being integrated into education in some way - either used by teachers to educate students more effectively, or used by students to study more effectively - resulting in greater student learning even in ways which do not make the students dependent on the tool. In this way, using AI likely will make those students more intelligent.

Tim C
  • 364
  • 1
  • 5
  • 3
    +1 For the OP, consider how even a blank book and a pencil can make a thinker more capable. It extends one's memory, allows one to collect input from others, and can provide a scratchpad for complex thoughts and calculations. There's a reason that hunter-gatherer societies were much less complex and intelligent: they lacked tools to transmit knowledge and relied on oral tradition, which is a far less effective means of passing on knowledge than the written word. Today, with Copilot, for instance, one can quickly arrive at a coherent understanding of a new topic. – J D Mar 25 '24 at 20:25
  • 2
    The OP might also be interested in the SEP's entry: https://plato.stanford.edu/entries/embodied-cognition/#ExteCogn – J D Mar 25 '24 at 20:27
  • @JD thanks i should read some of those articles. it was probably what i meant – user66697 Mar 25 '24 at 22:00
  • 3
    Be aware that the definition of AI used here is largely based on the software currently labeled AI (with its shortcomings concerning hallucinations). This is not comparable with systems called "AI" from years ago (expert systems, chess computers), some of which were provably correct in their answers, and probably neither with future AI systems which might be based on totally different architectures than the currently prevailing deep learning models. – Dubu Mar 26 '24 at 13:48
  • @Dubu - The last section perhaps depends on the current conception of AI, but I think the rest of the answer would hold regardless. Using tools to assist cognition (writing, calculators, AI, StackExchange, etc.) can make someone "smarter" if their intelligence is defined as their tool-assisted capabilities, but it will also make them more dependent on those tools, and thus "stupider" if their intelligence is defined as the capabilities of their brain alone. This is true regardless of the cognitive tool in question. – Tim C Mar 26 '24 at 15:57
  • @TimC I like your answer and mostly agree with it, but I would not exclude potential future AIs that might be able to improve human thinking and reasoning. Software like ChatGPT is focused on "simply" answering questions, but why should there be no AI that tries to be more like a teacher and just asks the user the right questions to make them think in new ways? – Dubu Mar 26 '24 at 17:43
  • @Dubu - That's fair. If we count AI-assisted education as people becoming more intelligent by using AI, then even the current LLMs (flawed as they are) could be argued to make people more intelligent, if we find ways to use them so that they make teachers more effective. – Tim C Mar 26 '24 at 19:03
  • I have added a section addressing AI in education. – Tim C Mar 26 '24 at 19:12
  • This is a good philosophical answer, referring to psychology (yes?) and providing options and outcomes. – Scott Rowe Mar 27 '24 at 03:39
  • When I can't use google I feel like half my brain is gone. Fewer ads though. – candied_orange Mar 27 '24 at 08:23
  • @candied_orange maybe it was never there? Perhaps fewer ads makes it worth it, it does for me. – Scott Rowe Mar 27 '24 at 10:35
  • if i use a robot to tidy up my room, then i tidied up my room, but i may not be very good at tidying – user66697 Mar 27 '24 at 19:45
  • 1
    @user66697 - The extended mind thesis (or in this case, it would be a physical equivalent, like an extended body thesis), would say that a system containing me and a broom is better at sweeping the floor than a system containing just me, so I improved my ability to sweep the floor by adding a broom to my body. It's a bit of a tortured definition (all I did was pick up a broom; I didn't modify my "body"), but if I can always expect to have a broom available whenever I'm sweeping, then it's meaningless to consider my sweeping ability without a broom. – Tim C Mar 27 '24 at 20:03
  • as i just implied in my answer @TimC maybe intelligence is not just intelligent behaviour – user66697 Mar 27 '24 at 20:05
  • @user66697 - I see that is a difference of definitions: "intelligence" means different things to different people or in different contexts. I find internalist definitions of intelligence to be tantalizingly intuitive but ultimately useless. It's hard for me to imagine a situation where I would care about becoming "more intelligent" and not mean something like "better at arriving at correct decisions," nor determine who of two people is "smarter" and not mean something like "better at recalling information and applying it to solve problems." – Tim C Mar 27 '24 at 20:43
  • 1
    This debate might be better served by spinning it off into its own question. These comment sections can be rather cramped. – Tim C Mar 27 '24 at 20:45
7
  1. Do you become more intelligent when you leave all decisions and reflections to your partner? Obviously not, but you can learn from your partner.

    Similarly you can always ask ChatGPT or a chess computer about your next decision or your next move. You learn, but you do not become more intelligent.

  2. I consider AI tools like ChatGPT to be a big database. It contains nearly all documented knowledge from all cultures and provides a well-adapted interface for queries by human users.

Up to now, the intelligence is within the user, not in the tool.

Jo Wehler
  • 30,912
  • 3
  • 29
  • 94
  • You can learn or you can become dumber - both paths are possible. I've seen several people pull up a pocket calculator to punch in a multiplication by 10. Their reflex to rely on the calculator was so ingrained that if they need to know what is 67x10, they will pull up the pocket calculator without even realising that they don't need it and that they should be able to see that the result is 670 directly. – Stef Mar 26 '24 at 12:17
  • Likewise many people rely on GPS so much that they will blindly follow the "left" and "right" directions from the computer and have no idea of where they are or where they are going, even in a city that they know. This is not specific to ChatGPT - it happens with every new technology and every "partner", human or mechanical tool or computer. But ChatGPT makes it even worse because language is universal - pocket calculators can only make you dumb at calculating; GPS can only make you dumb at orientation; but ChatGPT can make you dumb at everything. – Stef Mar 26 '24 at 12:18
  • @stef I haven't really used GPS, I learned long ago to look at a map, write down the critical turns in a cryptic shorthand, and follow the notes while driving. This has taken me on 500 mile routes without a problem. A memorized impression of a paper map got me through a foreign city on foot, dragging my luggage to the hotel at 7 am. I worry about this, but people can't saddle a horse any more (I can) and it doesn't seem like an existential threat so I don't know :-) – Scott Rowe Mar 27 '24 at 03:29
5

Not really an answer to the question, just a bunch of things to think about...

As the comments already mention, "AI" is largely a buzzword, and what "fusing with AI" means is yet to be determined. So why not take a step back and consider tech that we already have and which is maybe analogous to it? Take something boring like: a calculator.

We know it's not really smart: it utilizes known algorithms, and in fact you probably learned variations of these algorithms yourself at school, or at least you were supposed to. Now it can greatly increase the skills of people who can't for their lives do a calculation without messing up, but does that increase in skill make them more intelligent? Do people become more likely to think "oh, I can feed that into my calculator and make use of that"? Maybe I'm biased, but I've not seen that. It's more or less the other way around: people who were already inclined to do quantitative analysis, and who would have applied the algorithm themselves, now just hand that part off to a machine.

But by doing so, the machine performs the skill, so effectively they are losing a skill in favor of efficiency. So in a sense they become dumber? Previously you were the machine doing the thing, but becoming the master of the machine actually means becoming its slave, as you are now just the person feeding it information and relying on its god-like skills and its benevolence to serve you.

So the question of how skills and the application of skills relate to knowledge, capabilities, power and subservience is likely going to be an interesting debate as new technology advances: do we gain new knowledge, or do we make ourselves obsolete and dumb until we can't even use our wisdom?

haxor789
  • 5,843
  • 7
  • 28
  • 1
    I keep thinking of a bulldozer, but it's late and the impression isn't resolving itself into a snarky comment, so maybe tomorrow... – Scott Rowe Mar 27 '24 at 03:34
  • 1
    Ok, augmented human here: I googled "snarky comment about a bulldozer" and got a nsfw saying and this gem: "The bulldozer and not the atomic bomb may turn out to be the most destructive invention of the 20th century." Not too bad. But I came by my font of weird quotes honestly. I wonder if we will ultimately find AI boring? – Scott Rowe Mar 27 '24 at 10:43
  • 1
    @ScottRowe Most accidents happen in people's homes; we're more scared of extraordinary threats than of the ordinary ones that happen more often. Yeah, it's very likely that we won't think of AI much. But atm the biggest threat is not so much AI but people not realizing that it's essentially riding the cum hoc ergo propter hoc fallacy full on. Which works quite well: lots of things that happen consecutively or at the same time ARE connected causally, but not all, and I have my doubts that people riding the AI hype train pay close attention to that. – haxor789 Mar 27 '24 at 13:46
2

Humans are tool using creatures, and intelligence is selecting the right tool for the job

Humans are not the only species to use tools, but we are the best at it (at least if we set aside the question of whether aliens exist). The thing about tools, though, is that they always do one job very well, but they rarely work for different problems. As the famous saying goes, the best screwdriver makes a very poor hammer.

Humans being creatures of habit, the "old school" generally also tend to look down on new tools when they appear. Chefs looked down on electric whisks and microwaves when they first appeared. When I was at school in the 1980s, using calculators was looked down on. These days, they're very much standard tools, and people are taught how to use those tools to do their work more effectively.

AI engines are exactly the same. They're not really "intelligent" - they're just trained on a particular dataset to pick out patterns. There are some things they can do really well when it comes to pattern recognition, and it's just a matter of using those tools effectively to do your job better.

The intelligence remains in the human using the tool, not the tool. It may allow less skilled people to do a better job, sure - but that doesn't make them more intelligent. And the highest form of intelligence is selecting the right tool for the job in the first place.

Graham
  • 2,174
  • 11
  • 15
  • My experience with calculators at school is that students are taught how to use these tools, not so they can use the tools to do their work more effectively, but rather to avoid the sad pitfall of doing their work less effectively because of a badly-used tool. – Stef Mar 26 '24 at 20:45
  • @Stef Well, there's always that too. Even the best screwdriver doesn't help if you're turning it the wrong way. :) – Graham Mar 26 '24 at 22:22
  • Research has shown that advanced tools in the workplace, such as "AI" assistance for call center workers, help the lowest performers the most and the best performers little, or possibly hinder them. Something to consider with all the questions about AI. – Scott Rowe Mar 27 '24 at 03:23
2

(Because it works in my favour when self-assessing...) I like to think of intelligence as the "ability to figure things out", or "solve things", or distinguish false from true.

Applied. Effective.

  • Grades are one thing.
  • Being a grand winner on Jeopardy is another thing. Collected information.
  • Designing circuitry that advances smart-phone technology... or figuring out the calculus of gravity... those are another thing.

"Does the use of AI make someone more intelligent?"

I offer for consideration...

The use of AI can absolutely make one more effectively intelligent, if used well.

Search engines use AI. We use search engines. We become more "effective" from an applied-intelligence perspective. Try troubleshooting a computer, a network, or a wireless install... without access to the results of search engines.

And then there is the whole graphic and image side of things. Consider a couple of sample terms like birefringence or general relativity: in milliseconds, AI gives me back...

[image results for "Birefringence"]

... and ...

[image results for "Relativity"]

... each image a link to an article or web page offering some perspective or another.

It is so fast. Like having a visual index of (a percentage/sampling of) a good chunk of our collected knowledge as a species.

If you imagine (or better, if you are old enough to remember) the days before computers, software, the internet, and AI had been invented... and consider wanting to fix a washing machine, or properly hang a door... you'd be getting in your car and going to the library to see if they had a book that would probably be generic and not specific to your make and model, and may or may not do the job.

Now, you find your model, find a video, figure out the part you need, replace that roller, and voilà: the washing machine ain't rocking like the Rolling Stones anymore.

You were more "effectively intelligent"... capable... able. Results.

And AI helped in that process.

You aren't necessarily "more intelligent"... but you have a tool at your disposal (you may not realize you do), and it makes you more "effectively intelligent"... if you put it to good use.

Can a person with a sledgehammer bust more rocks than a person without a sledgehammer? Yup. More effective at bustin rocks.

It's not exactly more intelligent, but it's sorta more intelligent.

Alistair Riddoch
  • 754
  • 7
  • 16
  • 1
    i get that. sorta more intelligent maybe – user66697 Mar 26 '24 at 14:18
  • 1
    "Search engines use AI" << Google's search engine had been working extremely well for two decades without AI... If anything, their performance has degraded since they tried to incorporate AI into it. – Stef Mar 26 '24 at 20:49
  • 2
    +1 for birefringence, a have a calcite crystal on my desk. Max Tegmark said, "Intelligence is the ability to accomplish goals." While it is true that we often need to get reference material to solve a problem, it will take longer without the right background knowledge. 30 years ago, I was writing a complex program, and bought a book because that was the only way to get all the info I needed. But the book had an error so I bought another book. I never studied engines, but I started a car with a pencil once, a broomstick another time, and fixed a generator by rapping it with a screwdriver. – Scott Rowe Mar 27 '24 at 03:14
  • 1
    Looking under the lamppost because it is brighter there won't necessarily find your lost keys. – Scott Rowe Mar 27 '24 at 03:18
0

Britain bombed Dresden, but America was not dropped on Hiroshima. America had nuclear capacity, but is not a nuclear weapon. I'd guess it depends less on 'individuality' and 'agency' than on whether 'intelligence' belongs to the former or the latter.

I remember reading before that intelligence in animals means intelligent behaviour.

user66697
  • 639
  • 11