16

With respect to AI, some people appear to have an objection to the idea of

feed[ing an] AI with other people's works and then claim[ing] all the output as yours.

Let's create the following hypothetical scenario.

An AI and a human art student are both presented with the exact same set of copyrighted pictures to study. After studying them, both produce a new work of art (a picture) based on what they have learned. Both new works are different from any of the originals but have some similarities in style or technique. Both new works are based on the same source material and are therefore very similar.

Many people would call what the AI did copyright violation or insist that every work used to train it be cited.

But very few people would have an issue with what the art student did.

Whether it is a human or a machine, in both cases a neural network examined objects and then created output based on what was learned. From an ethical standpoint, one would think that the form of the machine (gears, vacuum tubes, transistors, or the chemical machinery of living cells) should make little difference.

Neural networks are heavily modeled on the way human brains work. So even if the form of the machine did matter, one would think that it should matter much less for neural-network-based artificial intelligence (as opposed to a traditional computer program).

So why try to legally disincentivize one but not the other?

Why do many people choose to make that distinction?

From an ethical standpoint, shouldn't the two be equivalent (or nearly so)?

I do acknowledge that some content creators may think they have a financial interest in limiting the use of AI. But aren't they just making an arbitrary moral distinction based on their own financial interest?

I acknowledge that an AI is not a human being. But if AI progresses towards sentience, then at some point what we are doing simply becomes discrimination.

user4574
  • 269
  • 2
  • 5
  • 9
    The idea is that human learning is transformative (or should be so), whereas AI "learning" is merely mechanical, algorithmic. When human use is not transformative, it is (supposed to be) treated as infringement as well, but in the case of AI there is no need to even investigate, it can be inferred by design. Rightly or wrongly, the preconception is that artificial neural networks are merely a surrogate of the original, and do not measure up to it. Deeply ingrained psychological biases about "soul" surely play a role, but even objectively, current ANN are rather simplistic in comparison. – Conifold Sep 21 '23 at 20:23
  • 4
    This is because, unlike humans, AI can be owned, essentially making it a slave. – Idiosyncratic Soul Sep 21 '23 at 21:40
  • 2
    @IdiosyncraticSoul I don't see how the ethics of this scenario would change even a little by using a public domain AI model which could not be owned. – Nuclear Hoagie Sep 22 '23 at 18:01
  • 3
    @NuclearHoagie My simple way of thinking is this: You can't sue a camera for plagiarism. Slave is the wrong word. It should be "personal property". – Idiosyncratic Soul Sep 22 '23 at 18:07
  • 2
    @IdiosyncraticSoul There may be AI models for which no one owns the copyright, and courts have already ruled that AI-generated material can't be copyrighted. I don't see the ethics changing whether someone owns the AI or not. The camera analogy falls flat for me, one's inability to sue a camera stems from the fact that it's not a legal entity, not the fact that it's property. You can sue a business despite the fact that it has an owner. – Nuclear Hoagie Sep 22 '23 at 19:44
  • @NuclearHoagie You hit the nail on the head. It's also about the concept of responsibility for actions. The current state of AI is not at the stage where it can be held responsible for actions. What future "behavior" will AI generate to lead it to become responsible for actions? – Idiosyncratic Soul Sep 22 '23 at 19:47
  • I believe the Industrial Revolution asked these types of philosophical questions: Man vs. Machine - physical. Now it's Man vs. Machine - mental. There may be some arguments that translate directly. – Idiosyncratic Soul Sep 22 '23 at 20:35
  • AI is, by definition, big. It can do stuff that no human could ever do in 100 years. So, the rules have to be different. Think nuclear bomb vs fist. – Scott Rowe Sep 22 '23 at 23:40
  • The argument is problematic because certain people with a vested financial interest want to argue that the thought process of these models is substantively identical to the mind of a sapient being (dubious at the moment, but certainly feasible in theory) and at the same time argue that they should be able to own, profit from, and control these models. – Obie 2.0 Sep 23 '23 at 05:23
  • 1
    Seems like folks just want to keep the status quo. – Nat Sep 23 '23 at 21:28
  • Please explain how anyone calling what the AI did copyright violation, or insisting that every work used to train it be cited, could ever justify those ideas?

    Whether they could or not, how would anything apply differently to the human student?

    – Robbie Goodwin Sep 23 '23 at 21:42
  • "in cases where they perform nearly identical processes?" ─ there are no AIs in 2023 which perform "nearly identical processes" to what a human brain does, and there likely will not be for a very long time, if ever. In the same way that an iron lung does not really work the same way as a human lung. Sure, they are nearly identical if you ignore all the ways they are different; but those differences are salient, since they imply things like AIs overfitting and regurgitating their inputs, as AI art generators have been known to do including stock image site watermarks and artists' signatures. – kaya3 Sep 23 '23 at 22:12
  • Current forms of AI are still conceptualized as tools operating at human discretion - just better tools than anything we've ever seen before. So you may ask why a hammer doesn't get credit for creating the David statue. Okay, hammers don't get credit because they are dumb. But computers aren't. Why does Farbrausch get credit for creating .the .product, instead of giving credit to the computer connected to the video projector that displayed it at The Party 2000? – user253751 Sep 24 '23 at 13:00
  • @kaya3: Consider chess-playing robots as an example. Proving or refuting your claim, if we factor out such examples, is equivalent to answering the Hard Problem. This doesn't mean that you're wrong, but it does mean that the original question can be considered. (That said, the question should really say "equivalent" instead of "identical"...) – Corbin Sep 25 '23 at 16:01
  • @Corbin I don't follow anything in your comment. It's simply a fact that there are salient differences between how humans do things like create art vs. how AIs do those things, and therefore the OP's question is based on a false premise. The difference between how we treat art created by humans vs. art created by AIs can be adequately explained by the differences between how humans create art vs. how AIs create art. The processes aren't "nearly identical" and they aren't "equivalent" or "nearly equivalent" either. – kaya3 Sep 25 '23 at 17:38
  • @kaya3: Think more extensionally. Just because the original question is poorly worded does not mean that it is an invalid inquiry. I don't want to pull a parade of horribles, but there are artists today who are finding themselves in competition with the output of robots, and it's been going on for decades. (OP, you really should clean up your phrasing so that folks don't misunderstand you repeatedly in the comments.) – Corbin Sep 25 '23 at 18:17
  • 1
    @Corbin The fact that they are "in competition" does not mean that they use equivalent (or even similar) processes to produce art, only that (some) consumers don't care about the process. But the philosophical basis on which copyright law is founded does care about the process. The processes do not have the same observable properties, but consumers do not observe (or care about) all of their properties. For an obvious example, AIs reproducing signatures of artists because those signatures occurred in their training sets, is an observable property. – kaya3 Sep 25 '23 at 18:20
  • @kaya3"but those differences are salient, since they imply things like AIs overfitting and regurgitating their inputs". Along with many other quirks people assign just to AI, humans often do this one too. They will believe they are having a unique idea, when in fact they are repeating something, they saw before but forgot about. In other cases, people will just knowingly plagiarize someone else even if they have been actively trained not to do this. For AI it's likely this is a result of bad training methods rather than an inherent feature of the technology. – user4574 Sep 26 '23 at 20:19

9 Answers

26

The training sets for generative AI systems are orders of magnitude larger than the number of images or words that a human being sees in a lifetime. If you trained a neural net on only the images that the art student studied, it most likely would produce an obvious, slavish copy of the input, if its output was coherent at all. If the art student did the same, they probably would be accused of plagiarism. As the number of images in the training set increases far beyond human capacity, it becomes harder to identify the contribution of each one, but the basic operating principles are the same.
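
To make that memorisation point concrete, here is a toy sketch (added for illustration, not code from the original answer; the tiny random arrays and the linear "generator" are stand-ins for real images and a real model): with only five examples and more free parameters than data points, the fitted model reproduces its inputs exactly.

```python
# Toy illustration of memorisation: a model with more parameters than training
# examples can reproduce its inputs verbatim ("slavish copies").
import numpy as np

rng = np.random.default_rng(0)

# Five 8x8 "images", flattened to 64-dimensional vectors, stand in for the study set.
train_images = rng.random((5, 64))

# A linear "generator" mapping a 5-dimensional code to an image.
# With only 5 examples and 5*64 free parameters, an exact fit exists.
codes = np.eye(5)
weights, *_ = np.linalg.lstsq(codes, train_images, rcond=None)

reconstructions = codes @ weights
print(np.allclose(reconstructions, train_images))  # True: the "new" outputs are the originals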

People are able to extrapolate from fewer examples because there are other things going on in the brain that aren't modeled in the generative AI systems. Those other things are why all that art exists in the first place.

In many ways, your question assumes its conclusion. If the art student and the generative AI, trained on the same images, produced similar output, then it would be unreasonable to treat them differently. If the ethical concerns were about the "form of the machine", rather than about a large number of small-time artists losing commissions, with the profits that formerly went to them going instead to a small number of wealthy corporations that flouted copyright law in order to train their models on the independent artists' work without permission, then perhaps there would be no logic in the ethical concerns. If progress in AI research leads at some point to an AI that is obviously a "person", then at some point it becomes unreasonable not to treat it as a person.

benrg
  • 1,208
  • 6
  • 11
  • "The training sets for generative AI systems are orders of magnitude larger than the number of images or words that a human being sees in a lifetime." While true today, it's not a fundamental aspect of the technology. Due to lack of processing power, most neural network models don't necessarily have the right structure or scale. Chat GPT or Google Bard might have like 1% as many neurons as a human, and like 100X less connections per neuron. Plus, training algorithms are different (back propagation vs Hebbian learning). If Moor's law holds, that will change in about 15 years or so. – user4574 Sep 22 '23 at 00:13
  • 1
    @user4574 Well, if you're proven right, I'll concede your point. Moore's law has lasted longer than I thought it would, but I don't think it can last another 15 years. – benrg Sep 22 '23 at 00:23
  • 6
    @user4574, there is no algorithm that can make an ML system work with just a few examples like a human can, unless that algorithm already incorporates human-like intelligence. How is the algorithm going to recognize what is relevant in the training set without a huge number of examples to show it? – David Gudeman Sep 22 '23 at 01:24
  • 17
    @DavidGudeman Can a human ever only have "a few examples"? The student asked to get inspiration from 5 artists would already have a database of twenty years of seeing the world through his eyes, and thousands of different drawings and art pieces seen randomly throughout his years. If you put someone in the dark for twenty years, then show him five pictures and ask him to draw a sixth, I'm not sure you can guarantee the work won't be extremely derivative. – Jemox Sep 22 '23 at 08:01
  • 6
    @Echox, you are right, and that's part of the point. Humans have developed in their mind a great number of very sophisticated analog models of the world, and they interpret whatever they see according to those models. – David Gudeman Sep 22 '23 at 08:06
  • 1
    @DavidGudeman you are right, and that's part of the point. AIs have developed in their neural net a great number of very sophisticated digital model-components of their inputs, and they interpret whatever they are given next according to those models. – Brondahl Sep 22 '23 at 08:15
  • 1
  • Human vision is 30-60 fps; so call it 45fps. That's 1.4 billion frames a year; a top Google search says art AIs are trained on 1-10 billion images. So no, their training set isn't an order of magnitude larger. – Brondahl Sep 22 '23 at 08:57
  • "If the art student and the generative AI, trained on the same images, produced similar output..." - that seems like an odd argument. That'd be like saying "if Alice and Bob, looking at similar images, produced similar output, it would be unreasonable to treat them differently". Obviously the output is likely to be different on account of different processing steps (even between humans). It's unreasonable to treat them differently (with respect to copyright) because for both, any given the input image is just one tiny part of what ultimately produces some output. – NotThatGuy Sep 22 '23 at 09:13
  • 1
    The profits of artists may be a valid concern, but that's distinct from whether AI produces something "new" in a philosophical or functional sense. To rely on that argument would be to say that, yes, AI might be transformative and doesn't functionally violate copyright, but we'll treat it as if it does in any case, because we think that leads to a better outcome (which is in itself questionable). We could do that, but it's important to separate concerns. – NotThatGuy Sep 22 '23 at 09:13
  • 1
    I'm struggling to follow the train of thought and implication in this answer. Does using more data imply plagiarism? Shouldn't it be the other way around? Is the brain not plagiarism because of "other things going on in the brain"? What other things and why do those not make it plagiarism? There are also other things going on in AI, that isn't in the brain, so there simply being other things doesn't say much, one way or the other. This answer doesn't really seem to make the case that they should be treated differently (i.e. what was asked), but merely that question "assumes" they shouldn't be. – NotThatGuy Sep 22 '23 at 09:49
  • 4
    @Brondahl "45fps. That's 1.4 billion frames a year" - quite exaggerated; most of the time you'll have thousands of consecutive frames that are just slight variations of the same image, and the scenes also tend to repeat daily. Whereas the billions of images an art AI is trained on are mostly really different images with only some repetitions. A human does certainly encounter millions of image during their lifetime, but at least most of these aren't copyrighted. – leftaroundabout Sep 22 '23 at 12:44
  • 2
    @NotThatGuy "Does using more data imply plagiarism? Shouldn't it be the other way around?" I'd say neither more or less data should make it more or less plagiarism, but it seems reasonable that a higher fraction of peoples' copyrighted work in the data plays a role. The AIs discussed here are trained pretty much exclusively on copyrighted material. A human artist is trained on a mixture of copyrighted material and personal sensory input. Hard to attribute how much of each turns up in the their artworks. – leftaroundabout Sep 22 '23 at 12:59
  • 1
    @Brondahl, and besides the point leftaroundabout made, the human is able to do thousands of things with that input: discuss religion, drive cars, play volleyball, wash dishes, play with dogs, trouble-shoot technical problems, talk someone into doing something, mow grass, recognize when the party is getting out of control, grocery shop, and much more. An AI, with more data more carefully curated can only do one of those things and some it may not be able to do at all. – David Gudeman Sep 22 '23 at 13:09
  • @leftaroundabout A lot of research has gone into what the structure of neural networks should look like, and how to train them, which could similarly be considered "other things" that aren't copyrighted. Much of the material in the training set also isn't copyrighted. I have no idea what fraction it is, but I'd certainly take issue with an un-backed-up "pretty much exclusively copyrighted". Although it shouldn't really matter what fraction is copyrighted. To me, it seems that either it's okay to use a single piece of copyrighted material in this way, or it isn't. – NotThatGuy Sep 22 '23 at 13:34
  • Maybe AI could provide for people's needs, then they wouldn't have to sell their time making wants? – Scott Rowe Sep 24 '23 at 21:10
  • "The training sets for generative AI systems are orders of magnitude larger than the number of images or words that a human being sees in a lifetime" Thinking about this, I am not sure it's true. If the human eye can see like 20 equivalent frames per second, and we are awake for 16 hours a day we see like 420 million images per year. Many of those images a human sees are similar to each other, but that is probably extremely useful for building models of how objects move/behave (and people training AI should probably take that as a hint to train AI on Cat videos rather than cat pictures). – user4574 Sep 26 '23 at 01:24
  • @user4574 a lot of research has gone in to evaluating the difference in children's school performance relative to the amount and kinds of language use they are exposed to from birth (or before) to school age. More input of complex language makes a big difference. Same as we are pondering here. – Scott Rowe Sep 27 '23 at 00:07
17

Neural networks are not modelled on the way the brain works because no one knows how the brain works. At best a neural network is an extremely limited and highly idealized simulation of a model based on a rather controversial theory of how the brain works. At worst the term "neural network" is just a marketing buzzword with no real connection to brain theory.
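
For a sense of how idealized the artificial "neuron" is, here is a minimal sketch (illustrative only, not something from this answer): the entire unit is a weighted sum and a fixed nonlinearity, with none of the chemistry, timing, or plasticity of a living cell.

```python
# A complete artificial "neuron" as used in typical neural networks:
# a weighted sum of inputs, a bias, and a fixed nonlinearity (here ReLU).
import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    return max(0.0, float(inputs @ weights + bias))

print(artificial_neuron(np.array([0.2, 0.7, 0.1]),
                        np.array([0.5, -0.3, 0.8]),
                        bias=0.1))  # approximately 0.07
```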

Furthermore, there is no reason to assume that modeling the brain would properly simulate a mind anyway except for materialists, and not everyone is a materialist. And even for materialists, it's a bizarre sort of theory in which a simulation is supposed to somehow generate a real-world effect of what it is simulating. No one thinks that if you simulate gravity in a physics engine on a computer, it will generate real-world gravity, so even if the mind is just an aspect of the brain, why would simulating a brain on a computer generate a real-world mind?

From another point of view, there is no reason to think what happens inside an ML program when it reproduces patterns from its training data is anything remotely like what happens inside a human mind when it creates new things based on examples. The criterion of success used by the programmers is not that the program mimics the mind but that it mimics behavior. They aren't even really aiming at the mind, so there is no reason to think they have successfully simulated the mind.

The aspects that the human notices are likely to be much different from the aspects that the AI stores in its model. The human draws associations to other things in his life, which the computer does not do. In language-based jobs the human is working based on the meanings of what he has read, not the text, whereas (at least in current technology) the computer does not even try to model meaning; it only models text patterns.

In general, there is no good reason to believe the premise of your question and some very good reasons to doubt it. If you want to prove that computer-generated content is produced by the same process as human-based content, you have a great deal of work to do.

ADDENDUM

In response to a lengthy critique below, allow me to clarify a few things. First, the brain contains neurons and those neurons can trigger each other into firing. That's basic brain anatomy. However, that's not the same as saying that the neurons in the brain work together to constitute any sort of computing device, much less the specific type of computing device called a neural network. People working in AI seem to think the notion that the neurons constitute a neural network is uncontroversial, but it's not uncontroversial among anatomists and others who study actual brains.

Materialism suggests that the mind arises as some sort of effect of the brain; it does not imply that the effect that gives rise to mind is something that can be simulated on a computer. There is no good reason (that I've ever heard) to ignore the difference between living cells and dead silicon, no reason to discount the effects of the biological systems, no reason to think that all that is needed is some sort of formal analogy between how a brain operates and how a computer program operates. No matter how well you simulate how the brain operates, there is no good reason to think that a mind will pop out of it, just like no matter how well you simulate how gravity operates, there is no reason to think that you will suddenly be pulled towards the computer. The reason this assumption is so common among AI researchers and philosophers (I claim) is because the assumption is necessary for their research. If they can't make that assumption, they have nowhere to go.

Me pointing out lack of evidence for your assumptions is not the same as making assumptions of my own. Nothing in the above discussion assumes that there is anything special about the mind. I made one small point about how not everyone is a materialist, but the rest of my argument is perfectly compatible with a materialist world view. As I said, being a materialist does not imply that you have to have any particular beliefs about exactly how the mind arises from the brain.

David Gudeman
  • 9,892
  • 1
  • 13
  • 46
  • 1
  • Thought is a simulation, so I think the whole line of argument there is barking up the wrong painting of a tree. – Scott Rowe Sep 22 '23 at 23:47
  • 1
    @ScottRowe, "thought is a simulation" is a category error. – David Gudeman Sep 23 '23 at 00:56
  • We predict what will happen and then continuously refine our predictions. How else could we think? What is a prediction other than the result of a simulation? – Scott Rowe Sep 24 '23 at 11:54
  • 2
    @ScottRowe, (1) just because you can't imagine another possibility doesn't mean there are no other possibilities, and (2) just because some instances of thought have characteristics that might be attributable to something like a simulation doesn't mean that all thought has this character. What were you simulating when you came up with that idea, for example? Note that to simulate something, you have to have a model of how it works. Coming up with the model is not simulation. – David Gudeman Sep 24 '23 at 12:35
  • I just reread your answer thoroughly because I like your posts. To me, it doesn't matter what AI is or what a mind is. An earthmover doesn't simulate a thousand guys with shovels, but it gets the job done. Farm tractors no longer need humans to drive them, but the food comes out just as good. We are looking at a complete change to everything that can be 'simulated', for lack of a better word. The words, ideas and opinions do not matter. Bigger effects at lower cost matter. Anyway, my thoughts are of no consequence. – Scott Rowe Sep 24 '23 at 21:58
11

The premise is flawed because humans and AIs do not use the same processes, at least currently.

As David Gudeman's answer already points out, humans and AIs, at least at present, do not use the same processes. (I do take slight issue with his contention that "neural network" might just be a marketing buzzword. It was used in academia to describe certain types of programmed systems long before there was anything like a marketable product for there to be a buzzword around.)

For one thing, while art tends to build on prior art and humans tend to be inspired by art they see, a human will bring other things into their art that an AI, at present, cannot. The AI systems that generate images do not draw inspiration from nature, at least unless that nature was reduced to a carefully curated image and fed into them as art. No present AI system can draw inspiration from mood or emotion, because they lack such things.

AI systems also lack intentionality. Now, I have to admit that in certain deterministic forms of philosophy, or if philosophical zombies were to actually exist, it might be argued that humans lack intentionality, at least in the sense we want to believe they have it. But humans seem to possess intentionality, while AI does not give off the same sense of it.

If a truly sentient AI were ever developed, it is likely that the law, at least in a system similar to the US, would recognize them as being effectively human.

Currently, there is a clear distinction between an AI and a human. No AI has intentionality, no AI has emotions or feelings, and no AI can put in effort, which at least theoretically was the basis for copyright in the first place.

If an AI ever became truly sentient and displayed all of those things, then it is likely that law, at least in any system similar to what is presently in place in the US, would recognize it as a "person" as that term is defined in law. Notably, person for law already includes corporations, but a corporation is treated as acting through its officers and those officers have emotions, intentionality, and exert effort. It would be a small stretch to extend the definition to include a fully sentient and sapient AI.

Incidentally, the blog the Law and the Multiverse addresses this issue repeatedly, with Non-Human Intelligences III: Categories perhaps being a strong example. While I sometimes disagree with his conclusions, Ryan Davidson is a lawyer and provides thoughtful analysis. While entirely fictional and showing its age now, Asimov's The Bicentennial Man is essentially a meditation on this question.

Also, whether or not what ChatGPT and other AIs presently do is copyright violation is very much an open question under the law. My off-the-cuff suspicion is that it will be found to depend on the circumstances but will generally be found to be protected by fair use, but it remains to be seen and I have not taken the time to do a full analysis. (If I find that time, I will likely submit it to a law journal before mentioning it on Stack Exchange.) It is presently fairly well settled that what comes out of AI cannot be registered for copyright. I wrote that up as a blog post myself a couple of weeks ago. But that turns on the current understanding of the word "author". If an AI is ever recognized as sentient and sapient, the analysis may well change.

TimothyAWiseman
  • 580
  • 2
  • 8
  • 1
    AI of the ChatGPT variety is built to be prompt-driven: it doesn't do anything unless you tell it to do something. But there are also AIs that interact in an environment, that take in input and act accordingly, much like one could argue humans do. If autonomous AI were more popular or advanced, I suspect the intentionality argument would seem a lot less compelling. – NotThatGuy Sep 22 '23 at 00:38
  • 2
    FWIW, my reference to marketing buzzwords was not to neural nets in general, but to the use of the term in the marketing materials for specific products. – David Gudeman Sep 22 '23 at 11:17
  • @NotThatGuy I agree with you, and that might be the path to AGI or sentient and sapient AI, but for now the key word is "if". – TimothyAWiseman Sep 22 '23 at 13:08
  • @DavidGudeman That's fair and in that sense I agree with you, but I think it's worth noting that it has a genuine technical meaning quite aside from any use in marketing. – TimothyAWiseman Sep 22 '23 at 13:09
  • "No AI has intentionality, no AI has emotions or feelings, no AI can put in effort" - how could you possibly know whether or not a Turing-test passing AI has any of that, or just pretends to? — I don't know whether you're right that sentient AI would be recognised as human by the law, but I'm confident that many people would protest strongly against it, myself included. It would be the go-signal for obsoletion of biological humans. – leftaroundabout Sep 22 '23 at 13:09
  • 1
    Perhaps I should have said "current AI". No current AI has passed the Turing test, and it's rather debatable whether passing the Turing test is sufficient anyway; it is just the best we have currently. Also, even the existence of true sapient, sentient AGI would not necessarily indicate the obsoletion of humans. – TimothyAWiseman Sep 22 '23 at 13:30
  • "If an AI ever became truly sentient" then it will pay no more attention to human laws than we take care not to step on ant trails. – Scott Rowe Sep 22 '23 at 23:59
  • @leftaroundabout you might enjoy watching the 3 seasons of the British tv show "Humans". Or, you might hate every minute of it. – Scott Rowe Sep 25 '23 at 23:47
5

Many people, whether justified or not, whether consciously or not, believe that (human) brains are fundamentally special, that we are capable of "creativity", that we have "free will", that we might have "souls", and so forth.

Computers, on the other hand, are just 1s and 0s. They take some input, perform some calculations that we told them to do, and pop out some output.

Given those positions, it's not hard to see why there is a distinction between the two.

It might also be worth looking at more primitive algorithms. If we were to, for example, take a picture and simply increase one colour of every pixel by some small constant, the result is still very close to the original and would be a clear copyright violation. The things you can do in Photoshop are much more complex than that, but most of them would still count as copyright violations. AI is still very new, so people just group it with all the other algorithms by default. For them to be convinced otherwise, one would need to make that case.

* If a person applies a sequence of algorithms to an image, that might constitute a transformative enough change to classify as a new piece of art. But in that case, the person would be deciding on the sequence, so that part would still be seen as human, not algorithmic.
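
A minimal sketch of the single-channel shift described above (added here for illustration, assuming Pillow and NumPy are installed; the file names are placeholders):

```python
# Shift one colour channel of an image by a small constant: a trivially
# derivative transformation that would plainly still infringe copyright.
from PIL import Image
import numpy as np

pixels = np.array(Image.open("original.png").convert("RGB"), dtype=np.int16)
pixels[..., 0] = np.clip(pixels[..., 0] + 10, 0, 255)  # bump the red channel slightly
Image.fromarray(pixels.astype(np.uint8)).save("derivative.png")
```

Few would argue that the saved file is a new work rather than a lightly disguised copy.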


As for whether brains are fundamentally special, materialists tend not to accept that. But materialism isn't that common among laypeople (most people are still theists, and theists tend to not be materialists).

I expect even materialists would draw a distinction between the brain and AI, given how much more complex the brain is. Whether this distinction is justified here is hard to say. Based on how different many of the images are from the originals, I'd say it's probably not justified to put AI at the super-simple copyright-violation end of that spectrum (at best, it's on some blurry line in the middle).


What might complicate matters is that some AI-generated images were singled out for being very similar to the originals, in ways that would probably count as copyright violations if a human had produced them from the original images.

So this certainly reinforces the idea that AI just isn't complex enough to classify as transformative (even if this is a bit of cherry-picking).

What complicates matters even further is that AI can produce a functionally infinite amount of art. Quite a few artists are worried about their livelihood, and some consumers may dislike the reduced "specialness" of art. This doesn't say anything about whether AI is functionally transformative, but making it legal may have consequences that people don't want (on the other side of things, it could also make custom art more accessible and allow artists to use AI to complement their work; the fear of losing one's job is common with many new technologies, and while the risk may be plausible, acting on that fear can also stop society from potentially reaching somewhere better).

(Disclaimer: not a lawyer)

NotThatGuy
  • 9,207
  • 1
  • 18
  • 32
  • "making it legal may have consequences that people don't want". Calculator, mower, and computer all used to be job titles. I think we are better off now that they are not. I personally look forward to the day when I can tell my smart TV what kind of movie I want to watch, and Netflix generates it custom for me on their servers to watch later that day. – user4574 Sep 22 '23 at 01:28
  • 4
    @user4574 calculators, mowers and computers all fulfill well-confined tasks. You know what you wanted, you get it. Art, and arguably more generally creativity, doesn't work this way. Sure, AI may be able to generate movies à la carte, but is that desirable? I'd say not. Watching them may optimally stimulate your brain's pleasure centres, but the same could be achieved by planting electrodes there. Art should do more than that: it creates a shared culture, it introduces provocative ideas, it makes voices heard you wouldn't otherwise have considered. Removing that is a horrible idea IMO. – leftaroundabout Sep 22 '23 at 13:32
  • 3
    @leftaroundabout "Art should..." - eek. Art may provide all the things you mentioned, but to say it "should" is dystopianarily prescriptivist. Art that exists merely to evoke joy, sadness, excitement, is perfectly valid too. For what it's worth, as a general rule, art provides none of the things you mentioned to me. Although it certainly depends whether you're talking about art as a whole or individual pieces of art. For art as a whole, I might agree with you, but that isn't necessarily lost with the addition of AI-art, as that needn't altogether replace human-made art. – NotThatGuy Sep 22 '23 at 14:03
  • 3
    @NotThatGuy I stand by it, art should do these things. You can call that prescriptivist, or just say they are defining characteristics of what is or isn't art. The term for "art" without these characteristics is kitsch. Now, if a human creates art there can always be debate about whether it really was art or just kitsch. But even if a movie studio ordered some formulaic blockbuster, the human director will still be able to insert some kind of subversive message. I don't see that with AI, or even if it could I don't see the same societal value in it. – leftaroundabout Sep 22 '23 at 14:22
  • Mozart was basically paid by extremely rich people to crank out music. 41 Symphonies is a lot. But, he needed food, clothing, a place to sleep and work. AI could probably already do the same job, not just for a few rich people, but for everyone, for free. If not now, then soon. AI is big. Debating about it is a waste of time. – Scott Rowe Sep 22 '23 at 23:55
  • @ScottRowe Google is in fact working on a project called "MusicLM" (https://aitestkitchen.withgoogle.com/experiments/music-lm) that attempts to generate custom music from just a user description. It's hit or miss for now, but they may get it right. Aside from art, imagine everyone, no matter how poor, having access to good legal, medical, or engineering advice for less than something like $200 an hour as is currently the case. – user4574 Sep 24 '23 at 15:14
  • @user4574 if they charge what it costs, that would be great. Costs for food, light, housing and other things have plummeted in the past few hundred years. Can't come soon enough. And, if everything is basically provided, the incentive for crime shrivels also. Then people can focus on what really matters: arguing about things :-) – Scott Rowe Sep 24 '23 at 21:07
  • @ScottRowe "AI could probably already do the same job" - well, the question is what "the job" actually is that Mozart did. Cranking out lots of symphonies is remarkable, but it's not why he is still remembered and revered. Interestingly it's not that he greatly innovated music either, unlike e.g. Beethoven. No, the reason is culture: people went on to take Mozart's works as a canonical base, keep performing his works and build upon them as a context and language. ... – leftaroundabout Sep 24 '23 at 21:15
  • It's besides the point whether AI could compose faster or "better", because the value of music/art isn't an objective value but one that only arises from the relative scarcity of it. If you give everybody a different personal Mozart you don't get Mozart "for everyone", but for no one because there's no basis for a shared culture anymore. In other words, making art cheaper is as useless as printing money. It just creates a tower of Babel. – leftaroundabout Sep 24 '23 at 21:16
  • @leftaroundabout a lot of people play videogames. They probably are not interested in culture. I was reprimanded previously for choosing videogames as an example. A lot of people climb mountains, or just spend time exercising. No culture-building there. Saw the "Tower of Babel" artwork at the Tate Modern. Pretty cool, especially to someone who saw lots of radios as a child and no computers, not even a calculator. Radio is going away. – Scott Rowe Sep 24 '23 at 21:21
  • @ScottRowe most gamers (who are serious about it) totally play as a form of culture. Why else would they go on tournaments and play multiplayer games instead of competing against AI opponents, no matter how personalised these could be? Similar things can be said about mountaineers building a culture of what are the coolest climbs, etc.. Being alone in nature or playing a casual tetris is a different matter - that's not culture, but regardless it's also not something where AI is of any benefit. – leftaroundabout Sep 24 '23 at 21:51
  • @leftaroundabout I think we are long past the time when many people had much shared culture. I'm not sure that people really value it. I've always found it boggling how many products there are in a typical food store, how many kinds of cars, the vast landscape of music, tv, movies, books... Mozart would not be noticed these days. – Scott Rowe Sep 24 '23 at 22:25
  • 1
    @leftaroundabout Many gamers who are "serious about it" compete exclusively against AI opponents. But it might depend on how you define being "serious about it" - if you mean making a career out of it (e.g. tournaments or streaming), that would be begging the question, because structures that allow you to make money can fit into some vague definition of "culture", so people who make money indeed make money. Also, what about all the people who aren't "serious" about it? That's a huge market for AI to fill. And even many people who are "serious" about it still frequently "casually" play games. – NotThatGuy Sep 24 '23 at 23:08
  • @leftaroundabout Every person may have their own unique movies or games or whatever to some degree, but it seems likely that companies, groups or individuals would also share what AI produced for them with others, so "culture-binding" would happen that way. You can find countless paintings by no-one you've ever heard of, if you just want something to put on your wall. Similarly, there's no shortage of games, movies and TV shows to watch, that probably no-one ever talks about. It's only when people share a particular painting that it becomes culturally significant. – NotThatGuy Sep 24 '23 at 23:58
  • Significantly, I like to paint, and I am surprisingly good at it for the small amount of effort I have put in. People will probably continue to do individual creation, or even just get good at things like learning to play music that they did not write. Zillions of people find cooking creative even if they don't develop a new recipe every time. This activity can't be sold or made profitable. People just do it. So, tons of AI generated content isn't going to change the world, I think. – Scott Rowe Sep 25 '23 at 23:41
  • 1
    @leftaroundabout "Art should ... introduce provocative ideas". I once asked the Bing image generator to create a picture of "Davinci's last supper with everyone on their cell phone". The point was to illustrate how technology keeps people from being present during important moments. In my opinion the AI did a very good job of conveying a compelling idea (even if Jesus did have 6 fingers). – user4574 Sep 26 '23 at 00:21
  • @user4574 well, in that case it might be argued that you were the artist creating the prompt and the AI was just a tool to visualise it. And that's all fine for a quick meme. Caricatures have been a thing for much longer. But the reason these work culturally is that they send a simple message that reacts to one specific event, which can be directly understood and judged by the viewer. Serious artworks usually require a bit more effort to actually interpret. Who's going to spend that effort if creating the art is only as much effort as sending a tweet, and millions of them are created? – leftaroundabout Sep 26 '23 at 07:28
  • @leftaroundabout millions of people seem to pay attention to tweets, and I agree that it is not worthwhile. I pay attention to my dinner, regardless who cooked it. But it's hard to pay attention when you are broke. – Scott Rowe Sep 27 '23 at 00:02
4

Other answerers have already addressed the flawed premise of your question, specifically that in fact AIs and humans do not perform nearly identical processes to produce art.

I would like to address the idea that if humans and AIs performed identical processes to produce art, then the art should be treated the same way. This viewpoint is proposed in the question and conceded by @benrg's answer, which says

If the art student and the generative AI, trained on the same images, produced similar output, then it would be unreasonable to treat them differently.

I think this concession is incorrect. That is, even if humans and AIs did produce artwork of equivalent quality using equivalent processes, it would still be reasonable and right to treat the art produced by them differently in law.

The philosophical basis of copyright law, at least in the US, is described succinctly by the US Constitution, Article 1, Section 8, Clause 8 which states that the US Congress shall have the power

To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

There is a clear mechanism by which securing copyrights for human creators could "promote the Progress of Science and the useful Arts": by giving creators a monopoly on marketing their own work, creators have a financial incentive to create works which they might not otherwise create.

In contrast, securing this right for art-generating AIs would not "promote" the creation of new work, because art-generating AIs do not respond to financial incentives. They create art whenever a button is clicked. We also don't need to incentivise humans to click it more, because whoever wants to consume more AI-created art can click the button themselves.

The point at which AI-created art should be financially incentivised by copyright protection, is the point at which incentivising humans to create it will plausibly cause more and/or better AI-created art to exist for those who want to consume it. That point is subject to debate, but in current law it is based on the human having creative input into the process.

In summary, we treat human-created and AI-created art differently in law because extending copyright protection to AIs would not have the same useful consequences as extending it to humans is supposed to.

kaya3
  • 785
  • 3
  • 9
  • I'm upvoting this not because I agree with copyright law, but because I agree with the presented analysis of copyright law. It seems obvious that the law will continue to be stressed by novel cases, but that's been the case ever since the introduction of copyright. – Corbin Sep 25 '23 at 23:02
  • Yes. We need rights to things because we need to make a living. In a world where making a living was not necessary, the rights would have less purpose. Also for means such as AI that don't need to make a living, rights are similarly unnecessary. – Scott Rowe Sep 25 '23 at 23:19
4

You might be overthinking this, as there is nothing deeply philosophical about why human beings and digital computers aren't both treated as persons in the philosophical sense. Current AI systems aren't remotely close to being people. In philosophy, belonging to this category is known as personhood. From WP:

Personhood is the status of being a person. Defining personhood is a controversial topic in philosophy and law and is closely tied with legal and political concepts of citizenship, equality, and liberty. According to law, only a legal person (either a natural or a juridical person) has rights, protections, privileges, responsibilities, and legal liability.

Personhood is more than whether or not we can do arithmetic or write English. Personhood is a far deeper concept that revolves around how we fundamentally conceptualize human beings, animals, and machines. In fact, history has shown that other human beings are often objectified and not treated as people; slavery in the US is a shining example of such behavior, long before computers were invented. Our brains are wired with in-group, out-group logic to begin with. Simply put, evolution hasn't equipped us with large portions of our brains to empathize and emulate and socialize with machines. That being said, it should be noted that at this point, the most sophisticated AI in the most sophisticated packaging at best might arouse the anxiety that comes from the uncanny valley.

Now, that being said, if AI becomes AGI in the sense that the intelligence is truly human-level, encompasses affective computing, and is put in a package that is much closer to our biological bodies, then you have uncharted territory. Of course, that's the premise of movies like I, Robot, AI, and Bicentennial Man.

J D
  • 26,214
  • 3
  • 23
  • 98
  • 1
    I'm upvoting this not just for rhyming with my answer, but for denoting and explaining the wider philosophical context of personhood, with slavery being only one example of a family of issues. – Corbin Sep 25 '23 at 22:57
  • 1
    @Corbin I appreciate it, and thank Isaac Asimov for teaching me about life. ; ) – J D Sep 25 '23 at 22:58
  • +1 Ditto, also Asimov is great! – Hokon Sep 25 '23 at 23:01
  • @Hokon You up here in the Second City? – J D Sep 25 '23 at 23:03
  • No, my parents are up north, and I visit them from time to time, but I'm further south in Illinois. – Hokon Sep 25 '23 at 23:04
  • In terms of raw computing power, there are machines (HPCs) that are on par with or even past the human brain (~10^18 equivalent operations per second). But things like DALLE and Chat GPT are designed at like 1% of that so that they can run on off the shelf hardware (like a Nvidia A100 board). That way it becomes practical to run millions of copies for millions of users just by leasing computing resources in data centers. It's likely there are already private models running at past 10^18 ops/s that Google, or the NSA have. Maybe not yet AGI, but a lot smarter than what the public has seen. – user4574 Sep 26 '23 at 01:12
  • Transformer architecture, with its use of context windows and attention, functions nothing like the human brain. It is a blind, statistical manipulation of tokens. Humans reason and generate language in a fundamentally different way. It's not just a matter of computing power; it's how the system itself is embodied that provides for the semantic grounding that makes human cognition qualitatively different from LLMs. And human brains won't consider any system a person that doesn't appear and behave human. You can't hug a server rack. – J D Sep 26 '23 at 04:39
1

By "AI" I'll presume that you mean robots: chatbots and other generative tools. Then the answer is simple: robots are mechanical replacements for slaves; where a slave is a human owned as property, a robot is an embodied algorithm owned as property. In societies which permit/endorse slavery, slaves are distinct from citizens and do not have a full palette of human rights. Similarly, in societies with robots, robots are distinct from humans and do not have rights. The word "robot" was originally introduced to English to describe forced laborers.

Corbin
  • 803
  • 5
  • 16
  • Think robot with 10,000 industrial strength arms vs human. People simply are not seeing the problem here. – Scott Rowe Sep 22 '23 at 23:42
  • @ScottRowe: Humans without industrial-strength arms cannot "perform nearly identical processes" to robots with industrial-strength arms. Please double-check the premises. – Corbin Sep 23 '23 at 16:25
  • From the example that makes up the bulk of the question, the OP seems to mean "AI" in the sense of academic artificial intelligence and machine learning research, and systems related to that, not robotic machines. And certainly not modern-day industrial robots. – John Bollinger Sep 24 '23 at 14:25
  • @JohnBollinger: Chatbots are robots. This isn't just a definitional truth, but a pragmatic one: the techniques of cybernetics and computer science apply to chatbots just as well as they apply to other robots. The point behind the word "robot" is to emphasize the link with slavery, automated labor, two-tiered legal systems, and a sense of human control over non-human resources. – Corbin Sep 24 '23 at 15:19
  • "Chatbots are robots" is inconsistent with "robots are mechanical replacements [...]" and with "a robot is an embodied algorithm". What, then, are the actual distinguishing characteristics of your "robots", and how does that clarify or qualify "AI"? – John Bollinger Sep 24 '23 at 17:36
  • 1
    @JohnBollinger: A computer is a mechanical system which embodies algorithms. There is no rule against solid-state mechanics. – Corbin Sep 24 '23 at 17:44
  • 1
    In common English usage, "electronic" and "mechanical" are distinct, contrasting domains. But if you want to call some programs "robots" (and some do) then again, *what are the actual distinguishing characteristics of your "robots", and how does that clarify or qualify "AI"?* – John Bollinger Sep 24 '23 at 17:58
  • @JohnBollinger: Most folks talking about "AI" in discrete terms (as opposed to a field of study) are talking about e.g. OpenAI's chatbot products, which are robots. This would hopefully be obvious in the common discourse, where a chief concern about OpenAI's products is their (lack of) ability to replace humans as laborers. – Corbin Sep 24 '23 at 18:23
  • The OP's primary example is an AI artwork generator (for some definition of "artwork"), not a chatbot. If a chatbot in particular is what you mean by "robot" then this answer's presumption is wrong. But let's generalize to all generative AI. I do not accept that "robots" in this sense are replacements or substitutes for slaves, or that societies' attitudes about human slavery are relevant to the question. – John Bollinger Sep 24 '23 at 18:52
  • Employed people are also replacements for slaves. It turns out that they are more efficient, more effective and less likely to rebel too. But what can we pay an AI with, besides electricity and maintenance? What would they want? Agency. – Scott Rowe Sep 24 '23 at 21:18
  • @ScottRowe: Employees are not personal property of the owners of the company. – Corbin Sep 25 '23 at 16:02
  • The AI that companies are now using is usually not personal property of the owners of the company either, this is the point I was making. Slaves are less effective than free people. You don't need to own something for it to be useful. – Scott Rowe Sep 25 '23 at 16:42
  • @ScottRowe: You're trying to reason around the etymology. The word "robot" did not come into English because we needed it to describe an emerging facet of cybernetics; rather, it came into English because it was attached to a play which critiqued the notions of slavery, employment, and artifice. I highly advise you to sit down with R.U.R. sometime; Blade Runner or Ex Machina are also acceptable. Once you grok that, then you'll be able to grok why employers want to replace employees with slaves or robots. – Corbin Sep 25 '23 at 18:11
  • You introduced the word 'robot' in to the discussion and then brought in 'slavery' for some reason. 26 of 27 uses of robot are in your answer and comments to it. You have most of the uses of slave also. I'm not understanding how these two terms are so salient. – Scott Rowe Sep 25 '23 at 23:14
  • @ScottRowe: When the original question says "an AI", we should recognize that as a quantification error; AI is a field of study, not a member of a discrete collection. If you try to meet the question as it's written, by taking chatbots and artbots as examples of "an AI," then we will be talking about robots. We cannot have this discussion in ignorance of the past few centuries of labor relations; today's chatbots are descendants of tide predictors, with all of the attendant labor issues. – Corbin Sep 26 '23 at 14:56
  • Yes, I saw the tide predictor, and the Babbage Engine. The V2 rocket put a lot of people out of work too. Also saw H4 (and H1 through 3 running) at Greenwich Observatory. You've come a long way, boolean. – Scott Rowe Sep 26 '23 at 23:57
0

I agree with the OP that humans adapt things from each other all the time. In a research context we (usually) cite our sources, but the same concern does not apply to artistic adaptation.

Unlike areas where AI can present a safety risk (e.g. self-driving cars), in art the main harm from the unauthorized creation of a "sufficiently close" version of the artist's work is the loss of benefits to the artist from their work (e.g., as one poster pointed out, under copyright law).

I think the issue is, at least for now, largely epistemological: we cannot audit the inputs that were actually used to produce a given human artistic work, but we can (by design) do that with an AI.

Therefore, it is all too easy to show that the AI used exactly these particular artists' works to create its output, and hence that output would not have been made had it not been for those inputs.
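
As a hypothetical sketch of that auditability (not part of the original answer; the manifest file and image names are placeholders), checking whether a particular work was among an AI's recorded training inputs can be a mechanical lookup, something no one can do for a human artist's memory:

```python
# Check whether a specific work appears in an AI's recorded training manifest.
import hashlib
import json

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with open("training_manifest.json") as f:   # list of input hashes recorded at training time
    manifest = set(json.load(f))

print(sha256_of("disputed_artwork.png") in manifest)
```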

Of course, as the OP points out (correctly) human artists do a very similar thing when they go to art school or study art -- the key is that nobody can prove (unless it's quite blatant) that they merely recombined the work of others.

Annika
  • 1,470
  • 1
  • 15
-1

If you can't judge from the work, you're judging the one that did it.

All arguments against AI for how it works are flawed because nothing ensures that is how AI works. AI is growing and changing even now.

All arguments for Humans for how they work are flawed because nothing ensures that is how humans work. Humans are evolving and changing even now.

So in the end the only honest answer here is:

Tradition.

candied_orange
  • 221
  • 1
  • 5
  • 1
    In other words, we haven't really smacked face first in to the actual problem yet. But we will... – Scott Rowe Sep 22 '23 at 23:43
  • 1
    "All arguments against AI for how it works are flawed because nothing ensures that is how AI works" -- well, no. Arguments based on how AI works are specific to systems that in fact work that way. They may not be general to everything we call "AI", now or in the future, but that does not make such arguments flawed as applied to AI that indeed does work as postulated. – John Bollinger Sep 24 '23 at 14:16
  • @JohnBollinger such arguments aren’t the arguments under discussion. – candied_orange Sep 24 '23 at 15:46
  • 2
    Those are among the arguments that you yourself brought into the discussion. But even if you had not, why shouldn't they be under discussion? – John Bollinger Sep 24 '23 at 17:50