## Colophon

tags::
url:: https://archive.is/cMQbj
date:: [[2025-02-03]]
%%
title:: Why A.I. Isn’t Going to Make Art | The New Yorker
type:: [[clipped-note]]
author:: [[@archive.is]]
%%

## Notes

> Is there anything about art that makes us think it can’t be created by pushing a button, as in Dahl’s imagination? Right now, the fiction generated by large language models like ChatGPT is terrible, but one can imagine that such programs might improve in the future. How good could they get? Could they get better than humans at writing fiction—or making paintings or movies—in the same way that calculators are better at addition and subtraction? — [view in context](https://hyp.is/S99J2OIbEe-qDOf9_0y-4Q/archive.is/cMQbj)

Yes, at the very least, they will be indistinguishable.

> Art is notoriously hard to define, and so are the differences between good art and bad art — [view in context](https://hyp.is/UGfmNuIbEe-GtteGA1aogg/archive.is/cMQbj)

> Some commentators imagine that image generators will affect visual culture as much as the advent of photography once did. Although this might seem superficially plausible, the idea that photography is similar to generative A.I. deserves closer examination. — [view in context](https://hyp.is/Gv_93uIcEe-Lah8Q8jDklQ/archive.is/cMQbj)

> Why A.I. Isn’t Going to Make Art — [view in context](https://hyp.is/OvlrjuIcEe-ialdTj5U8RA/archive.is/cMQbj)

date:: [[2025-02-03]]

> When photography was first developed, I suspect it didn’t seem like an artistic medium because it wasn’t apparent that there were a lot of choices to be made; you just set up the camera and start the exposure. But over time people realized that there were a vast number of things you could do with cameras, and the artistry lies in the many choices that a photographer makes. It might not always be easy to articulate what the choices are, but when you compare an amateur’s photos to a professional’s, you can see the difference.
So then the question becomes: Is there a similar opportunity to make a vast number of choices using a text-to-image generator? I think the answer is no. — [view in context](https://hyp.is/UTDw_OIcEe-Hzmvd5o8G1g/archive.is/cMQbj)

> But let me offer a generalization: art is something that results from making a lot of choices. This might be easiest to explain if we use fiction writing as an example. When you are writing fiction, you are—consciously or unconsciously—making a choice about almost every word you type; to oversimplify, we can imagine that a ten-thousand-word short story requires something on the order of ten thousand choices. When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices. — [view in context](https://hyp.is/WXWoKuIcEe-w5rdB0sUtcQ/archive.is/cMQbj)

> We can imagine a text-to-image generator that, over the course of many sessions, lets you enter tens of thousands of words into its text box to enable extremely fine-grained control over the image you’re producing; this would be something analogous to Photoshop with a purely textual interface. I’d say that a person could use such a program and still deserve to be called an artist. — [view in context](https://hyp.is/cEoTGuIhEe-xmZslF8FnSQ/archive.is/cMQbj)

> The selling point of generative A.I. is that these programs generate vastly more than you put into them, and that is precisely what prevents them from being effective tools for artists. — [view in context](https://hyp.is/rlC-cOIhEe-qQ2tFdtwO0g/archive.is/cMQbj)

✅

> The companies promoting generative-A.I. programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration—but these things cannot be easily separated. — [view in context](https://hyp.is/wKWYNOIhEe-HUPd0drAh1g/archive.is/cMQbj)

> I’m not saying that art has to involve tedium. What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception. — [view in context](https://hyp.is/yfr32uIhEe-5d5tKUsE5MQ/archive.is/cMQbj)

> the interrelationship between the large scale and the small scale is where the artistry lies. — [view in context](https://hyp.is/0kUcGOIhEe-SC3_Fc5MkRQ/archive.is/cMQbj)

> But the creators of traditional novels, paintings, and films are drawn to those art forms because they see the unique expressive potential that each medium affords. It is their eagerness to take full advantage of those potentialities that makes their work satisfying, whether as entertainment or as art. — [view in context](https://hyp.is/IK01IOIiEe-SDLuRwDl1aQ/archive.is/cMQbj)

> Of course, most pieces of writing, whether articles or reports or e-mails, do not come with the expectation that they embody thousands of choices. In such cases, is there any harm in automating the task? Let me offer another generalization: any writing that deserves your attention as a reader is the result of effort expended by the person who wrote it. Effort during the writing process doesn’t guarantee the end product is worth reading, but worthwhile work cannot be made without it. The type of attention you pay when reading a personal e-mail is different from the type you pay when reading a business report, but in both cases it is only warranted when the writer put some thought into it. — [view in context](https://hyp.is/9GdwVuIiEe-UBv8nUYgiUA/archive.is/cMQbj)

Agree.

> The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying. — [view in context](https://hyp.is/oapWeuIjEe-joQ-pPhl9Cw/archive.is/cMQbj)

> Some have claimed that large language models are not laundering the texts they’re trained on but, rather, learning from them, in the same way that human writers learn from the books they’ve read. But a large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate. Your phone’s auto-complete may offer good suggestions or bad ones, but in neither case is it trying to say anything to you or the person you’re texting — [view in context](https://hyp.is/7OqiyuIjEe-N6mfq6XvUag/archive.is/cMQbj)

> It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something. — [view in context](https://hyp.is/TWDr5uIkEe-hEgsYiWqzeg/archive.is/cMQbj)

💯

> Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling. We’re tempted to project those experiences onto a large language model when it emits coherent sentences, but to do so is to fall prey to mimicry; — [view in context](https://hyp.is/c65uwuIkEe-5Sy8vJz8x_Q/archive.is/cMQbj)

> As the linguist Emily M. Bender has noted, teachers don’t ask students to write essays because the world needs more student essays. The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. — [view in context](https://hyp.is/oqdDtuIkEe-5TXeOphgulw/archive.is/cMQbj)

> Not all writing needs to be creative, or heartfelt, or even particularly good; sometimes it simply needs to exist. Such writing might support other goals, such as attracting views for advertising or satisfying bureaucratic requirements. When people are required to produce such text, we can hardly blame them for using whatever tools are available to accelerate the process. But is the world better off with more documents that have had minimal effort expended on them? It would be unrealistic to claim that if we refuse to use large language models, then the requirements to create low-quality text will disappear. However, I think it is inevitable that the more we use large language models to fulfill those requirements, the greater those requirements will eventually become. We are entering an era where someone might use a large language model to generate a document out of a bulleted list, and send it to a person who will use a large language model to condense that document into a bulleted list. Can anyone seriously argue that this is an improvement? — [view in context](https://hyp.is/8AFpeuIkEe-ntk_4GZEl8Q/archive.is/cMQbj)

Highly plausible.

> computer scientist François Chollet has proposed the following distinction: skill is how well you perform at a task, while intelligence is how efficiently you gain new skills. — [view in context](https://hyp.is/HLcTrOIlEe-r4ZeWBjLGGA/archive.is/cMQbj)

> By Chollet’s definition, programs like AlphaZero are highly skilled, but they aren’t particularly intelligent, because they aren’t efficient at gaining new skills — [view in context](https://hyp.is/T_gK8OIlEe-ucZukAvb-Jw/archive.is/cMQbj)

> More than our ability to solve algebraic equations, our ability to cope with unfamiliar situations is a fundamental part of why we consider humans intelligent. Computers will not be able to replace humans until they acquire that type of competence, and that is still a long way off; for the time being, we’re just looking for jobs that can be done with turbocharged auto-complete. — [view in context](https://hyp.is/WCC9yuImEe-1MSsxM0rPNA/archive.is/cMQbj)

> It reduces the amount of intention in the world. — [view in context](https://hyp.is/cvCv6OImEe-9EZfHkUzF2Q/archive.is/cMQbj)

Ok, this is good.

> Some individuals have defended large language models by saying that most of what human beings say or write isn’t particularly original. That is true, but it’s also irrelevant. When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty. — [view in context](https://hyp.is/h9va9OImEe-7SWNmPyojCw/archive.is/cMQbj)

> Something similar holds true for art. Whether you are creating a novel or a painting or a film, you are engaged in an act of communication between you and your audience.
What you create doesn’t have to be utterly unlike every prior piece of art in human history to be valuable — [view in context](https://hyp.is/lHQ8DOImEe-qakPOXqG1jg/archive.is/cMQbj)

> We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world. That is something that an auto-complete algorithm can never do, and don’t let anyone tell you otherwise. — [view in context](https://hyp.is/mXLrpOImEe-eA0ecjoa-xA/archive.is/cMQbj)