Episode 173: Should Indie Writers Stop Writing Because Of Generative AI?


In this week’s episode, we take a look at whether or not writers should stop writing because of the threat of generative AI programs.

This week’s coupon is for the audiobook of CLOAK OF ASHES as excellently narrated by Hollis McCarthy. You can get the audiobook of CLOAK OF ASHES for 75% off at my Payhip store with this coupon code:

OCTASHES

The coupon code is valid through November 18th, 2023, so if you find yourself wanting to get caught up before CLOAK OF EMBERS comes out soon, why not start with an audiobook?

TRANSCRIPT

00:00:00 Introduction and Writing Updates

Hello, everyone. Welcome to Episode 173 of The Pulp Writer Show. My name is Jonathan Moeller. Today is October the 27th, 2023, and today we’re going to talk about whether or not you should stop writing fiction because of the threat of generative AI. Before we get into that, we will have a Coupon of the Week and an update on my current writing projects. First up, Coupon of the Week. This week’s coupon is for the audiobook of Cloak of Ashes, as excellently narrated by Hollis McCarthy. You can get the audiobook of Cloak of Ashes for 75% off at my Payhip store with this coupon code: OCTASHES, and again, that is OCTASHES, and you can also see that in the show notes. This coupon code is valid through November 18th, 2023. So if you find yourself wanting to get caught up before Cloak of Embers comes out soon, why not start with an audiobook?

That does seem thematically appropriate to go from Cloak of Ashes to Cloak of Embers, even though Cloak of Ashes will be book three of the series and Cloak of Embers will be book ten. As you might guess, my current writing project is still Cloak of Embers and as of this recording I’m about 68,000 words into it, though I really want to get to 70,000 by the time I am done working on it for the day. I’ve had two different 10,000 word days working on this book, which is a very good thing because it’s going to be a long one. As I mentioned before, I’m 68,000 words into it and I’m not even at the halfway point of my outline yet and some of the previous chapters are so long, I’m going to have to split them up into smaller chapters. So I am confident in saying that while I don’t know exactly how long Cloak of Embers is going to be, I am entirely certain that it’s going to be the longest book I will write in 2023.

For audiobooks, right now Brad Wills is recording Dragonskull: Wrath of the Warlock, and we are hoping to have that out by December or so. As for what I want to write once Cloak of Embers is done, I have not decided. I knew Cloak of Embers was going to be a long book. I didn’t realize how long, so whatever I write next, it depends on how long it takes me to finish Cloak of Embers and how things look at that point in time, but I’m still hoping to have Cloak of Embers out in November, though it does look like there is a good possibility that the book might slip to December.

00:02:26 Main Topic: Should You Stop Creative Work Because of Generative AI?

So on to our main topic this week: should you stop writing or pursuing creative efforts because of generative AI? Without major spoilers, the chief villain of the new Mission Impossible movie from back in May was an evil artificial intelligence, which makes it timely to do another podcast episode about generative AI. I recently saw a long, somewhat maundering social media post arguing that since AI would soon advance to the point that it could spit out a fully completed novel at the press of a button, there was no point in attempting to write any longer. The post’s author claimed it was a black pilled post, though in my experience, the term black pilled is usually Internet shorthand for “I will use my fears as an excuse to avoid action.” I also saw a New York Times article about a father worried about encouraging his son’s creative interests because he feared that AI would soon replace all of that. So that leads to the question: should you stop writing fiction, or engaging in any creative pursuit at all, because of AI?

Short answer: no. Get a hold of yourself. Maybe splash some cold water on your face. The longer, more elaborate answer: One, using fear of AI as a reason not to do something is excuse making. In fact, this is a formal logical fallacy known as the nirvana fallacy, which states that if conditions are not perfect or the outcome of an action is not perfect, then it is not worth doing. The usually cited example of this is that people wearing seatbelts can die in traffic accidents, therefore seatbelts are not worth wearing. The counterpoint is that it has been well proven that seatbelts reduce traffic fatalities and injuries, and an improved but imperfect outcome is better than no improvement at all.

Writers in general seem to be strongly prone to the nirvana fallacy. You will see many, many, many excuses for why writers do not want to write. Some of those excuses are, of course, perfectly valid, such as an illness, a life crisis like a death in the family, or a car accident, or something of that nature. But quite a few of those excuses boil down to the nirvana fallacy. Conditions are not perfect or the outcome will not be perfect, so therefore it is better not to start at all. Fear of AI is really the latest excuse to slot into the nirvana fallacy.

Two: AI is worse than you think it is. It is regrettable that the various image generators and large language models got saddled with the term AI, because there’s nothing terribly intelligent about them. They’re basically fancy autocomplete, whether for pictures or for words. Granted, further refinements in the technology have made it into very super-duper fancy autocomplete, but there’s still nothing particularly intelligent about it. AI is also a lot harder to use effectively than many people think. If you want to get a decent result out of an AI, you need to spend a lot of work refining the prompts. People can make some beautiful images in Midjourney, but for every beautiful image that comes out of Midjourney, there are like 40 billion terrible ones. Every really good image you see that was generated with an AI probably took something like a 400 word prompt and several hundred iterations. Getting acceptable fiction out of a chatbot is so much work that it’s easier simply to write it yourself. Ironically, if you want fiction out of a chatbot, ask it about something factual.

Also, whenever people try to rely on AI to do something important, bad things seem to happen. A nonprofit website devoted to treating eating disorders got rid of its volunteer counselors and replaced them with a chatbot, only for the chatbot to start dispensing bad diet advice. A couple of months ago, some lawyers in New York got in big trouble when they used ChatGPT for legal research, only for it to invent cases that had never happened. To be fair, the lawyers in question apparently failed to double check anything, and ChatGPT repeatedly said in its answers that it is a large language model and not a lawyer. As an amusing aside, the morning I wrote this paragraph, I got a text from a teacher I know complaining about how much he hates ChatGPT. It’s incredibly obvious when his students use ChatGPT to do their homework because the answers are so similar. As it turns out, ChatGPT isn’t even good at cheating. The point is that whenever there are situations that involve personal or criminal liability, using AI is a very bad idea. Obviously, writing a novel is a much lower stakes endeavor, but that leads directly to our next point.

Number three: you can’t see into the future. Just because everyone says AI is the next big thing doesn’t mean that it is. The problem with a lot of tech CEOs is that they all want to be Steve Jobs. Steve Jobs was unquestionably a major figure in tech history, but he has been mythologized. His keynote presentations were masterpieces of showmanship, which means that people remember his career that way: Steve Jobs strode onto the stage, dramatically unveiled the transformative next big thing (the iPod, the iPad, the iPhone), changed the world, and made billions of dollars in front of an applauding crowd. To be fair, I typed this paragraph on a MacBook Air. But that overlooks the actual history, which is that Jobs failed at a whole lot of stuff. He got booted from Apple in the 1980s. His subsequent company, NeXT Computer, didn’t do all that great. And when Jobs returned to Apple in the late ‘90s, the company was in such dire straits that it needed a deal from Microsoft to stay afloat until the iMac and the eMac came along. The triumphant keynote phase of his career was in many ways his second act as an older, wiser man after a lot of setbacks, and a lot of obsessive work went into all the Apple products mentioned above. The iPad and the iPhone in particular went through prototype after prototype and were the work of large and skilled teams of engineers.

The trouble with remembering the mythology instead of the actual history behind Steve Jobs is that people try to copy the mythology without doing the mountains of work that inspired the myth. These tech CEOs all want their products to be the next big thing, but the problem is that the product, one, often isn’t very good (and is less of a product and more of an excuse to extract money from the customer), and two, isn’t actually all that useful. Regardless of what one might think about an iPhone or an iPad, it cannot be denied that they are useful devices. I refused to use Apple devices at all in the 2000s because they were so expensive (a criticism that, in my opinion, remains valid), but in the mid 2010s, a combination of job changes (since I’d suddenly become responsible for a lot of Mac computers after a layoff) and the sheer usefulness of many Apple devices meant that I started using them. I still have an iPod Touch I use when I go running or when I do outdoor work, and since Apple doesn’t manufacture iPod Touches anymore, I will be sad when it finally dies.

By contrast, a lot of new products aren’t that good or that useful. The CEOs have forgotten that to extract money from the customer, you actually have to provide value in exchange. An iPad is expensive, but it does provide value. NFTs are a good example of this phenomenon of failing to add value for the customer. For a while, all the big brains in social media were convinced that NFTs were going to be the next big thing. The idea was that NFTs would create digital collectibles and artificial scarcity. People talked endlessly about minting their NFTs and how this was going to revolutionize online commerce. But I think it is safe to say that outside of a few niches, NFTs have been soundly rejected by the general public. They don’t add value. If you buy, for example, a collectible Boba Fett figure, it is a physical object that you own, and if anyone takes it without your permission, you can charge them with theft. By contrast, if you buy an NFT for a JPEG of Boba Fett artwork, you have an entry on a blockchain, and there’s nothing to stop people from copying the JPEG. What’s the point of the NFT, then? Even if you don’t keep the Boba Fett figure in its packaging and instead give it to a child as a toy, it still provides value in the form of entertaining the kid.

Cryptocurrency was another next big thing for a while. Some people were sure that crypto was going to end central banks and government issued fiat currency. Of course, while there are many legitimate criticisms to be made of central banks and fiat currency, it turns out they do a good job of slowing down a lot of the scams that infested the crypto space. The late, great science fiction author Jerry Pournelle used to say that unregulated capitalism inevitably led to the sale of human flesh in the market, and crypto seems to have proven that unregulated securities trading leads inevitably to FTX and crypto marketplace collapses.

The Metaverse is a much more expensive version of this. Mark Zuckerberg, worried about the future of Facebook, decided to pivot to his virtual reality Metaverse. Likely, Mr. Zuckerberg thought that the rise in remote work during the peak of the pandemic would permanently change social dynamics, and that Facebook, if it acted right away, could be to virtual reality what Microsoft was to the personal computer and Google was to search engines. Facebook changed its name to Meta and burned a lot of money trying to develop the Metaverse. However, this plan had two major flaws. One, while some people preferred the new social arrangements during COVID, a vastly larger majority hated them and wanted things to go back to normal as soon as possible, and two, Meta spent something like $15 billion to build the Metaverse, but ended up with the worst version of Second Life, one that required very expensive virtual reality goggles. Meta ended up wiping out something like two thirds of its company value. So while right now generative AI might be the next big thing, as the examples above show, this might not last.

Number four: public derision. Generative AI could also be following a similar track as NFTs and cryptocurrencies: an initial surge of enthusiasm followed by widespread disdain and mockery and a retreat to smaller niches. For a while, several big gaming companies were very excited about NFTs, and a smaller number were interested in cryptocurrency. NFTs would roll neatly into the growth of microtransactions, which the gaming industry really loves: you could buy a new skin or avatar for your character, and you’d also get an NFT saying that you had #359 out of 5,000, that kind of thing. Digital collectibles, as mentioned above. Except the backlash was immense, and people widely mocked every effort by game companies to insert NFTs into their products. It smacked too much of previous money-extraction efforts like microtransactions and loot boxes. Cryptocurrency likewise experienced an increasing level of public disdain. See how crypto bros have been mocked after the collapse of FTX and other large crypto companies.

Generative AI is very popular in some quarters but is beginning to experience a growing level of public disdain as well. One recent example was fantasy author Mark Lawrence’s self-publishing contest. An AI-designed cover won the competition, and the outrage was high enough that Mr. Lawrence cancelled the cover competition for future years. To be fair, part of the problem was that the artist lied about using AI on his application form. The Marvel show Secret Invasion used a bunch of AI generated images for its title sequence, and there was a backlash against that. Various professional organizations have come out against generative AI, and apparently one of the key points in the Hollywood writers’ strike and the ongoing actors’ strike is restrictions on AI, though one of the sticking points here is less about AI and more about using AI to enable irrational greed. It seems like the studios want to be able to use an individual actor’s likeness in AI generation forever without payment. It’s too soon to say how it will turn out, but it appears that a significant portion of public opinion is on the side of the actors on this.

It probably helps that the CEOs of major media companies invariably manage to come across as cartoon villains. David Zaslav of Warner Discovery seems like he’s there just to loot the company as efficiently as possible, and Bob Iger of Disney is currently dealing with all the very expensive mistakes he made during his previous tenure as CEO. So if these guys are excited about AI, why should anyone else think it’s a good idea? It’s possible that the public derision against AI might push it into niche uses, which would be bad news for the companies that spent billions on it. I’ve found that people in general are not that upset about using AI to get out of unpleasant tasks like writing cover letters or answering emails, but if they are consuming media for entertainment, they get very annoyed if AI was used. It’s gotten to the point where “it seems like an AI created it” has become an insult in negative reviews of various programs.

Number five: synthesis. Despite all that I just said about cryptocurrency and NFTs, generative AI is objectively more useful than NFTs and less likely to lose all of your money than crypto, though it might carry the same low-level risk of being sued if you use Midjourney for commercial purposes. I mean, most kids who are cheating on their homework, if they had thought about it a little more, rewritten ChatGPT’s response just a little bit, maybe thrown in a couple of typos, probably would have gotten away with it. To use a less unethical example, imagine you’re applying for jobs and need to crank out thirty different customized cover letters. You can spend all day sweating over a handcrafted letter that some HR drone will glance at for a second before throwing it away, or you can use ChatGPT to generate them. There are lots of tedious documents which no one enjoys writing but which are necessary parts of daily life, and something like ChatGPT is ideal for them, or for that matter, specialized chatbots, ones specifically designed to write marketing copy and nothing else.

AI audio will probably end up at a point where it’s simply another feature integrated into e-readers. Hit play and an AI voice will read in an accent of your choice, while the human-narrated version will be a premium product. I think that generative AI will probably settle into a halfway point between the “AI will transform everything” hype and the “AI will destroy civilization” doomerism. That’s how these things usually go. A new idea comes along: thesis. A backlash to it arrives: antithesis. After some struggle, they settle into a halfway point: synthesis. Then it becomes just another tool. Adobe Photoshop offers some evidence for this position. Adobe has been integrating its Firefly generative AI into Photoshop with the generative fill tool. If you know anything about Adobe, you know that they are as corporate and litigious as it gets. The company isn’t exactly into taking big, bold swings with its products; they’ve been incrementally updating Photoshop and the other Creative Suite products forever. So if Adobe feels safe integrating generative AI into its products, it’s probably not going anywhere for a while. But here’s the important point. On social media, you see a lot of impressive images generated with generative fill in Photoshop, but if you try it yourself, 99% of what it generates is not very good. Refinement, iteration, and testing are vital. If AI doesn’t go away, I think that’s where it’s going: providing the raw materials for further refinement and improvement.

Six: conclusion. As you might guess from the tone of my podcast episodes on the subject, I don’t like generative AI very much, and I don’t think it adds very much of value, though this might just be my overall grumpiness. If overreaching legislation came along that crippled AI research, I don’t personally think much of value would be lost. No one can see the future, as the many examples above demonstrate. But overall, I think generative AI is going to be just another tool, and one that will require practice to use effectively; in fact, it will probably require more practice to use effectively than people think. Stopping writing, or preventing a child from engaging in creative pursuits, because of AI is a bit like giving up carpentry because someone invented the electric saw. Besides, think about how many people you see every day who obviously don’t think things through at all. Encouraging a child in creative pursuits will definitely serve him or her well later in life, regardless of the actual career.

So that’s it for this week. Thanks for listening to The Pulp Writer Show. I hope you found the show useful. A reminder that you can listen to all the back episodes on https://thepulpwritershow.com. If you enjoyed the podcast, please leave a review on your podcasting platform of choice. Stay safe and stay healthy and see you all next week.

Written by Jonathan Moeller