Goodbye Dickinson, hello AI-generated literature
The same celebrated AI tools that save time for one group of people will make things harder and cause plenty of headaches for others. Among other things, the tools will force us to rethink how literary works are published. We are facing an avalanche of books written by machines, and editors have already been forced to pause submissions of new works while they try to find a way of managing this going forward.
Currently, many people are experimenting with using ChatGPT and similar tools to write everything from novels and science fiction short stories to children's books, complete with illustrations from other generative online tools. For anyone who harbors dreams of becoming an author, it is possible to produce a finished e-book in less than a day and publish it on Amazon.
You can literally become a published author overnight.
There is a term for the feeling that something is not quite right when human appearance is replicated in humanoid objects. The uncanny valley describes the dip in trust – and the sense of unease – that arises when what you see in front of you lacks something you can't quite put your finger on. According to the hypothesis, this 'valley' is most pronounced in the space between the "almost human" and the "perfectly human".
We now need a term for the literature that will flood the market: works that imitate literary writing but never quite pull it off. For experienced editors, these stories produce a distinct feeling of discomfort.
Clarkesworld is a prestigious monthly magazine in the USA that for almost 20 years has received and published short stories in the science fiction and fantasy genres. The publication has always responded quickly, often within 48 hours, and published writers are paid for their work. On February 20, 2023, it had to close submissions after receiving a record 50 new pitches before lunch, most of them quite obviously produced by computers.
Despite his frustration, editor and publisher Neil Clarke takes a sober view of AI-generated literature. Short stories will not disappear as a phenomenon, he writes on his blog; that is not the risk to worry about. His concern is rather that new and international writers will find it significantly harder to reach an audience when higher barriers must be erected to ward off computer-generated texts. Short fiction needs an influx of new voices, he argues, and that flow will now be slowed.
The shortcut to authorship
To become a published author, however, you no longer need to go through a magazine or publisher. You don't even have to pay to publish your book. With Amazon's Kindle Direct Publishing service, anyone can create an account, publish e-books, and sell them through the site. If you want to offer a paperback edition, it is printed on demand when someone orders it.
This is something Brett Schickler of New York realized in January. Using ChatGPT, he created a 30-page children's book about Sammy the squirrel, who learns to save money after finding a gold coin and becomes the wealthiest squirrel in the forest. Even the pictures in The Wise Little Squirrel come from generative AI.
At the time of writing, the book can no longer be purchased for your Kindle. Perhaps Amazon felt compelled to review the publication given the attention Schickler received, but we just don't know yet.
Another person who took this shortcut to authorship is Frank White, who recently published the 119-page novella Galactic Pimp, Vol. 1. In a 15-minute video, he walks through the whole process and encourages others to generate their own books.
His account is undeniably instructive. After he describes the fictional universe in which the book is set, ChatGPT will not generate the entire story for him in one go. But when he asks, chapter by chapter, for scenes in which certain things happen, the tool writes them out for him. White also describes the kinds of changes he made to the text himself. He estimates that ChatGPT supplied 17,000 words and that he wrote 3,000 himself.
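To make the workflow concrete, here is a minimal sketch of what such chapter-by-chapter prompting might look like using the OpenAI Python SDK. The premise, the chapter outline, and the model name are placeholders of my own, not the prompts or settings White actually used.

```python
# A minimal sketch of chapter-by-chapter story generation with the OpenAI Python SDK.
# The premise, chapter outline, and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

premise = "A space-opera universe where rival guilds trade in exotic cargo."
chapters = [
    "Chapter 1: The protagonist loses everything in a rigged card game.",
    "Chapter 2: A mysterious stranger offers a way off the station.",
]

manuscript = []
for outline in chapters:
    # Ask for one chapter at a time rather than the whole novel in a single prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"You are a fiction writer. Setting: {premise}"},
            {"role": "user", "content": f"Write a 1,500-word scene for: {outline}"},
        ],
    )
    manuscript.append(response.choices[0].message.content)

print("\n\n".join(manuscript))
```

The point of the sketch is only to show why the approach scales so easily: a short outline per chapter is enough to have the model produce the bulk of the word count, leaving the human to edit and stitch the output together.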
In a post on Reddit, the "author" describes how surprised he was that some scenes in the book went further than he thought the AI tool was allowed to go:
Yes the Chat GPT is able to depict graphic violence and even write erotic passages. I seemed to have been able to bypass some of the filters through sheer repetition of the themes.
White also gives us a prediction:
Within five years I think probably 50% of books are gonna be written in the exact same technique that I've just described, how I wrote this one. Because I wrote it basically in 24 hours, and I created a fucking good book cover from a $10 subscription to a service that can basically generate a million of these a day if I sat down and did it. This is the future and this kind of technology is gonna come to cinema and filmmaking as well. Novels are just the first stage. […] I think you could create over 300 novels a year if you took it seriously and did this every day.
[Clip from White's video]
How many AI-written books currently exist on the market is impossible to determine, since there is no requirement for whoever publishes a book to declare it. A few hundred books, however, list ChatGPT as a co-author. And yes, of course there are books to buy on how to use ChatGPT, written by ChatGPT.
"Do as you will, we swear ourselves free of responsibility"
The question now is what the world's response to this will look like. If the number of books increases exponentially, will new filters be needed to help us sift through them all? Will there be a requirement to list ChatGPT as the author? What tools will be created to reveal computer-generated books, and what tools will be created to evade that detection?
What do we want? What do you want? What further problems will arise?
The trend in recent years is that digital tools are created with very little responsibility taken for the consequences of their use. Often they don't even have a stated purpose or intended use describing what problem they are trying to solve or what need they aim to satisfy – a circumstance which, of course, makes it easier to evade responsibility.
Effects can surface quickly and radically when it comes to technology with exponential growth as an inherent characteristic, and it is left to someone other than the tool's manufacturer to deal with those effects. The services may save time in the first order, but the second- and third-order effects often mean increased costs in both time and money for someone else. Not infrequently, those who are already struggling the most, and who are rarely given a voice, are hit the hardest.
And we haven't even talked about encoded biases, falsehoods expressed with unabashed self-confidence, or the energy consumption that machine learning requires and encourages.
Today's problems come from yesterday's solutions [1]. When we invent new solutions today, it is reasonable to think about, and account for, what problems we are creating for tomorrow to deal with.
Surely some form of responsibility should be demanded of actors who enable and reinforce new troubles. Surely someone must have written a book about this.
This phrase was made famous by The Fifth Discipline. ↩︎