Gatekeeping Against the Robots
“Half a dozen ‘authors’ of AI-generated material could flood the self-publishing markets in a matter of weeks...”
Welcome to our weekly column offering perspectives on lit mag publishing, with contributions from readers, writers and editors around the world.
When you buy eggs in the UK, you see a red lion stamped on the shell of each one. This is a food safety marking, and it certifies them as salmonella-free. You can use lion-stamped eggs to make fondant icing or eat them soft-boiled, and (provided you keep your own kitchen to a hygienic standard) you won’t get food poisoning.
Now, I’d like to have something like that in the world of writing. I’d like to see genuine authors awarded some sort of badge certifying their work as AI-free: created by humans without the aid of artificial content-generators.
I would argue that the need for this is both real and urgent. Within the last year or two, AI content-generators have made superficial but spectacular advances. Feed an automated app enough human-written content, and all you need to do is type in a few specifications; it will scan its database and assemble sentences and paragraphs on whatever topic you choose. It will spit out a “poem” in seconds or a “novel” in minutes. Half a dozen “authors” of AI-generated material could flood the self-publishing markets in a matter of weeks if they chose to do so.
And this is a bad thing. For several reasons.
1. AI-generated reading material pushes readers towards shallow reading and could discourage them altogether. I’m already coming across the occasional article on a topic I’d normally find interesting, only to find I struggle to get through three paragraphs. Everything feels eerily familiar and far too obvious, and yet at the same time the argument feels poorly marshalled, as though it had been strung together by word-association rather than logical progression. It’s hard to put my finger on what’s wrong, because often it’s not about what’s there; it’s about what’s not there. Relatability, consistency, originality of idea or expression—it’s all missing. I give up reading and (if the piece is open for reader comments) I tell them what I think of their content.
That’s me, an experienced reader who can to some extent see where the fault lies. I won’t decide I don’t enjoy reading just because I keep encountering poor material. But is this true of everyone? A less experienced reader might find it personally discouraging if they can’t get into an article or can’t enjoy a story. AI text-generators produce grammatically accurate but low-quality prose that does not read well aloud and does not satisfy the reader. Clunky dialogue, clichéd descriptions, stereotyped characters, hackneyed storylines, trite conclusions. That’s all it can do, because by its very nature it cannot produce anything fresh. It can only chew up larger and larger swathes of what is already written, pulp them together and spit out cardboard. Readers will experience a subtle sense of dissatisfaction and may decide that reading books isn’t for them.
AI text-generators produce grammatically accurate but low-quality prose that does not read well aloud and does not satisfy the reader.
2. Ethically, AI-generated content is plagiarism. The system will carefully duck below whatever technical boundaries it’s been given. It will avoid copying “5 consecutive words” or “3 significant words” within any single phrase. Well, it can get around that: it can open its thesaurus, swap out a word or two, then spit out the altered copy. That’s still plagiarism, morally speaking. A few well-respected authors whose work has been fed into AI content-banks are suing the AI companies for banking their work without permission. Even if writers have sold their permission, there’s still something ugly about the idea of someone claiming authorship that isn’t theirs—like someone getting a friend’s permission to publish the friend’s poem under their own name. It still means taking credit for producing something they didn’t produce, and it’s still wrong.
3. AI content presents yet another obstacle to authors—especially to self-publishing authors. It’s already hard to build sales in a market that lacks any baseline threshold of quality, where “bestseller” rankings may be massaged and reviews are often faked. Buyer confidence in self-published books will only fall lower still if nobody knows which books are human-written and which are regurgitated by AI.
AI content is a threat to traditional-style publishers and literary magazines too. There are a few—a very few—lit mags that welcome submissions including AI content, but nearly all lit mags exist to propagate creative work done by humans, and are actively looking for ways to defend their slush piles from AI incursion. Flooding their submission queues with AI material costs them time, at the very least—and then there’s the uncomfortable possibility that something they have published could turn out to be the work of a cheat. How can publishers be sure they’re not letting AI-generated content through?
A number of anthology and lit-mag editors say they have tried running stories through AI content checkers, which give a percentage estimate of how likely a piece is to be machine-generated. According to the editors I questioned, these tools don’t work reliably. One after another has admitted that instinct is still a publisher’s best guide to whether a piece is genuine or not.
“Obviously that isn’t foolproof, but usually that kind of writing won’t pass muster even if it is AI, because it’s just not that good,” says Jade Wildy, an author and short story contest judge.
“In my experience,” says Dean Shawker of Black Hare Press, “the AIs tend to be quite dry and repetitive, and have very dull plotlines (and rarely, if ever, subplots) and flat endings.”
“I trust my instinct first. I will always know that something feels off. If I get that instinct, I run it through a checker. I’m always right,” says Jessica Bell of Vine Leaves Press.
Unfortunately, one or two other respondents (not quoted here) have said that publishers cannot necessarily take the time to follow up on any piece with suspicious features—and will just send out a quick rejection. As things stand, this may become inevitable, but it isn’t going to be fair, and it will almost certainly hit new writers harder than established authors.
I don’t think any of my own writing—creative pieces or articles—has ever yet been mistaken for AI-generated material by real readers. But there’s always the possibility of failing the automated tests of those who use them. I tried running some of my 100-word “grammar horror” stories through a free content checker, and one story built around a tongue-twister was flagged as suspect. I suppose that’s not surprising: a content-checking app can only judge on its own level—the mechanics of words and sentences—so how would it know what I was really doing? But it still bothered me. I need some sort of certification that I am a real writer whose work is all my own. I want publishers to consider my stories on their own merit, without wondering if I might have filched some of my words from the macerated works of authors greater than I. If I write something that echoes the opening sentence of a Dickens novel, I want it to be understood as a meaningful literary allusion and not as a sign that I’ve accepted the robots as overlords.
This is why I’m arguing that we need a system in which books, poems and short stories written by genuine human beings get a badge that says they are, blemishes and all, the work of a member of our species. And it needs to become common knowledge, so that any random member of the public will know the difference. We need a sort of Writers’ Union. Something along the lines of SFWA, but more universal.
This means gatekeeping, and gatekeeping has become a pejorative term, associated with subtle discrimination against historically underrepresented groups. But writers from all backgrounds need to know that their work isn’t going to get drowned in a tidal wave of AI-generated slush. Publishers also need a way to sort the wheat from the chaff that isn’t costly or time-consuming. As time goes on, it’s not going to be enough for literary magazines and book publishers to blacklist the fakes as they spot them, because someone who’s secretly using AI can always switch identities and carry on as before. In my opinion, we need some system of authentication to protect all genuine writers.
We need a system in which books, poems and short stories written by genuine human beings get a badge that says they are, blemishes and all, the work of a member of our species.
It’s hard to envision precisely how this could be administered. You’d have to have something accessible enough for a new writer to aim for, and something that’s not easily taken away unfairly. It can’t be automated: I keep failing the Captcha on my own online column because the pictures aren’t distinct enough for me to know how far the bicycle extends or what counts as part of a pedestrian crossing. Basically, we can’t trust robots to know we’re human. So, on some level, it would have to be administered by humans. After all, plumbers, medics, teachers and childcare workers are certified, belong to unions, and undergo regular inspection; I feel there should be some equivalent for writers.
This is not about stopping people using AI. Anyone would still be able to post AI-generated articles as freely as before, but without the badge of originality. Anyone would still be able to self-publish AI-generated fiction, but it would lack the statement of creative authenticity. Potential readers would then be able to make an informed choice. Human authors found to have resorted to AI would obviously lose their credentials—and justly so.
There will be controversy over how such a system can be implemented with fairness and consistency, without discriminating against any particular demographic—novice writers, overseas writers, or those who have been required to generate AI articles within their line of work but who would not make it a part of their creative endeavors. I cannot single-handedly design a working model of this concept, but I think it is high time to open discussion on the matter.
It’s a quandary that was always going to happen sometime. As soon as AI entities could string words into sentences, someone was going to say, “Hey! If I can make it do ten words, then I can make it do 60,000 words! That’s a book, isn’t it?”
That is not a book. We need books (real books) to be marked as real books, so that buying one isn’t just one more tripwire of Buyer Beware.
Excellent point, but I'm stumped trying to imagine a certification label that couldn't easily be counterfeited...
Speaking as someone who has managed certification schemes and programs in multiple jurisdictions, I have some thoughts.
Certification schemes put the onus of proof on the producer, and they work when the buyers of their product are convinced that they are necessary.
Example: the Roundtable for Sustainable Palm Oil only exists because the companies (e.g. cosmetics and food companies) who buy palm oil specify that the oil needs to be certified sustainable; therefore the growers are willing to get certified. Assuming that the egg example works, it does so because there is presumably a group of buyers (makers of fondant icing) who will *not* buy the product that is *not* certified (i.e. doesn't have the stamp) ... and therefore it is worth the cost of certification for the egg producers to get the stamp.
What makes it so sad is that there is no example of a certification program in which the buyers (i.e. the ones with the money) are the ones paying for the certification; it's always the producers who end up having to pay. This is unfortunate because the "bad guys" (producers of bad products, like rainforest-destroying palm plantations) are the ones who make the program necessary in the first place, and they get off without paying anything! Meanwhile, the bad guys still somehow manage to get customers who don't care about buying a certified product.
Alas, this is life. (Note: the exception is where regulators step in and say that it is illegal to buy uncertified products; but this usually happens only after decades of lobbying.)
Hence, I'm very pessimistic that authors, who are not rich and often do their work part-time, will be willing to pay to get their work certified. And again, it will inevitably be the authors who will have to pay for any such certification, not the buyers.
I'm also very pessimistic that there will be enough buyers who actually care about getting a certification to make it worth the authors' while to get certified.