Excellent point, but I'm stumped trying to imagine a certification label that couldn't easily be counterfeited...
Same. I think it's a great idea, but I don't know how it could be made to work, especially since once someone is certified, down the road they could switch over to AI easily enough.
Speaking as someone who has managed certification schemes and programs in multiple jurisdictions, I have some thoughts.
Certification schemes put the onus of proof on the producer, and they work when the buyers of the product are convinced that certification is necessary.
Example: the Roundtable for Sustainable Palm Oil only exists because the companies (e.g. cosmetics and food companies) who buy palm oil specify that the oil needs to be certified sustainable; therefore the growers are willing to get certified. Assuming that the egg example works, it does so because there is presumably a group of buyers (makers of fondant icing) who will *not* buy the product that is *not* certified (i.e. doesn't have the stamp) ... and therefore it is worth the cost of certification for the egg producers to get the stamp.
What makes it so sad is that there is no example of a certification program in which the buyers (i.e. the ones with the money) are the ones paying for the certification; it's always the producers who end up having to pay. This is unfortunate because the "bad guys" (producers of bad products, like rainforest-destroying palm plantations) are the ones who make the program necessary in the first place, and they get off without paying anything! Meanwhile, the bad guys still somehow manage to get customers who don't care about buying a certified product.
Alas, this is life. (Note: the exception is where regulators step in and say that it is illegal to buy uncertified products; but this usually happens only after decades of lobbying.)
Hence, I'm very pessimistic that authors, who are not rich and often do their work part-time, will be willing to pay to get their work certified. And again, it will inevitably be the authors who will have to pay for any such certification, not the buyers.
I'm also very pessimistic that there will be enough buyers who actually care about getting a certification to make it worth the authors' while to get certified.
Thank you for this. I agree. However: remember when ChatGPT first appeared and OpenAI and others said they would develop "watermarks" to identify work as generated by AI? Well, I keep seeing new and more powerful AI products being introduced, but I haven't seen any "watermarks."
Have you heard of the Human Created symbol? It was developed by Dana Chandler and Kari Lineberry of KD Resources, a publishing services provider. They wanted to find a way for authors and other creatives to certify their work as the genuine product of their own hearts and minds.
I interviewed them about it recently for an upcoming edition of the Alliance of Independent Authors member magazine. The symbol is free for anyone to download, with no requirement to give an email or other contact information.
Dana and Kari say: 'AI has swarmed the book world and we notice a quality difference between what is artificially created and what is original. Also, as avid readers ourselves, it's important for us to know whether we are reading an original work or not. We feel it is necessary to distinguish the difference for the reader. In addition, we are both published independent authors and want to protect our hard work. We assume that other authors would want to do the same. As a result, we devised the Human Created Symbol of Distinction, which we use in all of our work. It is free for anyone to download, with no requirement to give an email or other contact information. We believe this symbol should be available for anyone to use in order to distinguish original text, photography, art, or other human created work.'
Find it by searching for Human Created Symbol.
And yes, I suppose an AI could also use it, but at least it's a start. And it helps make the point and keep it in the public consciousness.
I share Jan Lee's pessimism, but your overall assessment of AI writing is spot on. As the Academic Director of a graduate program at the University of Denver's University College, I've tried to drive home both the ethical challenges of AI use and its danger to the creative powers of writers who use it. You've insightfully laid out the ethical case, but the erosion of creativity itself, for both writers and readers, is a more pernicious danger. Can you imagine Simone Biles winning gold if she stopped practicing and replaced it with watching AI-generated videos? Writers who lean on AI at all are letting their own creative muscles atrophy. How foolish.
'Gatekeeping' is not a bad thing. It is a bad thing if it is used in any way to block people based on their inviolable personhood; that is self-evident. It is not a bad thing as a means of quality control. It never has been. I'm a journalist for a living, and in many ways that really is gatekeeping as a profession: it is the application of general knowledge and a degree of expertise and experience to filter the chaff and zero in on what is 'news'. Lit mags do the same for art. A guild, a union, a craftsperson's license... they do the same. The bar for lawyers... etc., etc. Very often the cry of critique around 'gatekeeping' comes from solipsistic writers who write confessional work about themselves ad nauseam, and I've yet to see a coherent reason why the world is better for their writing being out there in the first place; it's the literary equivalent of getting repeatedly beaten with a maudlin fish.
So as to your suggestion, 100% agree.
Hopefully the AI loons will get bored. They're like a child at school trying to convince everyone to play with a barbed football. Nothing good will come of them. Their apologists are even worse.
Unfortunately, they will not get bored as a whole, in part because the group is made up of several demographics of intent. For some, it is a numbers game, flooding the market for money; these are in it long term and will only get bored if they find more consistently lucrative avenues. Others do it to feel good about getting something done, or sent out, or published, or just seeing something they feel they were a part of get created; these will rotate out, so even if they get bored there will always be new ones. Others are simply testing it out; they are the most likely to get bored and probably won't be replaced faster than they leave. Then there are the early adopters who will acclimate and love it so much that they intentionally use it to happily spite everyone, in the spirit of normalizing it. And then there are the ones who will acclimate (possibly it's all they will ever know, given when they entered the pool) and feel its benefits far outweigh the negatives; to them it's here to stay, so why is everyone so pressed about advances in technology? Neither of these last two groups will get bored and go away, because it will be as much a part of their expectations and background use as Grammarly spellcheck or smartphone keyboards.
I've been watching this and wondering if I'm just getting too old to adapt and am going the way of the dodo, but in reality, from my experience, nothing really good comes of AI as it exists today, on any sort of net basis.
Make AI cite its sources, like the rest of us.
Thank you for this. Very well thought out. I agree with others that certification has its own problems. Obviously, it’s subject to abuse, like “organic” or “free-range.” For certification to be properly incentivized from the top, it would have to be attached to the money, as Jan Lee suggests. In the universe of self-publishing, that would mean Amazon would need to care about proper AI disclosure. Not likely. I see it working from the bottom up if the writers/creators community agrees on the need for a stamp of authenticity, but it will take time and a third party to conduct independent verifications along the way. Maybe Lit Mag?
I just had this “AI is ruining the world” talk with my computer genius brother last week. He said he was in a McDonald’s on the West Coast last month that only had one employee. That guy was in charge of monitoring all the AI, the robots who took the orders, made the food, and cleaned the store. I’ve been having “Terminator” nightmares ever since he told me that.
I’m the polar opposite of my computer geek brother. I have a Mac and have no clue how to use a PC. I wouldn’t know where to begin to use AI. And I don’t want to know. I agree with most of what you said in this article about AI being a threat to us writers but you lost me at the idea of a union. Unions create more problems than they solve and
What a great idea, and a necessary one. I just can't think of how it could be done.
It should be built into AI. Any AI-generated artwork should have an indelible 'stamp' that is detectable via an app but impossible to remove.
This is a fantastic idea. I also share your pessimism, as well as those who have concerns about it being faked. I also wonder who will bestow it.
I feel that this is the most important article that I have read in a long time. Again and again, I hear the refrain: "Prove that you are not a robot!"
Great article. I totally agree. We need a badge or watermark, perhaps, something we purchase that we can embed into our work to prove it's ours. Could we design our own watermarks? A personal signature of sorts. Let's NOT ask an AI what we can do! This needs to be writer-to-writer until we can solve it.
I'm going to start out by saying that I detest AI. I think that most models in play today can't be separated from their early abuses (which include traumatizing farm center workers), though I do appreciate that they are now paying workers more to put in original material. Whatever I repeat here is me trying to think it through, not support in *any* way.
I've heard some writers say they love it because they use it for inspiration, or they run things through it and ask it to clean them up (and no, a lot of this isn't just fixing spelling; it will change words, sentences, etc.), or they ask AI to write something and then they, as humans, clean it up, sometimes, according to them, extensively. They don't consider this AI-written; they consider it AI-assisted.
Out of curiosity, do you lump these in with what you're talking about above?
I very much agree with this system: it could be similar to the rating systems of film and television, indicating how much of the text was done by AI, or whether the author systematically uses AI.