As someone who has spent decades editing documents of all kinds, often looking for evidence of plagiarism, I have to confess that I am genuinely amused by the general panic about ChatGPT and artificial intelligence in “content generation”. Perhaps the whole process should now be called “discontent generation”.
The very first content I access each morning is https://spaceweather.com/ for the solar system forecast. On Saturday 27 May, a vast coronal mass ejection, or CME, blew off the far side of the Sun. It was by far the largest eruption I’ve ever seen, and I’ve followed solar weather since the 1970s, when I was an active amateur radio operator, or “ham”. Solar activity crucially affects shortwave radio propagation.
Researchers are not sure exactly what caused this vast eruption, but whatever produced it will rotate around to face Earth in a few days. If another eruption like this blows off while it is facing us, our planet could be in deep, deep trouble.
So I’m keeping a closer eye on www.spaceweather.com than usual, and I was very happy to see the following message go up at the top of their homepage a few days ago and stay there:
Text created by ChatGPT and other Large Language Models is spreading rapidly across the Internet. It's well-written, artificial, frequently inaccurate. If you find a mistake on Spaceweather.com, rest assured it was made by a real human being. This is an AI Free Zone!
The reason I’m actually amused by the widespread advent of AI in content generation is that I know from long experience that, in the end, it will just mean more and more work for me as a professional scientific editor. The people using these tools will always need at least one competent human being at the end of the line, to make sure there are no absolutely horrendous errors in their texts.
So, as a public service to the Substack community, I have launched this platform to help people who want to get into editing, who are struggling with it, or who just want to share ideas about it. The more we share, and the more we show the value that genuine human beings can add, the more of the writing process we can perhaps claw back.
Please make no mistake: my main intention has always been to be a writer. Editing is my day job. However, it is an endlessly fascinating and creative endeavour. I will make a confession: I actually love editing. I am very happy that someone else has done all the horrendous fieldwork and has their byline on that story or their name on that academic paper. I am totally happy to be invisible. I honestly feel it is a privilege to work on other people’s documents.
And this is why I am The Light Editor, a title that just popped into my head the other day. As much as possible, I want my authors to recognize their own documents, really to feel “I wrote that.” Occasionally I get an instruction from a Japanese author specifically asking for “heavy editing”, because the author doesn’t speak English and wants the document to look good. Even in these cases, I do my very best to let the original phrasings come through, so the author can feel: “My English is maybe not so bad.”
I spent 11 years as a subeditor on various national newspapers, magazines, business wires and news websites in Johannesburg. On daily newspapers especially, you learn to work extremely fast and extremely accurately. When your mistakes are going to be in huge headlines all over the country the next morning, you just make very sure that you don’t make any.
However, one of the first things I was told as a newspaper sub was the following: “Your job description is to take all of the blame and none of the credit.”
I’ve been quite content with this until now—I always told my journalists, it’s your name on top of that story, not mine, so make sure you’re happy with it—but finally, I think ChatGPT may change this rather dark scenario. Editors are at last going to get some credit, because people are really going to want to know that there is at least some human agency to what they are reading.
I’ll close with a story from a major business wire, with tickers on all the business screens in South Africa. The developers were working on the system, as they do, and made some changes. One of them was that if the system found an article without a date, it would insert that day’s date and publish the story.
There was one undated story in the system, from three years previously. A huge mining company had retrenched thousands of workers very suddenly, causing massive trauma in the whole industry. The company insisted the retrenchments were essential for the enterprise’s survival.
We put the whole story out again, with a current dateline. You can guess what happened. There was genuine widespread panic at the company, thousands of phone calls, stories generated in other media, absolute fury at us. Trust that took decades to establish was destroyed forever, because of one act of artificial intelligence.
This is why you need a human being watching that news feed, to make sure no outright disaster is being propagated. The smarter you make the machines and the more functions they handle, the more you need some sort of proper human oversight.
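For readers of a more technical bent, here is a minimal sketch, in Python, of the kind of date fallback that caused the trouble, alongside the guard a human-in-the-loop rule would add. The function names and the shape of the wire items are my own inventions for illustration; this is not the actual system, just the logic of the story above.

```python
from datetime import date

# Hypothetical wire items: a headline plus the date it was originally filed (None if missing).
STORIES = [
    {"headline": "Mining giant retrenches thousands of workers", "filed": None},  # the old, undated story
    {"headline": "Rand firms against the dollar", "filed": date.today()},         # a genuinely fresh story
]

def publish(story, dateline):
    print(f"[{dateline}] PUBLISHED: {story['headline']}")

def flawed_feed(stories):
    """The change the developers made: no date? Stamp today's date and push it out."""
    for story in stories:
        dateline = story["filed"] or date.today()  # silently re-dates a three-year-old story
        publish(story, dateline)

def guarded_feed(stories, max_age_days=2):
    """A human-oversight version: anything undated or stale is held for an editor, not sent to the wire."""
    for story in stories:
        filed = story["filed"]
        if filed is None or (date.today() - filed).days > max_age_days:
            print(f"HELD FOR HUMAN REVIEW: {story['headline']}")
        else:
            publish(story, filed)

if __name__ == "__main__":
    flawed_feed(STORIES)   # re-publishes the retrenchment story under today's dateline
    guarded_feed(STORIES)  # the same story is flagged for a human instead
```

The code itself is beside the point; what matters is that the “held for human review” branch is precisely the part the machine cannot supply for itself.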
It may not be the most glamorous profession, but rest assured, there will always be a need for Autocorrect Correctors.