Problem number one
Nearly a third of U.S. college students say they use ChatGPT to complete written assignments, according to a survey published in January 2023.
Three quarters of these students believe they are “cheating”, but use ChatGPT anyway.
A Guardian article reports that an annual survey of instructional challenges faced by American teachers rated “preventing students cheating” as 10th in their list of problems for 2022. In 2023, this had become problem number one.
The Guardian subheading is revealing:
ChatGPT is creating headaches for schools while giving rise to a growing cohort of companies that say they can ‘tell’ human from machine
We are now in the “telling” industry. We are not editors, we are deception detectors.
The article ends: “Whether AI content becomes indistinguishable or the human touch proves impossible to replicate, one thing is certain—there will be power for those who can *tell* the difference.”
My emphasis. This is exactly what I said in my first-ever Substack article, Correcting the Autocorrect: while the bots may threaten every other job on the planet, their proliferation will create an increasing demand for specialist editors—experts able to survey text (as opposed to surveil it) and ensure that no outright atrocities are being committed.
An arms race
“It” is a “tell”, you should know. According to MIT Technology Review:
Because large language models work by predicting the next word in a sentence, they are more likely to use common words like “the,” “it,” or “is” instead of wonky, rare words. This is exactly the kind of text that automated detector systems are good at picking up.
In case you missed the obvious, the article ends:
That’s the crux of the problem: the speed of development in this sector means that every way to spot AI-generated text becomes outdated very quickly. It’s an arms race—and right now, we’re losing.
So it’s bot versus bot, with the good guys counting the “the” occurrences and the black hats jinxing the frequencies by introducing more wonky words from the dataset du jour.
One wonders how they handle Russian, which has no definite article at all. Physicists in Novosibirsk corridors are very profound; they ask you, please, what is time?
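The “tell” described above can be sketched in a few lines. This is a hypothetical illustration, not any real detector: the word list and the scoring are invented for the sketch, and genuine systems use far more sophisticated statistics than a raw common-word ratio.

```python
import re
from collections import Counter

# Invented list of high-frequency function words -- the "the," "it,"
# "is" tells mentioned in the MIT Technology Review quote.
COMMON_WORDS = {"the", "it", "is", "a", "of", "and", "to", "in"}

def common_word_ratio(text: str) -> float:
    """Fraction of tokens that are very common function words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in COMMON_WORDS)
    return hits / len(tokens)

# "The cat sat on the mat" has six tokens, two of them "the":
# a ratio of 2/6, or 33.3% -- identical whether a human or a
# machine produced the sentence, which is exactly the problem.
print(round(common_word_ratio("The cat sat on the mat."), 3))
```

A black-hat counter-move is equally trivial: swap a few common words for wonky synonyms and the ratio drops below whatever threshold the detector uses, which is why the arms race favours the evaders.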
My lovely Brendas!
I predicted that editors would be hired to make AI texts look more human. However, they are already being paid to make robots more authentically robotic. Behind the veil of “AI” you will often find very genuine human beings pulling the strings, carefully hidden by chatbot personas. “I was a person pretending to be a computer pretending to be a person,” says Laura Preston in a very amusing Guardian article.
For a year, she acted as backup for Brenda, a real estate chatbot, dealing with thousands of tenants and their problems.
When Brenda went off the reservation, or just couldn’t cope with a language string, she would flag the query for: HUMAN_FALLBACK.
There was a whole crew acting as “operators”, aka human fallback. On their Slack channel, the team leader would greet them every day with “Top of the morning, my lovely Brendas!”
Brenda’s corporate clients were satisfied in the belief that they had replaced their phone lines with a customer-service bot. What they were actually getting was cutting-edge AI backed by PhDs in literature.
The cool part is that Brenda would learn from her human operators’ successful interventions and incorporate them into her future, rather shaky, dealings with the public, which mainly involved pinning tenants down to a time and place to view a property.
Her forward behaviour led to the following exchange:
> we can meet at my beach house
> I’m interested in you Brenda I’m married so we have to be discreet
Paging HUMAN_FALLBACK…
You know the situation is bad when the machines are calling in tech support to deal with harassment by the humans. I’m sure the backroom operator with a PhD in Shakespearean drama found a way to make that rendezvous at the beach house work for Brenda. She’ll handle the next ardent suitor all by herself.
An AI manifesto
I have fortunately yet to encounter anything that looked like artificially generated content in my work, apart from all-too-frequent dealings with autotranslated material, which is often a nightmare.
Encountering machine-generated content raises vast ethical dilemmas for the editor. Does one fix the text up so that it looks as though it was written by a competent human? That’s your job description, but when you’re disguising the fact that an author has blatantly cheated, this becomes highly problematic.
My bottom line remains the same: even if an entire document was produced by an uncomprehending machine, I will seek out whatever good this text might do, in order to bring it out and make it easily accessible to the reader. There is no reason why a bot can’t put two and two together from the Internet that no one else has noticed and reach a profound conclusion. Kudos to the “author” who entered that search string.
However, if the algorithmic origins of the text are too obvious, or the search results too clearly biased, I will flag the assignment as being suspect and urge the author to come clean and admit upfront which application was used to generate it.
You can study a field for years and still feel that your literature review merely scratches the surface; yet you can ask ChatGPT to “summarize” all this research and get the result in a few seconds.
A half-asleep editor will spot the difference between these two products in a few milliseconds.
In the end, though, if a text says “The cat sat on the mat”, it’s impossible to tell from the words alone (“the” = 33.3%) whether this was generated by a human or a machine. You can only look at the context and make the best sense of it you can. The final ethical issues rest with the author. That is their name at the top of the document. If they are obviously cheating and lying about its provenance, I can only point out the visible issues and note that this karma is entirely on their own heads.
Anyone who reads the final document should feel: “At least this has been properly edited.” I will have earned my pay, even if the author was really a machine.
My personal ‘tell’
However, I have my own “tell” these days, in looking for worthwhile work produced by genuine human beings. I keep an eye open all the time for a sparkling insight, a deft turn of phrase, a fine argument, a rueful admission, a genuine laugh, that makes me certain: this could never have been written by a machine. Then I say: let’s see what good this person is trying to do.
You can actually see this content clawback process happening already, with authors openly striving to find an authentic voice that no one could mistake for a chatbot. In the long run, this is a really positive phenomenon. The good writing is only going to get better; otherwise, it stands no chance.