
We Have Every Right to Be Furious About AI-Detection

OpenAI has a 99.9% reliable method to detect text generated by ChatGPT but has never released it
[Image: sculpture of a narrow head, roughly hewn]
My head blew clear off when I learned what OpenAI has been up to | Picture by Author

Greetings friends!

As a publication editor, I am royally pissed. My fellow editors and I spend a lot of time reviewing stories. Although some are glaringly AI-generated, others cleverly straddle the line, raising suspicions but not leaving sufficient telltale clues to tip us off.

As a writer, I am disgusted. I have put nothing but pure human effort into every story I’ve ever published. That also takes time. I hate that opportunists and scammers can flick a prompt at ChatGPT and post the results as their work.

As a reader, I am dismayed. Far from ushering in a better internet, the proliferation of AI-generated work has made reading new authors a fraught walk through a minefield. Is that clever phrase the product of another human mind, or merely an amalgamation of other writers whose words were misappropriated?


Thanks for nothing, OpenAI

I’m mad because it’s been reported that OpenAI developed a method of watermarking text generated by ChatGPT that is invisible to users yet 99.9% reliable at detecting such text.

According to the Wall Street Journal, OpenAI developed the watermarking technology more than two years ago but never released it in any form.
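How might such a watermark work? OpenAI has never published its method, so treat the following as a minimal sketch in the spirit of the “green list” scheme from academic research (Kirchenbauer et al., 2023), not a description of OpenAI’s system: the generator deterministically nudges each token choice toward a pseudorandom “green” subset of the vocabulary, and a detector simply counts how many tokens landed in that subset. Every name and number below (GREEN_FRACTION, is_green, detect) is illustrative.

```python
import hashlib
import math

# Hypothetical sketch of a "green list" text watermark (Kirchenbauer et al.,
# 2023). OpenAI has not disclosed its actual method; everything here is
# illustrative, not a reconstruction of their system.

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to the green list, seeded by the
    previous token. A real system would hash token IDs with a secret key."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256 < GREEN_FRACTION

def detect(tokens: list[str]) -> float:
    """Return a z-score: how far the observed count of green tokens sits
    above what unwatermarked text would produce by chance."""
    n = len(tokens) - 1  # number of (previous token, token) pairs
    if n <= 0:
        return 0.0
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std_dev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std_dev

# A watermarking generator would, at each sampling step, slightly boost the
# probability of green tokens. Readers never notice the bias, but over a few
# hundred tokens detect() returns a z-score far beyond anything chance could
# produce -- which is how a detector can be "99.9% reliable."
```

Unwatermarked text hovers around a z-score of zero, so a detection threshold of, say, four standard deviations would yield vanishingly few false positives; that statistical gap is what makes a reliability figure like 99.9% plausible.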

It seems that OpenAI was concerned about how watermarking would affect the use of ChatGPT, which was then in its infancy. They apparently commissioned two studies to explore user attitudes and got conflicting results:

  • The general public favors having AI-detection tools by a margin of 4:1
  • Among ChatGPT users, however, nearly a third (29%) would use ChatGPT less if its output were watermarked and competitors’ output were not

Here’s how this looks, in my own words:

OpenAI knew that the detection of cheating was an important societal goal that was massively preferred by honest people. OpenAI nonetheless decided not to release a tool to do just that because scammers might use ChatGPT less than competitor products.


Maybe there are more charitable interpretations

OpenAI itself gives various reasons why they have not released the watermarking tool. Let us consider what OpenAI recently wrote:

  • The watermarking method is “less robust against globalized tampering; like using translation systems, rewording with another generative model… making it trivial to circumvention [sic] by bad actors.”
  • “[T]he text watermarking method has the potential to disproportionately impact some groups. For example, it could stigmatize use of AI as a useful writing tool for non-native English speakers.”

For those not versed in corporate weasel speak, let me translate these two points for you:

  • There are ways to get around our watermarking. No company in Silicon Valley ever rolled out an imperfect product. And if it’s not perfect, there’s no point in trying to make it better. Duh.
  • The scammers, er, paying customers, who most want to use ChatGPT to generate text without being caught, would be hindered by a tool that detected their use of ChatGPT. We can’t have that!

OpenAI goes on to say that they are “still in the early stages of exploration” of other methods that might be more reliable than watermarking.

Corporate weasel translation: Don’t hold your breath. This thing is too good to let widespread cheating and fraud get in our way.


OpenAI seems to be putting money ahead of morals

I say this with a heavy heart because I had understood OpenAI to be filled with effective altruists. These people know best what humanity needs, and everything will be fine if we just trust them to get on with it.

Recall OpenAI’s commitments from its charter:

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
Our primary fiduciary duty is to humanity. We … will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

I’m no lawyer or corporate governance expert … Oh wait, I’m both of those things! But you don’t need to be an expert to see that OpenAI might have strayed from its noble purpose.

  • How does allowing acknowledged and rampant cheating to continue unabated fit with avoiding “enabling uses of AI or AGI that harm humanity”?
  • A large majority of the public says it wants AI detection, while OpenAI fears a sizeable percentage of ChatGPT users would defect to competitors if it enabled such detection. Does this sound like OpenAI is “diligently act[ing] to minimize conflicts of interest among our … stakeholders”?

What this means for content platforms and you

Lest I leave you with a negative impression of OpenAI alone, I’ll note that no generative AI company has released a reliable AI-detection tool, despite now-ample evidence that the problem is solvable.

The good news is that the proliferation of AI-generated stories is a problem on content platforms only for a few select groups of people, such as writers and readers.

But not all of them.

Only the honest ones.

Be well.

PS – If you want to take a stand against AI-generated text and in favor of clear thinking, keep reading my stories. Always human, all the time. Note that in the new year, I'll be sending this newsletter to you via Substack (A Fine Idea). You don't need to do anything.