
Welcome to Our Blog

EminentEdit is a dynamic content writing and editing service that offers proofreading and editing for 1. Academic Writing; 2. Literary Analysis; and 3. Blog Content Writing. We also offer 1. Content Writing and 2. Grant Writing services. Read our blog for advice on editing and content writing, or get in touch directly.

AI Detection Reliability: A Thorny Issue

AI detection reliability is only one of the many ethical issues that academia and content marketing have faced since AI exploded onto the scene in November 2022. AI detection tools have arisen as a solution to the “ethical” issues associated with using AI to generate content, especially in academic settings; however, these tools have been criticized as unreliable.


What exactly are these AI detection reliability issues? Professors and universities worry about students “cheating” by using AI to write essays and other assignments. The new generation of chatbots built on large language models (LLMs) allows students to simply type in a prompt (typically the words of a given assignment) and, in minutes or even seconds, receive a whole essay.


Most professors would prefer to judge students’ ability to write essays on their own, without any assistance from AI. And it’s not just a problem in academia; it is also a problem in the world of copywriting and content writing.

Image: A suspicious-looking robot with a fake human face over metal parts.

What is AI detection? 

AI detection refers to software that claims to determine the extent to which an essay or article was written using AI. But it’s not reliable. A number of studies have shown that AI detection software is frankly inaccurate, with texts written entirely by humans being flagged as AI-generated.


These AI detection tools typically score a piece of writing as a percentage. Unlike a traditional grade, a higher percentage is worse: it means the student has supposedly relied more heavily on AI to produce the essay. A number of tools claim to be able to measure the likelihood that a piece of content was written with AI.


They include the following: 


—ZeroGPT,

—GPTZero,

—Crossplag, and

—QuillBot.


Ironically, a tool like QuillBot allows students both to generate writing using AI and to detect the extent to which the generated content was written by AI. Remarkably, it also allows students to alter their writing with AI tools to ensure that it passes detection. This raises the question: are these tools simply a ruse focused on passing arbitrary tests, or do they actually mean anything in terms of accurately detecting the use of AI?
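
To see why such percentage scores can be shaky, consider a minimal, purely hypothetical sketch in Python. This is not how ZeroGPT, GPTZero, or any other real detector works; it simply illustrates how surface statistics such as sentence-length uniformity and vocabulary repetition might be blended into a single “AI likelihood” percentage, and how arbitrary that blending can be.

import re
import statistics


def naive_ai_score(text: str) -> float:
    """Return a made-up 0-100 'AI likelihood' percentage for a text."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if len(sentences) < 2 or not words:
        return 0.0

    # Signal 1: very uniform sentence lengths are treated as "machine-like".
    lengths = [len(s.split()) for s in sentences]
    uniformity = 1.0 / (1.0 + statistics.pstdev(lengths))

    # Signal 2: a small vocabulary relative to total word count is treated
    # as "machine-like" repetition.
    repetition = 1.0 - (len(set(words)) / len(words))

    # Blend the two signals into a percentage. The weights are arbitrary,
    # which is exactly the problem: a careful human writer with even
    # sentence lengths can score "high" for no good reason.
    return round(100 * (0.6 * uniformity + 0.4 * repetition), 1)


if __name__ == "__main__":
    human_sample = (
        "I wrote this quickly. Every sentence is about the same length. "
        "That alone should not make me a robot. Yet the score climbs anyway."
    )
    print(f"Naive 'AI likelihood' score: {naive_ai_score(human_sample)}%")

Run on a short, deliberately even-paced human paragraph, this toy score rises for reasons that have nothing to do with AI, which is exactly the kind of false positive that real writers complain about.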


Students and professional content writers think AI detection reliability is bunk

University administrators and content managers may well put their confidence in AI detection software. And who could blame them? AI use has become ubiquitous in both academia and content marketing, and many students are unashamed about using AI to help them make it through their university programs.


This was exemplified when a UCLA student publicly showed off, during a graduation ceremony, the ChatGPT prompts he had used to help write the essays that completed his course of study. However, not all students feel this enthusiastic about AI. Some even complain that their work is being falsely labelled as AI-generated.


The New York Times has reported on how anxious students are about being falsely accused of using AI. On social media platforms such as Reddit and LinkedIn, content writers also complain about content managers and editors falsely accusing them of using AI to generate content after running their articles through AI detection software. One LinkedIn user called Samantha Lord wrote:


I'm being punished for being a fast writer! A client (on Upwork) just accused me of using AI because I wrote a 1,000-word article (on a very simple topic) in three hours. I've been a writer for 13 years. That is easy for me!

AI use has led to all sorts of paranoia when it comes to judging the originality of content. For example, writers who are accustomed to using relatively rare punctuation marks, such as the em dash, have been wrongly accused of using AI. The idea that AI is the answer to a problem that it created remains unconvincing to many people.


Cite this EminentEdit article

Antoine, M. (2025, July 30). AI Detection Reliability: A Thorny Issue. EminentEdit. https://www.eminentediting.com/post/ai-detection-reliability



