Generative artificial intelligence (AI) has now become part of the everyday lives of academics, students, editors, and content writers. ChatGPT is moving fast. So fast, in fact, that there are now consequences in the real world. Students are already using it to write essays; however, educators have already found ways to catch students who use it to do so.
The question is, "Should you use ChatGPT or any other AI to do your academic writing?" There seem to be two camps on the question. One camp says, "No, definitely not. You should never, ever use ChatGPT." The other camp says, "These old-fashioned educators need to get with the times and allow students to use ChatGPT." So which is it?
The introduction of new technology has always led to these types of debates. Even the introduction of the humble calculator inspired teachers to protest. Before that, the Luddites in 19th-century Britain organized to destroy factory equipment that they believed threatened their jobs.
In my opinion, students should take a cautious approach to generative AI. Treat it the same way that most professors advise students to use Wikipedia. Use it for references and to spark ideas.
It can even be made part of your writing process. However, the whole point of writing academic essays at the college or university level is to demonstrate your ability to absorb and synthesize information to come up with new ideas, or at least to present existing ones in a new way.
What is generative AI?
Generative artificial intelligence (AI) refers to large language models (LLMs) that have been "trained to follow an instruction in a prompt and provide a detailed response." What is meant by training? Training an AI refers to a process in which a program is fed millions of units of information and data so that it can find meaningful patterns that humans can relate to or make meaning out of.
In the case of ChatGPT and similar AI tools, the training data consisted largely of text from across the public internet, much the same material a Google search indexes. It is something of a hivemind. Just think of the millions of answers behind the millions of questions asked in a Google search. ChatGPT has been trained on that kind of material and synthesizes it to give you a single coherent answer.
In short, it's like a Google search without the several thousand results on the page. It does the job of sifting out the useless fluff and giving you the answer most likely to be correct. What is and what is not correct, however, is a debate left for another time.
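To make the idea of "following an instruction in a prompt" concrete, here is a minimal sketch of how a program can send a prompt to a model like ChatGPT through an API and get back a single synthesized answer. It assumes the openai Python package (version 1 or later) and an API key set in the environment; the model name and prompt are illustrative placeholders.

```python
# Minimal sketch: send one instruction (a "prompt") to an LLM and print its reply.
# Assumes the `openai` package (v1+) is installed and OPENAI_API_KEY is set
# in the environment; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any available chat model would do
    messages=[
        {
            "role": "user",
            "content": "Summarize the main arguments for and against using AI to write essays.",
        }
    ],
)

# The model returns one coherent answer rather than a page of search results.
print(response.choices[0].message.content)
```

The key difference from a search engine is visible in the last line: you get a single generated answer, not a ranked list of links to sift through yourself.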
The divided camps on AI use
Attitudes toward the use of AI in academia and higher education can be divided into two camps: those who see it as a learning tool and those who see it as a way of "cheating." Both attitudes are shaping academic policy on AI.
If AI is moving fast, the academic institutions affected by it are moving just as fast. For instance, a number of academic journals have made it clear that articles written with ChatGPT will not be published in their pages without disclosure.
Nature, the JAMA Network, and the Committee on Publication Ethics (COPE), among others, recommend full disclosure of text-generating tools while disallowing ChatGPT from being listed as a co-author. Other journals have taken a harder line. Science, for example, has announced a complete ban on AI-generated text.
There are many who don't see generative AI or chatbots like ChatGPT as a threat to academia or education. Some believe that ChatGPT could be used as a teaching aid. According to this argument, instead of playing a game of cat and mouse with students trying to cheat, educators should be thinking of ways to incorporate ChatGPT into their teaching.
For example, ChatGPT and other AI tools could be used to prompt ideas for students to write about. ChatGPT has also been shown to produce wrong answers, and even that can be used as an opportunity for students to probe: they can explore which social or human factors or biases lead this supposedly superhuman AI to produce outright errors.
The Diffusion of Innovation Theory
The future of generative AI can probably be predicted by considering the diffusion of innovation theory (Rogers, 1962). This theory explains the pace at which new technologies are eventually adopted by a broader section of society after being pioneered by early innovators.
Figure 1 shows the process by which new technologies are adopted by wider society. It features "the chasm," the gap between the early adopters and the early majority.
In the education and academic world, there remains too much fear among administrators about the ease with which students can cheat the system. After all, what is the point of a university, or even of teachers and lecturers, when simply typing a question and pressing Enter can produce all the answers? This is what defines the chasm between those who see new and exciting possibilities in ChatGPT and those who remain skeptical.
Writing technology has followed the same trend before, with the printing press, the typewriter, and the word processor. First, there is resistance. Then, as early adopters use a tool and prove its usefulness, the more skeptical parts of the population eventually follow and adopt it.
Considering the widespread use of AI in academia, in business content writing, and among students in higher education, policymakers in academia will likely have no choice but to allow its use to some extent. What about ChatGPT itself? What does it have to say on the issue? Let's ask the collective wisdom of humanity itself:
Straight from the horse's mouth, then. As we mentioned earlier, ChatGPT is moving fast. It would be best for educators and publishers to keep a more open mind toward this new technology. They should ask the following questions:
How can the technology be used as a teaching aid?
How can students be encouraged to use the tool to conduct independent research and learning or independent thinking?
How can abuse of the system be prevented to ensure fairness and equity?
If you are a student or scholar, on the other hand, you should take a more cautious approach while remaining open to exploiting this new tool. By all means, use ChatGPT to generate new ideas or as a prompt to juice up your writing, but expect the worst penalties for simply copying and pasting answers from it.
Why be wary of AI?
The main issue with AI in academia is credit, that is, acknowledging where you get your information from. AI, so far, has not been great at citing sources. There have been some improvements, but there is still a long way to go.
Another problem with AI is the tone of the writing. So far, AI is little more than a spinner of content, and worse than that, the tone is often off. As of now, your professor, or even an ordinary reader, would be able to tell that your essay was written using AI if you do not edit it.
A business owner with a website has to worry about someone reading about their product and being turned off because the copy was obviously written by AI. In your case, as a student, you have to worry about a professor reading your essay and immediately wanting to mark it down because it was obviously spat out by AI.
So before you think about using AI, think of using traditional academic writing resources and methods. You know, the library and doing your work manually and in advance. Good luck with your academic writing.