[Op-Ed] Academic Writing with Generative AI: New Technology, Old Problems

By Conor Lowery (Editor-in-Chief)

Throughout the history of higher education, one of the greatest challenges for students, faculty, and administrators alike has been keeping pace with contemporary technology. In 2024, generative AI has become the latest of those challenges, and it has left a question on everybody’s minds: Is this new technology a great advance that will let students skip unnecessary steps in their academic careers, or is it a risk?

To examine this topic honestly, we have to dig deeper than simple questions of morality and tackle the actual flaws of generative AI. Many professors advocate for its use. UWGB itself has seen its administration sign a deal with Microsoft Copilot, a generative program that professors are encouraged to teach students to leverage. The administration’s outlook appears to be that the school must change with the times: if this technology exists, then students should be expected to learn to use it properly. Despite this, the division around generative AI has only grown, and there are very good reasons to be concerned.

Generative AI is often touted as the technology of a new generation, but these programs have serious problems. Like any software, they inherit the flaws of the machines they run on. Generative AI is known to make up entirely false information, a phenomenon colloquially called “hallucination,” which around 86% of AI users report having experienced; hallucinations range from simple misconceptions to a program presenting someone with their own entirely falsified obituary. The central appeal of AI in academic writing is that it provides accurate results while saving time. But if the facts it provides cannot be trusted, AI-assisted writing can easily distribute misinformation. And if a student has to spend just as much time fact-checking a paper, only to find that the generative AI they wrote it with simply made up the information they are using, the AI becomes more of a risk than a tool.

The training of generative AI is controversial in itself and carries a risk of prejudice. These programs learn from “training data,” the body of information and references they base their output on. But because humans err, that training data can be outright biased: one study estimated that 38.6% of the “facts” provided by AI have their basis in societal biases or stereotypes absorbed from the information it was trained on. This means the biases of AI can easily slip into academic writing, even unnoticed by the students using it. AI isn’t just controversial as a tool for potential cheating; its use in academic writing could easily perpetuate prejudice in the educational system.

Energy use is another major concern. Generative AI programs are speculated to consume up to 1.5% of global energy over the next five years, and MIT research indicates that generating a single image can require as much energy as fully charging a phone. These figures are daunting. If AI is fully embraced by university systems and expected to remain in long-term use, its energy consumption could be severe and have lasting consequences. As students are encouraged to adopt this controversial, power-hungry technology, the energy spent on these tools will only increase.

In addition, AI’s growing ubiquity in the classroom has made professors’ jobs more difficult. An estimated 68% of teachers used AI detection tools during the 2023-2024 school year, and the share of students caught in apparent plagiarism rose from 48% to 64% between the school year beginning in 2022 and the school year beginning in 2023. Yet the market of detection technology is itself unreliable. Time that should be spent grading papers is instead spent trying to determine whether those papers were written by a human at all, and a large number of false positives means there is always a possibility of innocent students being flagged and punished, with AI detection tools facing accusations of bias against students who are not native English speakers. In many ways, attempts to detect AI have become as much of a problem as AI itself: professors are left scrambling to find it, students who use AI keep working on ways to evade detection, and students who do not use AI are left to bear the burden of cheating accusations.

Ultimately, though, all of these are concrete issues that feed the greater debate around AI’s usage. The statistics support the idea that AI fabricates information rather than finding it, perpetuates prejudice, and uses an abnormally large amount of energy to complete simple functions. For AI proponents, however, these are problems that can be amended in time; and for those who exploit AI to create and spread disinformation, they may even be part of the appeal.

The major debate around AI in academia is whether using it is cheating at all, a question that turns on how one views higher education. For someone who sees higher education primarily as a path to a better job, using generative AI is acceptable: it is simply another tool to be leveraged. But for those who believe the purpose of academia is to educate people and to develop critical thinking and academic writing skills, generative AI represents something else entirely. Objectively, generative AI does a large part of a student’s writing for them; the actual process of writing is left by the wayside in favor of typing in a prompt. Again, for people who view college as a straightforward path into the labor market, this is an upside. But students who do not do their own academic writing lose the chance to use their own voices and to develop critical thinking and writing skills for themselves. This is one of the biggest intellectual dangers of generative AI: when writing becomes a matter of entering a prompt and letting a faulty machine do the work, the learning experiences that come with academic writing fall by the wayside.

In the end, what happens with generative AI in academic writing will be decided by college administrations and by individual actors within the college system. But while that future plays out, students and faculty alike should keep an eye on the risks this technology poses and should always strive to ensure that, as its influence expands, it does not replace essential parts of the college experience.
