As universities around the world continue to integrate technology into their curriculum, many are turning to powerful AI tools like ChatGPT to help students with their studies. However, as the use of ChatGPT becomes more widespread, concerns have risen about the ethics of using AI to complete academic work. One of the main concerns is the potential for students to cheat by using ChatGPT to write essays or complete homework assignments. Some argue that this is no different from students copying and pasting text from the internet, others argue that the use of AI makes it much easier for students to pass off someone else’s work as their own.
Stop reading for a second. That was a decent beginning to the article, but it wasn’t all that great or inspiring. Why? Because I didn’t write it at all. Instead, I fed the prompt “Write an article about the ethics of using ChatGPT in academia” into ChatGPT, and that is what this faceless, invisible machine spat out: a formulaic, but passable, beginning to an article on that very question. Pandora’s box has been opened, and AI technology is here whether we like it or not. The only way to avoid it at this point would be to abandon your old life and move deep into the woods. But if you choose to stay with us here in the modern world, one thing you must familiarize yourself with is AI.
AI, or artificial intelligence, in its current state, is built by humans who teach and train computers to analyze and process information, generating new output based on the data we feed them. The more these systems run through such training cycles, the more accurate their results become for their intended uses. AI is increasingly being used to automate processes in all fields of modern life, from social media marketing algorithms to healthcare databases; it may even be in your toaster.
As with all new technology, ethical questions arise about how and where it should be used, and by its very nature, artificial intelligence poses many existential questions as well. Here at Sewanee, as AI technology progresses and begins to slip its tendrils into the academic world, it is time for the community to reckon with the exciting possibilities of this technology, as well as the tricky ethical questions that come with it.
The form of artificial intelligence that has been getting the most attention at Sewanee and other universities recently is ChatGPT, for its ability to write entire essays. Dean of Students Terry Papillon recently made everyone aware of this possibility in an email on January 16. He stated, “Because the Honor Code says: ‘Plagiarism is a form of cheating because the plagiarist copies or imitates the language and thoughts of others and passes the result off as an original work,’ it will be the spring 2023 policy that, unless expressly allowed by an instructor for an assignment, submitting substantive work as your own that has been created through artificial intelligence programs (ChatGPT or others) violates the Honor Code’s notion of passing off work created elsewhere as your own.”
So what exactly is ChatGPT? In short, ChatGPT, which stands for Chat Generative Pre-trained Transformer, is an AI chatbot released to the public in November of last year by OpenAI, a company founded in 2015 and funded by many technologists very eager to play with fire, Elon Musk among them. While OpenAI has been prolific in developing and improving artificial intelligence systems since its founding, it is only this decade that the technology’s progress has accelerated to dizzying speeds, sending many, especially artists and professors, into a moral panic as AI-generated results become uncannily convincing. Here at Sewanee, and in academia as a whole, text-generating AIs such as ChatGPT pose a potential threat to the Honor Code, as a passable undergraduate essay can be created in seconds from a simple prompt.
It’s not far-fetched to imagine AI becoming an integral classroom tool, automating certain tasks to speed up learning, just as computers and calculators have before. We might see chatbots become an aid rather than a substitute. The fact of the matter is that most technology has been invented to automate processes and tasks that nobody wanted to do in the first place, and to make life genuinely more pleasant. With the proper use of technology, we can cut out tedious processes and focus on more interesting tasks and problems.
Perhaps if we apply this same mindset to text-generating AIs, ChatGPT could cut out the need for extremely formulaic essays, allowing students instead to write about subjects that interest them more and demand more critical thinking. This, however, is highly optimistic. What about the valuable skills that repetitive exercises build over time? What about the people whose livelihoods are threatened by automation? These are issues we must weigh in the process of integrating AI into any aspect of life.
So how can Sewanee respond to an emerging technology that threatens the very structure of learning? To understand how some professors view AI, I talked to Professor Hopwood of the philosophy department about his perspective on ChatGPT. He said, “As a professor it is easy to approach this technology defensively, but I think we need to resist that temptation.” It seemed to him that AI certainly has exciting potential, but that it is simply too early to tell what the future holds. “Students and professors should be seen as collaborators in the learning process,” Hopwood continued. “For there to be a mutual trust between students and professors in the age of AI, there needs to be a campus-wide conversation which includes everyone in the learning process, so that everyone can come to a mutual understanding of their responsibilities as collaborators in the classroom.” Seeing students and professors as collaborators is an important perspective for building mutual trust and fostering the best possible learning environment.
When I asked Professor Hopwood how AI has affected his work as an educator, he said that it has actually positively affected the way he creates assignments; with ChatGPT in mind, he has begun “reformulating some assignments to allow students to show what they have learned from our discussions of the readings in class.” I agree that this is how AI should be approached generally: critically, but with optimism that it can be used as a tool for greater productivity. However, ChatGPT’s capabilities can be abused, and “greater productivity” could now mean students writing entire essays with the click of a button. It is therefore important that a clear statement be made now regarding the Honor Code, so that in the meantime the community has time for an open dialogue on how best to approach this technology.
I personally believe AI presents an important occasion for academia to reflect on its purpose, and this could be a very exciting time for academic institutions as a whole. I would like to think that most students genuinely want to enjoy writing and to express their thoughts about topics they are passionate about, but oftentimes students see much of the writing in academic institutions as pointless and complete it only for the grade. Professor Hopwood commented on the issue of tedious assignments in the age of AI, stating, “If [professors] are going to ask students to do something they know AI already can, we must explain why it is necessary, as there is much value in much of the tedious writing in building later skills. So it is now more important than ever to make this clear.” This statement makes a lot of sense to me as a student. Having been assigned essays about things I did not care about for years in school, I never understood the point of them. If I were a younger student now in the age of AI and were assigned a five-page paper about the life cycle of luna moths, a topic I am frankly not interested in, I would probably have ChatGPT write it for me; why put in the work if I thought it pointless? But ChatGPT will now force educators to explain why they assign these tedious writing exercises, and that will actually make the value of education and the purpose of schoolwork more apparent to students.
AI may now be pressuring higher education to foster an atmosphere in which students are fully invested in their learning, where critical thinking and constructing an argument become the most important parts of writing an essay, rather than formulaically regurgitating previously consumed material for a grade. The integration of AI into education won’t be an easy one, but what ChatGPT can do for writing may be what the calculator has done for mathematics: by automating the tedious processes, it frees us to apply more critical skills. Students will still, of course, have to put in the proper work, and the value of writing essays on topics one might not be so excited about must not be underestimated in building the later skills that allow for the articulate expression of complex issues. ChatGPT has the potential to revolutionize education, but we must also be highly critical of its ethical implications as a cheating superweapon. In the end, AI will make us question why we do certain things at all if a machine can replicate the same results with little effort. AI thus poses a difficult but important question of what only humans can do, and that is an exciting, and perhaps terrifying, realization.