The recent rise of AI chatbots, most notably ChatGPT, which can hold conversations, answer prompts, and aid in research, has stirred debate over whether student engagement with the AI causes more harm than good. The ease with which ChatGPT can complete assignments or answer exam questions raises concerns about academic integrity at universities, and Queens College is no different.
On Nov. 30th last year, OpenAI launched an artificial intelligence chatbot, ChatGPT. Revolutionary in its ability to understand and engage with users regarding a wide range of topics, ChatGPT is a large language model that uses machine-learning algorithms to respond to natural language input.
Created to provide assistance and offer information on a variety of subjects, including science, history, literature, and politics, the chatbot can answer questions as well as offer suggestions for problems or issues presented to it.
The abilities of ChatGPT do not stop there, however, as the chatbot also has a creative side: it can generate and respond to creative writing prompts, take part in conversations, and trade jokes. According to ChatGPT itself, the overall goal of the chatbot is to “Assist users in obtaining the information and support they need, while also providing a personalized and engaging experience.”
While ChatGPT and what it offers, namely assistance to a wide variety of users such as researchers, content creators, and writers, is certainly novel, it is not the only AI of its kind. OpenAI's earlier language models GPT-2 and GPT-3 preceded it, and GPT-4 is the latest of the company's models. There is also Google's recently announced Bard, which aims to serve a similar purpose, among many other language models.
AI has quickly become a staple among the public: ChatGPT was reported to have reached over 100 million users within the first two months of its launch, and further reports show that, as of Mar. 18th, the chatbot receives over 13 million visitors daily.
Unsurprisingly, students are among those daily visitors. A survey by BestColleges suggests that around 43% of college students have used ChatGPT or other related AI applications, with 50% of those students (roughly 22% overall) admitting to using the AI to complete assignments or exams.
The emergence of ChatGPT and similar programs has sparked a fierce debate among educators and administrators across the country, who remain divided on the approach that institutions should take toward AI and the long-term impact that artificial intelligence will have on the education system.
However, there are many proponents of integrating ChatGPT into the classroom and embracing the changes that generative AI will bring to education. As Peter Greene argues in an article published in Forbes, “Pushed by the rise of rubrics and standardized test essays, high school writing instruction has drifted in the direction of performative faux writing.”
Greene continued, “The five-paragraph essay is a perfect example of writing in which a student is expected to perform adherence to a composition algorithm, rather than develop an essay by starting with ideas and working out how best to express them.”
Kate Schnur, an English professor at Queens College, echoed a similar point when asked whether she has made any changes to her curriculum as a result of ChatGPT:
“I already have long conversations with my students about how literary analysis is a creative act and the strongest analysis does read as personal to the writer. I do this because I want them to realize that they should not be looking for the ‘right answers’ online, but that literary analysis is about the process of how they come to read a text as they do,” Professor Schnur said. “I explain that a literature essay is like ‘showing your work’ in a math problem. This is something that Chat GPT does not seem to do well, just like this is the piece of an essay that’s missing in an assignment that is plagiarized from a study site.”
So how has Queens College responded to the rise of ChatGPT and other similar AI software?
In a statement to The Knight News, Patricia Price, PhD, Interim Provost and Senior Vice President for Academic Affairs said, “I have assembled a group of individuals at Queens College and have charged them with working with our deans, chairs, and faculty to develop resources in response to the advent of ChatGPT.”
Of this group, Price said, “They are looking both at academic integrity concerns, as well as how to best leverage ChatGPT in the classroom. This group is based in the Center for Excellence in Teaching, Learning, and Leadership, and includes Kathie Mangiapanello, Emmanuel Avila, Amy Wan, and Rachel Lockerman.”
The use of ChatGPT by students raises the question of whether using AI to complete an assignment or exam counts as plagiarism, or even cheating, and, if it does, what measures have been put in place to combat the practice.
Professor Peter Liberman, who leads the Political Science Department's pedagogy committee at Queens College, said the department modified its plagiarism policy in January to include AI-generated text.
“The previous policy only banned copying other people’s words and ideas,” Professor Liberman explained, “…and AI still hasn’t become people — not yet anyway! Faculty are also using various methods of detecting AI-generated text in their grading, but there is no set department policy on that.”
There is no doubt that AI and programs like it are here to stay, as they become increasingly integrated into the workplace. However, it is still too early to tell how the education system will adapt to these changes, though Queens College administrators and professors alike have shown they are making headway.