RM reacts to ChatGPT


Graphic by Julianne Cruz

Some universities have enforced strict rules against the use of ChatGPT for essay assignments.

ChatGPT, the latest natural language processing model developed by the artificial intelligence research corporation OpenAI, has taken the digital world by storm. Within five days of its release on Nov. 30, 2022, it amassed one million users, which overloaded and temporarily froze the site.

Users have consistently been amazed by the chatbot’s human-like dialogue style, breadth of knowledge and ability to closely follow instructions. 

“It thinks in a lot of angles,” sophomore Andy Deng said. “Your perspective is set—but when you [use] ChatGPT, you’re like ‘oh wow, there are so many more perspectives on this that I haven’t even considered.’”

Other defining features of the model, according to OpenAI, include its ability to write computer code, remember what was previously said in a conversation, answer follow-up questions, admit its own mistakes and reject inappropriate requests. “I really love it overall,” sophomore Kevin Si said.

Some users have noticed limitations of the program, however, particularly in its ability to think abstractly and make connections. “My dad tried making it […] analyze a Michael Jackson song and how it pertained to modern societal problems,” junior Sofia Eisenberg said. “And it said ‘the song does not relate to modern societal problems.’”

GPT stands for “Generative Pre-trained Transformer.” “Generative” means the model produces new content rather than simply processing existing data, and “pre-trained” means it learned from Internet data gathered before its launch—which is why its knowledge of the world only extends to September 2021. The “transformer” refers to ChatGPT’s neural network architecture, which learns the relationships between the words in natural human dialogue and can thus “transform” its input into output. This architecture is built on a mechanism called self-attention, which allows the model to focus on the most relevant components of an input in order to generate suitable output.
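The self-attention idea can be sketched in a few lines of code. This is a deliberately simplified illustration, not ChatGPT’s actual implementation: real transformers use learned query, key and value projections and many attention heads, while this toy version just scores each token against every other and mixes them accordingly.

```python
import numpy as np

def self_attention(X):
    """Toy self-attention over a matrix X whose rows are token vectors.
    Each token is scored against every other token; the output for a
    token is a weighted average of all tokens, weighted by relevance.
    (Illustrative sketch only -- real models use learned projections.)"""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise relevance scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
    return weights @ X                               # attention-weighted mix

tokens = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])    # three made-up 2-dimensional "tokens"
out = self_attention(tokens)
print(out.shape)                   # one mixed vector per input token: (3, 2)
```

The key property is that every output vector is a blend of *all* input tokens, so each word’s representation can draw on the most relevant parts of the whole sentence.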

The model was trained using Reinforcement Learning from Human Feedback, which works in three stages.

First, human trainers demonstrated desired outputs in response to a multitude of human-written prompts. Next, trainers randomly selected model-written messages from conversations between humans and the chatbot, ranked them by quality and fed that information back to the model, allowing the chatbot to learn the most appropriate responses for a particular input. Finally, the model improved and taught itself via trial and error, adapting its own rules as to what output to produce and when.
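The second stage—learning from human rankings—hinges on a reward model that scores responses, trained so that human-preferred responses score higher. A common way to express this is a pairwise ranking loss; the sketch below is a simplified illustration under that assumption, and the function names are mine, not OpenAI’s.

```python
import math

def ranking_loss(score_preferred, score_rejected):
    """Pairwise ranking loss: penalizes a reward model less when it
    scores the human-preferred response above the rejected one.
    loss = -log(sigmoid(score_preferred - score_rejected))
    (Simplified sketch of the idea, not OpenAI's actual training code.)"""
    gap = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-gap)))

# Correct ordering (preferred response scored higher) -> small loss:
good = ranking_loss(2.0, 0.5)
# Wrong ordering (rejected response scored higher) -> larger loss:
bad = ranking_loss(0.5, 2.0)
print(good < bad)   # True
```

Minimizing this loss over many ranked pairs nudges the reward model toward human judgments, which the chatbot then uses as its trial-and-error feedback signal in the final stage.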

Educators fear that ChatGPT encourages academic dishonesty because it can, for example, ‘write an essay in the style of a high school student’ in a way that is fully original and difficult for plagiarism-detection tools like Turnitin to flag. The website has already been banned on MCPS WiFi.

Even so, ChatGPT’s writing differs from human writing in complexity and variability (AI-generated text tends to have less of both), two characteristics that Princeton University senior Edward Tian has leveraged to create GPTZero, a program that detects ChatGPT writing. Because GPTZero was trained on ChatGPT output, the more familiar a text appears to GPTZero, the more likely it is that the text was written by AI.
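The “complexity” signal detectors rely on is often measured as perplexity: how surprised a language model is by a text, given the probability it assigned to each word. Low perplexity—very predictable text—is one signal associated with AI writing. The sketch below is a generic illustration of that measurement, not GPTZero’s actual code.

```python
import math

def perplexity(token_probs):
    """Perplexity from per-token probabilities assigned by a language model:
    exp of the average negative log-probability. Lower values mean the
    model found the text more predictable. (Illustrative sketch only.)"""
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Hypothetical probabilities a model might assign to each token:
predictable = [0.9, 0.8, 0.95, 0.85]   # formulaic, machine-like text
surprising  = [0.2, 0.05, 0.1, 0.3]    # quirkier, more human-like text
print(perplexity(predictable) < perplexity(surprising))   # True
```

A detector built on this idea flags passages whose perplexity stays uniformly low, since human writers tend to mix predictable and unpredictable sentences.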

In addition to ethical concerns, ChatGPT raises identity and existential issues. “The question is, ‘do we still need to write?’” English teacher Robin Strickler said. “Until now, writing has been a human skill. Is it going to become a machine skill?”

According to Mrs. Strickler, it goes beyond the writing itself: relying on ChatGPT also means outsourcing thinking and self-expression. “If you don’t write—if you’re constantly mediating that with a machine—do you learn to reflect?” she said. “Writing is a big means of thinking. And if we forget how to think, we’re screwed.”