Across the world, there are three types of students: those who do not regularly use AI, those who cheat with it, and those who use it to enhance their learning. This article discusses the first two.

While almost every student has tried ChatGPT, some refrain from using it regularly. Some students find their ChatGPT Plus subscriptions not particularly useful: when doing independent research, they found Google superior to ChatGPT’s slow responses, and they felt that ChatGPT’s Deep Research, which searches hundreds of sources over several minutes and produces detailed reports, was useless because their research involved only short, targeted searches. Beyond lackluster outputs, many students were concerned about AI replacing their thinking. Another student had experimented with ChatGPT and DeepSeek but felt the tools robbed them of the skill of searching the internet; when prompted, they added that manually searching the internet and library books are essential skills, necessary for contextually grounded thinking. They also expressed deep resentment of AI image generation, calling it soulless, theft from artists, and not authentic art. One student in particular shared how a fellow intern used GPT-5 to code a stock-trading program; a mistake in the code likely cost the intern their entire $10,000 budget, serving as a stark warning against blindly trusting AI outputs.

Students have noted that while they submitted completely AI-generated essays without issue in middle school, ChatGPT’s outputs fall short of high school standards. They have, however, found a new use for AI: text summarization. When asked whether the summaries harm their learning, they argue that the summaries only consolidate what they have already learned, since they almost always summarize books after reading them. They admit, however, that they sometimes do not have time to read.

Beyond summarization, students revealed that they have translated parts of their foreign-language reading assignments. They acknowledged that this was cheating, but emphasized that the assignments were unreasonable, handed out only a day before the next class, and that they were swamped. Indeed, plagiarism and academic integrity codes generally note that violations occur most often under time pressure.

Meters away from the offices of discipline, a few juniors sit at a table. When asked how they use AI, they break into mischievous smiles, and the allegations begin. After chatting briefly (and politely declining to share their chat histories), one student points across the table. The others’ chuckling intensifies. Allegedly, using ChatGPT, these students complete their homework within half an hour each night, a figure that raises questions about the effectiveness of the AI-proofing efforts of various teachers and departments in the high school. Then, once the information they are willing to share is exhausted, the students point me toward a peer who “really pushed the limits” with AI.

The opportunity to talk to that student presented itself later that day. At first, the student shared chats in which they used ChatGPT to study for SAT math because alternative resources were unavailable, and discussed how they used AI for an independent research project. However, the grinning friends around them eventually exposed them as having been called in for using AI to cheat on tests, homework, and essays. The conversation ended abruptly there.

The laughter, grins, and “knowing” expressions of the interviewed students and those around them make it evident that students do not stigmatize cheating with AI. In school hallways, students can be heard openly declaring that they cheat with AI, and in no instance was any of it reported. However, whether due to a sense that the technology is soulless, its limited usefulness, or a desire to maintain their autonomy, some students choose to reject AI for now.