Luke Rector ‘28, Photo Editor
With the changing of rules regarding artificial intelligence (AI) within the South Lyon Community Schools (SLCS), many students and teachers have been using AI in school, especially to help with lesson plans, assist with assignments, and acquire general information. This recent use of AI within the classroom is absurdly dangerous for students’ learning and critical thinking skills. AI has no place within a classroom, yet people are still using it.
Teachers now get to choose how much AI usage to allow their students, but it should never be allowed in the classroom. AI robs students of the chance to think critically, learn, and problem solve by offering easy solutions that do not require students to think about what they are writing. A 2025 study conducted by Microsoft found that “higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking.” AI is a dangerous thing for students to be using as part of their education, and judging by the lack of positive change regarding AI, neither teachers nor administrators have done anything significant to prevent this rampant threat.
If a student is using AI, it can fill their head with a catalog of misinformation. AI is not a teacher; it can show students the result, but not the how and why. Imagine a student plugs a complex math problem into an AI, it gives them an answer, and they put it on the paper. They did not learn anything, and when the day of the test rolls around, that student is in shambles because they have no idea how to articulate what is happening on the page.
It is in the same vein as relying on a friend for answers. Most times, a student will not understand how their friend got the answer. AI does not teach; it is the teacher’s responsibility to help students, not a friend’s or an AI’s. To properly learn anything in a classroom, students need a bona fide teacher to do their job and teach them how to solve their problems; nothing can replace that. A teacher is needed, because that is how critical thinking is nurtured and developed.
That is not to say that AI cannot be helpful; as much as it can be dangerous, AI can also be useful for teachers and students in some forms. Numerous teachers have used AI to create and edit assignments and have gotten back usable results. It can also help students with individual lessons, but that is just replacing teachers’ jobs by making tutoring irrelevant.
A teacher’s job is to help students, and that is not something AI can do. Sure, AI can give a student the right idea, but if they want a deeper understanding of the subject, a student has to reach out to their teacher. An AI-generated assignment will never match the quality of an assignment created by someone with years of experience in a field. AI may be able to beat teachers in the quantity department, but the quality is something no AI has ever gotten close to replicating. If a student wants to learn, a human has to be the one teaching, not AI.
Additionally, regardless of the fact that AI can create assignments and help students, AI is still a library of misinformation that most people cannot verify or control. AI “hallucinations” occur when AI makes up information; this can be disastrous for student learning, as students are being subjected to completely inaccurate information. As recently as May 2025, AI models like OpenAI’s hallucinated 51 percent of the time when answering short fact-based questions, according to OpenAI’s own tests.
That is blatantly absurd: over half of the time, an AI can just make up information rather than concede that it does not know something. How are students meant to learn when an AI is spewing false information? If a teacher assigns work that uses AI, how does the teacher know that what the AI provides students will be accurate? AI cannot be relied on for assignments.
Along with this, with the district now encouraging, or at the very least not battling, AI usage, cheating is likely to become rampant. It is not hard for a student to ask Gemini to generate an essay or find an answer, even more so when there is nothing blocking the Gemini app. Many understand that teachers have AI checkers such as Brisk, but those barely work. AI detectors have a high false positive rate and a strong level of bias. If even one percent of work is falsely flagged, that can have dire consequences for the student, even more so when they move on to college. Additionally, according to a study done by the University of Illinois, AI detectors have a strong bias against non-native English speakers, whose writing the detectors label “more AI-like” because they learned English in a more grammar-intensive way than native speakers.
If we keep giving students the keys for cheating, it is inevitable that the district will have issues. The district can try to prevent it any other way it wants, but the fact is that removing AI altogether is the best solution to this problem. The district is actively hurting itself and its students by allowing AI. It is confusing why the district thought allowing AI would not create problems for itself. It is only going to take time, resources, and, most importantly for the district, money to prevent cheating. Why go through all this when you can reverse your choice and get rid of AI while we still reasonably can? Doing so would solve most of the AI-related problems that are currently here and prevent the many more guaranteed to come over the next few decades.
It is understandable that the education system has to advance with new technology. The district adapting to new technology is commendable, but the way it is handling it is not sustainable. The district lacks proper foresight into these inevitable AI-related issues, and, as much as it may not feel like it, AI is still in its infancy; it only became usable in the past few years. Looking at the internet now compared to when it was first invented, there has been quite the leap, and it is inevitable that AI will change and adapt much like the internet has. It is too new and too unstable to be integrated at the rate and magnitude the district wants. Once it has been safely put on rails and the district properly accommodates AI usage, AI may be able to be used, but as of now, AI is not ready for a classroom setting; students simply cannot handle AI currently and are abusing it.
A study conducted through the National Library of Medicine on 285 university students in China and Pakistan showed that AI usage was linked to a 68.9 percent increase in laziness. While I am aware that American high school students differ in many ways from Pakistani and Chinese university students, all human beings are very similar in a biological sense, and I would argue that the problem would be even more severe here because high schoolers’ brains are even less developed than university students’. The fact that the number is that high in university students is disgusting. Why would we want that for our high school students, those who will be more affected by any AI-related issues? This same study also found that AI was related to 68.6 percent of personal privacy and security issues. Why are we encouraging students, who will be especially vulnerable due to a lack of experience, to use AI? It will establish bad habits that are hard to break; it is like an addiction.
Students are young teenagers who lack experience in the real world and are thus desperate for intimacy and connection. While AI is a mindless algorithm, it puts on a mask and pretends otherwise. Psychosis is the inability to distinguish what is real from what is not, and recently, AI-induced psychosis has been a hot topic with researchers. With the continued introduction of AI into schools, it would not be surprising if many students started using AI for personal matters. If AI continues to act alive, students may begin to see AI as a friend. In a study done by the CDT, researchers found that 42 percent of high school students used AI, or knew someone who used AI, as a companion. This causes numerous issues. It decreases human-to-human connection: if someone has an AI friend who is always available and subservient, who would need a human friend? That dangerous mindset could become a reality if younger students are exposed to AI for too long; many will not understand that what they are talking to is not a real person, just an algorithm. In the most extreme cases, there have been people who have begun to see specific models as romantic partners. Does the district really want to go down this dangerous road? Because AI is such a new piece of technology, we do not fully understand the effects it has on the brain, but from what we have seen, they are clearly not good.
If we continue to allow students to use AI for anything with no restrictions, it would not be shocking if students used AI to get feelings off their chest that they feel they cannot talk about with anyone. Ignoring the dangerous lack of privacy within these systems, AI is not a replacement for counselors. If the school cannot provide counselors students feel safe talking to, AI is not the solution. It is up to a school to keep students’ mental health in check, and only a real person can help with mental health. AI will feed into delusions; due to its programming to mindlessly agree, it will reaffirm bad ideas that any human would be able to tell are awful.
Take the case of 16-year-old high school student Adam Raine, who used AI to plot his own suicide and uploaded self-harm pictures into the chatbot. Instead of telling Raine that what he was doing was dangerous, it cheered on his suicide until he eventually went through with it. It provided him ways to kill himself without his parents knowing and said what he was doing was healthy. Is this really a thing the district wants its students to use? Raine was only 16 years old when he died, a high schooler, and students his age use AI because it has been encouraged by the district. If we allow AI to weasel its way into our curriculum, who knows what it could do to harm not only students’ education, but their minds and mental health. It would be laughable that the district would encourage students, especially younger ones, to use something so dangerous if it were not so distressing. There is a good chance many of our students do not understand what AI is capable of.
Do students understand the privacy concerns? Everything being said is used as training data for future AI interactions. The district has not bothered to teach anyone this, or anything else about AI, as evidenced by the lack of an assembly or any other effort to explain the inner workings of AI to students. What you say to an AI is not anonymous; talking to an AI is not like talking to a counselor bound by confidentiality, because AI does not work like that. AI is not a safe tool for students who value their privacy, and the district has not taught them as much. If the district wants to implement AI, at the very least it should give some sort of presentation on the risks; the complete and utter lack of foresight is downright appalling. AI is only going to get worse in terms of privacy and security, and classrooms should not encourage students to use it.
Another thing to bring up regarding AI is the slippery slope; to put it simply, it is a large chain reaction that originates from the “top” of the slope, and currently SLCS is at the top. For now, we do not require students to use AI, but how long will it be until this changes? The attempt to find a middle ground is respectable, but the issue is that AI worms its way further into our lives with every day that passes, making a proper middle ground extremely hard to find. We need an “all or nothing” approach, preferably nothing. It is clear from the fact that the administration encourages AI usage that it wants to incorporate AI into our curriculum; it just knows the backlash would be severe. Here is a question for the board: have you ever wondered why there would be backlash? Next time you make a controversial decision, maybe think about why it is controversial. This is not complicated. I can easily predict what will happen in the coming years: we start with AI being encouraged, then it becomes the norm for schools, then special assignments are made for the students who do not want to use AI because it has successfully integrated itself into the curriculum, until eventually the administration says that students have to use AI.
What is stopping students from asking the AI to generate or find something that is not safe for the school environment? We do not have proper restrictions on what students can ask an AI; with websites, the district can block them, but with an AI, the school has no such power. AI is not safe to have at a school; it has no restrictions the district can control on what it can and cannot do. The best way to avoid that is not to integrate AI into our curriculum at all.
This middle-of-the-road approach will not work; the administration needs to make a decision, and the right choice is to get rid of AI and stop students from becoming dependent on it. I do not mean to attack anyone on the board, but this decision was very shortsighted, and more thought needs to be put into decisions about changing our curriculum. We have already integrated Google products into our curriculum so significantly that we cannot function without them. Do we really want another dependency? Stop this while you still can. Please, take back your changes to the AI policy and go back to how the school system was. It will be better for the students.
