Dr. Bry: Last week, the grade level meetings were about academic integrity and AI usage, and it seemed like that sparked some interest. How did you feel as you came out of this meeting?

Finn: It was a bit too much, in my opinion. A lot of students’ AI use isn’t simply going to ChatGPT and plugging in the assignment prompt. I think a lot of it is brainstorming. I don’t use AI a ton, but when I do, I feel like I’m doing it to help my learning.

Ella: Saying ‘no AI’ is a naive and ineffective policy that will leave teachers and students on different pages. AI is used quite widely among the student body, but not necessarily unethically. I feel like we need to start integrating it as a tool, such as for reviewing or summarizing a topic. 

Evan: I’m wondering if the administration’s stance on AI usage aligns with that of teachers. I’ve had multiple teachers use AI to support their teaching or to help them grade, so I doubt this topic is agreed upon by the very people teaching and grading us.

Finn: Yeah, plus many people will just be using it to help them once they get past high school or college. 

Ella: After high school, there’s no reason why using it to help you write an email is a bad thing, so I just feel like it’s stupid right now to try to ban it.

Kyle: There [are] just so many nuances to AI. Why are we giving the prompt to students before an in-class write? There’s no way [to] use AI if you don’t know what the prompt is beforehand. There’s value in not using AI, and it can be a little stressful for students, but it’s something they have to deal with.

Penny: And we have seen teachers try to make that happen. I’ve heard there’s been a lot more on-paper, in-class writing in English classes…AI is such a hard thing to pin down, even for teachers. I know plenty of people who turn in highly ChatGPT-ed assignments and get great grades on them. I figure it’s better to modify classes so that AI is not necessary for assessments. 

Quinn: I will say, people who use AI a lot are still going to find ways to do it in in-class essays. And I know people who, because they can take their books in, will have ChatGPT prewrite the whole thing. Whether we like it or not, AI is the future, and I think the admin’s approach is overly simplistic.

Evan: It just creates a system where the kids who are the most skilled at integrating it into their own work are the ones who’ll get away with it. There’s this inequity where the more a person has used AI, the less they’re affected by anti-AI policies. 

Finn: Also, there’s kind of a disdain for other students who use AI a lot. And I think for a lot of students, it can feel like you’re doing all this work when you could easily cheat, but it might not feel right to them. 

Quinn: It’s discouraging. [And] not saying that a test was hard after taking it [is] insane and unrealistic.

Ella: You can get around that policy easily by going to the next hallway to talk to your friend.

Evan: There’s a struggle to really understand the student perspective here. And where do you draw the line? If a student comes out of a test and they’re super red and look like they just failed a test, does that mean they’re cheating, because it’s clear to other people that the test was difficult for them? What’s the difference between telling someone a test was hard and being miserable for the rest of the day because you just think you flunked a test? 

Ella: And if someone asked what’s wrong, you’d be like, ‘I can’t tell you.’

Evan: It seems unrealistic and impossible to actually enforce. 

Penny: There’s also an element of peer pressure to it. There are some people who believe that friends should help each other out on tests, and it affects the friendship dynamics. So there’s an element of ‘I owe it to my friend.’

Quinn: Cause nobody wants to be the person who says no.

Ella: Yeah, [refusing] gives the impression that you think you’re better than everyone else, and it’s such a weird dynamic.

Quinn: I remember in English class last year, there was a day where Dr. Enelow sat us down because everyone wrote their homework about Jacob’s ladder, but the reading’s translation used the word ‘ramp’. No one did the reading, so we all had the same clearly wrong answer.

Finn: It’s a symptom of AI being so easy to use, and the school’s culture about good grades, getting into a good college. AI feels like such an easy option.  

Quinn: Right. The environment our school fosters is one of succeeding, as opposed to learning.

Penny: Obviously, using AI to cheat, or talking about test answers, is morally wrong, and most people agree with that. But sometimes, it feels like that’s your only option. 

Finn: There are very few times that students are genuinely excited about learning, which makes it way harder to do the work. 

Nat: [In] history class last year, the teachers used AI to give feedback on our papers. It’s just a double standard if you’re telling kids not to use AI at all.

Evan: I talked to a teacher who uses ChatGPT to assist their interims…And I honestly have no problem with a teacher resorting to AI as a tool, as long as it’s still their voice and observations.

Ella: Assignments that can be completed in three seconds with AI just aren’t good quality or well thought out. With skills that are more complex and require thought, AI will never do as well.

Dr. Bry: The school published an AI policy that actually has degrees to which AI can be used for assignments, but it sounds like teachers are just saying ‘AI is bad, don’t use it’.

Nat: No one told us about this. 

Dr. Bry: Do the teachers even know the policy? 

Nat: You know the saying, ‘Strict parents create sneaky kids’? I feel like that’s kind of what [our AI policy is creating].
