Cheating Cheaters and AI
A shorter newsletter this week, as a number of deadlines are looming!
📰 AI IN THE NEWS
Cheating with AI is one of the most common concerns I hear when I talk to folks. This week I respond to the Wall Street Journal's coverage of the same issue.
🖊 ON MY WRITING DESK
Chapter 1 is coming to a close (for now). I reflect on what introductions mean to me as a writer.
🎨 ON MY PAINT DESK
Work on my swamp scene continues! Who is the priestess? I think she's up to no good.
Until next week!
Stephen J. Aguilar
📰 AI IN THE NEWS
Cheating Cheaters
One of the questions I get most often about AI boils down to: "Ok, but what about the cheating-cheaters who will use AI to cheat?" My response is always the same: cheaters gonna cheat.
To put it less flippantly: changing the tools available doesn't necessarily change much else. Tools can be misused, abused, or used to do good. We shouldn't over-attend to the negative possibilities that new tools present, as doing so always puts us on the defensive. Education is at its best when it can be proactive instead of reactive.
Still, that doesn't mean we shouldn't worry about AI's ability to make cheating easier or to redefine what counts as cheating versus authentic work. AI's rapid integration into education has not been organized or directed, and that creates a number of negative possibilities. This week the WSJ covered this topic.
This quote stood out to me: "Around 400 million people use ChatGPT every week, OpenAI said. Students are the most common users, according to the company..."
I've designed or studied emerging educational technologies for my entire career, and this fact alone should give us pause. Currently, school districts and institutions have responded to AI with a patchwork of policies that range from "don't use it!" to "use it! It's the future!"
Yet, given the amorphous nature of generative AI applications (e.g., they can do a lot of things, some well, some not so much), this patchwork approach has led to gaps that enable AI misuse. Gaps create opportunities, and many students are taking advantage of the lack of clarity.
That said, we shouldn't make policy that focuses only on things to avoid. While it's clear that some student use boils down to cheating, other uses live in the grey area of using AI as an "aid." This is the grey area my current scholarship explores: potential use cases that may be fine on the surface but may lead to problems downstream. A key affordance of generative AI is speed and scale, but do we always want our students to work quickly? I don't think we do. Slow and steady learns the thing.
🖊 ON MY WRITING DESK
Chapter 1: "The argument"
I am still working on Chapter 1. I see it as a long-ish introduction to my core argument. In general, I really enjoy writing introductions because, more often than not, they are the sections that actually get read. Introductions let the reader know what they're getting into. When I write them I am part salesman and part scholar. My goal is to entice readers to keep going. In my view, a failed introduction often leads to unread work.
My goal is to be done with all of Chapter 1 by the end of the month. Will I get there? Probably—though I know I'll have to go back and edit it later. Introductions always shift around a bit based on what ends up being written after them.
🎨 ON MY PAINT DESK
Swamp Scene Progress
Who is this priestess? I'm not sure myself, but she's up to no good. This week my focus is 100% on finishing this scene. I'm relatively happy with the robes, but I need to dial back the black and finish other parts. The pygmies also need some attention, but they're not the focal point, so they won't be rendered as carefully as the priestess.