Global AI and He-Man
I wanted to make an "Ides of March" joke, but instead I learned about the Roman calendar:
The Romans did not number each day of a month from the first to the last day. Instead, they counted back from three fixed points of the month: the Nones (the 5th or 7th, eight days before the Ides), the Ides (the 13th for most months, but the 15th in March, May, July, and October), and the Kalends (1st of the following month).
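For fun, that backward-counting scheme can be sketched in a few lines of Python. This is my own simplification (the month lengths are Julian, and the English phrasing stands in for the Latin forms), not a claim about full Roman practice:

```python
# Days per month (later Julian calendar, non-leap year), used to count
# down from a post-Ides date to the Kalends of the following month.
MONTH_DAYS = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
              7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

def roman_day(month, day):
    # The Ides fall on the 15th in March, May, July, and October;
    # otherwise on the 13th. The Nones fall eight days before the Ides.
    ides = 15 if month in (3, 5, 7, 10) else 13
    nones = ides - 8
    if day == 1:
        return "Kalends"
    if day == nones:
        return "Nones"
    if day == ides:
        return "Ides"
    # Romans counted inclusively, so the "+ 1" counts both endpoints.
    if day < nones:
        return f"{nones - day + 1} days before the Nones"
    if day < ides:
        return f"{ides - day + 1} days before the Ides"
    # Past the Ides, count toward the Kalends of the *next* month
    # ("+ 2" because the 1st of next month is included in the count).
    return f"{MONTH_DAYS[month] - day + 2} days before the Kalends (of the next month)"

roman_day(3, 15)  # the Ides of March
```

So March 16, inclusive-counted, sits seventeen days before the Kalends of April.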
So there you go. Maybe I should try to make a Kalends of March joke instead...
Anyway, let's get into it.
📰 AI IN THE NEWS
What happens when you break AI on purpose? How should you report it, and what are the consequences? This week I discuss a WIRED article that tackles those questions, as well as efforts to regulate AI.
🖊 ON MY WRITING DESK
What is a mundane technology? (It's one that you don't notice when you use it.) In Authenticating Intelligence I argue that it's necessary for technologies to become mundane before they become truly useful. This week I discuss how I continue to refine this argument in my introduction.
📖 AI SCHOLARSHIP
A recent publication investigated policies across 40 universities in six global regions. The global perspective is important—we can always learn from approaches taken by our neighbors.
🎨 ON MY PAINT DESK
I tackled painting a He-Man "mini" that was more of a biggie this week. I also share how the "miniature" painting hobby includes many different scales of figures.
🎥 YOUTUBE
The peer review process can be demotivating. This week's video is a throwback to when I gave students advice on how to navigate peer review successfully.
Until next week!
Stephen J. Aguilar
📰 AI IN THE NEWS
Regulating AI
This week I was quoted by CalMatters discussing a number of bills in the California legislature that aim to regulate AI technologies or related technologies (e.g., automated decision-making technologies). Regulation is always a tough subject when it comes to emerging technologies. It's reasonable for businesses to bristle at the idea of being stymied by regulations while new technologies are still in development. How can one mitigate harm when many harms are yet to be discovered? Still, a regulation-free environment is also dangerous, since it can incentivize problematic behavior or "looking the other way" when harms do occur.
I don't have a solution, but I suspect that a core component of one pairs transparency with foregrounding the core values that drive why an AI-enabled product is being developed in the first place. It's important to have these debates in good faith, though I suspect we're in a political environment where good-faith discussions and negotiations are hard to come by.
Breaking AI, and Reporting it
It's easy to become seduced by the relative simplicity of using chat-based generative AI tools. I've integrated ChatGPT into many of my own workflows. I use it all the time to help me come up with titles for my YouTube videos, for example.
It does a good enough job with tasks like this, and I'm usually happy with the results after one or two edits. Ease aside, though, AI models aren't perfect, and they can break. A recent WIRED article discussed a proposal for allowing individuals to report issues when they happen, especially if someone breaks AI models on purpose in order to discover bugs.
Frameworks like this are important because without them it is difficult to test the limits of AI and understand how to mitigate unintentional harm or prevent bad actors from using AI to do things they shouldn't. For now, though, we are in the "wild west" period of AI, and we should take care not to become too confident about the capabilities of current-generation AI tools.
🖊 ON MY WRITING DESK
Mundane EdTech
Blackboards started to be mass-produced around the mid-1800s, but they were used well before then. In fact, they have been around for a long, long time. So long, in fact, that it's hard for us to conceive of them as "educational technologies"—yet, they are! When I was a middle school teacher the technology had evolved into whiteboards, but the core affordance remained the same: the temporary imprinting of written expression on a surface that can be reused and referred to during instruction. Here are a couple of images of West Point students using blackboards over a hundred years ago, and today. Their core use-case has remained unchanged.
As I continue to write the introduction for Authenticating Intelligence I am reminded that a good educational technology, no matter how mundane or how novel, has a core affordance. Once we understand it, we can integrate it into our pedagogical practice. We can use it to help us teach, and to help students learn. When that happens, we stop seeing it as something novel. It shifts toward the mundane, toward the useful. Generative AI is no different. What remains to be seen, however, is what its core affordances truly are once we've moved past the hype phase.
📖 AI SCHOLARSHIP
Generative AI is Global
It's an obvious point, but generative AI applications have global implications. I'm currently running my own international project focused on better understanding generative AI uptake in K-12 settings (more on that this summer!), and I'm glad that others are paying attention to global trends as well. Work like that of Jin et al. is important because we can only benefit from understanding what other institutions are doing as they grapple with the same problems everyone else is grappling with.
The authors studied GenAI policies among institutions in Africa, Asia, Europe, Latin America, North America, and Oceania. Their method was straightforward: examine the publicly available documents from the top ten universities in each region. While this obviously limits their claims to what is available publicly, this sort of synthesis is nonetheless essential for beginning to understand global trends.
Their analysis indicated that "ethical use" and "academic integrity" were among the most popular themes within policy documents. I have found ethics and academic integrity to be important concerns in my work as well, though I am hopeful that soon we will move away from worrying about what AI shouldn't do so that we can move toward what it should do. Click on the image to give the article a read; it's open access!
🎨 ON MY PAINT DESK
He-Man and Swamp Scene
I pushed my swamp scene forward this week. I have a ways to go, but I've made a few decisions regarding color composition. The priestess's color is still TBD, but that'll be resolved next week. I also took a class on airbrushing over the weekend, and we used this massive He-Man "mini" to learn both basic and advanced airbrush painting techniques. It was a fun experience, and I walked away with a better understanding of how to use an airbrush.
"Miniatures"
I paint miniatures, but what exactly is a miniature? It turns out that "miniature" doesn't just mean tiny. Below is an image of all of the projects I've worked on recently. As you can see, the scales vary, from smaller 28mm-scale minis (the forest scene and the guy with the cleaver), to larger scales (Juggernaut and the bust), to huge scales (He-Man). Each scale has its challenges. Larger scales = more canvas, but that can be unforgiving if your overall composition is poor. Smaller scales = less canvas, but you have to really understand light and shadow because you are forced to paint them onto the scene yourself. Navigating these challenges is what makes painting fun.
🎥 YOUTUBE
Peer Review
When I submit an academic paper for publication it goes through the "peer review" process, which means experts who I don't know read my work and write feedback. If it passes a minimum bar, I'm invited to revise the work, resubmit it for another round, and around we go until an editor decides that the work merits publication. Sometimes, though, they outright reject the work. Despite that potential negative outcome, I see peer review as an essential way that we can control the quality of scientific work. Without it, anyone with a loud enough voice would shout over work that was done with rigor.
Still, peer review is a process that has its problems. In this video I give graduate students a way to navigate the process to maximize positive outcomes for themselves. (This is another throwback week due to my travel schedule!)