Metacognition + AI


📰 AI IN THE NEWS

Ezra Klein discusses Artificial General Intelligence. I argue that we need to stop being hyperbolic. All "AI" will eventually devolve into a mundane tool.

🖊 ON MY WRITING DESK

A bit of history. I discuss how I transitioned from being a middle school teacher, to a philosopher, to working for EdTech—and why it matters.

📖 AI SCHOLARSHIP

Thinking about thinking? Why, don't mind if I do. This week I discuss "metacognition" and its relationship to AI.

🎨 ON MY PAINT DESK

Two projects moving forward. Still working on skin, and adding color to my forest scene. 

🎥 YOUTUBE

"There is no conflict," is a lie Darth Vader told Luke. It's also a lie academics tell themselves. This week I made a video on how to deal with interpersonal conflict in the academy. 

Until next week!

Stephen J. Aguilar


📰 AI IN THE NEWS

Artificial General Intelligence (AGI)

"I think we are on the cusp of an era in human history that is unlike any of the eras we have experienced before."

Maybe it's the analytic philosopher in me, but this opening line in Ezra Klein's NYT Opinion piece annoyed the hell out of me. (It's a transcript of a podcast episode, so it's probably a better listen than read.)

Anyway, back to being persnickety. 

Hyperbolic statements like this are generally useless when one is trying to understand the impact of a disruptive technology. On the face of it, such a claim reads as profound, maybe even anxiety-inducing. Yet history is rife with examples of technologies we integrated into our daily lives in ways that make them utterly mundane. That smartphone you're (probably) holding? Unimaginable during the 1980s.

Ok—not totally unimaginable: 

But that's precisely my point. The technologies we create are awe-inspiring for a little while, then we integrate them into our everyday lives. Once we've acclimated to them, they become completely and utterly mundane. This idea—the process of technological disruption and its eventual integration—is the core thesis of Authenticating Intelligence.

Stated differently, AI-driven technologies in education are:

This doesn't mean that AI's full integration in our educational systems won't be consequential—it will be. The discussion, then, is less about whether or not consequences will happen, and more about whether or not we will mitigate the damage that "market driven" integration will cause. This is the larger (and more nuanced) point of Ezra's podcast episode, but it's somewhat buried.  


🖊 ON MY WRITING DESK

Teaching and Startups

As I work through my introduction, I've started to reflect on the jobs I had before I became a professor. Two stand out to me the most: my time as a middle school teacher in the SF Bay Area, and my time working for an EdTech startup in Los Angeles.

Being a middle school teacher was one of the hardest things I've ever done. I woke up at or before 6 to begin preparing for the day, then had a full day of teaching with only a 30-minute lunch break. My weekdays concluded with an evening of grading and prep. During my first year I also took a credentialing class two days a week that ran from 5:30pm to 9:00pm, with a 45-minute dinner break. It was a nightmarish schedule that makes what I do now seem easy.

During the instructional day I had to attend to the educational needs of about 25-29 students. There is no "break" time as a teacher. If you falter, the day can turn on you. I used technology to help make things a little easier, but this was before AI, so there was still a lot of manual labor involved. Despite the limitations of the day, I saw the potential for educational technologies to help teachers make their days easier and better for their students. 

The EdTech startup I worked for was in the Los Angeles area. The work itself was fine, if tedious, but a key moment I'm writing about in the introduction of Authenticating Intelligence involves one of my final tasks for the company, and why I felt it walked the edge of ethical behavior. My experience at that company showed me that "innovation" can be conflated with wishful thinking and with overpromising a product's value relative to what it's actually capable of doing.


📖 AI SCHOLARSHIP

Thinking about thinking

Work out of Microsoft, University College London, and the University of Edinburgh suggests that metacognition is important when folks use generative AI (duh).

"Metacognition" is just a fancy term for "thinking about thinking." It's an essential component of learning anything. Unless you take a moment to reflect on whether or not you actually know something, it's tricky to move forward and learn something new. Metacognition enables us to pause and strategize when we are faced with new information. If you've ever paused and asked yourself, "Why don't I get it?" then you've engaged in metacognition.

Generative AI has a lot of potential implications for metacognition. To use generative AI correctly, it's important to encourage metacognition. After all, despite being a fantastic technology, generative AI has limitations...

I spent an embarrassing amount of time trying to get ChatGPT to make a wine glass filled to the brim. It failed. Miserably. Each and every time. That didn't stop it from being confidently wrong, however. Metacognition is what enabled me to evaluate the "wrongness" of its efforts.

Granted, the wine glass is a silly example, but generative AI output is not always so blatantly wrong. In fact, as generative AI applications get better and better, their errors will become more nuanced—they will require more human interpretation before their limitations are understood.


🎨 ON MY PAINT DESK

Red + Setting the Scene

More work on Red (left), as well as my diorama (right). With Red, I've started to be more aggressive in blocking in my shadow tones, but there is still a lot more work to do. This image is also of better quality than the rest—I used my photo booth and more advanced camera settings. This helps with clarity, but it also means I can't hide any of my sins, e.g., brushstrokes in the wrong place.

I'm also still working on the diorama piece. I've blocked in the color of the trees and have started to light the scene a little more. I've done this by spraying some grey paint in places that will appear brighter. If you look closely, I also added some mushrooms!


🎥 YOUTUBE

Conflict? Deal with it

Navigating interpersonal conflict is essential in any professional job. This week I riff on what it means to do that in the academy, when power dynamics are at play and when sometimes (often?) one has to engage with people who are..."conflict prone." I argue that there's a way to be friendly, without necessarily making a friend—or undermining one's position. 

