2025-11-13 Papers I've Been Thinking About Lately

Personalized Interactive Narratives via Sequential Recommendation of Plot Points by Hong Yu and Mark O. Riedl
If you are a fan of interactive storytelling, a.k.a. D&D, you should read this paper, especially if you are a DM/GM. I love that the academic terminology for Dungeon Master is Drama Manager: it respects the source material while generalizing the role nicely. We can still call them DMs after all. Georgia Tech-based folks, by the way.

The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor
If I were teaching a class, I'd probably find a way to shoehorn in this essay. My biggest gripe with AI is that "Artificial Intelligence" means something slightly different to everyone. I guess you could say that about a lot of sciences, but I think everyone knows what you're talking about when you say "Poultry Science," as a counter-example.

Many roads to Rome: cautious considerations on the computability of creativity
Can you truly make creative AI agents? I can't think of a more loaded question. Speaking of no one having a good definition of things, let's talk about creativity too! There are a handful of things in this essay I don't like, but I think that's important, and it raises a few valid points. It's important to read things that aren't 100% aligned with your approach, methods, or even presumptions. There is value in seeing how those outside of AI or computer science feel about the subject (even if they are unwittingly contributing to it).

Speaking of:
Exploring AI intervention points in high-school engineering education: a research through co-design approach
We all know that Gen AI is changing education at every level. This offers some insight and a framing of pressing issues that I find valuable. I am sure there are several directions to go from here.

A Framework for Sequential Planning in Multi-Agent Settings
Here's food for thought. Can every decision be thought of as an Interactive Partially Observable Markov Decision Process? Does the buck stop there regarding human-level decision-making under uncertainty? I'm not sure, but I am going to keep needling away at this thought until proven otherwise.
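To make the "partially observable" part of that concrete, here's a minimal Bayesian belief update for a two-state POMDP, the machinery that I-POMDPs extend with nested models of other agents. The transition and observation numbers are toy values of my own, just for illustration.

```python
# Minimal POMDP belief update: b'(s') ∝ O(o|s') * sum_s T(s'|s,a) * b(s).
# All numbers below are made-up toy values, not from the paper.

def belief_update(belief, action, obs, T, O):
    """Return the normalized posterior belief after taking `action`
    and seeing `obs`. T[s][a][s'] is the transition model,
    O[s'][o] the observation model."""
    n = len(belief)
    unnormalized = [
        O[s2][obs] * sum(T[s][action][s2] * belief[s] for s in range(n))
        for s2 in range(n)
    ]
    z = sum(unnormalized)
    return [x / z for x in unnormalized]

# Two hidden states, one action, two possible observations.
T = [[[0.9, 0.1]],   # from state 0, action 0
     [[0.2, 0.8]]]   # from state 1, action 0
O = [[0.8, 0.2],     # observation likelihoods in state 0
     [0.3, 0.7]]     # observation likelihoods in state 1
b = belief_update([0.5, 0.5], action=0, obs=0, T=T, O=O)
print(b)  # posterior shifts toward state 0 after observing o=0
```

An I-POMDP layers on top of this: instead of a belief over physical states alone, the agent maintains a belief over states *and* models of the other agents, who are running the same kind of update themselves.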

A survey of inverse reinforcement learning: Challenges, methods and progress
So if anything can be modeled as an I-POMDP, what if we can only define the observations and actions? Inverse RL is all about inferring the reward function from interactions with the environment. Still chewing on this one, but thought I'd share nonetheless.
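The core IRL idea fits in a few lines: hold the dynamics and a demonstrated policy fixed, and ask which rewards would make that policy optimal. Here's a brute-force sketch on a tiny deterministic MDP of my own invention (not a method from the survey), which surfaces IRL's classic ill-posedness: the answer is a *set* of consistent rewards, not a unique one.

```python
# Toy inverse RL sketch: enumerate candidate reward vectors over a
# 3-state chain MDP and keep those under which the demonstrated
# policy is greedy-optimal. The MDP and policy are made up here.
import itertools

N_STATES, GAMMA = 3, 0.9
# Deterministic transitions: action 0 = stay, action 1 = move right.
T = {0: {0: 0, 1: 1}, 1: {0: 1, 1: 2}, 2: {0: 2, 1: 2}}
demo_policy = {0: 1, 1: 1, 2: 0}  # "expert" heads for state 2, then stays

def value_iteration(reward, iters=200):
    V = [0.0] * N_STATES
    for _ in range(iters):
        V = [max(reward[T[s][a]] + GAMMA * V[T[s][a]] for a in (0, 1))
             for s in range(N_STATES)]
    return V

def greedy_policy(reward):
    V = value_iteration(reward)
    return {s: max((0, 1), key=lambda a: reward[T[s][a]] + GAMMA * V[T[s][a]])
            for s in range(N_STATES)}

# Search rewards in {0,1}^3 for ones consistent with the demonstration.
consistent = [r for r in itertools.product((0, 1), repeat=N_STATES)
              if greedy_policy(r) == demo_policy]
print(consistent)  # rewards that explain the expert's behavior
```

Real IRL algorithms replace this enumeration with optimization (max-margin, max-entropy, etc.), but the problem statement is the same: observations and actions in, reward function out.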