Discussion about this post

Mike Kentz:

Hey Sean, I've been talking about something similar with teachers in my CRAFT Program, but in a different way and (I think) for different reasons.

I talk about "grading the chats," and I show teachers that the type of bot you choose dramatically changes the nature of the chat - the interaction itself - which in turn changes your grading rubric and the way you approach the evaluation.

I also call them Vanilla LLMs, but when it comes to "Custom GPTs" - LLMs that are equipped with a software API, as you describe - I lean towards calling them "personality bots."

Consider -- when I add a capability to an entity that already acts "like" a human being, couldn't you say I am giving it a character trait? The LLM that is attached to Python is "Coder Bot" (or whatever). In human terms, "coder" is a label that describes a human being's skillset, hobbies, or identity. A "Contrarian Bot" - which is not necessarily attached to another piece of software but has been adjusted in some way - is also a human-mimicking bot that has been "given" a personality trait. "Contrarian" is an adjective we use to describe our uncle at Thanksgiving dinner who is just dying to have an argument. It's who he is (identity).

What say you? This links to a deeper discussion about whether or not to anthropomorphize AI, a debate I have been having with Rob Nelson from the AI Log for some time now. To me, that's the foundational question - everything else flows from it. Anyway, I'd love to hear your thoughts; I absolutely agree with your overall premise but come at it from a different perspective.
