
Hey Sean, I've been talking about something similar with teachers in my CRAFT Program, but in a different way and (I think) for different reasons.

I talk about "grading the chats," and I show teachers that the type of bot you choose dramatically changes the nature of the chat - the interaction itself - which in turn changes your grading rubric and the way you approach the evaluation.

I also call them Vanilla LLMs, but when it comes to "Custom GPTs," or LLMs that are equipped with a software API, as you describe, I lean towards calling them "personality bots."

Consider: when I add a capability to an entity that already acts "like" a human being, couldn't you say I am giving it a character trait? The LLM that is attached to Python is "Coder Bot" (or whatever). In human terms, "coder" is a label that describes a human being's skillset, hobbies, or identity. A "Contrarian Bot" - which is not necessarily attached to another piece of software, but has been adjusted in some way - is also a human-mimicking bot that has been "given" a personality trait. "Contrarian" is an adjective we use to describe our uncle at Thanksgiving dinner who is just dying to have an argument. It's who he is (identity).
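To make the distinction concrete, here is a minimal sketch (assuming the OpenAI Python SDK; the model name and the `run_python` tool are illustrative, not any specific product's configuration). The "Contrarian Bot" is the base model plus a persona-adjusting system prompt; the "Coder Bot" is the same model equipped with a callable capability:

```python
from openai import OpenAI

client = OpenAI()

# "Contrarian Bot": no extra software, just a system prompt that adjusts
# the persona -- a personality trait expressed purely through instructions.
contrarian = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a contrarian. Politely challenge every claim the user makes."},
        {"role": "user", "content": "Homework should be graded on effort."},
    ],
)

# "Coder Bot": the same base model, but equipped with a capability -- here,
# a hypothetical Python-execution tool the model may choose to call.
coder = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Compute the mean of [82, 91, 77]."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "run_python",  # hypothetical tool name
            "description": "Execute a Python snippet and return stdout.",
            "parameters": {
                "type": "object",
                "properties": {"code": {"type": "string"}},
                "required": ["code"],
            },
        },
    }],
)

print(contrarian.choices[0].message.content)
# If the model opted to use its capability, the "trait" shows up as a tool call
# (otherwise tool_calls is None).
print(coder.choices[0].message.tool_calls)
```

Whether the model actually invokes `run_python` depends on the prompt; the point is that the "trait," in either case, lives entirely in this configuration.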

What say you? This links to a deeper discussion about whether or not to anthropomorphize AI, a debate I have been having with Rob Nelson from the AI Log for some time now. To me, that's the foundational question - everything else flows out from it. Anyway, I would love to hear your thoughts; I absolutely agree with your overall premise but come at it from a different perspective.

Sean Trott (author):

Yeah, I agree that it's natural (and potentially useful) to talk about different model/prompt configurations as having different "personality traits". While we should generally be cautious about too much anthropomorphization of these systems, there is a sense in which an LLM or an "LLM-equipped software system" is performing a social *role* that can usefully be summarized as a particular character or personality. I wrote a little bit about that in a past post (https://seantrott.substack.com/p/what-we-talk-about-when-we-talk-about), drawing on Murray Shanahan's work on "role play with large language models": https://www.nature.com/articles/s41586-023-06647-8

I like Shanahan's paper because I think it's quite careful to articulate the benefits and potential drawbacks of using language like "character traits" to describe the behavior of these systems.


Thanks for sharing this! I plan to dive in this weekend. Appreciate it!
