We’re almost halfway through March, and a few notable things have happened that I wanted to send updates about.
LLMs en español
First, the explainer on Large Language Models (LLMs) I co-authored with Timothy Lee of Understanding AI has now been translated into Spanish by Rubén Alvarez Escobar and Sylvia Elena Rodríguez!
The translation is available here. I’m really grateful for all the work that Rubén and Sylvia put into this (along with input from Elen Irazabal and Enrique Onieva).
This was also my first time seeing something I’d written translated into another language, and it was fascinating to see the kinds of decisions that go into translation—particularly for technical material. Questions arise about whether and how to translate terms like “transformer” or “output vector”, as well as how to modify the examples originally written in English. Importantly, these decisions can’t be made just by looking up the Spanish translation: they require careful consideration of how that direct translation will be interpreted, whether there’s already a more suitable phrase in Spanish, or whether the English term ought to be used (e.g., if it is already the term-of-art in Spanish).
The Gradient podcast
I recently had the pleasure of appearing on the Gradient Podcast (hosted by the excellent Daniel Bashir) with my friend and colleague Cameron Jones. We talked about philosophical and experimental issues relating to LLMs, including grounding, Theory of Mind, construct validity, and more. I really enjoyed the conversation—Daniel’s a great interviewer and asked a number of thought-provoking questions.
Updates on the readability project
As regular readers will know, I’m working on a project that asks whether LLMs can help measure and modify the readability of texts. This project was voted on by paying subscribers, and involves original empirical work.
So far, I’ve published a couple of pieces relating to this project:
In early February, I wrote a piece on measuring readability with LLMs.
In late February, I wrote another piece on using LLMs to modify readability.
Right now, I’m working on a follow-up to that second post. Specifically, I’m designing a human-subjects experiment in which participants will rate the readability of those LLM-modified texts. That’ll serve as an important test of whether the LLM-modified texts are, in fact, easier (or harder) to read than the originals. I hope to have the experiment designed and piloted by the end of the month—with some initial results by early April.
Thanks again for all of your support!