The Counterfactual


Sitemap - 2024 - The Counterfactual

The Counterfactual’s 2024 in review

Deep Dive: Molmo

Towards an unambiguous notation for LLMs

Vision-language models (VLMs), explained (pt. 1)

LLM-ology: the challenges ahead

LLM-ology and the external validity problem

What if we ran ChatGPT "by hand"?

Language models vs. "LLM-equipped software tools"

Results from poll #5

The Counterfactual's poll #5

So you want to be an LLM-ologist?

What we talk about when we talk about LLMs

What "language" is a language model a model of?

Ingredients, flavor networks, and the "essence" of cuisine

"Mechanistic interpretability" for LLMs, explained

How to evaluate statistical claims

Human culture in the age of machines

Results from poll #4

The Counterfactual's poll #4

Tokenization in large language models, explained

Reflections: my class about LLMs and Cognitive Science

Results from poll #3 (and updates)

The Counterfactual's poll #3

Modifying readability, pt. 2: the human study

LLMs and the "not" problem

Newsletter updates

"Cheap tricks" in human language comprehension

Modifying readability with large language models (pt. 1)

GPT-4 is "WEIRD"—what should we do about it?

Results from poll #2

The Counterfactual's poll #2

Measuring the "readability" of texts with Large Language Models

Perceptrons, XOR, and the first "AI winter"

Results from poll #1

Learning, forgetting, and the NYT lawsuit

The Counterfactual's science poll #1

© 2025 Sean Trott