This course sounds amazing! Is it available online?
While the course was running, the podcasted lectures were available online (on podcast.ucsd.edu) but I believe they've been taken down now unfortunately.
However, I'd be happy to share the lecture slides with you if you're interested! Some of them (e.g., the transformers lecture) are embedded in this post, so you can take a look and see whether you're interested in the rest.
Thank you for this post. I appreciate your work to clarify what we are dealing with when we enter a command and get a response. What happens in between is critical. I’m wondering how you might bring in concepts from distributed cognition re: Perkins into your course. Humans themselves are specialized beings.
I like the idea of incorporating distributed cognition, which I do think is relevant—both for conceptualizing LLMs themselves and thinking about LLMs embedded in the larger "cognitive systems" they operate in (which also include other humans).
I do recommend this paper on how machines/AI could influence human culture, if you haven't read it already: https://www.nature.com/articles/s41562-023-01742-2
Sean - A very nice post. And a neat course too. I can appreciate this because I have been teaching in this general area for many, many years. I think that LLMs are just a tool that fashion has made into more than a tool. Well, look at this for a more damning perspective: https://softwarecrisis.dev/letters/llmentalist/
I find it hard to consider cognition to operate at the level of character strings.
The late great Aravind Joshi once told me to be lenient toward the people whose research concentrated on investigating how far a particular tool will get them — be it blackboard architectures or a particular parsing algorithm or n-gram technology or LLMs or what have you. I think this is good as engineering, and indeed will be more useful to most students than dealing with real science, which starts with identifying a problem and then looking for models and systems that can solve it, or at least satisfy some prerequisites toward a solution. I am sure that you understand what I mean. Talking about LLMs in the same breath as consciousness is metaphorical thinking. Indeed, lots of scientific contributions — and not only in our field — deal with metaphors: a fascinating topic in itself (and I don't mean in the sense of Lakoff). Anyway, thanks for lots of enjoyable content!
Thanks!
The article is interesting and makes some good analogies. I'm more hesitant than the author to draw such strong conclusions about the *lack* of reasoning ability in LLMs, but it's a fair point that many people overestimate their capabilities.