Discussion about this post

Christopher Riesbeck:

+1 for abolishing entries in a mental lexicon, Sean. When we were building language understanding for episodic knowledge-based reasoning systems at Yale in the late 1980s, it became clear that language comprehension needed all knowledge, not just some small bits crammed into a lexicon. For you, Elman was the inspiration. For me, it was Quillian's Teachable Language Comprehender (https://dl.acm.org/doi/10.1145/363196.363214). TLC understood phrases like "the lawyer's client" or "the doctor's patient" by finding the connecting paths in a semantic network. TLC was a model with no lexicon! Our application of that idea to our episodic knowledge networks was Direct Memory Access Parsing, a model of language understanding as lexically-cued memory recognition. Will Fitzgerald and I wrote a non-technical introduction to the idea in a response to Gernsbacher's Language Comprehension as Structure Building (https://www.cogsci.ecs.soton.ac.uk/cgi/psyc/newpsy?5.38). More technical points are in https://users.cs.northwestern.edu/~livingston/papers/others/From_CA_to_DMAP.pdf.
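A minimal sketch of that connecting-path idea, in Python. The toy network, relation names, and search strategy below are illustrative assumptions, not TLC's actual representation or algorithm:

```python
# TLC-style comprehension sketch: interpret "the lawyer's client" by searching
# a semantic network for a path connecting the two concepts. The network and
# relation names here are made up for illustration.
from collections import deque

# Each concept maps to a list of (relation, neighbor) links.
NETWORK = {
    "lawyer":       [("isa", "professional"), ("participates-in", "legal-case")],
    "client":       [("isa", "person"), ("participates-in", "legal-case")],
    "doctor":       [("isa", "professional"), ("participates-in", "treatment")],
    "patient":      [("isa", "person"), ("participates-in", "treatment")],
    "professional": [("isa", "person")],
    "legal-case":   [],
    "treatment":    [],
    "person":       [],
}

def connecting_path(a, b):
    """Breadth-first search for the shortest chain of links from a to b,
    treating links as bidirectional so paths can meet at a shared node."""
    adj = {}
    for node, links in NETWORK.items():
        for rel, other in links:
            adj.setdefault(node, []).append((rel, other))
            adj.setdefault(other, []).append((rel, node))
    frontier = deque([(a, [a])])
    seen = {a}
    while frontier:
        node, path = frontier.popleft()
        if node == b:
            return path
        for rel, nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [f"--{rel}--", nxt]))
    return None

print(connecting_path("lawyer", "client"))
# ['lawyer', '--participates-in--', 'legal-case', '--participates-in--', 'client']
```

Breadth-first search stands in here for Quillian's spreading activation; the point is only that the interpretation of the phrase falls out of the knowledge base, with no lexicon entry doing the work.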

Quillian 1969, myself 1986, Elman 2009 -- we're due for another attempt to dump the mental lexicon. It all depends -- as it should -- on the quality of the knowledge base.

JM:

Perhaps it is interesting to note that "cap" in "market cap" could be considered a close mapping if you took it to mean something like "the head / cap on top of the market value". That would be a historically incorrect understanding of "cap" in this context (the term is short for "capitalization" and thus is connected to "pen cap" only via the Latin root), but the end result is entirely coherent, and the 'vectors' involved should work fine for most sentences using either term. Our mental models can connect the two words sensibly even if they get the historical etymology (and thus the common dictionary definition) wrong.
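To make the 'vectors' point concrete, a toy sketch with made-up numbers (no real embedding model is involved): if the two senses of "cap" land near each other in vector space, a cosine-similarity check barely distinguishes them, so most downstream uses behave the same way.

```python
# Toy illustration: two hypothetical sense vectors for "cap" that happen to be
# close in embedding space. The numbers are invented for this example.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

cap_headwear = [0.81, 0.10, 0.55]  # hypothetical "cap on top of" sense
cap_market   = [0.78, 0.15, 0.60]  # hypothetical "market capitalization" sense

print(round(cosine(cap_headwear, cap_market), 3))  # ~0.997: nearly interchangeable
```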
