Discussion about this post

Neural Foundry

Brilliant framing of how GOFAI's habitability issue never got solved, just got eclipsed by GUIs. I've been debugging LLM-based tools lately, and it's wild how much the overestimation problem shows up when non-technical users hit edge cases. What makes it trickier than older NLIs is the illusion of competence: because the failures aren't consistent, calibration is much harder.

Benjamin Riley

What a phenomenal essay. I'm going to need to re-read this a few times to absorb the various puzzles you've presented!

4 more comments...
