Podcast Lesson
"Hack plasticity by updating context, not weights Knuth's experiment with LLMs showed a practical workaround for one of LLMs' core limitations — their frozen weights — by having the model update its memory in the context window with what it learned each step: "after it found a solution for a particular value of M, he made the LLM update its memory with exactly what it learned in solving that problem." The speaker calls this "hacking together plasticity" — it is not changing the underlying model, but it simulates incremental learning within a session. Anyone building agents or multi-step AI pipelines should consider explicitly instructing the model to summarize and carry forward what it has learned at each stage rather than treating each step as independent. Source: Vishal Misra, No Priors (Martin Casado), 'How LLMs Actually Work: Bayesian Inference, Causality, and the Path to AGI'"
The a16z Podcast
Andreessen Horowitz
"Why Scale Will Not Solve AGI | Vishal Misra - The a16z Show"
⏱ 39:00 into the episode
Why This Lesson Matters
This insight from The a16z Podcast is one of the core ideas explored in "Why Scale Will Not Solve AGI | Vishal Misra - The a16z Show". Artificial Intelligence & Technology podcasts consistently surface lessons that are immediately applicable, and this one is no exception. The timestamp above takes you directly to the moment this was said, so you can hear it in context.