Podcast Lesson
Train on adjacent skills to lift your core domain

Meta originally skipped coding training for Llama 2 because users wouldn't ask coding questions on WhatsApp. That turned out to be a mistake: "it turns out that coding is important for a lot of domains, not just coding — training the models on coding helps them just be more rigorous and answer the question and kind of help reason across a lot of different types of domains." For Llama 3, Meta reversed course and heavily prioritized coding. The lesson applies broadly: practicing a discipline adjacent to your main skill, such as logic, writing, or a second language, often strengthens your primary domain more than direct repetition of it.

Source: Mark Zuckerberg, Dwarkesh Patel Podcast
Dwarkesh Podcast
Dwarkesh Patel
"Mark Zuckerberg — Llama 3, $10B models, Caesar Augustus, & 1 GW datacenters"
⏱ 13:30 into the episode
Why This Lesson Matters
This insight is one of the core ideas explored in "Mark Zuckerberg — Llama 3, $10B models, Caesar Augustus, & 1 GW datacenters". The timestamp above marks the moment this was said, so you can hear it in context.