100x Improvements in Deep Learning Performance with Sparsity, w/ Subutai Ahmad - #562
From The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
Length:
51 minutes
Released:
Mar 7, 2022
Format:
Podcast episode
Description
Today we’re joined by Subutai Ahmad, VP of research at Numenta. While we’ve had numerous conversations about the biological inspirations of deep learning models with folks working at the intersection of deep learning and neuroscience, we dig into uncharted territory with Subutai. We set the stage by digging into some of the fundamental ideas behind Numenta’s research and the present landscape of neuroscience, before exploring our first big topic of the podcast: the cortical column. Cortical columns are groups of neurons in the cortex of the brain which have nearly identical receptive fields; we discuss the behavior of these columns, why they’re a structure worth mimicking computationally, how far along we are in understanding the cortical column, and how these columns relate to neurons.
We also discuss what it means for a model to have inherent 3D understanding and for computational models to be inherently sensorimotor, and where we are with these lines of research. Finally, we dig into our other big idea, sparsity. We explore the fundamental ideas of sparsity, the differences between sparse and dense networks, and how applying sparsity and optimization can drive greater efficiency in current deep learning networks, including transformers and other large language models.
The complete show notes for this episode can be found at twimlai.com/go/562
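To make the sparse-vs-dense contrast from the conversation concrete, here is a minimal, generic sketch (not Numenta's actual method) of weight sparsity in a single layer: most weights are zeroed out, the survivors are stored in a compressed sparse format, and the layer's output is unchanged while weight storage shrinks dramatically. The 95% sparsity level and matrix sizes are illustrative assumptions, not figures from the episode.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Dense weight matrix for one layer, with ~95% of entries zeroed out --
# the kind of weight sparsity discussed in the episode (level chosen
# here purely for illustration).
dense = rng.standard_normal((1024, 1024)).astype(np.float32)
mask = rng.random((1024, 1024)) < 0.05  # keep roughly 5% of weights
dense *= mask

# Store only the nonzero weights (plus their indices).
sparse = csr_matrix(dense)

x = rng.standard_normal(1024).astype(np.float32)
y_dense = dense @ x    # full dense matrix-vector product
y_sparse = sparse @ x  # touches only the nonzero weights

# Same layer output either way.
assert np.allclose(y_dense, y_sparse, atol=1e-4)

sparse_bytes = sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes
print(f"nonzero fraction: {sparse.nnz / dense.size:.3f}")
print(f"sparse / dense storage: {sparse_bytes / dense.nbytes:.3f}")
```

In practice the gains Ahmad describes go beyond storage: hardware and kernels that skip zero weights and zero activations entirely can avoid most of the multiply-accumulates a dense network performs.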