Sparse Neural Networks
Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization
Training sparse networks to converge to the same performance as dense neural architectures has proven elusive. Recent work …
Kale-ab Tessera, Sara Hooker, Benjamin Rosman
PDF