Presenters: Prof. Tomaso Poggio (MIT)
Prof. Gabriel Kreiman (Harvard Medical School, BCH)
Prof. Thomas Serre (Brown U.)
Discussants: Prof. Leyla Isik (JHU), Martin Schrimpf (MIT), Michael Lee (MIT), Prof. Susan Epstein (Hunter CUNY), and Jenelle Feather (MIT)
Moderator: Prof. Josh McDermott (MIT)
Date: December 1, 2020, 3:00–5:00 pm
Abstract: Deep learning architectures designed by engineers and optimized with stochastic gradient descent on large image databases have become de facto models of the cortex. A prominent example is vision. What sorts of insights are derived from these models? Do the performance metrics reveal the inner workings of cortical circuits, or are they a dangerous mirage? What are the critical tests that models of cortex should pass? We plan to discuss the promises and pitfalls of deep learning models, contrasting them with earlier models (VisNet, HMAX, …) that were developed from the ground up following neuroscience data to account for critical properties of primate vision: scale and position invariance, and selectivity.