KLAB code, data, and databases
You must agree to the terms and conditions specified in this link before downloading any material from the Kreiman Lab web site. Downloading any material from the Kreiman Lab web site implies your agreement with this license.
Kreiman. Biological and Computer Vision. Cambridge University Press (2020). Vision applications are pervasive, from smartphones that recognize faces to automatic analysis of clinical images. This book bridges neuroscience, cognitive science, computer science, and artificial intelligence to explain the neural mechanisms of biological vision and how computer vision systems work.
Zhang et al. Putting visual recognition in context. CVPR 2020. PDF. This study systematically investigates where, when, and how contextual information modulates visual object recognition. The work introduces a computational model (CATNet, context-aware two-stream network) that approximates human visual behavior in the incorporation of contextual cues for visual recognition.
Jacquot et al. Can deep learning recognize subtle human activities? CVPR 2020. PDF. Success in many computer vision efforts capitalizes on confounding factors and biases introduced by poorly controlled datasets. Here we introduce a procedure to create more controlled datasets, and we exemplify the process by creating a challenging dataset to study recognition of everyday actions.
Xiao et al. Finding Preferred Stimuli for Visual Neurons Using Generative Networks and Gradient-Free Optimization. PLoS Computational Biology 2020. PDF. This study introduces the XDream algorithm to find preferred stimuli for neurons in an unbiased manner. The study shows the robustness of XDream to different architectures, generators, developmental regimes, and noise.
Vinken et al. Incorporating intrinsic suppression in deep neural networks captures dynamics of adaptation in neurophysiology and perception. Science Advances, 2020. PDF. This study introduces a computational model of adaptation in visual cortex. The model relies exclusively on activity-dependent, neuronally intrinsic mechanisms. The deep convolutional neural network architecture explains a broad range of observations at both the perceptual and neurophysiological levels.
Ben-Yosef et al. Minimal videos: Trade-off between spatial and temporal information in human and machine vision. Cognition 2020. PDF. This study investigates the role of spatiotemporal integration in visual recognition. We introduce “minimal videos”, which can be readily recognized by humans but become unrecognizable by a small reduction in the amount of either spatial or temporal information. The stimuli and behavioral results presented here challenge state-of-the-art computer vision models of action recognition.
Ponce et al. Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences. Cell 2019. PDF. This study introduces a new algorithm to discover neural tuning properties in visual cortex. The method combines a deep generative network and a genetic algorithm to search, in real time and in an unbiased manner, for images that elicit high firing rates. The results of applying this algorithm to macaque V1 and IT neurons challenge existing dogmas about how neurons in ventral visual cortex represent information.
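The evolutionary search described above can be sketched in a few lines. This is a toy illustration, not the lab's actual code: the generative network is replaced by a stand-in function, and the recorded neuron by a simulated response that prefers a hidden target image. Only the overall loop (generate images from latent codes, score them by firing rate, keep the best, mutate) reflects the published method.

```python
# Toy sketch of evolving images for a neuron: a genetic algorithm searches the
# latent space of a generator for codes whose image maximizes a firing rate.
# The generator and neuron below are stand-ins, NOT the models from the paper.
import numpy as np

rng = np.random.default_rng(0)

def generator(code):
    # Stand-in for a deep generative network: maps a latent code to an "image".
    return np.tanh(code)

def neuron_response(image):
    # Stand-in for a recorded neuron: fires most for images near a hidden target.
    target = np.linspace(-1, 1, image.size)
    return float(np.exp(-np.sum((image - target) ** 2)))

def evolve(dim=8, pop=30, generations=50, sigma=0.3):
    codes = rng.normal(size=(pop, dim))
    for _ in range(generations):
        fitness = np.array([neuron_response(generator(c)) for c in codes])
        # Keep the top half unchanged (elitism), refill by mutating survivors.
        keep = codes[np.argsort(fitness)[-pop // 2:]]
        children = keep + rng.normal(scale=sigma, size=keep.shape)
        codes = np.vstack([keep, children])
    fitness = np.array([neuron_response(generator(c)) for c in codes])
    return codes[int(np.argmax(fitness))], float(fitness.max())

best_code, best_rate = evolve()
```

With elitist selection the best fitness never decreases across generations, so the evolved response should substantially exceed that of random codes, mirroring how evolved images drove neurons harder than hand-picked stimuli.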
Zhang et al. Lift-the-flap: what, where and when for context reasoning. arXiv 1902.00163. PDF. This study shows that it is possible to infer the identity of an object purely from contextual cues, without any information about the object itself. The study proposes a computational model of contextual reasoning inference.
Madhavan et al. Neural Interactions Underlying Visuomotor Associations in the Human Brain. Cerebral Cortex, 2019. PDF. This study uncovers plausible neural mechanisms that instantiate reinforcement learning rules for associating visual stimuli with motor actions through trial-and-error learning, via interactions between frontal regions and visual cortex as well as between frontal cortex and motor cortex.
Kreiman. What do neurons really want? The role of semantics in cortical representations. This chapter discusses how the field has investigated the neural code for visual features along ventral cortex, how computational models should be used to define neuronal tuning preferences, and how to think about the role of semantics in the representation of visual information.
Tang et al. Recurrent computations for visual pattern completion. PNAS 2018. PDF. How can we make inferences from partial information? This study combines behavioral, neurophysiological and computational tools to show that recurrent computations can help perform visual pattern completion.
Misra et al. Minimal memory for details in real life events. Scientific Reports 2018. PDF. This study scrutinizes one hour of real life events and shows that humans tend to forget the vast majority of the details. Only a small fraction of events is crystallized in the form of episodic memories.
Zhang et al. Finding any Waldo: zero-shot invariant and efficient visual search. Nature Communications 2018. PDF. This study demonstrates that humans can perform invariant and efficient visual search and introduces a biologically inspired computational model capable of performing zero-shot invariant visual search in complex natural scenes.
Wu et al. Learning scene gist with convolutional neural networks to improve object recognition. IEEE CISS 2018. A deep convolutional architecture with two sub-networks, a fovea and a periphery, to integrate spatial contextual information for visual recognition.
Isik et al. What is changing when: Decoding visual information in movies from human intracranial recordings. Neuroimage (2017). PDF. Detection of temporal transitions directly from field potentials along ventral visual cortex.
Olson et al. Simple learning rules generate complex canonical circuits. This study demonstrates that it is possible to develop a network that resembles the canonical circuit architecture in neocortex starting from a tabula rasa network and implementing simple spike-timing dependent plasticity rules.
Lotter et al. Deep predictive coding networks for video prediction and unsupervised learning. ICLR (2017). arXiv:1605.08104. PDF. A deep model including bottom-up and top-down connections to make predictions in video sequences.
Tang et al. Predicting episodic memory formation for movie events. Scientific Reports (2016). PDF. Machine learning approach to predict whether specific events within a movie will be remembered or not.
Miconi et al. There’s Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task. Cerebral Cortex (2016). PDF. This work presents a biologically inspired computational model for visual search. GitHub
Tang et al. Cascade of neural processing orchestrates cognitive control in human frontal cortex. eLife (2016). PDF. A dynamic and hierarchical sequence of steps in human frontal cortex orchestrates cognitive control.
Bansal et al. Neural dynamics underlying target detection in the human brain. Journal of Neuroscience (2014). PDF. Feature-based attention modulates responses along the human ventral visual stream during a target detection task.
Singer and Kreiman. Asynchrony disrupts object recognition. Journal of Vision (2014). PDF. Spatiotemporal integration during recognition breaks down with even small deviations from simultaneity.
Hemberg et al. Integrated genome analysis suggests that most conserved non-coding sequences are regulatory factor binding sites. Nucleic Acids Research (2012). PDF. A method to build putative transcripts from high-throughput total RNA-seq data. (HATRIC)
Kriegeskorte and Kreiman. Understanding visual population codes. MIT Press (2011). Towards a common multivariate framework for cell recording and functional imaging. Link to code and other resources.
Kim et al. Widespread transcription at thousands of enhancers during activity-dependent gene expression in neurons. Nature (2010). PDF. Discovery of transcription at enhancers, eRNAs.
Rasch et al. From neurons to circuits: linear estimation of local field potentials. Journal of Neuroscience (2009). PDF. Computational model to investigate the relationship between spikes and local field potential signals.
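The core idea of linearly estimating field potentials from spikes can be illustrated with a least-squares sketch. This is a minimal, synthetic-data illustration of the general technique (fitting a linear spike-to-LFP kernel), not the paper's exact estimation procedure: the lag count, noise level, and damped-exponential kernel below are all made up.

```python
# Minimal sketch: estimate a linear kernel mapping a spike train onto an LFP
# by ordinary least squares. Synthetic data; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def lagged_design(spikes, lags):
    # Each column holds the spike train delayed by one additional time bin.
    X = np.zeros((len(spikes), lags))
    for k in range(lags):
        X[k:, k] = spikes[:len(spikes) - k]
    return X

# Synthetic recording: the "true" LFP is the spike train filtered by a
# damped-exponential kernel, plus additive noise.
n, lags = 2000, 20
spikes = (rng.random(n) < 0.1).astype(float)         # ~10% spike probability
true_kernel = np.exp(-np.arange(lags) / 5.0)
X = lagged_design(spikes, lags)
lfp = X @ true_kernel + 0.1 * rng.normal(size=n)

# Least-squares estimate of the spike-to-LFP kernel.
kernel_hat, *_ = np.linalg.lstsq(X, lfp, rcond=None)
```

With enough data relative to the noise, the recovered kernel closely matches the true one, and the linearly predicted LFP correlates strongly with the recorded trace — the same kind of quantitative comparison the paper makes between model and data.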
Agam et al. Robust selectivity to two-object images in human visual cortex. Current Biology (2010), 20:872-879. The physiological responses at the level of field potentials along ventral visual cortex show robustness to clutter.
Liu et al. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex. Neuron (2009). PDF. Rapid, selective, and tolerant responses along the ventral visual stream in the human can be decoded in single trials.
Hung et al. Fast read-out of object identity from macaque inferior temporal cortex. Science (2005), 310:863-866. Single trial rapid decoding of visual information from pseudo-populations of neurons in macaque inferior temporal cortex.
Kreiman et al. Object selectivity of local field potentials and spikes in the macaque inferior temporal cortex. Neuron (2006), 49:433-445. Local field potentials in the macaque inferior temporal cortex show visual selectivity to different objects.
Su et al. A gene atlas of the mouse and human protein-encoding transcriptomes. PNAS (2004). PDF. Microarray based profiling of gene expression across multiple tissues in mice and humans.
Kreiman. Identification of sparsely distributed clusters of cis-regulatory elements in sets of co-expressed genes. Nucleic Acids Research (2004). PDF. A method for de novo discovery of gene regulatory sequences for sets of co-regulated genes. (CISREGUL). GitHub link.
Zirlinger et al. Amygdala-enriched genes identified by microarray technology are restricted to specific amygdaloid sub-nuclei. PNAS (2001). PDF. Microarray technology uncovered gene expression patterns of the different sub-nuclei within the amygdala.
Spike sorting software (Spiker) Extracellular recordings of spikes often capture the activity of multiple neurons in the vicinity of the microwire electrode. Spiker is an unsupervised algorithm to separate the different putative units.
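The spike-sorting problem Spiker addresses can be illustrated with a minimal pipeline: reduce spike waveforms to a few features, then cluster the features to separate putative units. The code below is a toy sketch under assumed choices (two synthetic units, PCA features, plain k-means); Spiker's own unsupervised algorithm is not necessarily k-means.

```python
# Toy spike-sorting sketch: PCA on waveforms, then k-means on the projections.
# Synthetic two-unit data; illustrative of the problem, not Spiker's algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic units with different waveform shapes, plus recording noise.
t = np.linspace(0, 1, 32)
shapes = np.stack([-np.sin(2 * np.pi * t), -1.5 * np.sin(np.pi * t)])
labels_true = rng.integers(0, 2, size=200)
waveforms = shapes[labels_true] + 0.2 * rng.normal(size=(200, 32))

# PCA via SVD on mean-centered waveforms; keep the top 2 components.
X = waveforms - waveforms.mean(0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
features = X @ Vt[:2].T

# Plain k-means with k = 2 putative units.
centers = features[rng.choice(len(features), 2, replace=False)]
for _ in range(20):
    assign = ((features[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.stack([features[assign == k].mean(0)
                        if np.any(assign == k) else centers[k]
                        for k in range(2)])
```

Because the two waveform shapes are well separated relative to the noise, the cluster assignments recover the true unit labels (up to an arbitrary cluster numbering) almost perfectly.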