Using computational models to improve visual learning

January 30, 2025

Morgan Talbot presenting at Vision Journal Club

Morgan will be presenting the paper “L-WISE: Boosting Human Image Category Learning Through Model-Based Image Selection and Enhancement.”

For a concise summary, please see the project website. The paper explores ways to enhance visual category learning in humans by using adversarially trained artificial neural networks (ANNs) as models of visual perception.
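The image-enhancement idea can be illustrated with a toy sketch: starting from an image, take small signed-gradient steps that increase a model's score for the ground-truth class, while keeping the total perturbation inside a small L-infinity ball so the image stays recognizable. Everything below (the linear stand-in "model" `W`, the step size, the epsilon bound) is illustrative only and is not the paper's actual implementation, which uses adversarially robust deep networks.

```python
import numpy as np

def enhance_image(x, W, target_class, alpha=0.01, eps=0.1, steps=10):
    """Nudge image x toward a higher model score for target_class,
    keeping the perturbation within an L-infinity eps-ball of x.
    W is a toy linear classifier (n_classes x n_pixels), a stand-in
    for a robust ANN's gradient with respect to the input."""
    x0 = x.copy()
    for _ in range(steps):
        grad = W[target_class]               # d(logit)/dx for a linear model
        x = x + alpha * np.sign(grad)        # signed gradient ascent step
        x = np.clip(x, x0 - eps, x0 + eps)   # project back into the eps-ball
        x = np.clip(x, 0.0, 1.0)             # keep a valid pixel range
    return x

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 16))   # toy 5-class linear "model"
x = rng.uniform(0, 1, 16)          # toy 16-"pixel" image
x_enh = enhance_image(x, W, target_class=2)
```

With a linear model the class score can only go up under this projected ascent; the interesting finding in the paper is that when the gradients come from a robust network, the same kind of bounded perturbation also makes the category easier for *humans* to recognize.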

Talbot, M. (2026). Emulating and enhancing human visual perception and learning with image computable models [Harvard University]. https://drive.google.com/file/d/170NSbDaSSbN7pVh19ZeS7ZTE6PNAJwhI/view?usp=sharing
Tausani, L., Muratore, P., Talbot, M. B., Amerio, G., Kreiman, G., & Zoccolan, D. (2026). Stretching Beyond the Obvious: A Gradient-Free Framework to Unveil the Hidden Landscape of Visual Invariance. International Conference on Learning Representations (ICLR). https://doi.org/10.48550/arXiv.2506.17040
Talbot, M. B., Kreiman, G., DiCarlo, J. J., & Gaziv, G. (2025). L-WISE: Boosting Human Image Category Learning Through Model-Based Image Selection And Enhancement. International Conference on Learning Representations (ICLR). http://arxiv.org/abs/2412.09765
Talbot, M. B., Zawar, R., Badkundri, R., Zhang, M., & Kreiman, G. (2023). Tuned compositional feature replays for efficient stream learning. IEEE Transactions on Neural Networks and Learning Systems. https://drive.google.com/file/d/1WN6RMwjhIinpMoz7Brg-mjwhrXZCkTcH/view?usp=sharing
Singh, P., Li, Y., Sikarwar, A., Lei, W., Gao, D., Talbot, M., Sun, Y., Shou, M., Kreiman, G., & Zhang, M. (2023). Learning to Learn: How to Continuously Teach Humans and Machines. International Conference on Computer Vision (ICCV). https://drive.google.com/file/d/1iaiPhS-IrJMFXzygwnXUa_urok-0bN6Z/view?usp=sharing