
Jan 30, 2025
Morgan Talbot presenting at Vision Journal Club
Morgan will be presenting the paper “L-WISE: Boosting Human Image Category Learning Through Model-Based Image Selection and Enhancement.”
For a concise summary, please see the project website. The paper explores ways to enhance visual category learning in humans by applying adversarially trained ANNs as models of visual perception. The approach combines (i) selecting images based on their model-estimated recognition difficulty and (ii) applying model-guided image perturbations that aid recognition for novice learners.
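As a rough, hypothetical sketch of the selection idea only (not the paper's actual pipeline), a classifier's softmax confidence in the ground-truth label can serve as a per-image difficulty score, with the model-easiest images shown first to novice learners. The stock torchvision ResNet-50 below is a stand-in for the adversarially trained ventral-stream models used in the paper, and `difficulty` and `select_easiest` are illustrative helpers:

```python
# Hypothetical sketch: score each image's difficulty as one minus the model's
# softmax probability on the ground-truth class, then pick the easiest images.
# The stock ResNet-50 is a stand-in; L-WISE uses adversarially trained models.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT            # placeholder for a robustified model
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

@torch.no_grad()
def difficulty(images, labels):
    """Estimate difficulty as 1 - p(ground-truth class | image)."""
    batch = torch.stack([preprocess(img) for img in images])
    p_true = F.softmax(model(batch), dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
    return 1.0 - p_true                       # higher score = harder image

def select_easiest(images, labels, k):
    """Return the k model-easiest images, e.g. for early training trials."""
    idx = torch.argsort(difficulty(images, labels))[:k]
    return [images[i] for i in idx]
```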
Talbot, M. B., Kreiman, G., DiCarlo, J. J., & Gaziv, G. (2025). L-WISE: Boosting Human Image Category Learning Through Model-Based Image Selection And Enhancement. International Conference on Learning Representations (ICLR). http://arxiv.org/abs/2412.09765
Talbot, M. B., Zawar, R., Badkundri, R., Zhang, M., & Kreiman, G. (2023). Tuned compositional feature replays for efficient stream learning. IEEE Transactions on Neural Networks and Learning Systems, PP. https://drive.google.com/file/d/1WN6RMwjhIinpMoz7Brg-mjwhrXZCkTcH/view?usp=sharing
Singh, P., Li, Y., Sikarwar, A., Lei, W., Gao, D., Talbot, M. B., Sun, Y., Shou, M. Z., Kreiman, G., & Zhang, M. (2023). Learning to Learn: How to Continuously Teach Humans and Machines. International Conference on Computer Vision (ICCV). https://drive.google.com/file/d/1iaiPhS-IrJMFXzygwnXUa_urok-0bN6Z/view?usp=sharing