Publications
Topic key:
CI: Causal Induction
CD: Cognitive Development
CEIL: Cultural Evolution and Iterated Learning
DMRL: Decision Making and Reinforcement Learning
E: Education
F: Foundations
IB: Inductive Biases
NBM: Nonparametric Bayesian Models
P: Perception
PR: Probabilistic Reasoning
RPM: Rational Process Models
S&C: Similarity and Categorization
SC: Social Cognition
SML: Statistical Models of Language
By McCoy, R.

IB Bencomo, G., Gupta, M., Marinescu, I., McCoy, R. T., & Griffiths, T. L. (2025). Teasing apart architecture and initial weights as sources of inductive bias in neural networks. (preprint)
IB Gupta, M., Rane, S., McCoy, R. T., & Griffiths, T. L. (2025). Convolutional neural networks can (meta-)learn the same-different relation. (preprint)
F Ku, A., Campbell, D., Bai, X., Geng, J., Liu, R., Marjieh, R., McCoy, R. T., Nam, A., Sucholutsky, I., Veselovsky, V., Zhang, L., Zhu, J. Q., & Griffiths, T. L. (2025). Using the tools of cognitive science to understand large language models at different levels of analysis. (preprint)
IB S&C Marinescu, I., McCoy, R. T., & Griffiths, T. L. (2025). Neural networks can capture human concept learning without assuming symbolic representations. (preprint)
IB SML Zhang, L., Veselovsky, V., McCoy, R. T., & Griffiths, T. L. (2025). Identifying and mitigating the influence of the prior distribution in large language models. (preprint)
F PR Griffiths, T. L., Zhu, J. Q., Grant, E., & McCoy, R. T. (2024). Bayes in the age of intelligent machines. Current Directions in Psychological Science, 33(5), 283-291. (pdf)
PR S&C Marinescu, I. R., McCoy, R. T., & Griffiths, T. L. (2024). Distilling symbolic priors for concept learning into neural networks. Proceedings of the 46th Annual Meeting of the Cognitive Science Society. (pdf)
F SML McCoy, R. T., Yao, S., Friedman, D., Hardy, M. D., & Griffiths, T. L. (2024). Embers of autoregression show how large language models are shaped by the problem they are trained to solve. Proceedings of the National Academy of Sciences, 121(41), e2322420121. (pdf)
F SML McCoy, R. T., Yao, S., Friedman, D., Hardy, M. D., & Griffiths, T. L. (2024). When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1. (preprint)
PR SML Prabhakar, A., Griffiths, T. L., & McCoy, R. T. (2024). Deciphering the factors influencing the efficacy of chain-of-thought: Probability, memorization, and noisy reasoning. Proceedings of the 19th Conference on Empirical Methods in Natural Language Processing. (pdf)
F IB Griffiths, T. L., Kumar, S., & McCoy, R. T. (2023). On the hazards of relating representations and inductive biases. Behavioral and Brain Sciences, 46, e275. (pdf)
IB SML McCoy, R. T., & Griffiths, T. L. (2023). Modeling rapid language learning by distilling Bayesian priors into artificial neural networks. (preprint)
SML McCoy, R. T., Grant, E., Smolensky, P., Griffiths, T. L., & Linzen, T. (2020). Universal linguistic inductive biases via meta-learning. Proceedings of the 42nd Annual Conference of the Cognitive Science Society. (pdf)