Publications

Topic key:
CI Causal Induction
CD Cognitive Development
CEIL Cultural Evolution and Iterated Learning
DMRL Decision Making and Reinforcement Learning
E Education
F Foundations
IB Inductive Biases
NBM Nonparametric Bayesian Models
P Perception
PR Probabilistic Reasoning
RPM Rational Process Models
S&C Similarity and Categorization
SC Social Cognition
SML Statistical Models of Language



Filtered by author: Yao, S.
F
SML
McCoy, R. T., Yao, S., Friedman, D., Hardy, M. D., & Griffiths, T. L. (2024). Embers of autoregression show how large language models are shaped by the problem they are trained to solve. Proceedings of the National Academy of Sciences, 121(41), e2322420121. (pdf)
F
SML
McCoy, R. T., Yao, S., Friedman, D., Hardy, M. D., & Griffiths, T. L. (2024). When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1. (preprint)
F
SML
Sumers, T. R., Yao, S., Narasimhan, K., & Griffiths, T. L. (2023). Cognitive architectures for language agents. Transactions on Machine Learning Research, 2024. (preprint)
DMRL
SML
Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems 36. (pdf)
