Eric Bigelow
PhD student in cognitive psychology at Harvard University
Harvard University
Cambridge, MA, USA
Previously a software engineer at Uber; studied AI and cognitive science at the University of Rochester
My primary research focus is interpreting and evaluating pre-trained Large Language Models (LLMs), in particular in-context learning and reasoning, uncertainty, and human-like cognitive abilities.
Two broad themes of my work are: (a) using theories, experiments, and computational models from cognitive science to understand LLMs (Bigelow et al. 2023, Xiang et al. 2025), and (b) studying LLMs as a novel kind of intelligence by rethinking foundational concepts from first principles (Bigelow et al. 2024).
Sometimes I also study mental imagery and cognition in humans (Bigelow et al. 2023).
selected publications
- In-Context Learning Dynamics with Random Binary Sequences. International Conference on Learning Representations (ICLR), 2024
- Forking Paths in Neural Text Generation. International Conference on Learning Representations (ICLR), 2025
- Language Models Assign Responsibility Based on Actual Rather than Counterfactual Contributions. In Proceedings of the Annual Meeting of the Cognitive Science Society, 2025
- Non-commitment in Mental Imagery. Cognition, 2023