Eric Bigelow
PhD student in Psychology at Harvard. Previously at Goodfire AI, NTT Research, Uber, and the University of Rochester.
My primary research focus is interpreting and evaluating pre-trained Large Language Models (LLMs), in particular in-context learning and reasoning, uncertainty, and high-level cognitive abilities. Two themes of my work are: (1) using theories, experiments, and models from cognitive science to understand LLMs, and (2) studying LLMs as a novel kind of intelligence, rethinking foundational concepts from first principles.
I am also broadly interested in computational modeling in cognitive science, mental imagery, and how people reason about programs.
selected publications
- Forking Paths in Neural Text Generation. International Conference on Learning Representations (ICLR), 2025
- Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering. arXiv preprint arXiv:2511.00617, 2025
- In-Context Learning Dynamics with Random Binary Sequences. International Conference on Learning Representations (ICLR), 2024