Eric Bigelow

PhD student in cognitive psychology at Harvard University


Harvard University

Cambridge, MA, USA

Previously a software engineer at Uber; studied AI and cognitive science at the University of Rochester

My primary research focus is interpreting and evaluating pre-trained Large Language Models (LLMs), particularly their in-context learning and reasoning, uncertainty, and human-like cognitive abilities.

Two broad themes of my work are: (a) using theories, experiments, and computational models from cognitive science to understand LLMs (Bigelow et al. 2023, Xiang et al. 2025), and (b) studying LLMs as a novel kind of intelligence by rethinking foundational concepts from first principles (Bigelow et al. 2024).

Sometimes I also study mental imagery and cognition in humans (Bigelow et al. 2023).

selected publications

  1. In-Context Learning Dynamics with Random Binary Sequences
    Eric Bigelow, Ekdeep Singh Lubana, Robert P Dick, Hidenori Tanaka, and Tomer Ullman
    International Conference on Learning Representations (ICLR), 2024
  2. Forking paths in neural text generation
    Eric Bigelow, Ari Holtzman, Hidenori Tanaka, and Tomer Ullman
    International Conference on Learning Representations (ICLR), 2025
  3. Language models assign responsibility based on actual rather than counterfactual contributions
    Yang Xiang*, Eric Bigelow*, Tobias Gerstenberg, Tomer Ullman, and Samuel J Gershman
Proceedings of the Annual Meeting of the Cognitive Science Society, 2025
  4. Non-commitment in mental imagery
    Eric Bigelow, John P McCoy, and Tomer Ullman
    Cognition, 2023