Eric Bigelow


PhD student in Psychology at Harvard. Previously at Goodfire AI, NTT Research, Uber, and the University of Rochester.

My primary research focus is interpreting and evaluating pre-trained Large Language Models (LLMs), particularly in-context learning, reasoning, uncertainty, and high-level cognitive abilities. My work has two main themes: (1) using theories, experiments, and models from cognitive science to understand LLMs, and (2) studying LLMs as a novel kind of intelligence, rethinking foundational concepts from first principles.

I am also broadly interested in computational modeling in cognitive science, mental imagery, and how people reason about programs.

selected publications

  1. Forking Paths in Neural Text Generation
    Eric Bigelow, Ari Holtzman, Hidenori Tanaka, and Tomer Ullman
    International Conference on Learning Representations (ICLR), 2025
  2. Belief Dynamics Reveal the Dual Nature of In-Context Learning and Activation Steering
    Eric Bigelow*, Daniel Wurgaft*, YingQiao Wang, Noah Goodman, Tomer Ullman, Hidenori Tanaka, and Ekdeep Singh Lubana
    arXiv preprint arXiv:2511.00617, 2025
  3. In-Context Learning Dynamics with Random Binary Sequences
    Eric Bigelow, Ekdeep Singh Lubana, Robert P Dick, Hidenori Tanaka, and Tomer Ullman
    International Conference on Learning Representations (ICLR), 2024
  4. Non-Commitment in Mental Imagery
    Eric Bigelow, John P McCoy, and Tomer Ullman
    Cognition, 2023