About Me
Hi, my name is Eric Todd. I'm a third-year PhD student at Northeastern University, advised by Professor David Bau. Prior to beginning my PhD, I studied Applied and Computational Mathematics at Brigham Young University (BYU).
I'm interested in understanding the learned structure of large neural networks, and how their internal representations enable their impressive generalization capabilities.
My research interests generally include machine learning and interpretability. I'm particularly excited by generative models and their applications in natural language and computer vision.
News
January 2025
Our NNsight and NDIF paper was accepted to ICLR 2025!
November 2024
Reviewed for the Interpretable AI: Past, Present, and Future NeurIPS Workshop and the ICLR 2025 Main Conference.
August 2024
Our causal interpretability survey is out on arXiv. As interpretability researchers, we're still trying to understand the right level of abstraction for thinking about neural network computation, and causal methods have emerged as a promising approach for studying it.
July 2024
Our preprint about NNsight and NDIF is out on arXiv. I'm excited about this framework for enabling access to the internal computations of large foundation models!
June 2024
Reviewed for the NeurIPS Main Conference and the 1st ICML Workshop on In-Context Learning.
May 2024
Invited talk on Function Vectors at the Princeton Neuroscience Institute.
May 2024
Had a great time presenting our Function Vectors work at ICLR 2024.
January 2024
Our Function Vectors paper was accepted to ICLR 2024!
Selected Publications
Function Vectors in Large Language Models. Eric Todd, Millicent L. Li, Arnab Sen Sharma, Aaron Mueller, Byron C. Wallace, David Bau. Proceedings of the 2024 International Conference on Learning Representations (ICLR 2024).