Student Investigator: James D’Alleva-Bochain
Mentored by IDIES member Professor Brice Ménard
Despite its popularity and its critical role as a core technology in wide-reaching applications, deep learning lacks a solid theoretical foundation. Artificial neural networks are often considered uninterpretable black boxes because of their large parameter counts and high complexity by traditional measures. However, recent research has revealed emergent structures in the internal geometry of their learned weight representations. In this project, we will study these structures to better understand how neural networks encode, process, and transfer information. Using tools from high-dimensional statistics, random matrix theory, and information theory, we will investigate how these internal structures emerge and how they respond to different architectural choices and to features of the data such as complexity, embedded symmetry groups, and noise level.
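As a rough illustration of the random-matrix-theory lens mentioned above (a sketch of a standard baseline analysis, not the project's actual methodology): the eigenvalue spectrum of an untrained network's weight matrix is well described by the Marchenko–Pastur law, and deviations from that law in trained networks are one common signal of emergent structure. All dimensions and tolerances below are illustrative choices.

```python
import numpy as np

# Sketch: spectrum of a random (untrained) weight matrix vs. the
# Marchenko-Pastur prediction. Entries are drawn i.i.d. N(0, 1/m),
# a common initialization-style scaling.
rng = np.random.default_rng(0)
n, m = 500, 1000                 # illustrative layer dimensions
q = n / m                        # aspect ratio, q <= 1

W = rng.normal(0.0, 1.0 / np.sqrt(m), size=(n, m))
eigs = np.linalg.eigvalsh(W @ W.T)   # eigenvalues of the Gram matrix W W^T

# Marchenko-Pastur support for this scaling: [(1 - sqrt(q))^2, (1 + sqrt(q))^2]
lam_minus = (1.0 - np.sqrt(q)) ** 2
lam_plus = (1.0 + np.sqrt(q)) ** 2

# At finite size, eigenvalues fluctuate slightly around the support edges,
# so allow a small tolerance when counting.
frac_inside = np.mean((eigs >= lam_minus - 0.05) & (eigs <= lam_plus + 0.05))
print(f"fraction of eigenvalues within MP support: {frac_inside:.2f}")
```

For a trained network, eigenvalues escaping well beyond the upper support edge (outliers) would indicate learned correlations that a purely random matrix cannot produce.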
James D’Alleva-Bochain is a sophomore majoring in Mathematics and Applied Mathematics & Statistics, with a minor in Physics. He is interested in developing a theoretical understanding of deep learning and hopes to pursue research in the field in graduate school and beyond. In his free time, James enjoys playing basketball and visiting art museums.