IDIES RSE Luncheon Seminar Series
Tuesday, November 11 @ 12:00pm–1:30pm | Bloomberg Center for Physics and Astronomy, Room 462
Daniel Muthukrishna, an astrophysicist and machine learning research scientist at the Massachusetts Institute of Technology (MIT) and an AstroAI Fellow at Harvard’s Center for Astrophysics, visits IDIES at the Bloomberg Center for Physics and Astronomy on Tuesday, November 11 at 12:00pm. Registration is required.
With a PhD in Astrophysics from the University of Cambridge, Dr. Muthukrishna specializes in applying advanced machine learning techniques to analyze astronomical data. He is a vital member of the Transiting Exoplanet Survey Satellite (TESS) team, where he leverages neural networks to classify exoplanets.
His research spans a range of topics, from modeling supernovae to detecting anomalies in large datasets using cutting-edge methods such as diffusion models and transformers. Dr. Muthukrishna has developed widely used software tools that enhance the accuracy and efficiency of astronomical data analysis.
Speaking on:
Causal Foundation Models: Disentangling Physics from Systematics
Talk Abstract: Foundation models for scientific data must contend with a fundamental challenge: observations often conflate the true underlying physical phenomena with systematic distortions introduced by measurement instruments. This entanglement limits model generalization, especially in heterogeneous or multi-instrument settings.
In this talk, I present a causally motivated foundation model that explicitly disentangles physical and instrumental factors using a dual-encoder architecture trained with structured contrastive learning or a generative flow-matching model. Leveraging naturally occurring observational triplets (where the same target is measured under varying conditions, and distinct targets are measured under shared conditions), the model learns separate latent representations for the underlying physical signal and instrument effects. Evaluated on simulated astronomical time series designed to resemble the complexity of variable stars observed by missions like NASA’s Transiting Exoplanet Survey Satellite (TESS), the method outperforms traditional single-latent-space foundation models on downstream prediction tasks, particularly in low-data regimes. These results demonstrate that our model supports key capabilities of foundation models, including few-shot generalization and efficient adaptation, and highlight the importance of encoding causal structure into representation learning for structured data.
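As an illustrative sketch only, not the speaker's implementation, the triplet idea from the abstract might look like the following in a heavily simplified form. Linear encoders stand in for the real networks, and all names, dimensions, and loss choices here are hypothetical assumptions: two encoders map the same observation into a "physics" latent and an "instrument" latent, and a triplet margin loss pulls together latents that should agree (same target for physics, same conditions for instrument) while pushing apart those that should not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: raw light-curve features -> two small latent spaces.
D_IN, D_LAT = 16, 4
W_phys = rng.normal(size=(D_IN, D_LAT))  # "physics" encoder (linear stand-in)
W_inst = rng.normal(size=(D_IN, D_LAT))  # "instrument" encoder (linear stand-in)

def encode(x):
    """Dual-encoder: one latent for the physical signal, one for systematics."""
    return x @ W_phys, x @ W_inst

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin loss: pull anchor toward positive, push it from negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# A simulated observational triplet:
#   x_a, x_b: the same target observed under different instrument conditions
#   x_c     : a different target observed under x_a's conditions
x_a = rng.normal(size=D_IN)
x_b = rng.normal(size=D_IN)
x_c = rng.normal(size=D_IN)

za_p, za_i = encode(x_a)
zb_p, zb_i = encode(x_b)
zc_p, zc_i = encode(x_c)

# Physics latents: same target (a, b) should agree; a different target (c) should not.
loss_phys = triplet_loss(za_p, zb_p, zc_p)
# Instrument latents: shared conditions (a, c) should agree; different conditions (b) should not.
loss_inst = triplet_loss(za_i, zc_i, zb_i)

total = loss_phys + loss_inst
print(f"physics loss: {loss_phys:.3f}, instrument loss: {loss_inst:.3f}")
```

In a real training loop the two losses would be minimized jointly over many such triplets, which is what gives the two latent spaces their separate, causally structured roles.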
Registration is free, but required. A pizza lunch will be provided.

