Sara Magliacane

Assistant Professor, Researcher

University of Amsterdam

MIT-IBM Watson AI Lab

Biography

I am an assistant professor in the Amsterdam Machine Learning Lab at the University of Amsterdam and a Research Scientist at the MIT-IBM Watson AI Lab. My research focuses on three directions: causal representation learning, causality-inspired machine learning, and using ideas from causality to help RL agents adapt to new domains and nonstationarity faster. The goal is to leverage ideas from causality to make ML methods robust to distribution shift and generalizable across domains and tasks. I also continue my earlier line of research on causal discovery, i.e. learning causal relations from data.

Previously, I was a postdoctoral researcher at IBM Research NY, working on methods to design experiments that allow one to learn causal relations in a sample-efficient and intervention-efficient way. I received my PhD from the VU Amsterdam for work on learning causal relations jointly from different experimental settings, even with latent confounders and small samples. In Spring 2022, I visited the Simons Institute in Berkeley for the semester program on Causality.

Download my resumé.

Interests
  • Causality and causal discovery
  • Causal representation learning
  • Causality-inspired ML and RL
Education
  • PhD in Artificial Intelligence, 2017

    VU Amsterdam

  • MEng in Computer Science, 2011

    Politecnico di Milano, Politecnico di Torino

Recent Publications

(2022). CITRIS: Causal Identifiability from Temporal Intervened Sequences.

(2022). AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning. International Conference on Learning Representations.

(2022). Factored Adaptation for Non-Stationary Reinforcement Learning. Advances in Neural Information Processing Systems.

(2022). Intervention Design for Causal Representation Learning. UAI 2022 Workshop on Causal Representation Learning.

(2021). Verifiably safe exploration for end-to-end reinforcement learning. Proceedings of the 24th International Conference on Hybrid Systems: Computation and Control.

Team and Collaborators

PhD students

Danru Xu

PhD student (UvA)

Mátyás Schubert

PhD student (UvA)

Yongtuo Liu

PhD student (UvA)

Guest researchers

Andrea Conte

Master student (University of Torino)

Davide Talon

PhD student (IIT Genoa)

Fan Feng

PhD student (City University Hong Kong)

Alumni

Eva Sevenster

Master student (UvA)

Frank Brongers

Master student (UvA)

Nadja Rutsch

Master student (UvA)

Willemijn de Hoop

Master student (UvA)

Contact

For teaching matters, you can contact me via Canvas.

PhD students: If you have questions about joining my group or AMLab, check this list first. There is an open PhD position in high-dimensional causal inference that I will be co-supervising with Stéphanie van der Pas; the deadline is April 2nd, 2023.

Other jobs: Currently I don't have any open vacancies for interns, research assistants, or postdocs; if that changes, they will be announced on the UvA vacancies website.

Master students: If you are a Master's in AI student at the University of Amsterdam and are interested in causality, feel free to contact me about potential thesis topics. As a rule, I don't supervise Bachelor students or Master students from other universities.