FAIR-DI e.V.
FAIRmat
NOMAD Laboratory
NOMAD CoE

CECAM Online Workshop (June 9, 17 & 23, 2021)

AI for Materials Science: Mining and Learning Interpretable, Explainable, and Generalizable Models from Data

Topic of the workshop

In artificial intelligence (AI), it is generally challenging to explain in detail how a trained model arrives at its predictions. We are thus usually left with a black box, which is unsatisfactory from a scientific standpoint. Although numerous methods for interpreting AI models have been proposed recently, interpretability in AI is, somewhat surprisingly, far from being a consensual concept, with diverse and sometimes contrasting motivations behind it. Reasonable candidate properties of interpretable models are model transparency (i.e., how does the model work?) and post hoc explanations (i.e., what else can the model tell me?) [1-4].

The idea of the workshop is to bring together materials scientists who are already applying AI techniques in their field and AI experts with a particular interest in the interpretability, explainability, and generalizability of trained AI models. The AI experts are not expected to be familiar with materials science. The core task will be to understand whether and how physics, and materials science in particular, is in some sense special with respect to interpretability, explainability, and generalizability, and whether it therefore needs special, possibly yet-to-be-developed, concepts and tools.

References

[1] L. M. Ghiringhelli, Interpretability of machine-learning models in physical sciences, in "Roadmap on Machine Learning in Electronic Structure", IOP Electronic Structure, edited by S. Botti and M. Marques; arXiv:2104.10443 (2021).
[2] W. J. Murdoch, C. Singh, K. Kumbier, R. Abbasi-Asl, and B. Yu, Definitions, methods, and applications in interpretable machine learning, Proc. Natl. Acad. Sci. 116(44), 22071-22080 (2019).
[3] R. Roscher, B. Bohn, M. F. Duarte, and J. Garcke, Explainable machine learning for scientific insights and discoveries, IEEE Access 8, 42200-42216 (2020).
[4] Z. C. Lipton, The mythos of model interpretability, Commun. ACM 61(10), 36-43 (2018); arXiv:1606.03490.


Format of the workshop

The workshop will take place virtually (via the Zoom platform) on three non-consecutive days: June 9, June 17, and June 23. On all dates, the workshop will run from 3:00 to 5:30 p.m. CEST. Each day will consist of 3-4 invited talks given by experts in AI applied to materials science or by AI experts with a computer-science or applied-mathematics background. Each talk will end with a Q&A session, in which participants who are not speakers may pose questions via written chat.

Programme

All indicated times are in CEST.

Wednesday, June 9, 2021 - Day 1

15:00 to 15:10 - Welcome & Introduction
15:10 to 15:35 - Jochen Garcke (Universität Bonn)
15:35 to 16:00 - James Kermode (University of Warwick)
16:00 to 16:25 - Mario Fritz (CISPA Helmholtz Center for Information Security)
16:25 to 16:50 - Frank Noé (Freie Universität Berlin)
16:50 to 17:30 - Discussion

Thursday, June 17, 2021 - Day 2

15:00 to 15:05 - Introduction Day 2
15:05 to 15:30 - Gareth Conduit (University of Cambridge)
15:30 to 15:55 - Jonas Peters (University of Copenhagen)
15:55 to 16:20 - Jörg Behler (Georg-August-Universität Göttingen)
16:20 to 16:45 - Bjørk Hammer (Aarhus University)
16:45 to 17:30 - Discussion

Wednesday, June 23, 2021 - Day 3

15:00 to 15:05 - Introduction Day 3
15:05 to 15:30 - Christina Heinze-Deml (ETH Zürich)
15:30 to 15:55 - Stefan Goedecker (University of Basel)
15:55 to 16:20 - Leman Akoglu (Carnegie Mellon University)
16:20 to 17:30 - Discussion

Registration

For registration, please visit: https://www.cecam.org/workshop-details/1031

Organisers
  • Luca Ghiringhelli (Fritz-Haber-Institut der Max-Planck-Gesellschaft & Humboldt-Universität zu Berlin)
  • Matthias Scheffler (Fritz-Haber-Institut der Max-Planck-Gesellschaft & Humboldt-Universität zu Berlin)
  • Jilles Vreeken (Exploratory Data Analysis, Cluster of Excellence MMCI, Saarland University)

Access to the workshop

The workshop will take place via Zoom. All registered participants will receive the Zoom invitation via e-mail.

Access to the talks and Q&A:

https://us02web.zoom.us/j/84538039388?pwd=Rm53Y1h3cHFTMDRiM1hHMU5TMS92QT09

Meeting ID: 845 3803 9388
Passcode: NOMAD