Mar 5 – 8, 2024
Lahan Select Gyeongju, South Korea
Asia/Seoul timezone

Experience with ML-based Model Calibration Methods for Accelerator Physics Simulations

Mar 8, 2024, 9:00 AM
20m
Lahan Select Gyeongju, South Korea

Oral (16 mins + 4 mins) Methods

Speaker

Frederick Cropp (SLAC)

Description

Physics simulations of particle accelerators enable predictions of beam dynamics with a high degree of detail. This can include the evolution of the full 6D phase space distribution of the beam under the impact of nonlinear collective effects, such as space charge and coherent synchrotron radiation. However, despite the high fidelity with respect to the expected beam physics, it is challenging to obtain simulations that predict the behavior of the as-built system with a high degree of accuracy, due to the numerous sources of static errors and time-varying changes. Identifying and accounting for sources of mismatch between simulation and measurement, i.e. "model calibration", is often a laborious process conducted by expert physicists, and it frequently requires time-intensive gathering of data, such as extensive parameter scans. ML-based approaches for model calibration can potentially leverage larger, more diverse sets of data and can consider a wider variety of possible error sources simultaneously. Information-based sampling can also improve the efficiency of data acquisition. Here we discuss the results of several model calibration approaches we have applied to common beamline setups in accelerators, as well as to LCLS, LCLS-II, and FACET-II. We also review future plans for regular deployment of these methods to keep online models up to date as these machines change over time. The approaches center on the use of learnable calibration functions applied to differentiable models that represent the expected physics behavior, including both neural network surrogate models trained on physics simulations and directly differentiable physics simulations. Furthermore, we comment on uncertainty quantification for the learned calibration factors and on disentangling multiple possible error sources. Finally, we discuss the software tools we have been developing to facilitate model calibration.
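
To illustrate the general pattern described above (a learnable calibration function applied to a differentiable model and fit to measured data by gradient descent), the following minimal PyTorch sketch shows one possible realization. It is an illustrative assumption only, not the authors' software: the InputCalibration class, the toy surrogate network, and the random "measured" data are all hypothetical stand-ins.

# Minimal sketch (hypothetical) of a learnable calibration applied to the inputs
# of a differentiable surrogate model, fit so the model matches measured observables.
import torch
import torch.nn as nn

class InputCalibration(nn.Module):
    """Learnable per-input scale and offset, e.g. to absorb setting calibration errors."""
    def __init__(self, n_inputs: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(n_inputs))
        self.offset = nn.Parameter(torch.zeros(n_inputs))

    def forward(self, settings: torch.Tensor) -> torch.Tensor:
        return self.scale * settings + self.offset

# Stand-in for a differentiable model of the expected physics: either a neural
# network surrogate trained on simulations or a directly differentiable simulation.
surrogate = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
for p in surrogate.parameters():
    p.requires_grad_(False)  # the physics model stays fixed; only calibration is learned

calibration = InputCalibration(n_inputs=4)

# Hypothetical measured data: machine settings and observed beam quantities.
settings = torch.randn(128, 4)
measured = torch.randn(128, 2)

# Fit the calibration parameters by gradient descent on the model-measurement mismatch.
optimizer = torch.optim.Adam(calibration.parameters(), lr=1e-2)
for step in range(500):
    optimizer.zero_grad()
    predicted = surrogate(calibration(settings))
    loss = torch.mean((predicted - measured) ** 2)
    loss.backward()
    optimizer.step()

print("learned scale:", calibration.scale.data)
print("learned offset:", calibration.offset.data)

In practice the learned scale and offset would be interpreted as candidate error sources (e.g. magnet or diagnostic calibration errors), and the same differentiable structure admits uncertainty estimates over the calibration parameters, as noted in the abstract.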

Primary Keyword: surrogate model architecture
Secondary Keyword: active learning
Tertiary Keyword: digital twins

Primary authors

Auralee Edelen (SLAC, Stanford)
Christopher Mayes (SLAC)
Claudio Emma (SLAC)
Frederick Cropp (SLAC)
Juan Pablo Gonzalez-Aguilera (University of Chicago)
Kathryn Baker (STFC / ISIS)
Ryan Roussel (SLAC National Accelerator Laboratory)
Sanjeev Chauhan (Duke University)
Tobias Boltz (SLAC National Accelerator Laboratory)

Presentation materials