Description
Neural networks have a strong history as universal function approximators and as such have seen extensive use as surrogate models for computationally expensive physics simulations. However, achieving accurate predictions for sparse or step functions is difficult, particularly in the spatial domain. This is of particular concern when these sparse events in the spatial dimension are used to calculate other outputs, as the cumulative effect of errors in the peak reconstruction can produce results that differ vastly from the ground truth. This is a common situation in surrogate models for particle accelerators, where machine settings are used to predict beam descriptors along the spatial dimension of some accelerator component.
This work discusses two solutions for mitigating the problem of step-function reproducibility in neural networks: first, introducing a Fourier feature transformation of the inputs; and second, using a loss function that enforces co-dependence between outputs during training. As a case study, it considers a surrogate model developed to predict key beam descriptors in the Medium Energy Beam Transport (MEBT) at the ISIS Neutron and Muon Source.
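To make the two mitigations concrete, below is a minimal, hedged sketch in PyTorch. It is not the authors' implementation: the Fourier feature mapping follows the common random-projection formulation (sin/cos of a fixed Gaussian projection of the inputs), and the "co-dependence" loss is only one plausible reading of the abstract, here an extra penalty on a quantity derived jointly from the outputs. All names (`FourierFeatures`, `codependence_loss`, layer sizes, `scale`, `alpha`) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FourierFeatures(nn.Module):
    """Map inputs x of shape (batch, in_dim) to [sin(2*pi*xB), cos(2*pi*xB)]
    using a fixed (non-trainable) Gaussian projection matrix B."""

    def __init__(self, in_dim: int, n_features: int, scale: float = 10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, n_features) * scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = 2.0 * torch.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)


def codependence_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Illustrative loss: pointwise MSE plus an MSE on a quantity computed
    jointly from the outputs (here a cumulative sum along the spatial axis),
    so errors in sharp peaks are penalised collectively rather than per node.
    This is an assumed interpretation, not the loss used in the work."""
    pointwise = torch.mean((pred - target) ** 2)
    joint = torch.mean((torch.cumsum(pred, dim=-1) - torch.cumsum(target, dim=-1)) ** 2)
    return pointwise + alpha * joint


# Hypothetical surrogate: Fourier features followed by a small MLP that maps
# machine settings to a beam descriptor sampled along the spatial dimension.
surrogate = nn.Sequential(
    FourierFeatures(in_dim=4, n_features=64),   # 4 machine settings (assumed)
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 256),                         # 256 spatial sample points (assumed)
)

settings = torch.rand(8, 4)                      # batch of 8 machine-setting vectors
profile = surrogate(settings)                    # predicted profile vs. position
loss = codependence_loss(profile, torch.zeros_like(profile))
print(profile.shape, loss.item())
```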
| Primary Keyword | surrogate model architecture |
| --- | --- |
| Secondary Keyword | surrogate model tuning |
| Tertiary Keyword | digital twins |