Description
Machine learning has emerged as a powerful solution to the modern challenges in accelerator physics. However, the limited availability of beam time and the high computational cost of simulation codes pose significant hurdles in generating the data needed to train state-of-the-art machine learning models. Furthermore, optimisation methods can be used to tune accelerators and perform complex system identification tasks, but they too require large numbers of samples of expensive-to-compute objective functions to achieve state-of-the-art performance. In this work, we introduce Cheetah, a PyTorch-based high-speed differentiable linear beam dynamics code that enables fast collection of large datasets and sample-efficient gradient-based optimisation, while remaining easy to use, straightforward to extend, and seamlessly compatible with widely adopted machine learning tools. Ultimately, we believe that Cheetah will simplify the development of machine learning-based methods for particle accelerators and fast-track their integration into the everyday operation of accelerator facilities.
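
As a concrete illustration of the workflow described above, the following is a minimal, hedged sketch of gradient-based quadrupole tuning with Cheetah and PyTorch. The class and method names (`Segment`, `Drift`, `Quadrupole`, `ParameterBeam`, `Segment.track`) follow Cheetah's public interface, but the exact keyword arguments, the toy lattice, and the parameter values are illustrative assumptions that should be checked against the installed version.

```python
# Sketch: tuning a quadrupole strength by backpropagating through Cheetah.
# Assumes the `cheetah-accelerator` package; signatures may differ by version.
import torch
import cheetah

# Incoming beam described by its statistical parameters (values are illustrative).
incoming = cheetah.ParameterBeam.from_parameters(
    sigma_x=torch.tensor(1e-4), sigma_y=torch.tensor(1e-4)
)

# Toy lattice: a quadrupole followed by a drift to an observation point.
quadrupole = cheetah.Quadrupole(length=torch.tensor(0.2), k1=torch.tensor(2.0))
segment = cheetah.Segment([quadrupole, cheetah.Drift(length=torch.tensor(1.0))])

# Make the quadrupole strength trainable and reuse a standard PyTorch optimiser.
quadrupole.k1.requires_grad_(True)
optimizer = torch.optim.Adam([quadrupole.k1], lr=0.1)

# Minimise the horizontal beam size at the end of the segment.
for _ in range(100):
    optimizer.zero_grad()
    outgoing = segment.track(incoming)
    loss = outgoing.sigma_x.square().sum()
    loss.backward()
    optimizer.step()
```

Because the entire simulation is expressed in PyTorch tensors, the beam-size objective is differentiable with respect to the magnet setting, so off-the-shelf gradient-based optimisers can be applied directly, which is the sample-efficient optimisation the abstract refers to.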
| Primary Keyword | differentiable models |
| --- | --- |
| Secondary Keyword | digital twins |