Machine Learning for the Future Circular Collider Design

October 18, 2022
In Progress
In June 2020, the Council of the European Organization for Nuclear Research (CERN) updated the European strategy for particle physics to establish the electron-positron Higgs factory as the highest-priority facility after the Large Hadron Collider (LHC) era [1]. The strategy emphasizes the importance of ramping up the research and development for advanced accelerator and computing technologies to prepare for future collider facilities, such as the Future Circular Collider (FCC) [2].

The Swiss Accelerator Research and Technology Institute (CHART, [3,4]) appointed the Laboratory for Particle Accelerator Physics at EPFL [5] to spearhead the efforts for developing the framework and the particle beam dynamics tools required to design and optimize the FCC. While the FCC will undoubtedly lead to significant breakthroughs in high-energy particle physics, the design and operation of the machine as a precision instrument needs careful consideration and presents an exciting Big Data and Machine Learning (ML) opportunity.



SDSC Team:
Guillaume Obozinski
Yousra El-Bachir
Ekaterina Krymova

PI | Partners:

Paul Scherrer Institute:

  • Prof. Mike Seidel

  • Dr. Tatiana Pieloni



Accelerator performance is characterized by the size of the region in phase space where the beam particles exhibit bounded, i.e. stable, dynamics under long-term tracking (~10 h, or 10^8 revolutions). The dynamics are defined by the static and variable non-linear electric and magnetic field elements that make up the machine, also known as the lattice. Knowledge of the boundary between the stable and the chaotic regimes, called the Dynamic Aperture (DA), and maximization of the DA are vital during the design phase and eventually for successful machine operation, since particles in the unstable regime are lost from the beam and reduce its lifetime. Such losses limit the collider's luminosity reach and hence have a critical impact on its physics performance. At present, the best way to assess the DA is by means of particle tracking simulations [6]. Typically, simulations can cover only up to 10^6 revolutions around the ring (about one minute of real beam time). While the physics of the involved processes is well understood, the mathematical formulation is a non-linear dynamics problem that is highly sensitive to the initial conditions.
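As a rough illustration of how tracking-based DA estimates work, the sketch below tracks particles through the 2D Hénon map (a standard toy model of a linear lattice with a single sextupole-like kick, not the FCC lattice itself) and scans amplitudes for the largest one that stays bounded. The tune, turn count, and loss threshold are illustrative choices, orders of magnitude below a real 10^8-turn study.

```python
import numpy as np

def henon_map(x, p, omega):
    """One turn: a sextupole-like nonlinear kick followed by the linear
    rotation of the lattice (phase advance omega per turn)."""
    px = p + x * x                      # nonlinear kick
    c, s = np.cos(omega), np.sin(omega)
    return c * x + s * px, -s * x + c * px

def survives(x0, n_turns=1000, omega=2 * np.pi * 0.205, bound=10.0):
    """Track a particle launched at amplitude x0; call it stable if it
    stays within `bound` for n_turns revolutions."""
    x, p = x0, 0.0
    for _ in range(n_turns):
        x, p = henon_map(x, p, omega)
        if abs(x) > bound:
            return False
    return True

def dynamic_aperture(amplitudes):
    """Crude DA estimate: largest amplitude below the first unstable one."""
    da = 0.0
    for a in amplitudes:
        if not survives(a):
            break
        da = a
    return da

print(dynamic_aperture(np.linspace(0.01, 1.0, 100)))
```

Each call to `survives` is the toy analogue of one tracking simulation; in the real problem each such call is itself an expensive multi-particle run, which is why a surrogate model of the stability boundary is attractive.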

A key aspect of accelerator design is to maximize the stable area, or DA, while respecting a plethora of other objectives, by properly adjusting the 10'000s of electromagnetic elements through a large number of high-level parameters. This requires efficient search algorithms for the available parameter space. The main objectives of this project are hence twofold: first, to identify and develop a suitable ML model of the stable phase-space area as a function of the machine input parameters by training on a large number of tracking simulations; and second, to develop an active learning framework that provides a targeted, efficient, yet exhaustive search for optimal accelerator designs in terms of parameter settings and/or lattice configurations, with the potential to revolutionize accelerator design. Parallelization of the algorithms will be relevant to make the best use of the available high-performance clusters at EPFL. Potentially, the DA ML model could also be employed to focus tracking only on particles in the relevant areas of the phase space. This would free up computational resources to extend the essential simulations beyond current limitations and improve the accuracy of the highly relevant particle beam lifetime estimates. In addition, we believe a paradigm shift is possible for this kind of simulation study by exploring the parameter space in a smart way and by avoiding redundant simulations. The LHC also provides a large amount of experimental data that is currently processed within the PACMAN project [7] and will be valuable for comparison and evaluation of the accelerator design framework proposed here.
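A minimal sketch of the active-learning idea, assuming a k-nearest-neighbour surrogate and uncertainty sampling (the project may well use different models and acquisition rules). An invented analytic stability boundary stands in for the expensive tracking oracle; each iteration queries the candidate whose predicted stability probability is closest to 0.5, i.e. the presumed stable/chaotic boundary, instead of sampling the space uniformly.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stable(point):
    """Stand-in for an expensive tracking run: classifies a 2D setting
    (particle amplitude, machine parameter) as stable or not, using an
    invented analytic boundary. The real oracle would be a multi-hour
    SixTrack-style simulation."""
    a, k = point
    return a < 0.6 + 0.3 * np.sin(3 * k)

pool = rng.uniform(0, 1, size=(2000, 2))           # candidate settings
labels = {int(i): simulate_stable(pool[i])         # seed with 20 random runs
          for i in rng.choice(len(pool), 20, replace=False)}

for _ in range(100):                               # 100 targeted simulations
    pts = pool[list(labels)]
    ys = np.array(list(labels.values()), dtype=float)
    cand = np.array([i for i in range(len(pool)) if i not in labels])
    # k-NN estimate of the stability probability of each candidate
    d = np.linalg.norm(pool[cand][:, None, :] - pts[None, :, :], axis=2)
    proba = ys[np.argsort(d, axis=1)[:, :15]].mean(axis=1)
    # query the most uncertain point: probability closest to 0.5
    pick = int(cand[np.argmin(np.abs(proba - 0.5))])
    labels[pick] = simulate_stable(pool[pick])

# the queried points should concentrate near the true boundary
dist = [abs(pool[i][0] - (0.6 + 0.3 * np.sin(3 * pool[i][1]))) for i in labels]
print(f"{len(labels)} simulations, mean |distance to boundary| = {np.mean(dist):.3f}")
```

The same loop structure parallelizes naturally: instead of one `pick` per iteration, a batch of the most uncertain candidates can be dispatched to a cluster and the surrogate refitted when the results return.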


  • A future hadron collider shall be designed for maximum performance,
  • Tolerances and specifications of magnets must be determined (cost impact),
  • Survival time of circulating particles as a function of deviation from beam centre is a key element,
  • Tracking is computationally expensive: 10^6 turns, several 10k elements per turn.


  • Determine stability of starting amplitudes,
  • Use machine learning to explore and interpolate the phase space (amplitudes) and accelerator parameter space (tuning),
  • ML will speed up the process by orders of magnitude.


  • Maximise the performance of this expensive research infrastructure while it is being planned,
  • Choose magnet specifications with the best cost/benefit ratio.



Additional resources


  1. CERN Council, “2020 Update of the European Strategy for Particle Physics”, CERN-ESU-013, June 2020.
  2. FCC Conceptual Design Report, January 2019.
  3. Swiss Accelerator Research and Technology Institute (CHART).
  4. L. Rivkin, Swiss Accelerator Research & Technology, CHIPP Board Meeting, Bern (CH), January 2019.
  5. EPFL Laboratory for Particle Accelerator Physics (LPAP).
  6. SixTrack single-particle simulation code.
  7. PACMAN project, 2019.

