Exploiting the results of the CERN LHC - about my own research

A large fraction of my work as a particle physicist concerns the exploitation of the results of the Large Hadron Collider (LHC) at CERN. With this post, I provide some explanation of why this is important and how it can be done.

The Standard Model of particle physics explains most high-energy physics data. That is a fact. It however suffers from various conceptual issues and practical limitations (see here for a short summary of those motivations). All of this leads physicists to think that the Standard Model is only the tip of a huge iceberg explaining how the universe works at the fundamental level.

The problem is that we have very few guidelines on how to explore this iceberg and what it could be. In fact, there are thousands of possibilities, and the LHC at CERN aims at clarifying the situation. However, for now, there is no sign of any new phenomenon in the data. We are thus in the dark!

This is where what I want to discuss today comes into play! It is connected to two of my recent research articles, which are available from here and there for those interested in going beyond this post.


[image credits: open photos @ CERN]

Searching for new phenomena at the LHC


As said above, we have today no compelling evidence for the existence of something unexpected. All LHC collaborations try hard to discover a sign of phenomena beyond the Standard Model, but without any success so far. As an appetiser, let me discuss a bit how this hunt for new phenomena works in practice.

To design a search, we start by considering how a specific new phenomenon should materialise in the data. The choice of the phenomenon itself is generally motivated by concrete theories beyond the Standard Model. In this context, the exact details of what should be seen depend on the theory parameters, and those parameters can in principle take any value. This yields slight differences in the expected observations, but those differences are irrelevant for the design of the search itself.

From there, one implements an analysis allowing us to disentangle the considered new phenomenon from the Standard Model background. If something is found in the data, we may have a first sign of the path to follow towards a big discovery. If not, the theory we started from gets constrained: its parameters can no longer take arbitrary values.
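To give a flavour of the logic (and only the logic; the real statistical treatment is much more involved), the toy Python sketch below declares a signal hypothesis excluded when it predicts more events than an assumed 95% confidence-level limit allows. All numbers, as well as the simple counting criterion, are invented for illustration.

```python
# Toy illustration of how a non-observation constrains a signal hypothesis.
# The numbers and the simple counting criterion are made up for illustration;
# real analyses rely on full likelihood-based statistics.

def expected_signal_events(cross_section_pb, efficiency, luminosity_ifb):
    """Number of signal events expected after the analysis selection."""
    # 1 pb = 1000 fb, and the luminosity is given in fb^-1
    return cross_section_pb * 1000.0 * luminosity_ifb * efficiency

def is_excluded(n_signal, upper_limit_95):
    """A hypothesis is excluded if it predicts more events than allowed at 95% CL."""
    return n_signal > upper_limit_95

# Hypothetical benchmark: 0.01 pb signal, 5% selection efficiency, 140 fb^-1 of data,
# and an assumed experimental upper limit of 30 signal events at 95% CL.
n_sig = expected_signal_events(cross_section_pb=0.01, efficiency=0.05, luminosity_ifb=140.0)
print(f"Expected signal events: {n_sig:.1f} -> excluded: {is_excluded(n_sig, 30.0)}")
```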

An example is shown below, taken from this research article of mine.


[Credits: arxiv]

The theoretical framework we consider contains two parameters that can take any positive value: the masses of two new particles. Those masses correspond to the x and y axes of the above figure. Moreover, each coloured point corresponds to a theoretically valid mass spectrum, or parameter configuration.

Any configuration lying in the lower left corner, inside the red contour, is found to be excluded by the data. In other words, if this were the path chosen by nature for our universe, we should have already seen some related new phenomena. As this is not the case, the configuration is excluded.
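To show how such an exclusion contour emerges from many individual tests, here is a toy scan of the two-mass plane. The cross-section formula, efficiency and event limit below are all invented; the only point is that light configurations predict too many events and end up excluded, as in the lower left corner of the figure.

```python
# Toy scan of a two-dimensional mass plane, in the spirit of the figure above.
# Cross section, efficiency and the 95% CL event limit are all invented;
# in reality they come from dedicated computations and experimental data.
import math

LUMINOSITY_IFB = 140.0   # integrated luminosity in fb^-1 (assumed)
EFFICIENCY = 0.05        # selection efficiency (assumed constant over the plane)
UPPER_LIMIT_95 = 30.0    # 95% CL limit on the number of signal events (assumed)

def toy_cross_section_fb(m1, m2):
    """Made-up cross section that drops steeply with the new-particle masses."""
    return 1.0e6 * math.exp(-(m1 + m2) / 80.0)

excluded = {}
for m1 in range(100, 1001, 100):       # mass of the first new particle, in GeV
    for m2 in range(100, 1001, 100):   # mass of the second new particle, in GeV
        n_sig = toy_cross_section_fb(m1, m2) * LUMINOSITY_IFB * EFFICIENCY
        excluded[(m1, m2)] = n_sig > UPPER_LIMIT_95

print(sum(excluded.values()), "of", len(excluded), "toy mass configurations are excluded")
```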

Across the entire experimental programme targeting new phenomena, this procedure is iterated over a huge set of signatures originating from many theories. The catalogue has been designed to be broad enough for any model to fit in: its expectations in terms of new phenomena are in principle (there are rare exceptions) covered.

As the LHC has not found anything, theories beyond the Standard Model are getting severely constrained (without actually being excluded). The experimental publications, however, always focus on deriving constraints on specific models, which are often chosen to be the most popular ones…


Re-interpreting the results


This is where my work comes into play: there are many theories that deserve to be tested against data, and our experimental colleagues do not have the resources to test them all.

The idea we had half a decade ago was to develop an open-source platform where anyone could inject their own signal and verify whether it is compatible with the non-observation of anything new in the data. In this way, instead of asking our experimental colleagues to work more, we could come to them with only the most relevant models, after having tested them ourselves.


[Credits: CERN]

A framework developed by theorists, however, features by definition differences with respect to the full experimental software (which is not public). For instance, we need to re-implement the experimental analyses ourselves, and understanding the related documentation is sometimes (well… often) a nightmare. It sometimes takes us ages to get the information right.
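To give an idea of what such a re-implementation looks like, here is a heavily simplified sketch of an event selection coded up from a hypothetical published description. The objects, cuts and thresholds are invented for illustration.

```python
# A (heavily simplified) sketch of a re-implemented analysis: the published
# event selection is translated into code that can be applied to simulated
# events. All object definitions and cut values below are hypothetical.

def passes_selection(event):
    """Toy selection: one energetic central jet, large missing energy, no lepton."""
    jets = [j for j in event["jets"] if j["pt"] > 100.0 and abs(j["eta"]) < 2.5]
    if not jets:
        return False              # require at least one energetic central jet
    if event["met"] < 200.0:
        return False              # require large missing transverse energy
    if event["n_leptons"] > 0:
        return False              # veto events containing identified leptons
    return True

# A single simulated event (structure and numbers are made up)
event = {"jets": [{"pt": 250.0, "eta": 0.4}], "met": 320.0, "n_leptons": 0}
print("Event selected:", passes_selection(event))
```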

In addition, detector simulation is handled differently in our framework. Instead of the huge (not publicly available) machinery associated with an LHC detector, we use a lightweight open-source option. We hence need less than a second to simulate a single collision, instead of minutes.
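As a rough picture of what such a lightweight option does, the sketch below smears the true energies with a parametrised resolution instead of simulating the full detector response; the 10% resolution is purely an assumption for this example.

```python
# Toy fast detector simulation: rather than tracking each particle through a
# detailed detector model, one smears the true quantities with a parametrised
# resolution. The 10% energy resolution used here is an assumption.
import random

def smear_energy(true_energy_gev, resolution=0.10):
    """Return a 'measured' energy drawn around the true value."""
    return random.gauss(true_energy_gev, resolution * true_energy_gev)

true_jet_energy = 500.0  # GeV, hypothetical
measured = [smear_energy(true_jet_energy) for _ in range(5)]
print("Measured energies (GeV):", [round(e, 1) for e in measured])
```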

As a consequence, it is important to **carefully validate** the re-implementation of each analysis in the platform. This takes time (often months). It is however mandatory, as we must be sure that any prediction made with the platform is a good-enough approximation of what would have happened in real life.
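In practice, validation often amounts to comparing cutflows, i.e. the number of events surviving each selection step, between our re-implementation and the official publication. The sketch below illustrates the idea with invented numbers and an assumed tolerance.

```python
# Sketch of a validation exercise: comparing the cutflow of a re-implementation
# with the numbers published by the experiment. All numbers are invented here;
# in practice they come from the official analysis documentation.

official_cutflow = {"1 hard jet": 10500.0, "MET > 200 GeV": 4200.0, "lepton veto": 3900.0}
our_cutflow      = {"1 hard jet": 10280.0, "MET > 200 GeV": 4350.0, "lepton veto": 3815.0}

TOLERANCE = 0.10  # relative difference accepted in this toy example (assumed)

for cut, official in official_cutflow.items():
    ours = our_cutflow[cut]
    rel_diff = abs(ours - official) / official
    status = "OK" if rel_diff < TOLERANCE else "CHECK"
    print(f"{cut:15s} official={official:8.0f} ours={ours:8.0f} diff={rel_diff:6.1%} {status}")
```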

Thanks to this platform, it becomes possible to use an analysis targeting a signature of a model ‘A’ to constrain a model ‘B’ featuring a similar signature, without having to beg our experimental colleagues to do the work for us.

It also helps us assess whether every model is correctly covered by the ongoing analyses. In other words, we can draw conclusions about the (non-)existence of loopholes in the LHC search programme, and do whatever is needed to fill these loopholes if necessary.


Summary - searching for new phenomena at the LHC


The search for new phenomena beyond the Standard Model of particle physics plays a big role at the Large Hadron Collider at CERN. Our experimental colleagues are searching for signs of the unexpected through various signatures belonging to a vast catalogue. Unfortunately, all results are so far compatible with the Standard Model background, and there is no hint of anything new.

As a consequence, constraints are imposed on many theories of physics beyond the Standard Model. Our experimental colleagues do not, however, have the resources to constrain all potentially interesting theories. This is where my work comes in. We developed a platform allowing one to test, in an approximate but good-enough manner, whether any given signal is compatible with the absence of any sign of new physics in the data.

In this way, anyone, including theorists and even non-physicists, has a way to confront their favourite model with data, and to come back to the experimental collaborations so that they can focus on the most interesting and intriguing findings!

I hope you have appreciated this small window into my work. Feel free to ask me anything!

PS: This article has been formatted for the STEMsocial front-end. Please see here for a better reading experience.