100 years of discoveries in particle physics - building the Standard Model brick by brick

It is Monday, and as you can see, I am still sticking to my resolution of a weekly write-up about particle physics and cosmology. Fingers remain crossed for the future, but for now my time budget is still large enough to keep this up.

I was initially unsure about what to discuss this time, and I even asked @agmoore to help me choose the topic. There are indeed several interesting subjects I could write about, for instance the flavour anomalies (a hot topic in the high-energy community for a couple of years now). These anomalies have been seen in data for several years, and have even been reinforced by recent results from the Large Hadron Collider at CERN. Although they grow more solid year after year, it is still too early to conclude that the Standard Model of particle physics has failed. The Standard Model thus remains the best framework to explain (and predict) more than a century of data.

This being said, I finally opted not to write about these anomalies and to keep that topic for an upcoming post. Instead, I discuss the Standard Model of particle physics itself, so that I can refer back to this post in many of my future writings on STEMsocial. I follow the approach I usually take when discussing particle physics with a general audience: starting from a definition of particle physics before walking through the historical developments.

I guess this is it for setting the stage of this post, which I keep brief. Let's now embark on a journey through more than 100 years of particle physics discoveries!


[Credits: CERN]


What is particle physics about?


To answer what particle physics is about, it helps to start with the definition of physics itself. As a starting point, we can check a dictionary. There, physics is defined as the science that studies the general properties of matter and establishes the laws governing all material phenomena. This definition comes from a French dictionary (Le Robert); slight variations can be found in other (in particular English) dictionaries.

When we move to particle physics, we can apply the same definition, but to the most fundamental building blocks of matter. Particle physics thus tries to determine the dynamics that dictate how the most elementary particles behave in their everyday life.

And this brings us to a first very important question: what are those elementary building blocks? We could decide to answer in a very egoistic, human-centred way. The human body is mainly made of six chemical elements: oxygen, carbon, hydrogen, nitrogen, calcium and phosphorus. However, humans are nothing in our universe (we should actually keep this in mind more often, but this is another debate). At the scale of the universe, hydrogen and helium are sufficient to describe more than 99.9% of everything we see.

The above discussion nevertheless focuses only on chemical elements. Let's stick to them for a moment.

Whereas most matter in the universe can be described with only two of these elements, we know today that there are 118 of them lying around, as illustrated in the image below showing the current version of Mendeleev's periodic table of elements. From my own perspective, this table is just a mess. There are so many elements… This makes the story complicated enough, and for that reason (among others, as a matter of fact) I decided not to become a chemist many, many years ago.


[Credits: Explorersinternational (Pixabay)]

For the physicists of the first half of the 20th century, such a complex picture could be drastically simplified to rely on only three building blocks. It was indeed known that every single atom could be described with protons, neutrons and electrons (this is what was called the Rutherford model). In addition, early radioactivity studies demonstrated the existence of a stealthy fourth guy, the neutrino.

Therefore, in the 1930s, one could be tempted to state that physics was essentially complete. All matter in the universe could be explained with just four entities. Fortunately, the story is not so simple, which is why we still have many interesting things to study today.


Cosmic rays, the first accelerators and the particle zoo


We are now in the 1930s-1940s. The situation is not so different from today: physicists of that time were trying to understand the underlying dynamics of the world. However, they did not have many options to do so, in particular because there were no particle accelerators like the Large Hadron Collider at CERN (colliders had not been invented yet). The best strategy then consisted of using highly energetic cosmic rays.

When a highly energetic cosmic ray enters the atmosphere of our planet, it usually quickly hits one of the particles comprising it. The products of such a (highly energetic) collision are secondary particles, which are still energetic enough to undergo further collisions. This generates even more particles, which are then ready to collide too. And so on, and so on…

As a result, we get a cascade of particles hitting the ground (known as a cosmic-ray shower), as depicted in the image below. The interesting point is that traces of the particles in this shower can be recorded in a simple manner, through photographic emulsions.


[Credits: A. Chantelauze, S. Staffi and L. Bret (Pierre Auger Experiment)]

The principle is very simple: we set up a photographic plate and patiently wait for a shower to develop in the sky above us. The plate records the tracks of any charged particle passing through it, which happens when the shower gets close to the ground. While such an apparatus is super cheap to set up, the problem lies elsewhere: there is no way to control when a cosmic ray will arrive at the right spot (well aligned with the location at which the plate is gently waiting to record anything). However, with patience, everything is possible (and this did work back in the day)!

A bit later, let's say roughly in the 1940s-1950s, the first accelerators appeared. A good example of these machines is the Cosmotron at Brookhaven National Laboratory in the US. The great advantage of particle accelerators over cosmic rays is that highly energetic collisions are controlled thanks to electric fields (which accelerate the particle beams) and magnetic fields (which steer the beams' trajectories). Moreover, there is no need to wait ages for a cosmic ray to hit the atmosphere at the right place. We can instead simply turn on the accelerator and have collisions exactly where and when we want.

Cosmic-ray and accelerator studies amount to more or less the same thing: high-energy collisions yielding the production of new subatomic particles. These collisions rely on Einstein's special relativity: energy and mass are equivalent. Therefore, with a lot of energy, we can produce more massive objects and potentially new particles not discovered so far.
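
To put this statement in equation form (a minimal sketch; in a real collision extra energy is needed to conserve momentum and, often, to produce particles in pairs), mass-energy equivalence reads

$$ E = m c^2 , $$

so creating a particle of mass m requires making at least the energy mc² available in the collision. With approximate numbers, producing a proton (mass of about 0.94 GeV/c²) thus requires at least roughly 1 GeV of available collision energy.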

By the end of the 1950s, this had yielded the discovery of a plethora of new subatomic particles: dozens of pions, kaons, 𝛺's, 𝜮's, 𝜩's, 𝜔's, 𝚫's, etc., and of course the muon, a heavy cousin of the electron, and the muon neutrino (the big fat brother of the electron neutrino).

In other words, we were back to square one: a picture of the microscopic world as busy as Mendeleev's table.


Cleaning the mess: from the quarks to the Standard Model


The situation started to change in the mid-sixties, when Gell-Mann and Zweig proposed that all known particles, with the exception of the electron, the muon and the two neutrinos, were composite systems made of three fundamental entities called quarks. Glashow and Bjorken then refined the picture: all existing subatomic particles (still with the exception of the electron, the muon and the two neutrinos, which are elementary) can be described with four quarks, named the up, down, strange and charm quarks. In contrast, the other four guys and girls (the electron, the muon, the electron neutrino and the muon neutrino) are called leptons.

The dozens of known particles (the pions, kaons, 𝛺's, 𝜮's, 𝜩's, 𝜔's, 𝚫's, etc.) could be classified on the basis of their quark content, and new states emerging from this classification were predicted (and later discovered). The symmetry properties underlying the classification indeed pointed to missing particles, offering physicists clear directions on where to search for new ones.

This idea stopped being just an idea in 1968, when the proton's inner structure was revealed at the Stanford Linear Accelerator Center (SLAC). This demonstrated that the quark model could be a correct description of nature. The feeling became even stronger after the so-called November Revolution of 1974, which has nothing to do with any political movement (the word 'revolution' does not always mean what it seems): it is how the discovery of the charm quark is referred to in high-energy physics.


[Credits: CERN]

Despite these successes of the quark model, observations were still teasing physicists in the mid-1970s. During the 1930s-1940s, subatomic particles called kaons had been discovered (see above), all of them unstable and decaying after some time. In the 1970s, it turned out that the properties of kaon decays were not correctly described by the quark model.

To correct this issue, Harari postulated in the mid-1970s that six quarks were lying around. Such an assumption was indeed more elegant than sending to the graveyard a theory that worked so well for so many other things. This six-quark option was of course the correct hypothesis. The fifth quark, the bottom quark, was discovered at Fermilab in 1977. We however had to wait until 1995 for the last of the quarks (the top quark) to be found, also at Fermilab. Its very large mass (it is an elementary particle as heavy as a gold atom) hid it from accelerators for more than 20 years: a lot of energy was needed to produce it, as dictated by special relativity, so powerful accelerators were in order.
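
To see why the "as heavy as a gold atom" comparison holds, here is a back-of-the-envelope check with approximate, rounded numbers (not taken from the post itself):

$$ m_{\text{top}} \simeq 173~\text{GeV}/c^2 , \qquad m_{\text{gold atom}} \simeq 197 \times 0.93~\text{GeV}/c^2 \simeq 183~\text{GeV}/c^2 . $$

A gold atom contains 197 nucleons of roughly 0.93 GeV/c² each (binding energy and electrons change this only marginally), so a single top quark indeed weighs about as much as an entire gold atom.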

On the other hand, there is no reason to have six quarks but only four leptons. Tsai therefore postulated the existence of the tau lepton in the 1970s, and it was discovered shortly afterwards at SLAC and Berkeley.

As with the top quark, the last neutrino (the tau neutrino) hid itself for a long period, but not because of its mass (it is almost massless). Being weakly interacting and always tied to the production of tau leptons, it was very hard to observe directly in an experiment. In 2000, the physicists of the DONUT experiment (at Fermilab again) provided direct proof of the existence of the last neutrino of the Standard Model. They recorded 12 events in which a tau neutrino produced a tau lepton, out of a trillion tau neutrinos that did nothing; this is what we can definitely call looking for a needle in a needle stack.

The matter sector of the Standard Model, i.e. the set of elementary building blocks sufficient to explain all phenomena at the fundamental level, is hence made of 12 entities: 6 quarks (up, down, charm, strange, bottom and top), 3 charged leptons (electron, muon and tau) and 3 neutrinos (electron, muon and tau neutrinos). As shown above, getting to these conclusions took quite some time, the last missing piece having been unravelled only 21 years ago, in July 2000.


The fundamental interactions


Before closing this post, there is still an important point that we need to address. We have so far discussed the elementary particles and how they have been postulated and discovered. However, we didn’t discuss how they drink, sing and dance (aka their interactions).

In the Standard Model, this is implemented through what we call gauge symmetries. Without entering into details (as this goes way beyond the purpose of this post), this tells us that each of the fundamental interactions, namely electromagnetism (connected to electricity, magnetism, biology, chemistry, etc.), the weak interaction (related to radioactivity and the life of stars) and the strong force (binding protons and neutrons in atomic nuclei), is modelled through the exchange of a mediator between elementary particles.

In other words, particles are said to interact via a given fundamental interaction when they exchange the corresponding force carrier. We hence say that electrically-charged particles interact electromagnetically when they exchange photons; particles interact weakly when they exchange W bosons or Z bosons; and quarks interact strongly through gluon exchanges.
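
For readers who want the compact notation (which goes slightly beyond what is spelled out in this post), the gauge symmetries mentioned above are usually written as

$$ SU(3)_C \times SU(2)_L \times U(1)_Y , $$

where the SU(3)_C factor gives the eight gluons of the strong force, while the SU(2)_L × U(1)_Y factor yields, after electroweak symmetry breaking, the W and Z bosons of the weak interaction and the photon of electromagnetism.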

Is this simple vision of the interactions the right one (let's forget about the complexity behind it, which I shamelessly sweep under the carpet)? Well, a picture is worth 1000 words (even if this post already contains a couple of thousand words, I know). In the plot below, theoretical predictions of the Standard Model (with its gauge interactions) are compared with experimental data from the Large Hadron Collider at CERN.


[Credits: The ATLAS collaboration (CERN)]

The rates of many processes that can arise in proton-proton collisions at the Large Hadron Collider are displayed (each circle/square/triangle represents one process in which two protons produce the final state given on the x-axis). The colour bands correspond to experimental data, and the grey bands stand for the associated theory predictions. The higher we are vertically, the more common the process considered. Conversely, the rightmost part of the figure concerns rarer processes (which are harder to measure precisely because they are less common, and hence come with larger error bars).

This plot is a clear proof that we have a theory (the Standard Model) making predictions over 14 orders of magnitude that are in excellent agreement with data! And this is of course not the only example (but this is the only one I show in this post).


Is this it?


In this post, I shared how I usually introduce the Standard Model of particle physics to a general audience. I explained how physicists arrived, over more than 100 years, at a theory containing 12 matter entities (6 quarks, 3 charged leptons and 3 neutrinos) that interact electromagnetically, weakly and strongly (through so-called gauge interactions).

This theory, the Standard Model, is the current paradigm to explain how our universe works at its most fundamental level.

In addition, I have also tried to convey why the Standard Model is so great. In two words: because it works. It is as simple as that. I could share many other figures like the last one in this post; in every one of them, theoretical predictions and experimental data would clearly match.

In other words, particle physics has accumulated data for more than 100 years, and we have a single theory capable of explaining everything recorded so far. But the beauty of the Standard Model does not end there. It is also capable of making predictions about what we should observe in current and future experiments. This is, somehow, the exact definition of a theory: making predictions.

There are two things that I didn’t discuss at all in this article.

  • First of all, the long-awaited Higgs boson was discovered almost 10 years ago. This guy being a little bit special, I decided to keep its story for another post (maybe next week, maybe not; we will see).
  • Second, I mostly do research on physics beyond the Standard Model. This means that, despite having an excellent theoretical framework, there are reasons to go beyond it. Again, this is a topic that deserves a full post and I decided not to address it here. After all, it is the cornerstone of my research…

I guess it is now time to stop writing for today (even if I am still far from the block size limit). Feel free to ask for clarifications about anything that may be unclear, and to suggest topics you would like me to address. See you next Monday for a new episode. The winter break is approaching, but I will still be here for more or less 2 weeks before decoupling (in Belgium) for 2-3 days!
