
SPINFUL ALGORITHMIZATION OF HIGH ENERGY DIFFRACTION

Mikael Mieskolainen

Department of Physics, Faculty of Science

DOCTORAL DISSERTATION

To be presented for public discussion with the permission of the Faculty of Science of the University of Helsinki, in the Lars Ahlfors Auditorium A111, Exactum, on the 20th of March, 2020, at 12 o'clock.


University of Helsinki, Finland

Reviewers

Professor Tuomas Lappi

University of Jyväskylä, Finland

Professor David Milstead

Stockholm University, Sweden

Opponent

Professor Valery Khoze

Institute for Particle Physics Phenomenology, University of Durham, UK

Officer

Professor Kenneth Österberg
University of Helsinki, Finland

ISSN 0356-0961
Report Series in Physics HU-P-D270
ISBN 978-951-51-5942-7 (print)
ISBN 978-951-51-5943-4 (pdf)
https://ethesis.helsinki.fi

Unigrafia Oy, Helsinki 2020


High energy diffraction probes fundamental interactions, the vacuum, and quantum mechanically coherent matter waves at asymptotic energies. In this work, we algorithmize our abstract ideas and develop a set of rigid rules for diffraction. To get spin under control, we construct a new Monte Carlo simulation engine, Graniitti. It is the first event generator with custom spin-dependent scattering amplitudes for the glueball domain semi-exclusive diffraction, driven by fully multithreaded importance sampling and written in C++. Our simulations provide new computational evidence that the enigmatic glueball filter observable is a spin polarization filter for tensor resonances. For algorithmic spin studies, we automate the classic Laplace spherical harmonics inverse expansion, carefully define the geometric acceptance related phase space issues and study the harmonic mixing properties systematically in different Lorentz frames.

To improve the big picture, we generalize the standard soft diffraction observables and definitions by developing a high dimensional probabilistic framework based on incidence algebras, Combinatorial Superstatistics, and also solve a new superposition inverse problem using the Möbius inversion theorem. For inverting stochastic autoconvolution integral equations, or `inverting the proton', we develop a novel recursive inverse algorithm based on the Fast Fourier Transform and relative entropy minimization. The first algorithmic inverse results of the proton double multiplicity structure and multiparton interaction rates are obtained using the published LHC data, in agreement with standard phenomenology. For optimal inversion of the detector efficiency response, we build the first Deep Learning based solution working in higher phase space dimensions, DeepEfficiency, which inverts the detector response on an event-by-event basis and minimizes the event generator dependence.

Using the ALICE experiment proton-proton data at the LHC at √s = 13 TeV, we obtain the first unfolded fiducial measurement of the multidimensional combinatorial partial cross sections, the first multidimensional maximum likelihood fit of the effective soft pomeron intercept and the first multidimensional maximum likelihood fit of the single, double and non-diffractive component cross sections. Great care is taken with the fiducial and non-fiducial definitions. The second topic of measurements centers on semi-exclusive central diffractive production of hadron pairs, which we study with the ALICE data. We measure and fit the resonance spectra of identified pion and kaon pairs, which is crucial on the road towards solving the mysteries of glueballs, the proton structure fluctuations, and the pomeron.


thank Professor Orava for a great project and for giving me the freedom to independently attack many cutting edge physics problems. Some problems were solved, some new ones were found. My skills were also greatly enhanced over the years by supervising numerous summer students and trainees at CERN and by teaching the fundamentals of particle physics on the demanding crash course IPP1 in Helsinki.

I would like to thank Professor Khoze for being the opponent in the public defence, Professors Lappi and Milstead for their reviews and comments on the work, and Professor Österberg for being the officer for this thesis.

For useful discussions or remarks during the project, I would like to thank the following persons: R. Orava for several years of discussions, and in alphabetical order: M. Albrow, E. Brücken, P. Buehler, S.U. Chung, R. Ciesielski, S. Evdokimov, L. Goerlich, K. Golec-Biernat, C. Gütschow, H. Hirvonsalo, L. Jenkovszky, T. Kim, K. Kowalski, E. Kryshen, P. Kupiainen, J. Lämsä, J. Lång, T. Mäkelä, J. Nystrand, S. Patomäki, J. Pekkanen, J. Pinfold, V. Pyykkönen, S. Sadovsky, R. Schicker, A. Shabanov, F. Sikler, A. Villatoro Tello, T. Tuunanen, J. Welti. Last but not least, I would like to thank my family and friends for their support.

I acknowledge the funding by the Helsinki Institute of Physics and the University of Helsinki.

Now it is time for some new challenges in the HEP Group at the Blackett Laboratory of Imperial College London.

London, January 2020


All computational machinery and code of this work are available as Open Source under the MIT license at github.com/mieskolainen for maximal scientific impact, scholarship and reproducibility. Every single figure or number in this thesis is produced by this code together with external libraries.

Diffraction phenomenology and computational theory

1. Graniitti: A new Monte Carlo event generator and algorithmic engine written in fully multithreaded C++17 with 35k lines of code. Currently the most advanced event generator for the low mass central exclusive diffraction at the LHC and beyond.

Motivation: Explicit control of a Monte Carlo event generator capable of simulating spin dependent scattering processes in the low mass central diffraction with forward proton excitation kinematics.

2. Combinatorial Superstatistics: A new statistical incidence algebra approach, Möbius inversion and abstract construction designed as a higher dimensional definition of diffractive event topologies and substructures.

Motivation: Abstract theory beyond standard tools and a self-consistent measurement framework beyond the traditional large rapidity gap counting.
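The Möbius inversion at the heart of this construction can be illustrated on the Boolean lattice of subsets: the zeta transform accumulates subset sums (exclusive rates to inclusive rates), and the Möbius transform undoes it exactly. The following is a minimal generic sketch, not the thesis code; the bit-mask indexing of event topologies is an illustrative assumption.

```python
import numpy as np

def zeta_transform(f):
    """Subset-sum (zeta) transform: g(S) = sum over T subseteq S of f(T)."""
    g = f.astype(np.int64).copy()
    n = len(g).bit_length() - 1  # size of the ground set, len(g) = 2^n
    for i in range(n):
        for s in range(len(g)):
            if s & (1 << i):
                g[s] += g[s ^ (1 << i)]
    return g

def mobius_transform(g):
    """Mobius (inverse) transform on the Boolean lattice: recovers f from g."""
    f = g.astype(np.int64).copy()
    n = len(f).bit_length() - 1
    for i in range(n):
        for s in range(len(f)):
            if s & (1 << i):
                f[s] -= f[s ^ (1 << i)]
    return f
```

Here f could hold exclusive event-topology counts indexed by a bit mask of fired detector slices, and g the corresponding inclusive counts; the Möbius transform then recovers the exclusive ones.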

Advanced algorithms

1. DeepEfficiency: The first Deep Learning based detector efficiency inversion algorithm for maximally unbiased fiducial measurements in higher dimensions.

Motivation: Maximally Monte Carlo event generator independent detector efficiency corrections of single or multidimensional observables.

2. Kisu: A new Shannon entropy and Fast Fourier Transform based algorithm for non-linear inversion of stochastic autoconvolution integral equations. The first application with public ALICE data.

Motivation: Algorithmic way to invert statistically multiparton interactions, multipomeron interactions or pileup.
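The core FFT step can be sketched in the noise-free case: if the observed distribution is an autoconvolution P = p * p, then in Fourier space P̂ = p̂², and the principal-branch square root recovers p whenever the phase of p̂ stays within (−π/2, π/2). The actual Kisu algorithm adds recursion and relative entropy regularization for noisy data; this toy, with an assumed Poisson input, shows only the naive inversion step.

```python
import numpy as np
from math import factorial

# single-interaction multiplicity distribution (assumed toy): Poisson(0.8)
lam, n = 0.8, 32
p = np.array([np.exp(-lam) * lam**k / factorial(k) for k in range(n)])

# "observed" distribution: autoconvolution P = p * p (two interactions)
P = np.convolve(p, p)  # linear convolution, length 2n - 1

# invert via FFT: fft(P) equals fft(p zero-padded)^2, so take the
# principal square root bin by bin and transform back
q_hat = np.sqrt(np.fft.fft(P))
q = np.fft.ifft(q_hat).real[:n]
```

For the Poisson toy the characteristic-function phase is bounded by λ < π/2, so the principal branch is the correct one and the recovery is exact up to floating point error.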

3. DrEM-PID: A new two-step probabilistic particle identification algorithm based on Expectation Maximization (EM) iteration.

Motivation: Mathematically optimal final state identification.
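The EM iteration underlying such probabilistic identification can be illustrated with a generic one-dimensional two-component Gaussian mixture, standing in for e.g. a pion/kaon response variable. This is not the thesis' two-step recursive DrEM-PID scheme, and all numbers are toy assumptions, not ALICE values.

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """EM fit of a two-component 1D Gaussian mixture: (weights, means, widths)."""
    mu = np.percentile(x, [25.0, 75.0])        # crude initialization
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior probability of each component for each event
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
                 / (sig * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood parameter updates
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sig

# toy sample: two species with overlapping detector responses
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 12000), rng.normal(3.0, 1.0, 8000)])
w, mu, sig = em_two_gaussians(x)
```

The E-step responsibilities are exactly the per-event species probabilities one would use for probabilistic identification downstream.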


mation.

5. S-harmonics: A new formulation of the classic Laplace spherical harmonics decay distribution decomposition with novel detail in defining different phase spaces: fiducial versus flat non-fiducial, and their invertibility and harmonic mixing properties in different Lorentz rest frames.

Motivation: Mathematically rigorous angular distribution and spin polarization measurements.
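The moment-method idea behind such decompositions can be shown in a one-dimensional toy: expand the decay-angle density as f(cos θ) = Σ_l c_l P_l(cos θ) in Legendre polynomials and estimate c_l = (2l+1)/2 ⟨P_l(cos θ)⟩ from sampled events. The thesis works with the full spherical harmonics in fiducial phase space; here only a flat-acceptance toy with an assumed spin-2-like P2 admixture of strength a.

```python
import numpy as np

P2 = lambda c: 0.5 * (3.0 * c**2 - 1.0)   # Legendre polynomial P_2

a = 0.5  # assumed strength of the P2 term in the toy density
rng = np.random.default_rng(0)

# rejection-sample cos(theta) from f(c) proportional to 1 + a * P2(c)
c = rng.uniform(-1.0, 1.0, 400000)
u = rng.uniform(0.0, 1.0 + a, c.size)     # envelope: max of 1 + a*P2 is 1 + a
cosw = c[u < 1.0 + a * P2(c)]

# moment estimators c_l = (2l+1)/2 <P_l>; the normalized density is
# f(c) = (1 + a*P2(c))/2, so the true coefficients are c_0 = 1/2, c_2 = a/2
c0 = 0.5 * np.mean(np.ones_like(cosw))
c2 = 2.5 * np.mean(P2(cosw))
```

With a non-flat (fiducial) acceptance the P_l lose orthogonality and the moments mix, which is exactly the effect the S-harmonics formulation has to control.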

ALICE and beyond

1. The first measurement of unfolded and fiducial combinatorial cross sections at the LHC, done with the ALICE proton-proton data at the center-of-mass energy √s = 13 TeV, using the mathematical construction and analysis technology built up in this thesis.

Motivation: Maximally model independent, multidimensional fiducial measurement of diffraction and soft QCD.

2. The first fiducial multidimensional maximum likelihood extraction of single, double and non-diffractive cross sections. A factorized definition of fiducial cross sections and then extrapolation to total inclusive cross sections.

Motivation: Mathematically nearly optimal extraction of the model based cross sections, with separate fiducial and extrapolated total cross sections.

3. The first multidimensional maximum likelihood extraction of the effective pomeron intercept and the rapidity gap distributions with incomplete forward detector information.

Motivation: High precision diffraction phenomenology with incomplete data.

4. Resonance spectrum measurements of semi-exclusive diffraction at the LHC with the ALICE proton-proton data at the center-of-mass energy √s = 7 TeV. We also introduce a new analysis and generic data preservation strategy called F2X.

Motivation: Where are the glueballs and what happens with the low mass forward proton dissociation?


1. M. Mieskolainen

Graniitti: A Monte Carlo Event Generator for High Energy Diffraction
arXiv:1910.06300 [hep-ph]

2. M. Mieskolainen

Combinatorial Superstatistics for Soft QCD
arXiv:1910.06279 [hep-ph]

3. M. Mieskolainen

On the Inversion of High Energy Proton
arXiv:1905.12585 [hep-ph]

4. M. Mieskolainen

DeepEfficiency - optimal efficiency inversion in higher dimensions at the LHC
arXiv:1809.06101 [physics.data-an]

5. M. Mieskolainen

Algorithmics of Diffraction

Acta Physica Polonica B, Proceedings of Diffraction and Low-x 2018
arXiv:1811.01730 [hep-ph]

6. M. Mieskolainen, R. Orava
Observables of QCD Diffraction
AIP Proceedings of Diffraction 2016
arXiv:1612.00980 [hep-ph]

7. The MoEDAL collaboration

Magnetic Monopole Search with the Full MoEDAL Trapping Detector in 13 TeV pp Collisions Interpreted in Photon-Fusion and Drell-Yan Production
Physical Review Letters 123, 021802 (2019)

arXiv:1903.08491 [hep-ex]

8. K. Akiba et al.

LHC Forward Physics

Journal of Physics G: Nuclear and Particle Physics 43, 110201 (2016)
arXiv:1611.05079 [hep-ph]


analytical and numerical work.

* In 7: I contributed by independently re-implementing the photon fusion based generator level simulation of spin-1/2 monopole pairs, and thus re-checked the numbers in the paper before publication. This implementation, with velocity β-dependent and pure Dirac couplings, driven by collinear EPA, kt-EPA and inclusive luxQED-pdf photon fluxes, is publicly available in the Graniitti event generator.

* In 8: I contributed to the AD detector project and to the ALICE diffraction analysis group, in which I participated in detector wrappings, a beam test, low-level signal analysis, high-level physics analysis, double rapidity gap trigger design planning, and verified detector simulations.


1 Introduction 1
1.1 Strong and even stronger interactions . . . 4
1.2 Fundamental open problems . . . 8
1.3 Outline . . . 9

2 Waves and Paths 10
2.1 Wave and Helmholtz equation . . . 11
2.2 Helmholtz-Kirchhoff integral theorem . . . 12
2.3 Fresnel-Kirchhoff and Rayleigh-Sommerfeld formulations . . . 13
2.4 Fresnel and Fraunhofer approximations . . . 15
2.5 Feynman path integral . . . 17

3 ALICE and combinatorial measurement 19
3.1 Experimental setup . . . 20
3.2 Experimental selections and corrections . . . 24
3.3 Unfolded fiducial partial cross sections . . . 29
3.4 Fiducial and total inelastic cross section . . . 36
3.5 Diffractive cross sections and the soft pomeron . . . 38
3.6 F-projected observables . . . 42
3.7 Conclusions . . . 46

4 ALICE and glueballs 47
4.1 DrEM-PID: Double Recursive Expectation Maximization Particle Identification . . . 49
4.2 F2X: A Faster Analysis of Cross Sections . . . 52
4.3 Experimental tracking and PID setup . . . 55
4.4 Semi-exclusive event selection . . . 57
4.5 System invariant mass spectrum . . . 59
4.6 System transverse momentum . . . 64

5 Graniitti: A Monte Carlo Event Generator for High Energy Diffraction 68

5.1 Introduction . . . 69
5.2 Dynamics . . . 72
5.3 Kinematics and Monte Carlo sampling . . . 111
5.4 Analysis engine . . . 119
5.5 Technology . . . 135
5.6 Discussion and conclusions . . . 136

6 Combinatorial Superstatistics for Soft QCD 139
6.1 Introduction . . . 140
6.2 Binary vector spaces and Diffraction . . . 142
6.3 Posets, incidence algebra and Möbius inversion . . . 154
6.4 Measurements . . . 161
6.5 Combinatorial supercompound Poisson process . . . 167
6.6 Invertibility simulations . . . 180
6.7 Summary . . . 185

7 On the Inversion of High Energy Proton 186
7.1 Introduction . . . 187
7.2 Direct problem . . . 189
7.3 Inverse problem . . . 195
7.4 Algorithm . . . 200
7.5 Simulations . . . 205
7.6 LHC data inversion . . . 210
7.7 Conclusions and prospects . . . 215

8 DeepEfficiency 217

9 Conclusions 222

Bibliography 224

Appendix A ALICE measurements 242
A.1 Detector level marginal distributions . . . 242
A.2 Detector response simulations . . . 257
A.3 LHC optics . . . 261


Appendix B Graniitti and kinematics 263
B.1 Alternative models of pomeron . . . 263
B.2 Kinematics of 2 → 3 . . . 266
B.3 The slope parameter . . . 271
B.4 Harmonic acceptance decompositions . . . 273

Appendix C Combinatorics and pileup 277
C.1 Vector space subspaces over finite fields . . . 277
C.2 Pileup combinatorics . . . 278
C.3 Exclusive efficiency . . . 282
C.4 Poisson pileup problem . . . 283
C.5 Luminosity and total inelastic cross section . . . 285
C.6 F-projection technique . . . 287
C.7 Diffraction fit algorithms . . . 288

Appendix D Graniitti Code (2×2) 290
D.1 Makefile . . . 291
D.2 MadGraph amplitude to Graniitti converter . . . 294
D.3 C++ header files . . . 296
D.4 C++ source files . . . 333


Algorithms and high energy physics are dual topics. The first modern computer architecture was introduced by John von Neumann in 1945 [1], the same man behind axiomatic quantum mechanics. The Manhattan project resulted in the first Monte Carlo sampling methods by Stanislaw Ulam and von Neumann. The best known physics algorithms are due to Richard Feynman [2]: the path integral description and the diagrammatic representation of the perturbation series, which are also the most visual pictures of fundamental interactions. The first large scale computer algebra software, Schoonschip [3], was written by Martinus J.G. Veltman in 1964 in studies towards what later resulted in the renormalizability proof of Yang-Mills [4] theories by Gerard 't Hooft and Veltman. The discretized integral transform by Paul Hough in 1962 [5] was the first computer vision AI algorithm, utilized first in bubble chamber track fitting and nowadays used by self-driving cars to keep the car between the highway lines, or to geometrically align your favorite urban Instagram pictures.

Stephen Wolfram, also from high energy physics, industrialized computer algebra with Mathematica [6] in the 1980's, and the World Wide Web was invented at CERN by Tim Berners-Lee [7] to organize the experimental information chaos. LHC experiments produce more data faster than any other scientific experiment so far, and, by physics standards, the most heterogeneous data ever. The ROOT technology [8] developed at CERN during the 1990's was at its launch the fastest fully generic object-data-to-disk serializer in the world, and perhaps still is. Software like MadGraph has automated computations that had seemed impossible. The nowadays much-hyped quantum computers and their true supremacy, first discussed by Feynman in 1982 [9], seem to lie in the distant future of computation. However, the future may instead rely on synthetic biology. Artificial neural networks are currently the mainstream target of large scale computer algebra; more precisely, the network optimization relies on so-called automatic differentiation techniques, which we have also utilized in this thesis. What is not always known is that the crucial reverse-mode automatic differentiation of the current AI industry, later re-invented as backpropagation, was first invented by a Finnish mathematician, Seppo Linnainmaa, from Helsinki.


This was in 1971, during his MSc thesis, published later in [10]. Of course, pure mathematics also gets its share from high energy physics, in terms of string theory and scattering amplitude techniques, such as generalized polylogarithms. To summarize, we have listed some down-to-earth examples to motivate the studies of high energy interactions.

According to the de Broglie particle-wave duality, all matter must exhibit wavelike properties, with the wavelength λ = h/p inversely proportional to the momentum p, the bounding constant of fundamental resolution being the Planck constant h. This is natural given the uncertainty principle behind non-commuting observables, or, on the other hand, the Fourier transform. The wavelike quantum properties of matter span all scales of physics. Experimentally, non-classical behavior has recently been observed even with a chain of 15 amino acid biomolecules [11]. However, we do not know what happens with quantum gravity, or whether space-time is discretized, for example.
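As a quick numeric check of λ = h/p (the constants are the exact SI defining values; the 6.5 TeV/c beam momentum is just a representative LHC number):

```python
# de Broglie wavelength: lambda = h / p
h = 6.62607015e-34      # Planck constant [J s]
c = 299792458.0         # speed of light [m/s]
eV = 1.602176634e-19    # electron volt [J]

def de_broglie(p_eV_over_c):
    """Wavelength [m] for a momentum given in eV/c."""
    p = p_eV_over_c * eV / c   # convert to SI momentum [kg m/s]
    return h / p

lhc_proton = de_broglie(6.5e12)   # 6.5 TeV/c beam proton: ~2e-19 m
```

The sub-attometer wavelength is what makes a high energy collider the ultimate microscope for probing matter at the smallest scales.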

The first documented diffraction observations are from 1665, by Francesco Maria Grimaldi [12]. Perhaps the most famous quantum mechanics experiment is electron diffraction through the double slit, which demonstrates the probabilistic Born rule and the wavelike properties of elementary particles in a highly controlled way. The double helix structure of DNA was discovered by using X-ray diffraction crystallography. Everything about diffraction in terms of quantum electrodynamics (QED) is well understood. However, the Born rule, for example, cannot currently be derived but is an axiom which fits the data. Many algebraic arguments have been developed to support it. It works remarkably well and there are no known deviations from it. Experiments to test deviations from the Born rule are often based on multi-slit diffraction. According to the path integral picture, particles are free to go several loops around the slits in space-time. However, the weight of these highly non-classical paths is usually vanishing in the full probability amplitude.

The topic of this work, hadronic high energy diffraction, is far from well understood. This is due to the non-perturbative strong nature of quantum chromodynamics (QCD), N-body parton problems and the peculiar nature of confinement. That is, there are currently no known powerful ab initio methods to simulate soft QCD diffraction, such as lattice methods or small parameter expansions. Phenomenological Monte Carlo modeling is the only way to get proper observables simulated for comparison with data. However, there are some deep principles behind this modeling, in the context of Regge theory, relativistic kinematics and spin algebra, for example. It is not just parametrizations of data. In general, one needs to remember that we cannot currently even calculate the proton structure function or parton density input at the starting scale for any kind of events, not only diffractive ones. However,


once the input is fitted from data, the integro-differential evolution schemes such as DGLAP work quite well together with hard matrix elements and parton showers, and reliable predictions can be made for the LHC and elsewhere to hunt for new particles. This is based on a factorization between the soft and hard scales, which is not always exact, but extremely useful nevertheless.


1.1 Strong and even stronger interactions

The primary tool to probe strong interactions at ever-increasing energies is a high energy collider and a general purpose detector built around the interaction point. More and more powerful microscopes are being built, in essence. The initial state may be leptons, hadrons or heavy ions; the choice of the particular initial state is dictated by the physics goals. High energy collider experiments have a long history at CERN, Brookhaven, SLAC, Berkeley and Protvino. The colliders most relevant to this work are the ISR [13] at CERN, the Sp̄pS [14] at CERN, the LEP [15] at CERN, HERA [16] at DESY, the Tevatron [17] at Fermilab and nowadays RHIC [18] at Brookhaven and the LHC at CERN [19, 20]. The future is unknown, but large electron-ion colliders and massive proton-proton colliders may be expected.

The non-abelian SU(3) gauge theory of strong interactions [21-24], quantum chromodynamics, is very complicated. Ideally, we would like to understand it at a similar level of detail as quantum electrodynamics is understood. Yet understanding QED in detail does not automatically mean we understand collective N-body phenomena, such as chemistry, other than in principle. In the numerous corners of strong interactions there are many new techniques, but the soft Regge domain has basically always defeated any methods other than the Regge-like power law scaling asymptotics, or calculus based on it. There simply seems not to exist any truly powerful small parameter to expand in, other than those already found decades ago. The original Reggeon field theory by Gribov was left unfinished by the master. Naturally, many papers have been written to complete it in several plausible directions. However, one should not limit oneself to this massive obstacle posed by field theories: measuring, probing and modeling QCD diffraction is possible in various other ways.

Our number one strategy in this work is to design rigorous mathematical algorithms and then, using them, implement novel higher dimensional measurements. Of course, along the way we want to see in explicit detail what is possible in terms of models and how they compare with the LHC data, and introduce an advanced simulation framework, Graniitti, with special emphasis on spin. For the mathematical formulation of inclusive diffraction observables, our approach is to use incidence algebras. These algebras were formulated by the father of modern combinatorics, the mathematician Gian-Carlo Rota, at MIT in the 1960's. That is, we use a simple combinatorial counting of final states as our starting point. Simple is good, because the experimental issues alone are very complex. Basically, what we formulate is a new definition of diffraction. Funnily enough, this picture is visually closer to the classic diffraction


Soft QCD (low-pt): Asymptotic energy dependence; elastic scattering; inelastic diffraction: single, double and central diffraction; multiple large rapidity gaps; spin-parity selection rules in central diffraction; absorption and screening; multiplicity; transverse momentum: from exponential to power laws; soft multiparton (multipomeron) interaction observables; transition to hard scattering; long range rapidity correlations (ridge structure) and high multiplicity without jets; input for fragmentation (hadronization) models and functions

Hard QCD (high-pt): Deep(ly) inelastic scattering; differential jet and system kinematics; event shapes; jet composition, substructures, multijets; radiation patterns and color flow effects; quark/gluon separation; IR/CL-safe jet algorithms (hard, weighted); parton density f(x, Q^2) fitting; hard (HERA) diffraction = hard system + rapidity gap; hard multiparton interactions; search for new massive BSM resonances; running coupling αs measurements

Cosmic Rays: Extended Air Showers (EAS); very forward `fragmentation region' measurements of energy and particle composition at colliders; Monte Carlo tuning

Heavy Ions: Probing the unknown QCD phase diagram at different densities and temperatures; chiral symmetry recovery; search for and understanding of the solid observables of Quark-Gluon Plasma (QGP): photon and lepton rates, strangeness, quarkonia, jet quenching, plasma screening effects, spherical and elliptic flow, fluctuations, Bose-Einstein correlations, phase transition temperature fits; search for GLASMA and other hypothetical (amorphic) states of matter

Spectroscopy: Light mesons, baryons, glueballs and their mixing; multiquark states

Quarkonia: J/Ψ (cc̄), Υ (bb̄), . . . and their spin-excited states

Spin Physics: Quarkonia polarization; polarization in photoproduction and Gamma-Pomeron (γ-gg) processes; proton pdf `spin crisis'

Table 1.1: Strong interactions topics classified by experimental observables.


Nuclear: Yukawa model (30's), Chiral perturbation theory (60's), Effective theories

Regge domain: Regge theory (60-70's), Hard domain (Lipatov et al. since late 70's), Durham QCD (KMR) type models (00's), Stochastic calculus (70's)

Low-x: Saturation (80's), Color Glass Condensate (90's), Classic Yang-Mills

Integro-Differential: DGLAP, BFKL, BK, . . . (70's-90's)

Parton densities: Bjorken scaling, Feynman-Gribov parton model (late 60's), QCD integrated pdfs, unintegrated (generalized) pdfs (80-90's)

Collider QCD, jets: Fixed order pQCD (late 70's), multileg/multiloop `NLO revolution' (late 00's), analytic resummation, parton showers, Monte Carlo event generators (late 70's), effective collinear field theories (00's)

High Density: Cold and hot nuclear matter, the equation of state (EOS), neutron stars, thermal pQCD + Lattice QCD (70's)

Hadron spectroscopy: Regge trajectories (60's), Gell-Mann/Zweig quark/ace model (60's), Lattice QCD, Holography (90's), Supersymmetric meson-baryon spectrum (00's)

Quarkonia: Non-Relativistic NR-QCD (80's)

Hadronization: Lund strings (late 70's), Pre-confinement (Veneziano), Webber clusterization (80's)

Vacuum properties: Instanton calculus and 't Hooft, Wilson lattice QFT, lattice QCD (late 70's)

Hydrodynamics: Lattice, transport coefficients (80's)

Scattering amplitudes: S-matrix unitarity and analyticity (60's), Spinor-Helicity / Parke-Taylor (80's), Generalized unitarity (90's), Color factorized amplitudes and string theory methods (00's), geometric `Amplituhedron' methods (10's)

Table 1.2: Strong interactions topics classified in a theory driven way, with the approximate time of origin indicated.


pattern experiments than the de facto `large rapidity gap' counting. What we also see is that it should be one of the best ways to probe the algebraic properties of the Abramovsky-Gribov-Kancheli (AGK) scattering amplitude cutting rules [25]. That is, how does one see those rules from data? The interference structure contained in this calculus should be reflected in the multiplicity densities per rapidity interval. Those rules are, after all, of a highly combinatorial nature.

To this end, we may mention that increasingly many collider measurements are collected under the automated Rivet [26] platform, and their comparison with numerous Monte Carlo models is algorithmized under mcplots.cern.ch [27]. A technical requirement for this to work is that the measurement is a strictly fiducial one. Our novel measurements are also fiducial measurements and thus directly comparable with event generators using only the final state information. We see that highly automated and fiducial approaches should become the norm for all fields of physics and science in general, not just collider physics. We illustrate for the reader the topics of strong interactions in Tables 1.1 and 1.2. Obviously, we ignore the (effective) strong interactions in condensed matter and elsewhere, and stay only in the high energy physics context.


1.2 Fundamental open problems

To expand the mind of the reader regarding where our topic fits in the large spectrum of modern physics, we first list the following outstanding problems and questions, in no strict order. Our topic belongs to the first one.

- Non-perturbative strong interactions and scattering amplitudes
- Detectors operating at the single quantum limit at different energy scales
- Early universe, big bang, inflation, monopoles and topological defects
- Origin of mass, hierarchy, flavor, matter versus antimatter and details of (the) Higgs boson
- Unification of dynamics, superstrings, extra dimensions and holography
- The cosmological constant, dark energy and vacuum(s)
- Dark matter or misunderstood gravity at large scales
- Quantum gravity, black holes and information, wormholes and entanglement
- Stochastic gravitational waves in the (early) universe
- Quantum computers; unitary port logic driven versus `adiabatic' realizations
- Algorithms and the synthesis of biology, the arrow of time and entropy
- Artificial intelligence and physics of neuroscience; classical versus quantum
- Mathematical physics: number theory and physics, generalized particle statistics, topics of string theory
- Highly geometric theory of quantum mechanics and space-time; twistors, amplituhedrons, emergent unitarity, explanations for the origin of gauge symmetries
- High temperature superconductivity, exotic forms of condensed matter and quantum chemistry


1.3 Outline

The structure of this thesis goes as follows. In Chapter 2 we go through the classic picture of diffraction in terms of waves and paths and illustrate these visually brilliant topics through simulations. In Chapter 3 we describe the main measurement and multidimensional diffractive cross section extractions of this thesis. In Chapter 4 we discuss new algorithms for semi-exclusive diffraction and glueball hunting and go through case studies with data. In Chapter 5 we take a computational tour of high energy diffraction, where we describe our new Graniitti Monte Carlo event generator and advanced spin analysis tools. In Chapter 6 we describe in detail the mathematics of our combinatorial measurement framework and the related inverse problems. In Chapter 7 we study the `inversion of the proton', by first developing a new inverse algorithm for stochastic autoconvolution integral equations and then applying it to data. In Chapter 8 we describe the first Deep Learning based high dimensional detector efficiency inversion algorithm, DeepEfficiency. Finally, in Chapter 9 we end with overall conclusions.


We shall remind ourselves of the elegant formulations of classic diffraction. There are no radical new results in this chapter, only a compact summary of the essentials.

Huygens (1678) and Fresnel (1818) principle: Every space-time point of a propagating wavefront is a source of secondary spherical wavefronts (recursion).

Babinet's principle (~1800): A geometric aperture with a hole gives the same diffraction pattern in the far field as the complement aperture (geometry).


2.1 Wave and Helmholtz equation

From the Maxwell equations [28], we can derive the vector valued wave equation [29]

\nabla \times (\nabla \times \mathbf{E}(\mathbf{x},t)) + \frac{1}{v^2} \frac{\partial^2}{\partial t^2} \mathbf{E}(\mathbf{x},t) = 0, \qquad (2.1)

which is a second order partial differential equation (PDE) in space and time with the speed of propagation v. Now, the corresponding scalar wave equation reads

\left( \nabla^2 - \frac{1}{v^2} \frac{\partial^2}{\partial t^2} \right) U(\mathbf{x},t) = 0, \qquad (2.2)

where the scalar function U(\mathbf{x},t) can be taken to be one of the spatial components of \mathbf{E}(\mathbf{x},t). By using the scalar equation, we lose all the polarization (vector) dependent phenomena, which is just fine in the case of acoustic fields, for example.

Then substitution of a single monochromatic wave U(\mathbf{x},t) = u(\mathbf{x}) e^{-i\omega t} at angular frequency \omega gives us the linear Helmholtz equation with no time dependence [30]

\left( \nabla^2 + k^2 \right) u(\mathbf{x}) = 0, \qquad (2.3)

where the wavenumber obeys k^2 = \omega^2/v^2, k = 2\pi/\lambda. Usually, one uses Dirichlet and Neumann boundary conditions: the former sets u on the boundary and the latter defines \partial u / \partial n on the boundary. A practical way to solve classic diffraction or field configurations in arbitrary geometries and materials is to simulate the scalar fields or the Maxwell field equations numerically. An often used formulation in electrodynamics is the Finite Difference Time Domain (FDTD) method invented by Yee in 1966 [31].

However, we shall go through certain analytic classic scalar diffraction theory results. To this end, we may dream about a future where the quantized non-linear field equations of high energy QCD can be solved on a computer like the Maxwell equations.
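As a minimal illustration of the FDTD idea, here is a generic 1D Yee leapfrog sketch in normalized units with Courant number S = 1, where a hard Gaussian source launches pulses that travel essentially one cell per time step; this is an assumed textbook-style toy, not tied to any particular solver.

```python
import numpy as np

def fdtd_1d(nx=400, nt=150, src=100):
    """1D FDTD (Yee leapfrog), normalized units, Courant number S = 1."""
    ez = np.zeros(nx)   # electric field samples
    hy = np.zeros(nx)   # magnetic field, staggered half a cell / half a step
    for n in range(nt):
        hy[:-1] += ez[1:] - ez[:-1]                  # update H from curl E
        ez[1:] += hy[1:] - hy[:-1]                   # update E from curl H
        ez[src] = np.exp(-((n - 30.0) / 8.0) ** 2)   # hard Gaussian source
    return ez

ez = fdtd_1d()  # right-going pulse peak ends up near cell src + (nt - 30)
```

The ez[0] boundary is never updated, which acts as a perfectly reflecting wall; absorbing boundaries and 2D/3D grids are the standard extensions.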


2.2 Helmholtz-Kirchhoff integral theorem

Coordinates: Let our aperture plane and the coordinate system xy-plane coincide, with the positive z-axis taken towards the detector. Let \mathbf{x} be the position vector where we evaluate the diffraction integrals at the aperture, and let \mathbf{y} be the position vector in the outgoing space at the virtual detector.

We use the Green's [32] equation for a point source

2+k2

G(x,y) =−4πδ(x−y) (2.4) withG(x,y) being the Green function kernel

G(x,y) = eik|x−y|

|x−y|. (2.5)

In this case we have translation invariance G(x−y) ≡ G(x,y) by homogeneous medium, such as free space or dielectric. Thus, the solution can be written as a convolution

u(y) = 1 4π

Z

V

dxG(x−y)s(x), ify∈V and u(y) = 0 fory outsideV, (2.6) for the equation ∇2+k2

u(x) =−s(x)within volume V.
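As a quick numerical sanity check (not from the thesis; the wavenumber and grid are arbitrary), one can verify by finite differences that the kernel of Eq. 2.5 satisfies the homogeneous Helmholtz equation away from the source point, using the radial form ∇²G = (1/r) d²(rG)/dr²:

```python
import numpy as np

# Away from the source, (∇² + k²) G = 0 for G(r) = exp(ikr)/r. Since
# rG = exp(ikr), we finite-difference rG and check the residual vanishes.
k = 2.0 * np.pi                      # wavenumber, arbitrary choice
r = np.linspace(1.0, 2.0, 2001)      # radial window excluding the origin
h = r[1] - r[0]
rG = np.exp(1j * k * r)              # r * G(r)
d2 = (rG[2:] - 2 * rG[1:-1] + rG[:-2]) / h**2       # d²(rG)/dr²
residual = (d2 + k**2 * rG[1:-1]) / r[1:-1]         # (∇² + k²) G
max_resid = float(np.max(np.abs(residual)))
```

The residual is limited only by the O(h²) truncation error of the central difference.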

The Helmholtz-Kirchhoff (HK) integral theorem [33] states that we can express the scalar field at a point y as a function of the field values at the volume boundary ∂V

HK theorem: u(y) = (1/4π) ∫_{∂V} dS(x) [ u(x) ∂G/∂n|_{x−y} − G(x − y) ∂u/∂n|_x ],   (2.7)

where n is a unit normal vector oriented to the inside of the volume V, with the normal derivative ∂u/∂n ≡ ∇u(x) · n, and S(x) is a surface element. In addition, we need boundary conditions [29]

A: Kirchhoff aperture:  u(x) = u_I(x)  ∧  ∂u/∂n = ∂u_I/∂n   (2.8)
S: Kirchhoff screen:    u(x) = 0       ∧  ∂u/∂n = 0          (2.9)
R: Sommerfeld radiation: lim_{|x|→∞} |x| ( ∂u/∂n − ik u(x) ) = 0,   (2.10)

where we denote the aperture (hole + obstacle) with A, the opaque screen or detector with S and the outgoing far field radiation field half-sphere boundary with R. The incoming field is denoted with u_I(x).


2.3 Fresnel-Kirchhoff and Rayleigh-Sommerfeld formulations

Now we use a spherical point source

u(x) = A e^{ik|x−x_s|} / |x − x_s| ≡ A e^{iks} / s   (2.11)

with the source position x_s and the amplitude A. For the Green function, we get the gradient as

∂G/∂n|(x−y) = −( ik − 1/|x − y| ) G(x − y) cos δ(x, y)   (2.12)
            ≈ −ik G(x − y) cos δ(x, y),                  (2.13)

where the approximation holds when |x − y| ≫ λ. Above, the so-called inclination factor is

cos δ(x, y) = n_x · (x − y)/|x − y|,   (2.14)

where δ is the angle between the vector x − y and the normal vector n_x (z-axis).

Then using the HK theorem, we get the Fresnel-Kirchhoff diffraction equation [34] as

FK equation: u(y) = −(1/4π) ∫_A dS(x) [ ik u(x) cos δ(x, y) + ∂u/∂n|_x ] G(x − y).   (2.15)

However, this equation is ill-posed by inconsistent boundary conditions, presumably first noted by Sommerfeld. We would like to get rid of the normal derivative ∂u/∂n term at the aperture.

Getting rid of the normal term is done by imposing the Sommerfeld radiation condition of Eq. 2.10 on the FK equation and evaluating Eq. 2.12 at the point x instead of x − y, which results in a factor of 2 difference. These modifications yield the Rayleigh-Sommerfeld [35] diffraction equation

RS equation: u(y) = −(ik/2π) ∫_A dS(x) u(x) cos δ(x, y) G(x − y).   (2.16)

Note that sometimes in the literature, this is called the Fresnel-Kirchhoff equation. This integral is a superposition of source waves (Huygens' principle) with the phase at the aperture shifted by π/2, dictated by the −i factor. We see that the boundary conditions restricted the Helmholtz-Kirchhoff integral to be non-zero only on the aperture boundary A. That is, the field value at y depends only on that. One could call this the first holographic principle, later made explicit by Gabor by the invention of optical holography in 1947 and then extended to extra space-time dimensions by 't Hooft, Susskind, Maldacena and others.

Figure 2.1: A numerical Rayleigh-Sommerfeld integral based simulation of the field values u(y) with three square holes in A, the source being isotropic at z → −∞.
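As an illustration of how a Figure 2.1 type simulation can be set up, here is a minimal numerical sketch (not the thesis code; the wavelength, slit size and geometry are arbitrary choices) that evaluates the Rayleigh-Sommerfeld integral of Eq. 2.16 by direct summation over a discretized one-dimensional aperture:

```python
import numpy as np

# Direct evaluation of the Rayleigh-Sommerfeld integral (Eq. 2.16) for a
# single 1D slit illuminated with a unit-amplitude plane wave.
lam = 0.5                                  # wavelength (arbitrary units)
k = 2 * np.pi / lam
xa = np.linspace(-0.5, 0.5, 2000)          # aperture-plane coordinate
dx = xa[1] - xa[0]
slit = (np.abs(xa) < 0.1).astype(float)    # field non-zero only in the slit
z = 50.0                                   # aperture-to-detector distance
yd = np.linspace(-20.0, 20.0, 400)         # detector-plane coordinate

u = np.zeros(len(yd), dtype=complex)
for i, y in enumerate(yd):
    r = np.sqrt((y - xa) ** 2 + z ** 2)    # |x - y| for each aperture point
    cosd = z / r                           # inclination factor cos δ (Eq. 2.14)
    G = np.exp(1j * k * r) / r             # Green function kernel (Eq. 2.5)
    u[i] = -1j * k / (2 * np.pi) * np.sum(slit * cosd * G) * dx

intensity = np.abs(u) ** 2
```

For realistic apertures one would refine the aperture sampling until the pattern is stable against the discretization.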


2.4 Fresnel and Fraunhofer approximations

By translation invariance and some approximations, we can re-write the RS diffraction equation as a convolution integral

u(y) = ∫_A dS(x) u(x) h(x − y),   (2.17)

using the kernel function h(x). Now different classic analytic approximation schemes can be obtained by expanding the kernel in various ways and truncating, for example in the so-called Fresnel approximation

Fresnel regime: b²/(zλ) ∼ 1,   (2.18)

where b² is approximately the aperture dimension squared, z ≈ |x − y| is the distance from the aperture to the detector and λ is the wavelength of the incoming radiation. Actually, by reciprocity of the FK or RS equation, the condition assumes also that z can be replaced by s = |x − x_s|, which is the distance from the source to the aperture. The corresponding approximated Green's function kernel is

h(x − y) = (1/iλz) e^{ikz} e^{(ik/2z)|x−y|²} ≈ −(ik/2π) cos δ(x, y) G(x − y),   (2.19)

obtained by Taylor expanding

|x − y| ≈ z ( 1 + (1/2)(|x − y|/z)² )   (2.20)

inside the exponential function to the first non-trivial order, outside the exponential simply with |x − y| ≈ z, and substituting these in the Green's function of Eq. 2.5. The inclination factor was approximated with 1 in the forward (paraxial) limit δ → 0, such that the vector y has a much larger z component than transverse ones. The Fresnel diffraction integral equation is then

Fresnel equation: u(y) = (1/iλz) e^{ikz} ∫_A dS(x) u(x) e^{(ik/2z)|x−y|²}
                       = (1/iλz) e^{ikz} e^{(ik/2z)|y|²} ∫_A dS(x) u(x) e^{(ik/2z)|x|²} e^{−(ik/z) x·y}.   (2.21)

If the source and the detector are in the far field from the aperture, or we model the outgoing field from a lens which is focusing (positive), we can model the diffraction of the waves as plane waves and use the Fraunhofer approximation, which results in a linear dependence on the integration variable on the aperture, that is, a 2D Fourier transform. The key point here is that the phase of the field is the same at each point at the aperture, due to incoming plane waves. The approximation condition can be written as

Fraunhofer limit: b²/(zλ) ≪ 1,   (2.22)

where z can again be replaced with s. The Fraunhofer diffraction integral equation is a simplified version of the Fresnel equation

Fraunhofer equation: u(y) = (1/iλz) e^{ikz} e^{(ik/2z)|y|²} ∫_A dS(x) u(x) e^{−(ik/z) x·y},   (2.23)

obtained by neglecting the quadratic terms from the expansion, to obtain linear dependence on the integration variables. Classic analytic solutions can be obtained by Fourier transform for rectangular (→ sinc²), circular (→ Airy) and Gaussian (→ Gaussian) density slits.


2.5 Feynman path integral

The Feynman path integral based [2] complex transition amplitude or propagator for a particle, to start from (x_i, t_i) and end up in (x_f, t_f), in a non-relativistic formulation is

K(x_f, t_f | x_i, t_i) = ⟨x_f, t_f | x_i, t_i⟩ = ∫_{all paths} D[x(t)] e^{iS[x(t)]/ℏ},   (2.24)

where all different paths of the integral represent different quantum phases. The transition probability is obtained as the amplitude squared, as usual, by the Born rule. The Born rule cannot be derived, currently. In the transition amplitude, the action functional is a time integral over the Lagrangian, which encapsulates the dynamics of our physics

S[x(t)] = ∫_{t_i}^{t_f} L[x(t), ẋ(t), t] dt   (2.25)

with a classical Lagrangian, here

L = (1/2) m ẋ(t)² − V[x(t), t],   (2.26)

where V is an external potential term and the dotted variable is the time derivative. The Feynman path measure D[x(t)] needs to be understood as the following discretization

D[x(t)] ≡ lim_{n→∞} (2πiℏ∆/m)^{−d/2} ∫_{R^d} ∏_{k=1}^{n−1} dx_k / (2πiℏ∆/m)^{d/2},   (2.27)

with the phase oscillating exponentiated action discretized as

exp(iS[x(t)]/ℏ) → exp( (i/ℏ) ∑_{z=1}^{n} ∆ [ (m/2) ((x_z − x_{z−1})/∆)² − V(x_z) ] ),   (2.28)

where d denotes the number of spatial dimensions, x_0 ≡ x_i, x_n ≡ x_f and ∆ = (t_f − t_i)/n. For a mathematician, this path integral measure causes some headache, especially in the case of quantum field theories. The stochastic path integral with the Wiener measure, on the other hand, is well defined. This is probably just a lack of suitable mathematics. Physically, this picture is both intuitive and elegant. It also provides the path to lattice quantum field theories.


Figure 2.2: A toy Monte Carlo complex path integral simulation of diffraction.

Now a simple single or double slit diffraction can be trivially calculated in 1+1 dimensions for a free particle without any potential term. Instead of an analytical calculation, by curiosity, we did a numerical Monte Carlo simulation, where instead of doing any Wick rotation to imaginary time, as is often plausible to do to avoid the highly oscillating exponential term, we brute force evaluated directly the complex path integral in real time by sampling and accumulating discretized paths without any importance sampling. The resulting diffraction pattern is illustrated in Figure 2.2. For proper simulations, one needs to remember fine enough spatial sampling in the propagation and at the detector by the Shannon-Nyquist-Whittaker-Kotelnikov sampling theorem, in order not to produce aliasing. Extreme discretization instability was observed with complex path sampling, a major problem in different quantum simulation scenarios known in general as the sign problem. It is an NP-hard problem with no known generic solutions [36]. Basically, this is currently one of the fundamental limitations to any ab initio simulations of high energy diffraction and suggests the need for quantum computers, not feasible yet.
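A drastically simplified stand-in for such a toy simulation (not the thesis code; all geometry numbers are invented) sums phases e^{iS/ℏ} ∝ e^{ik·length} over two-segment paths through the slit openings only, rather than over fully random discretized paths, which already produces the interference fringes:

```python
import numpy as np

# Toy path-sum double slit: each path goes source -> slit-plane point ->
# screen point, contributing a pure phase exp(i k * path_length).
lam = 1.0
k = 2 * np.pi / lam
z1, z2 = 50.0, 50.0                    # source-to-slit and slit-to-screen
slits = np.concatenate([np.linspace(-3.0, -2.0, 200),   # lower opening
                        np.linspace(2.0, 3.0, 200)])    # upper opening
screen = np.linspace(-30.0, 30.0, 601)

amp = np.zeros(len(screen), dtype=complex)
for i, y in enumerate(screen):
    L1 = np.sqrt(z1 ** 2 + slits ** 2)          # on-axis source at origin
    L2 = np.sqrt(z2 ** 2 + (y - slits) ** 2)
    amp[i] = np.sum(np.exp(1j * k * (L1 + L2)))  # coherent sum over paths

prob = np.abs(amp) ** 2 / np.max(np.abs(amp) ** 2)
```

Restricting to straight two-segment paths is exactly the stationary-phase skeleton of the free-particle path integral; the full Monte Carlo version additionally samples wiggly paths around these.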

On the other hand, deep learning based approximation techniques are already capable of extremely high dimensional generative sampling of photorealistic high resolution images [37]. This could be an interesting target for future research regarding the lattice simulations, because image matrices can be understood as analogous to extremely complicated lattice field configurations.


We describe the proton-proton combinatorial fiducial cross section measurement at the center-of-mass energy of √s = 13 TeV in the ALICE experiment, implemented during the thesis. The full analysis code and grid computing code, including all experimental treatments, is available online on GitHub. It is directly compatible with the ALICE experiment, but all fundamental algorithms and methods are experiment independent, by design. The measurement is the first of its kind; no similar multidimensional unfolded measurement has been attempted before. However, we shall point out that the TOTEM double diffractive measurement [38] is technically a subset of this measurement. Also already the measurements by UA5 [39] were towards this direction, however, philosophically different. Here our main goal is not actually to extract diffractive cross sections, but to go beyond and implement a fiducial measurement of the combinatorial subspaces, or to measure the Grassmannian, a mathematician would say. From another perspective, these cross sections are in a sense coded diffraction aperture measurements, a high energy analog to the classic case of controlled coded aperture diffraction, which can be used for the phase field recovery with modern algorithms. Here we use them to recover high energy observables and model parameters.

A detailed description of the developed methodology is given in Chapter 6. In the analysis, all basic low level detector distributions were compared with the GEANT simulations driven by Monte Carlo event generators, low level detector signal quality cuts were part of the basic procedures, and explicit cut flows were inspected. We shall concentrate here on the physics results but also go through all the basic steps. This and our code should inspire others to implement radically new types of measurements.


3.1 Experimental setup

The detector setup consisted of the AD, VZERO and SPD subdetectors, with maximal pseudorapidity coverage. The minimum bias trigger required was an algebraic `global OR', where each of the subdetectors had their independent signal decision criteria. The ALICE experiment uses a naming convention of C- and A-sides, where the C-side is on negative rapidities and the A-side on positive. For the generic detector performance of the ALICE experiment, see [40].

The VZERO detector [41] is a forward scintillating plastic counter made with eight cells in four radial rings, giving 32 channels per rapidity side. Scintillator detectors are based on a physical mechanism where incoming charged particles induce molecular excitations in the plastic, which are de-excited by the emission of visible wavelength photons. Hamamatsu photomultipliers (PMT) are coupled directly to the scintillating plastics on the A-side, and on the C-side through optic fibers. The digitized output gives signal hit time and accumulated charge information. The geometric acceptances over pseudorapidity are η ∈ [−3.7, −1.7] and [2.8, 5.1]. The minimum bias trigger decision was based on time-domain signal arrival time cuts which filter out most of the beam induced background such as beam-gas collisions. The PMT high voltage gain setup and signal detection thresholds were adjusted near the single minimum ionizing particle (MIP) limit as described in [41], with parameters adjusted run-by-run for different beam background and noise conditions. The offline calibration was done using beam test data and cosmic muons when the LHC beam was not active. The detector simulations were done with run anchored setups. In the offline analysis, background and noise sensitivity were studied by requiring also a minimum charge threshold for the signal decision, in addition to more stringent time windows.

The AD detector [42] is a forward shower counter built on similar technology as the VZERO detector but with different geometry and optical coupling. The geometric acceptances over pseudorapidity are η ∈ [−7.0, −4.9] and [4.8, 6.3]. It consists of two longitudinal (z-axis) layers of scintillating plastic with four radial transverse quadrants around the beam pipe, giving 8 channels per rapidity side. The scintillation light is steered through a wavelength shifter to optimize the light transport, and is propagated through one meter of optical fibers to Hamamatsu photomultipliers. The front-end electronics is the same as in the VZERO case and the digitized output gives hit time and accumulated charge information. The detector PMT gain and signal threshold setup was similar as with the VZERO, and a signal coincidence between adjacent layers was required for the trigger. In the same way as with the VZERO, in the offline analysis, we studied the noise and background by


Figure 3.1: The geometric acceptance of the ALICE sub-detectors (ZDC, ADC, V0C, SPD, V0A, ADA, ZDA) in pseudorapidity versus p_T (GeV), with E ≈ p_T cosh(η) (GeV), at √s = 13 TeV. The triangle plot style adapted and extended from R. Orava.

more stringent accumulated charge and time window cuts.

The SPD (Silicon Pixel Detector) [43] of the inner tracking system (ITS) consists of two layers of pn-type silicon pixel diodes next to the interaction point, with detector cylinder radii 3.9 cm and 7.6 cm and geometric acceptances |η| < 2 and |η| < 1.5. The material thickness of each pixel is 200 µm. The inner layer has 40 half-stave modules and the outer layer 80, each having 10 readout chips with 8192 pixels per chip. This gives in total 1200 readout chips with 9.83 million pixels with 50 × 425 µm² pitch size. The radiation damaged or otherwise faulty, inactive and noisy outlier chips were explicitly inspected from the data and bit masked also from the simulations.

The low level online trigger implemented by the SPD is based on a Fast-OR decision, where in each chip the discriminator output of the pixel cells is collected and an algebraic OR is taken. In the offline analysis, at least one Fast-OR in the inner or outer SPD layer was required, and the noise sensitivity was inspected by requiring more stringent criteria, such as hits in both layers. This, however, effectively also reduces the acceptance to |η| < 1.5. Moving further in the transverse direction, there are also two layers of the silicon drift detector (SDD) and two outermost layers of the silicon strip detector (SSD) with acceptance |η| < 0.9. These layers we did not use due to their more limited acceptance. In the analysis, we used the fired chip level information. In the LHC Run III, the inner tracking system will be replaced with a new silicon tracker with 12.5 gigapixels. In addition, we studied the ZDC (zero degree calorimeter) data, which however is not used in the analysis due to lacking calibration, but only as a `spectator' detector for general data quality cross checks. We illustrate the detector phase-space coverage cartoon in Figure 3.1.

In this analysis, three special low luminosity runs at √s = 13 TeV with identification numbers 274593, 274594 and 274595 were chosen, with approximately 487k, 418k and 775k triggered minimum bias beam-beam events per run, with results given here for the last one; the two others were used for the irreducible run-by-run uncertainties encapsulating unknown systematic uncertainties. Using the standard coding, the LHC filling scheme was

<SPACING>-<NBb>-<IP1/5>-<IP2>-<IP8>-<code> = Multi-57b-56b-25-20-24-4bpi-15inj,   (3.1)

meaning in total 57 and 56 bunches circulating clockwise and counterclockwise, out of which 20 colliding bunch pairs in the ALICE IP2 interaction point. This type of beam configuration removes any off-time pileup. The instantaneous luminosity in these runs was very low, with Poisson µ ∼ 10⁻³, given by the normalized global OR trigger rate R

µ = −ln(1 − R), where R = L0b/(N_b f_rev) = L0b/LMb ∈ [0, 1),   (3.2)

where the LHC revolution frequency is f_rev = 11.245 kHz, N_b is the number of interacting bunch pairs and L0b (LMb) are the global OR and bunch crossing trigger frequencies. The numbers in these runs give vanishing pileup corrections, which however were part of our correction algorithms through our Möbius inversion technique [44]. For the trigger details and rates, see Appendix A.4.
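Eq. 3.2 in code, with placeholder trigger rates (the actual run values are not reproduced here):

```python
import math

# Poisson pileup estimate from trigger counting (Eq. 3.2). The rate below is
# an illustrative placeholder, not a measured run value.
def pileup_mu(l0b_hz: float, n_bunch_pairs: int, f_rev_khz: float = 11.245) -> float:
    """mu = -ln(1 - R), with R the per-bunch-crossing trigger probability."""
    lmb_hz = n_bunch_pairs * f_rev_khz * 1e3   # bunch crossing frequency LMb
    r = l0b_hz / lmb_hz
    assert 0.0 <= r < 1.0, "trigger rate must be below the crossing rate"
    return -math.log(1.0 - r)

mu = pileup_mu(l0b_hz=250.0, n_bunch_pairs=20)   # lands in the mu ~ 1e-3 regime
```

For small R the expression reduces to µ ≈ R, consistent with the vanishing pileup corrections quoted in the text.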

A van der Meer scan using the VZERO detector was used to calibrate the absolute luminosity; details and uncertainties of this are given in [45]. The basic idea is that the VZEROC and VZEROA, together with their boolean AND trigger combination, provide a visible inelastic cross section

σ_V0-AND = 57.8 ± 1.2 mb,   (3.3)

which is used to directly scale the event counts to visible cross sections, because VZERO was part of our detector setup. To emphasize: the visible inelastic cross section seen by the VZERO is neither a fiducial cross section nor a total inelastic cross section. These three different types of definitions were taken into account by the unfolding and extrapolation procedures.


3.2 Experimental selections and corrections

The offline signal selections are listed in Table 3.1. No high level reconstructed observables were used or are available in ALICE beyond |η| < 0.9, which would naturally be the optimal case with hypothetical large forward acceptance tracking and calorimetry.

Detector   Signal cut
ADC        ∆t ∈ [63, 69] ns
V0C        ∆t ∈ [0, 6] ns
SPDC       F ≥ 2
SPDA       F ≥ 2
V0A        ∆t ∈ [7, 14] ns
ADA        ∆t ∈ [54, 60] ns

Table 3.1: Offline signal selection quality cuts in terms of the signal arrival time ∆t and the number of fired chips F.

After the low-level signal decision criteria, the data was corrected for the irreducible residual beam gas, satellite collisions and noisy events. That is, diffractive events, especially single diffractive, are experimentally very similar to fixed-target-like beam-gas collisions passing through the time window and charge threshold cuts. To correct this, we used beam-beam, beam-empty, empty-beam and empty-empty trigger masks, which were based on utilizing the normal and special LHC bunch bucket sequences. The data was corrected with

N_j ← N_j^(B) − α^(A) N_j^(A) − α^(C) N_j^(C) + 2 α^(E) N_j^(E),   (3.4)

where N_j^(k) represents the number of events in the j-th combination, with k running over the aforementioned special bunch trigger collected event statistics. The last term, with a positive sign and a factor of two, takes into account the double counting.

The correction scale factors α^(k), taking into account the different bunch sequence luminosity differences and the deterministic and random trigger downscaling, were obtained from the trigger statistics with

α^(k) = ( LMb^(B) / LMb^(k) ) × ( L0a^(B)/L0b^(B) ) / ( L0a^(k)/L0b^(k) ),   (3.5)

where k = A is a beam from the A-side and nothing from the C-side, k = C is a beam from the C-side and nothing from the A-side, and k = E is nothing from both sides.
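Eqs. 3.4 and 3.5 as a sketch, with all counts and trigger frequencies as made-up placeholders:

```python
# Beam-background subtraction (Eqs. 3.4-3.5). All numeric inputs below are
# illustrative placeholders, not the actual run statistics.
def alpha(lmb_b, lmb_k, l0a_b, l0b_b, l0a_k, l0b_k):
    """Scale factor alpha^(k) of Eq. 3.5 for bunch mask k relative to beam-beam (B)."""
    return (lmb_b / lmb_k) * (l0a_b / l0b_b) / (l0a_k / l0b_k)

def corrected_counts(n_B, n_A, n_C, n_E, a_A, a_C, a_E):
    """Eq. 3.4: subtract single-beam backgrounds, add back the double counting."""
    return n_B - a_A * n_A - a_C * n_C + 2.0 * a_E * n_E

# Example: 1000 beam-beam events, small single-beam and empty-empty samples,
# with all scale factors equal to one for simplicity.
n_corr = corrected_counts(1000.0, 10.0, 12.0, 1.0, 1.0, 1.0, 1.0)
```

With identical luminosities and no downscaling, α^(k) = 1 and the correction reduces to plain count subtraction.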
