Contributions to the theory of multiplicative chaos

N/A
N/A
Info
Lataa
Protected

Academic year: 2022

Jaa "Contributions to the theory of multiplicative chaos"

Copied!
38
0
0

Kokoteksti

CONTRIBUTIONS TO THE THEORY OF MULTIPLICATIVE CHAOS

Janne Junnila

Academic dissertation. To be presented, with the permission of the Faculty of Science of the University of Helsinki, for public criticism in Auditorium XII of the Main Building of the University on 6th October 2018 at 10 o’clock.

Department of Mathematics and Statistics
Faculty of Science
University of Helsinki

Helsinki 2018

ISBN 978-951-51-4513-0 (paperback)
ISBN 978-951-51-4514-7 (PDF)
http://ethesis.helsinki.fi
Unigrafia Oy, Helsinki 2018

ACKNOWLEDGMENTS

Above all, I am most grateful to my advisor Eero Saksman for his excellent support and guidance during my studies. His vast knowledge of mathematics in general, and analysis in particular, has certainly left me impressed, and hopefully a bit of it will stick with me as well. I have also always liked Eero’s habit of telling stories from the history of mathematics. Besides being entertaining, these anecdotes connect the theorems to the human beings behind them and illustrate how diverse the mathematical community actually is. Next, I would like to give special thanks to Christian Webb for his collaboration and insights. Especially, his ability to place our work into the larger context of mathematical physics has been an illuminating experience for me. I wish to thank Julien Barral and Vincent Vargas for their efforts and dedication when working as the pre-examiners of this thesis. Their positive comments are highly appreciated. I also thank the funders of this research: the Doctoral Programme in Mathematics and Statistics at the University of Helsinki, the Academy of Finland Centre of Excellence ‘Analysis and Dynamics’, as well as the Academy of Finland Project ‘Conformal methods in analysis and random geometry’. My stay at the Kumpula campus has been a joyous time. Thanks here go to all my friends and colleagues for creating the cozy atmosphere. I thank Otte Heinävaara for the numerous interesting discussions we have had, as well as for running the high school math club Tekijäryhmä with me. I also thank my friends in the Finnish competitive programming community for providing me with lots of pastimes, whether it be solving coding problems, playing the violin, or just chatting on IRC. Likewise, I thank everyone who has played with or against me at the traditional Friday badminton sessions of the math department. Finally, I wish to thank my mom and dad for all their support, as well as my sister for all the silly laughs!
Helsinki, September 2018. Janne Junnila

LIST OF INCLUDED ARTICLES

This thesis consists of an introduction and the following three articles:

[I] J. Junnila and E. Saksman. Uniqueness of critical Gaussian chaos. Electronic Journal of Probability 22 (2017).
[II] J. Junnila. On the Multiplicative Chaos of Non-Gaussian Log-Correlated Fields. International Mathematics Research Notices (2018), rny196.
[III] J. Junnila, E. Saksman, and C. Webb. Imaginary multiplicative chaos: Moments, regularity and connections to the Ising model. To be submitted (2018).

All respective authors played equal roles in the analysis and writing of the joint articles [I] and [III].

CONTENTS

1 What is multiplicative chaos?
1.1 An introductory example
1.2 History and current trends

2 Log-correlated Gaussian fields
2.1 Distribution-valued Gaussian fields
2.2 Log-correlated fields
2.3 Approximations

3 Gaussian multiplicative chaos
3.1 Definitions
3.2 Moments and support

4 Complex and critical Gaussian chaos
4.1 Extending the range of 𝛽
4.2 Purely imaginary chaos
4.3 Other types of complex chaos
4.4 Critical chaos

5 Non-Gaussian chaos
5.1 History and applications
5.2 Our approach
5.3 Proving convergence
5.4 Open questions

Bibliography


1 WHAT IS MULTIPLICATIVE CHAOS?

1.1 An introductory example

The subject of this thesis is the study of various instances of random distributions that we collectively call multiplicative chaos.¹ They are often constructed by taking the product of an infinite number of independent random functions, which explains the name. One of the simplest examples of multiplicative chaos is based on the random Fourier series

𝑋(𝑥) ≔ ∑_{𝑘=1}^∞ (𝐴_𝑘 cos(2𝜋𝑘𝑥) + 𝐵_𝑘 sin(2𝜋𝑘𝑥))/√𝑘 ,   (1.1)

where 𝐴_𝑘 and 𝐵_𝑘 are independent standard Gaussian random variables. Let 𝑌_𝑛(𝑥) denote the 𝑛th term in the series (1.1), fix a parameter 𝛽 > 0, and define for 𝑛 ≥ 1 and 𝑥 ∈ [0, 1] the product

𝑀_𝑛(𝑥) ≔ ∏_{𝑘=1}^𝑛 exp(𝛽𝑌_𝑘(𝑥) − (𝛽²/2) 𝔼 𝑌_𝑘(𝑥)²) .

It is easy to check that the sequence (𝑀_𝑛(𝑥))_{𝑛=1}^∞ is a non-negative martingale, and from this it follows that the measures 𝑀_𝑛(𝑥) 𝑑𝑥 converge almost surely in the weak∗-sense to a random measure 𝜇 on the interval [0, 1]. The measure 𝜇, which is an example of a Gaussian multiplicative chaos measure, turns out to be almost surely the zero measure if 𝛽 ≥ √2, but for 𝛽 < √2 one gets a non-trivial limit. In the latter case 𝜇 is almost surely singular with respect to the Lebesgue measure, yet it has no atoms.

¹ A note on terminology: In this introduction the word ‘distribution’ will always refer to continuous linear functionals on some space of test functions, typically tempered distributions à la Schwartz. Probability distributions will be called probability laws instead.
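The construction above is easy to simulate. The following sketch draws one realization of 𝑀_𝑛 on a grid (NumPy is assumed; all names are ours, not from the thesis). It uses the fact that 𝔼 𝑌_𝑘(𝑥)² = (cos² + sin²)/𝑘 = 1/𝑘, so the renormalization in each factor is just 𝛽²/(2𝑘):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_M_n(x, n, beta, rng):
    """One realization of the martingale M_n(x) on a grid x.

    Y_k is the k-th term of the series (1.1); since A_k and B_k are
    standard Gaussians, E Y_k(x)^2 = 1/k, so each factor in the product
    is exp(beta * Y_k(x) - beta^2 / (2k)) and has mean 1.
    """
    log_M = np.zeros_like(x)
    for k in range(1, n + 1):
        A, B = rng.standard_normal(2)
        Y_k = (A * np.cos(2 * np.pi * k * x) + B * np.sin(2 * np.pi * k * x)) / np.sqrt(k)
        log_M += beta * Y_k - beta**2 / (2 * k)
    return np.exp(log_M)

x = np.linspace(0.0, 1.0, 2048, endpoint=False)
M = sample_M_n(x, n=1000, beta=1.0, rng=rng)  # spiky profile, as in Figure 1.1
```

Since every factor has mean 1, the spatial average of 𝑀_𝑛 fluctuates around 1 for subcritical 𝛽, while a single realization at 𝛽 = 1 already shows the thin tall spikes discussed below.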

Figure 1.1. A computer simulation of 𝑀_𝑛(𝑥) for 𝑛 = 1000 and 𝛽 = 1.

In Figure 1.1 we have plotted a single simulated realization of 𝑀_𝑛(𝑥) when 𝑛 = 1000 and 𝛽 = 1. One can see that the function stays most of the time rather close to 0, but it has some large spikes. As 𝑛 increases, the spikes will get thinner and taller, and in the limit the whole mass will actually be concentrated on some set of Hausdorff dimension strictly less than 1.

1.2 History and current trends

The history of multiplicative chaos traces back to the 1970s, when it appeared almost simultaneously and independently in two quite different contexts. First, in the 1971 paper [17] Raphael Høegh-Krohn considered certain quantum field theories whose Hamiltonians have the form 𝐻 = 𝐻_0 + 𝑉, where 𝑉 is a multiplicative chaos-type object. Second, in the 1972 article [24] Benoît Mandelbrot proposed a novel limit log-normal model for energy dissipation in turbulence. Both Høegh-Krohn and Mandelbrot were able to study their models rigorously in the so-called 𝐿²-phase, where the second moment of the multiplicative chaos is finite. Mandelbrot moreover provided heuristic arguments on how one might be able to go past this phase to

the whole subcritical phase.² Making these heuristics rigorous however turned out to be difficult, so he switched to studying simplified models called multiplicative cascades in his two subsequent papers [25, 26].

Figure 1.2. A dyadic Mandelbrot cascade on [0, 1].

Multiplicative cascades have also been called Mandelbrot cascades, and a basic example of a one-dimensional cascade built on the unit interval is illustrated in Figure 1.2. We will next explain the construction. Briefly, the goal is to form a random measure on some space, say the unit interval as in the picture. We begin by dividing the interval recursively into left and right halves, starting from the interval [0, 1] itself. Each dyadic subinterval can then be identified in a natural way with a node in the tree of splittings, which one can visualize to be hanging above the unit interval. Next we assign every node/interval 𝑘 a non-negative random weight 𝑊_𝑘. The 𝑛th level approximation 𝜇_𝑛 of the cascade measure is then obtained by letting the measure of a dyadic subinterval 𝐼 of length 2^{−𝑛} be the product of the weight of 𝐼 together with the weights of all the ancestors of 𝐼, distributing the mass uniformly inside 𝐼. In the picture 𝐼 could be the red interval, and then the

² In our introductory example the 𝐿²-phase corresponds to having 𝛽 ∈ (0, 1), while the subcritical phase has 𝛽 ∈ (0, √2).

corresponding weights would be the ones lying on the red path from the root of the tree to the node corresponding to 𝐼. If we assume that the weights 𝑊_𝑘 are independent and identically distributed random variables with mean 1/2, the sequence 𝜇_𝑛 becomes an almost surely converging martingale with a limit measure 𝜇. The precise condition for the non-degeneracy of the limit in this case is

𝔼 𝑊_1 log(𝑊_1) < 0 ,

a fact that was proven by Jean-Pierre Kahane and Jacques Peyrière in an article [20] that appeared two years after Mandelbrot’s work. In the same paper the authors also proved many other important basic properties of 𝜇, concerning e.g. the existence of moments and the support of the cascade measure.

The progress made on Mandelbrot cascades was thus relatively fast, but when the model is viewed as a simplification of the original limit log-normal model, the simplification comes at the cost of naturality. The problem is that the cascade measures would rather live on the boundary of a tree than in Euclidean space: the stochastic dependence between two locations 𝑥 and 𝑦 can vary drastically depending on where 𝑥 and 𝑦 are located in the space, even if their Euclidean distance is held constant. For instance, if 𝑥 = 1/2 − 𝜀 and 𝑦 = 1/2 + 𝜀 are points on the unit interval in Figure 1.2, then the lowest common ancestor of 𝑥 and 𝑦 in the tree is the root. This means that the behaviour of 𝜇 at 𝑥 is almost completely independent of the behaviour at 𝑦, despite the points being very close to each other when 𝜀 > 0 is small.

This problem was overcome when Mandelbrot’s original limit log-normal model was finally made rigorous by Kahane in 1985, when he published his seminal article [19]. In this paper Kahane not only coined the term multiplicative chaos and presented a solid mathematical theory capable of justifying and generalizing Mandelbrot’s model, but he also proved many basic properties of the resulting chaos measures.
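A minimal sketch of the dyadic cascade is equally short. Here the weights are taken uniform on (0, 1), which is purely our choice for illustration: then 𝔼 𝑊 = 1/2 and the non-degeneracy condition holds, since 𝔼 𝑊 log 𝑊 = ∫₀¹ 𝑤 log 𝑤 𝑑𝑤 = −1/4 < 0.

```python
import numpy as np

rng = np.random.default_rng(1)

def cascade_level(n, rng):
    """n-th level approximation of a dyadic cascade on [0, 1].

    Returns the piecewise-constant density of mu_n on the 2^n dyadic
    intervals: the product of i.i.d. weights along the ancestral path,
    rescaled by 2 per level (2*W has mean 1) so that the expected total
    mass stays 1, making the total mass a martingale in n.
    """
    density = np.ones(1)
    for _ in range(n):
        density = np.repeat(density, 2)           # split every interval in two
        W = rng.uniform(0.0, 1.0, size=density.size)  # E W = 1/2, our choice of law
        density *= 2.0 * W
    return density

mu_n = cascade_level(12, rng)
total_mass = mu_n.mean()  # ∫_0^1 density dx = average of the 2^n cell values
```

Averaging the total mass over independent realizations stays near 1, while individual realizations fluctuate heavily, as expected from a non-degenerate cascade limit.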
Kahane’s theory allows one to construct multiplicative chaos measures in arbitrary locally compact metric spaces (𝑇, 𝜌). Roughly speaking, Kahane showed how to define random measures of the form

𝑒^{𝛽𝑋(𝑥) − (𝛽²/2) 𝔼 𝑋(𝑥)²} 𝑑𝜎 ,   (1.2)

where 𝛽 ∈ ℝ is a parameter, 𝜎 is a reference measure, and 𝑋 is a Gaussian field on 𝑇. The field 𝑋, however, is typically not a function but a distribution, and it is not immediately clear how to make sense of (1.2). This will be further elaborated in Chapters 2 and 3. In this introduction we will be focusing on the case where 𝑇 ⊂ ℝ^𝑑 and 𝜌 is the usual Euclidean distance, in which case it turns out that the natural Gaussian fields to look at are those whose covariance has a logarithmic singularity on the diagonal, formally

𝔼 𝑋(𝑥)𝑋(𝑦) = log⁺ (1/|𝑥 − 𝑦|) + 𝑂(1) .

This has also been the most important case for applications. Of the articles in this thesis, [I] is written for general metric spaces (with applications in ℝ^𝑑), while [II] and [III] are written in the Euclidean setting.

It took some time before Kahane’s theory started to receive more widespread interest, but today multiplicative chaos has connections in many directions. These include for example the Stochastic Loewner Evolution (SLE) [2, 15, 36], two-dimensional quantum gravity [9, 13, 14, 21], and number theory [16, 33]. Along with these rather recent developments, a call for the further study of multiplicative chaos distributions, including variants such as non-Gaussian and complex chaos, has emerged. The three papers included in this thesis provide some answers to this call: in [I] we show that non-atomic real Gaussian multiplicative chaos measures are universal (especially in the so-called critical case), in the sense that various different ways of constructing the chaos actually yield the same result. In [II] we construct non-Gaussian multiplicative chaos for fairly general log-correlated fields that admit a representation as a sum of independent fields. Finally, in [III] we study the basic properties of purely imaginary Gaussian multiplicative chaos, which is perhaps the most elementary variant of complex multiplicative chaos.
This introduction discusses prerequisites and results from the literature that are related to the multiplicative chaos theory appearing in [I–III]. At the same time we will also state some selected theorems from the three papers.


2 LOG-CORRELATED GAUSSIAN FIELDS

2.1 Distribution-valued Gaussian fields

Let 𝑈 ⊂ ℝ^𝑑 be a bounded open set. Classically one would think of a Gaussian field on 𝑈 as a random function 𝑋 ∶ 𝑈 → ℝ such that for any finite collection of points 𝑥_1, …, 𝑥_𝑛 ∈ 𝑈 the random variables 𝑋(𝑥_1), …, 𝑋(𝑥_𝑛) are jointly Gaussian. In this chapter we will first briefly discuss how this concept can be generalized to more singular Gaussian fields on 𝑈, whose realizations are not functions but distributions. After this is done, we will define log-correlated Gaussian fields and provide some examples.

For simplicity, we will henceforth always assume that our Gaussian random variables are centered, meaning that they have expectation 0. The law of a Gaussian random vector 𝑋 ∈ ℝ^𝑛 is then completely determined by its covariance matrix 𝐶, which is an 𝑛 × 𝑛 positive semidefinite matrix with entries 𝐶_{𝑗,𝑘} = 𝔼 𝑋_𝑗 𝑋_𝑘. From this it follows that the law of any Gaussian field 𝑋 on 𝑈 is determined by its covariance function 𝐶_𝑋(𝑥, 𝑦) ≔ 𝔼 𝑋(𝑥)𝑋(𝑦).¹ Conversely, given a positive definite function 𝐶 ∶ 𝑈 × 𝑈 → ℝ, one may construct (by using the Kolmogorov extension theorem) a collection {𝑋(𝑥) ∶ 𝑥 ∈ 𝑈} of Gaussian random variables in such a way that for any 𝑛 ≥ 1 and 𝑥_1, …, 𝑥_𝑛 ∈ 𝑈 the random variables 𝑋(𝑥_1), …, 𝑋(𝑥_𝑛) are jointly Gaussian with covariance matrix given by (𝐶(𝑥_𝑗, 𝑥_𝑘))_{𝑗,𝑘=1}^𝑛.

This point of view, while natural, breaks down when we try to define log-correlated fields. Consider the example we had earlier in Chapter 1, where 𝑋 is the Gaussian Fourier series given by (1.1). We will soon see that 𝑋 is an example of a log-correlated field in the sense of the

¹ Recall that for any index set 𝐼 the law of any random variable Ω → ℝ^𝐼 is determined by the laws of its finite-dimensional projections.

upcoming Definition 2.2 – for now, let us simply note that after a small (formal) computation one finds that

𝔼 𝑋(𝑥)𝑋(𝑦) = log (1/(2|sin(𝜋(𝑥 − 𝑦))|))

for all 𝑥, 𝑦 ∈ [0, 1]. Notice that this would imply that 𝑋(𝑥) has infinite variance, so the idea that 𝑋(𝑥) is a Gaussian random variable is not going to work – 𝑋 cannot be a random function. It is however easy to check that 𝑋 makes sense as a random distribution on the unit circle 𝕋 ≅ ℝ/ℤ. Indeed, if 𝜑 ∈ 𝐶^∞(𝕋) is a test function, then its Fourier coefficients decay faster than any polynomial, while the coefficients 𝐴_𝑘/√𝑘 and 𝐵_𝑘/√𝑘 stay almost surely bounded. In fact, using the Borel–Cantelli lemma one easily sees that for any 𝜀 > 0 the random variables 𝐴_𝑘 and 𝐵_𝑘 are almost surely less than √((2 + 𝜀) log 𝑘) for large enough 𝑘. It follows that 𝑋 can be evaluated against any test function whose Fourier coefficients decay faster than (1 + |𝑘|)^{−1/2−𝜀} for some 𝜀 > 0.

The above example indicates that we should aim for a definition of a random Gaussian distribution. To this end, let S′ be the space of tempered distributions on ℝ^𝑑, where S denotes the Schwartz function space. We say that a real-valued random distribution 𝑋 ∈ S′ is an S′-valued Gaussian field if the random variables {𝑋(𝜑) ∶ 𝜑 ∈ S, 𝜑 real} are jointly Gaussian. We have thus replaced the point evaluations in the earlier definition of a Gaussian random function by evaluations against test functions.

If 𝑋 is an S′-valued Gaussian field, we may define a bilinear form 𝐶_𝑋 on S by setting 𝐶_𝑋(𝑓, 𝑔) ≔ 𝔼 𝑋(𝑓)𝑋(𝑔). This bilinear form is symmetric and positive definite, and it clearly determines the law of 𝑋. The converse is a bit trickier in this case. Given a symmetric and positive definite bilinear form 𝐶 on S, we can again find a collection of random variables {𝑋(𝑓) ∶ 𝑓 ∈ S} in such a way that the law of 𝑋 agrees with 𝐶.
However, it is not immediately clear that one can do this in such a way that the linear structure 𝑋(𝑓) + 𝑋(𝑔) = 𝑋(𝑓 + 𝑔) is preserved, and more importantly, it is also not clear that.

one can choose the variables so that 𝑋 ∈ S′. Fortunately, the following simple corollary of the Minlos theorem holds (see e.g. [37, Theorem 1.10] and the discussion following it).

Theorem 2.1. Let 𝐶 be a real bilinear form on S that is symmetric, continuous and positive definite. Then there exists an S′-valued Gaussian field 𝑋 on ℝ^𝑑 such that 𝐶_𝑋(𝑓, 𝑔) = 𝐶(𝑓, 𝑔) for all 𝑓, 𝑔 ∈ S.

2.2 Log-correlated fields

We are now ready to define log-correlated Gaussian fields.

Definition 2.2. Let 𝑈 ⊂ ℝ^𝑑 be a bounded open set. An S′-valued Gaussian field 𝑋 is log-correlated on 𝑈 if 𝐶_𝑋 is given by an integral

𝐶_𝑋(𝑓, 𝑔) = ∫_{ℝ^𝑑 × ℝ^𝑑} 𝐶_𝑋(𝑥, 𝑦)𝑓(𝑥)𝑔(𝑦) 𝑑𝑥 𝑑𝑦 ,   (𝑓, 𝑔 ∈ S)

with kernel 𝐶_𝑋(𝑥, 𝑦) of the form

𝐶_𝑋(𝑥, 𝑦) = log (1/|𝑥 − 𝑦|) + 𝑔(𝑥, 𝑦) if 𝑥, 𝑦 ∈ 𝑈, and 𝐶_𝑋(𝑥, 𝑦) = 0 otherwise,

where 𝑔 ∈ 𝐿¹(𝑈 × 𝑈) is an integrable function that is bounded from below on compact subsets of 𝑈 and bounded from above on all of 𝑈.

Remark. There is also an object known as the log-correlated Gaussian field (LGF) on ℝ^𝑑. It has 𝑔 = 0 and is defined on the whole space ℝ^𝑑, albeit only up to an additive constant. See [12] for more details.

Remark. We would like to point out that the admittedly abstract Theorem 2.1 is not necessary for constructing log-correlated fields. One could for example consider the Karhunen–Loève expansion of the field and show that it converges in a suitable negative-index Sobolev space. Interested readers may wish to look at [III, Section 2], where this approach is carried out.

As already mentioned, (1.1) is an example of a log-correlated field. Let us mention another central example, the 2-dimensional Gaussian Free Field.

Figure 2.1. A computer simulation of the GFF in the unit square [0, 1]².

Definition 2.3. Let 𝑈 ⊂ ℂ be a simply connected bounded domain. The Gaussian Free Field (GFF) on 𝑈 with zero boundary conditions is the log-correlated Gaussian field with the covariance kernel

𝐶_𝑋(𝑥, 𝑦) = log |(1 − 𝜑(𝑥)\overline{𝜑(𝑦)})/(𝜑(𝑥) − 𝜑(𝑦))| ,

where 𝜑 ∶ 𝑈 → 𝔻 is any conformal homeomorphism between 𝑈 and the unit disc 𝔻.

The GFF appears as the scaling limit of many models in mathematical physics; see [35] for an introduction to the topic. A central feature of the GFF is its domain Markov property, which states that the conditional law of a GFF 𝑋 on some open subdomain 𝑉 ⊂ 𝑈, given its values outside of 𝑉, is equal to the sum of the harmonic extension of 𝑋|_{𝜕𝑉} to 𝑉 and an independent GFF in 𝑉.

When analyzing log-correlated Gaussian fields – and later Gaussian multiplicative chaos – some specific covariance kernels are particularly well-behaved. The exactly scale invariant field on the unit interval [0, 1] has the pure logarithm as its covariance kernel:

𝐶_𝑋(𝑥, 𝑦) = log (1/|𝑥 − 𝑦|) .   (2.1)

This field has the property that if 0 < 𝑠 < 1 is a scaling parameter, then the law of 𝑋(𝑠·) is the same as the law of 𝑋 plus an independent Gaussian random variable with variance log(𝑠^{−1}). More generally, the so-called ⋆-scale invariant covariance kernels [1, 31] are the ones with a representation

𝐶_𝑋(𝑥, 𝑦) = ∫_1^∞ 𝑘((𝑥 − 𝑦)𝑡)/𝑡 𝑑𝑡 ,   (2.2)

where 𝑘 is a positive definite continuous function with 𝑘(0) = 1. If 𝑘 is in addition (say) compactly supported, then these fields enjoy a useful spatial decorrelation property: the integrand in (2.2) is 0 for 𝑡 ≳ |𝑥 − 𝑦|^{−1}.

2.3 Approximations

The definition of Gaussian multiplicative chaos in the next chapter relies on approximating log-correlated fields with functions. We will do this by using convolution approximations, since they work for any field 𝑋. However, certain other approximation paradigms also deserve to be mentioned.

For the Gaussian Fourier series (1.1) a natural approximating sequence of functions is given simply by the partial sums of the series. This method of approximation has independent increments, but its spatial decorrelation properties are poor, since the trigonometric basis functions cos(2𝜋𝑘𝑥) and sin(2𝜋𝑘𝑥) are not localized. Such lack of spatial decorrelation is often an obstacle in proofs, because it makes it hard to use arguments that partition the space and claim that the behaviour of the field should be more or less independent in different parts.

Certain fields possess approximation schemes that feature both independent increments and spatial decorrelation. One particularly convenient one is the geometric construction of Emmanuel Bacry and Jean-François Muzy [3], which is based on looking at cones of hyperbolic white noise in the upper half-plane. Such representations exist – among other fields – both for (1.1) and (2.1). Details of the construction in the

case of (1.1) can be found in [2].² In turn, for ⋆-scale invariant fields a good approximation is obtained simply by truncating the integral (2.2). We refer to [1] for more information.

For the GFF, a commonly used natural approximation is to take circle averages

𝑋_𝜀(𝑥) = ⨏_{𝜕𝐵(𝑥,𝜀)} 𝑋(𝑡) 𝑑𝑡

of the field. Due to the domain Markov property of the GFF these approximations possess spatial independence for distances larger than 2𝜀, and moreover for a fixed point 𝑥 the process 𝜀 ↦ 𝑋_𝜀(𝑥) has the covariance

𝔼 𝑋_𝜀(𝑥)𝑋_{𝜀′}(𝑥) = log (1/max(𝜀, 𝜀′)) + log 𝜌(𝑥; 𝑈) ,

where 𝜌(𝑥; 𝑈) is the conformal radius of 𝑈 as seen from 𝑥. This is essentially a time-scaled Brownian motion. Proofs and details can be found e.g. in [14]. Finally – just to mention yet another scheme – one can also approximate (1.1) using vaguelets, see e.g. [I, 38].

Remark. There is a substantial number of results that have only been proven for ⋆-scale invariant and similar well-approximable fields. This is rectified at least partially by [18, Theorem A], where the authors of [III] show that in fact any log-correlated Gaussian field with sufficiently regular covariance can be locally written as a sum of a ⋆-scale invariant field and a Hölder-regular field.

² Strictly speaking the cone construction in [2] gives (1.1) plus an independent Gaussian random variable with variance 2 log(2).
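The truncation of (2.2) and the resulting decorrelation can be checked numerically. The sketch below (NumPy; the triangular seed 𝑘(𝑢) = max(1 − |𝑢|, 0), a standard compactly supported positive definite function, is our choice) evaluates the truncated covariance and shows that once the truncation scale passes 1/|𝑥 − 𝑦| the integrand has left its support, so the value stops changing:

```python
import numpy as np

def k_triangle(u):
    # compactly supported, continuous, positive definite, k(0) = 1
    return np.maximum(1.0 - np.abs(u), 0.0)

def cov_truncated(r, T, num=400_000):
    # C_T(r) = ∫_1^T k(r t) / t dt, i.e. the kernel (2.2) truncated at scale T,
    # evaluated with a midpoint rule on [1, T]
    dt = (T - 1.0) / num
    t = 1.0 + (np.arange(num) + 0.5) * dt
    return float(np.sum(k_triangle(r * t) / t) * dt)

r = 0.01                                  # the distance |x - y|
c_cutoff = cov_truncated(r, 1.0 / r)      # truncate exactly at t = 1/r
c_beyond = cov_truncated(r, 10.0 / r)     # integrand vanishes for t > 1/r
# exact value for this seed: ∫_1^{1/r} (1/t - r) dt = log(1/r) - (1 - r)
```

As 𝑟 → 0 the value grows like log(1/𝑟), recovering the logarithmic singularity of a log-correlated kernel up to a bounded term.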

3 GAUSSIAN MULTIPLICATIVE CHAOS

3.1 Definitions

Let 𝑋 ∈ S′ be a log-correlated Gaussian field on some bounded domain 𝑈 ⊂ ℝ^𝑑 as in Definition 2.2, and fix a parameter 𝛽 > 0. A Gaussian multiplicative chaos (GMC) measure 𝜇_𝛽 is formally constructed from 𝑋 by taking a renormalized exponential:

𝑑𝜇_𝛽(𝑥) ≔ 𝑒^{𝛽𝑋(𝑥) − (𝛽²/2) 𝔼 𝑋(𝑥)²} 𝑑𝑥 .   (3.1)

However, as we saw in Chapter 2, the field 𝑋 is not a function, so (3.1) is not mathematically valid as it stands. Therefore, to obtain a rigorous definition of 𝜇_𝛽, we will instead approximate 𝑋 with regular fields for which a renormalized exponential can be defined, and then take the limit of such approximations.

Let 𝜑 ∈ 𝐶_𝑐^∞(ℝ^𝑑) be a non-negative bump function with integral 1, and denote 𝜑_𝜀(𝑥) ≔ 𝜀^{−𝑑}𝜑(𝑥/𝜀) for all 𝜀 > 0. It is easy to check that the functions 𝜑_𝜀 form an approximation of the identity for tempered distributions, in the sense that for any ℎ ∈ S′ and 𝑓 ∈ S we have

lim_{𝜀→0} ⟨𝜑_𝜀 ∗ ℎ, 𝑓⟩ = ⟨ℎ, 𝑓⟩ ,

where 𝜑_𝜀 ∗ ℎ is to be understood as the function 𝑥 ↦ ⟨ℎ, 𝜑_𝜀(𝑥 − ·)⟩. Using 𝜑, we may thus define the approximations 𝑋_𝜀(𝑥) ≔ (𝜑_𝜀 ∗ 𝑋)(𝑥) for all 𝜀 > 0 and 𝑥 ∈ ℝ^𝑑. The functions 𝑋_𝜀 are almost surely smooth, and using them in place of 𝑋 makes it possible to make sense of (3.1).

Definition 3.1. The GMC measure 𝜇_𝛽 related to the log-correlated Gaussian field 𝑋 is given by

𝑑𝜇_𝛽(𝑥) ≔ lim_{𝜀→0} 𝑒^{𝛽𝑋_𝜀(𝑥) − (𝛽²/2) 𝔼 𝑋_𝜀(𝑥)²} 𝑑𝑥 ,

where the limit is in the sense of weak⋆-convergence in probability. When 𝛽 ∈ (0, √𝑑) (the so-called 𝐿²-region) one can check that the definition makes sense and yields a non-trivial limit by showing that for any fixed 𝑓 ∈ 𝐶_𝑐(𝑈) the random variable

𝑀_𝜀(𝑓) ≔ ∫_𝑈 𝑓(𝑥)𝑒^{𝛽𝑋_𝜀(𝑥) − (𝛽²/2) 𝔼 𝑋_𝜀(𝑥)²} 𝑑𝑥

is Cauchy in 𝐿²(Ω) as 𝜀 → 0. After this is done, one can use the separability of 𝐶_𝑐(𝑈) to show that there exists a limiting measure 𝜇_𝛽. Extending the result to all 𝛽 ∈ (0, √(2𝑑)) is however non-trivial.

Theorem 3.2. The limit in Definition 3.1 exists and is almost surely non-zero when 0 < 𝛽 < √(2𝑑). Moreover, the limit does not depend on the choice of 𝜑.

A version of Theorem 3.2 was first proven by Kahane in [19], where instead of taking convolution approximations he considered fields that can be represented as a sum of independent fields with sufficiently regular positive covariances (the so-called 𝜎-positivity condition). The existence of chaos via convolution approximations was later proven by Raoul Robert and Vincent Vargas in the case that 𝑋 is stationary [32]. Both in [19] and [32] the respective authors also proved the uniqueness of the resulting chaos for their respective approximation schemes: two different 𝜎-positive decompositions give the same result, as does using two different convolution kernels 𝜑. Today the approach taken by Nathanaël Berestycki in [7] is probably the most straightforward way to prove the existence and uniqueness of chaos when using convolution approximations, while the paper by Alexander Shamov [34] gives a robust novel definition of GMC along with very general existence and uniqueness results. Uniqueness can also be deduced from [I, Theorem 1], where the conditions are less general than in [34], but the result extends also to the critical setting 𝛽 = √(2𝑑), which will be discussed in Chapter 4.
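For the Fourier series (1.1) the 𝐿² argument can be made concrete without any sampling: by Fubini, 𝔼 (∫₀¹ 𝑀_𝑛)² = ∫₀¹ exp(𝛽² 𝐶_𝑛(𝑢)) 𝑑𝑢 with 𝐶_𝑛(𝑢) = ∑_{𝑘≤𝑛} cos(2𝜋𝑘𝑢)/𝑘, and these martingale second moments increase to a finite limit exactly in the 𝐿²-phase 𝛽 < 1 (here 𝑑 = 1), where the limiting density (2 sin(𝜋𝑢))^{−𝛽²} is integrable. A deterministic numerical sketch (NumPy; grid size and parameter values are our choices):

```python
import numpy as np

def second_moment(n, beta, grid=20_000):
    # E[(∫_0^1 M_n)^2] = ∫_0^1 exp(beta^2 * C_n(u)) du, where
    # C_n(u) = sum_{k<=n} cos(2 pi k u) / k is the covariance of the partial sums
    u = (np.arange(grid) + 0.5) / grid        # midpoint grid on [0, 1]
    C = np.zeros(grid)
    for k in range(1, n + 1):
        C += np.cos(2 * np.pi * k * u) / k
    return float(np.mean(np.exp(beta**2 * C)))

beta = 0.7                                     # inside the L^2 phase beta < 1
moments = [second_moment(n, beta) for n in (100, 400, 1600)]
# nondecreasing in n (martingale second moments) and bounded for beta < 1
```

For 𝛽 ≥ 1 the same computation blows up as 𝑛 grows, which is why extending existence up to 𝛽 < √(2𝑑) requires genuinely different arguments.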

3.2 Moments and support

We will next list some properties of the chaos measure 𝜇_𝛽 to give a feeling for what kind of beast¹ we are talking about. Let us start with the moments of the total mass.

Theorem 3.3. Let 𝐾 ⊂ 𝑈 be compact and assume that 0 < 𝛽 < √(2𝑑). Then 𝔼 |𝜇_𝛽(𝐾)|^𝑝 < ∞ if and only if 𝑝 < 2𝑑/𝛽².

This theorem was proven by Kahane in [19] for 𝑝 ≥ 0, and for negative moments the result follows by mimicking the proof of the corresponding theorem for multiplicative cascades [28], see also [32]. An often used strategy when proving results such as Theorem 3.3 is to show that the claim holds for some specific covariance kernels, after which it is possible to use the following fundamental inequality to extend the result to arbitrary kernels.

Theorem 3.4 (Kahane’s convexity inequalities [19]). Let 𝑋 and 𝑌 be Hölder-regular² Gaussian fields such that 𝐶_𝑋(𝑥, 𝑦) ≥ 𝐶_𝑌(𝑥, 𝑦) for all 𝑥, 𝑦 ∈ 𝑈. Then for any concave function 𝑔 ∶ [0, ∞) → [0, ∞) we have

𝔼 [𝑔(∫_𝑈 𝑓(𝑥)𝑒^{𝑋(𝑥) − (1/2) 𝔼 𝑋(𝑥)²} 𝑑𝑥)] ≤ 𝔼 [𝑔(∫_𝑈 𝑓(𝑥)𝑒^{𝑌(𝑥) − (1/2) 𝔼 𝑌(𝑥)²} 𝑑𝑥)]

for any non-negative 𝑓 ∈ 𝐶_𝑐(𝑈). For convex functions 𝑔 with at most polynomial growth at infinity one gets the same inequality with the sign reversed.

We have thus seen that the total mass is a rather heavy-tailed random variable. The next theorem, also by Kahane apart from the exact Hausdorff dimension (for which he only proved a lower bound), gives more precise information on the measure itself.

Theorem 3.5 ([19, 31]). The chaos measure 𝜇_𝛽 is almost surely non-atomic. Moreover, it gives full mass to the set

{𝑥 ∈ 𝑈 ∶ lim_{𝜀→0} 𝑋_𝜀(𝑥)/𝔼 𝑋_𝜀(𝑥)² = 𝛽},

¹ Nomenclature borrowed from Andriy Bondarenko.
² By Hölder-regular we mean that the map (𝑥, 𝑦) ↦ √(𝔼 |𝑋(𝑥) − 𝑋(𝑦)|²) is Hölder-continuous.

which has Hausdorff dimension equal to 𝑑 − 𝛽²/2.

There are many further properties of the measures 𝜇_𝛽 that have been investigated in the literature, for example the computation of the multifractal spectrum and asymptotics for the tail probabilities of the total mass. We refer the reader to the survey article [31] for more detailed information.

4 COMPLEX AND CRITICAL GAUSSIAN CHAOS

4.1 Extending the range of 𝛽

In Chapter 3 we considered Gaussian chaos 𝜇_𝛽 for the parameter values 𝛽 ∈ (0, √(2𝑑)). It is trivial to extend this to the range 𝛽 ∈ (−√(2𝑑), √(2𝑑)), since for 𝛽 = 0 the chaos is just the Lebesgue measure, and for 𝛽 < 0 the measure 𝜇_𝛽 has the same law as 𝜇_{−𝛽} by the symmetry of 𝑋. A natural question to ask is what happens when 𝛽 is allowed to be complex. It turns out [2, 6] that at least for some specific fields 𝑋, such as the exactly scale invariant field (2.1), the range of subcritical 𝛽 can be extended to the open region

int(conv({𝑧 ∈ ℂ ∶ |𝑧| = √𝑑} ∪ {−√(2𝑑), √(2𝑑)})) ,

which is illustrated in Figure 4.1.¹ More precisely, the standard martingale normalization yields a non-trivial limit for 𝛽 lying in this eye-shaped domain. The disc in the middle corresponds to the 𝐿²-phase.

4.2 Purely imaginary chaos

One particularly interesting region in Figure 4.1 is the imaginary axis. For notational simplicity, we will keep 𝛽 real in this section and instead consider the parameter 𝑖𝛽. The distribution 𝜇_{𝑖𝛽} for 0 < 𝛽 < √𝑑 is then formally given by

𝜇_{𝑖𝛽} = 𝑒^{𝑖𝛽𝑋(𝑥) + (𝛽²/2) 𝔼 𝑋(𝑥)²} ,   (4.1)

which is again to be rigorously understood via a regularization procedure. The study of 𝜇_{𝑖𝛽} is the main topic of [III]. There is a plot of a computer simulation of the real part of an approximation of 𝜇_{𝑖𝛽} in Figure 4.2. In the simulation the underlying field 𝑋 is the GFF in the unit

¹ Here for 𝐴 ⊂ ℂ we denote by int(𝐴) and conv(𝐴) the interior and convex hull of 𝐴, respectively.

square – actually the same realization as in Figure 2.1. The parameter value is 𝛽 = 1/√2.

Figure 4.1. The extended subcritical regime for complex 𝛽.

Notice how the role played by the normalization in (4.1) is quite different from what it was in the case of the real chaos: there we had to apply a normalizing factor that tends to 0 in order to counter the increasingly probable very large values of 𝑋_𝜀. In the imaginary case we instead have to renormalize by a factor that blows up, so that the ever more wildly oscillating term exp(𝑖𝛽𝑋_𝜀(𝑥)) does not bring the limit to 0.

A central feature of the purely imaginary chaos distributions is that they possess all moments, a fact that is not true for other parameter values.

Theorem 4.1 ([III, Theorem 1.3]). For any 𝑓 ∈ 𝐶_𝑐(𝑈) we have 𝔼 |𝜇_{𝑖𝛽}(𝑓)|^𝑝 < ∞ for all 𝑝 ≥ 1. Moreover, the law of 𝜇_{𝑖𝛽} is determined by its moments.

The purely imaginary chaos is an honest distribution and not even a complex measure.

Figure 4.2. A computer simulation of the real part of the imaginary chaos of the GFF in the unit square [0, 1]².

Theorem 4.2 ([III, Theorem 1.2]). The distribution 𝜇_{𝑖𝛽} has infinite total variation and is hence almost surely not a complex measure. It belongs almost surely to the Besov space 𝐵^𝑠_{𝑝,𝑞}(ℝ^𝑑) when 𝑠 < −𝛽²/2, and this bound is sharp except possibly at 𝑠 = −𝛽²/2. In particular it belongs to the 𝐿²-Sobolev space 𝐻^𝑠(ℝ^𝑑) for 𝑠 < −𝛽²/2.

Yet another interesting feature of the imaginary chaos is that when suitably renormalized it becomes white noise as 𝛽 → √𝑑.

Theorem 4.3 ([III, Theorem 3.20]). As 𝛽 → √𝑑, we have √((𝑑 − 𝛽²)/|𝑆^{𝑑−1}|) 𝜇_{𝑖𝛽} → 𝑒^{(𝛽²/2) 𝑔(𝑥,𝑥)} 𝑊 in law, where 𝑊 is the standard complex white noise on 𝑈 and 𝑔 is the function appearing in Definition 2.2.

The purely imaginary chaos emerges in the scaling limit of various models in mathematical physics. One of the models we discuss in [III] is the so-called XOR-Ising model. This model consists of two independent copies of the Ising model with spins multiplied together. We show that the scaling limit of the spin field of the critical XOR-Ising model converges in law to the real part of the purely imaginary chaos 𝜇_{𝑖𝛽} constructed from the GFF on the domain with parameter 𝛽 = 1/√2. Figure 4.2 corresponds to this situation.
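The blow-up of the normalization in (4.1) can be checked directly at the level of a single Gaussian coordinate: if 𝑋 is centered with variance 𝜎², then 𝔼 exp(𝑖𝛽𝑋) = exp(−𝛽²𝜎²/2), so multiplying by the growing factor exp(+𝛽²𝜎²/2) restores mean 1. A Monte Carlo sketch (NumPy; the parameter values are our choices):

```python
import numpy as np

rng = np.random.default_rng(5)

beta, sigma2 = 0.5, 4.0            # sigma2 plays the role of E X_eps(x)^2
X = rng.standard_normal(400_000) * np.sqrt(sigma2)

# naive average of exp(i beta X): shrinks to exp(-beta^2 sigma2 / 2)
naive = np.exp(1j * beta * X).mean()

# renormalized as in (4.1): the blowing-up factor exp(+beta^2 sigma2 / 2)
# exactly compensates the cancellation from the oscillation, giving mean 1
renorm = np.exp(1j * beta * X + beta**2 * sigma2 / 2).mean()
```

As 𝜎² → ∞ (i.e. 𝜀 → 0) the naive average tends to 0 while the renormalized one keeps mean 1, which is the mechanism described in the text.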

Figure 4.3. The phase diagram of [22] (axes γ and β, with marks at ±√(2d) and ±√d).

4.3 Other types of complex chaos

Let us also briefly mention that choosing β to be complex is just one way to obtain complex versions of gmc. In [22] the authors study a version where the field is of the form γX(x) + iβY(x), where X and Y are independent log-correlated Gaussian fields, and γ and β are two real parameters. In this situation they obtain a phase diagram as in Figure 4.3. They call the green part phase I, the red part phase II, and the blue part phase III. Phase I corresponds to the subcritical regime as in Figure 4.1, and here one builds the chaos using the standard approximation

e^{γX_ε(x) + iβY_ε(x) − (γ²/2)𝔼X_ε(x)² + (β²/2)𝔼Y_ε(x)²},

which is normalized in such a way that the mean is 1. The same holds at the boundary between phase I and phase II (excluding the critical
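That the displayed weight has mean exactly 1 follows from 𝔼e^{γX} = e^{γ²σ²/2} together with 𝔼e^{iβY} = e^{−β²σ²/2}: the real part of the exponent is over-compensated and the imaginary part under-compensated, and the two corrections in the exponent cancel both effects. A quick Monte Carlo sketch at a single point (the variance value is an arbitrary toy choice, not from [22]):

```python
import numpy as np

rng = np.random.default_rng(2)

sigma2 = 1.7            # toy value for Var X_eps(x) = Var Y_eps(x)
gamma, beta = 0.6, 0.4
m = 400_000

X = rng.normal(0.0, np.sqrt(sigma2), m)
Y = rng.normal(0.0, np.sqrt(sigma2), m)

# e^{gamma X + i beta Y - (gamma^2/2) E X^2 + (beta^2/2) E Y^2}
w = np.exp(gamma * X + 1j * beta * Y - gamma**2 / 2 * sigma2 + beta**2 / 2 * sigma2)
mean_w = w.mean()
```

The mean-one property is what makes the approximating densities a martingale in the subcritical phase I.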

points γ = ±√(2d), β = 0 and the triple points γ = β = ±√(d/2)), but at other locations one has to introduce additional normalization factors. The limits outside of phase I or the boundary between phases I and II (excluding critical and triple points) are complex white noise measures with control measures based on real multiplicative chaos; see [22] for details.

Yet another version of complex chaos appears in [33]. This is something one could call analytic or Hardy chaos, since it can be seen as the boundary values of a random analytic function. In [33] this analytic function arises from random statistics of the Riemann ζ-function on the critical line.

4.4 Critical chaos

As mentioned in the previous section, the so-called critical chaos corresponds to the situation β = √(2d) (we are back to the usual normalization with β denoting the real parameter). There are two approaches to obtaining a non-trivial measure μ_β in this case. The first one is the so-called derivative martingale approach, where one looks at

D_ε(x) := −(∂/∂β)[e^{βX_ε(x) − (β²/2)𝔼X_ε(x)²}]|_{β=√(2d)} = (√(2d)𝔼X_ε(x)² − X_ε(x)) e^{√(2d)X_ε(x) − d𝔼X_ε(x)²}.

The second approach is to use the Seneta–Heyde normalization

M_ε(x) := √(π/2) √(𝔼X_ε(x)²) e^{√(2d)X_ε(x) − d𝔼X_ε(x)²},

where we have introduced an additional renormalizing factor which grows like the square root of the variance of the approximation. The convergence of D_ε(x) dx to a non-trivial non-atomic measure μ_{√(2d)} was proven in [10] for the natural martingale approximation of ⋆-scale invariant fields, while in the subsequent paper [11] the same authors showed that M_ε converges to the same measure. Further properties, such as the exact asymptotics for the tail of the distribution of the total mass, were proven in [4].
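Both weights can be sanity-checked at a single point. If X is a centered Gaussian with variance t, a Girsanov shift shows that the derivative-martingale weight has mean 0, while the Seneta–Heyde weight has mean √(πt/2) (the exponential factor alone has mean 1). The following Monte Carlo sketch uses toy values d = 1, t = 1; it is an illustration of the formulas, not an approximation of an actual log-correlated field:

```python
import numpy as np

rng = np.random.default_rng(3)

d, t = 1, 1.0                       # dimension and E X_eps(x)^2 (toy values)
m = 500_000
X = rng.normal(0.0, np.sqrt(t), m)

# Derivative-martingale weight: (sqrt(2d) E X^2 - X) e^{sqrt(2d) X - d E X^2}
D = (np.sqrt(2 * d) * t - X) * np.exp(np.sqrt(2 * d) * X - d * t)

# Seneta-Heyde weight: sqrt(pi/2) sqrt(E X^2) e^{sqrt(2d) X - d E X^2}
M = np.sqrt(np.pi / 2) * np.sqrt(t) * np.exp(np.sqrt(2 * d) * X - d * t)
```

The vanishing mean of D is the reason the derivative martingale is a signed object whose limit is nevertheless a positive measure, while the √(𝔼X_ε²) factor in M compensates the convergence of the plain martingale to 0 at criticality.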

In [I] we show the convergence of the Seneta–Heyde normalization for a large class of approximation schemes, provided that we a priori know the convergence for some approximation to which we can compare. The main tool in this is the following theorem.

Theorem 4.4 ([I, Theorem 1.1]). Let (X_n)_{n=1}^∞ and (X̃_n)_{n=1}^∞ be two sequences of Hölder-regular Gaussian fields on a compact doubling metric space (T, d), with covariance functions C_n(x, y) and C̃_n(x, y), respectively. Let ρ_n be a sequence of non-negative Radon reference measures on T. Define the sequence of measures

dμ_n(x) := e^{X_n(x) − (1/2)𝔼X_n(x)²} dρ_n(x)

and similarly define μ̃_n by using the fields X̃_n instead. Assume that μ̃_n converges in law to an almost surely non-atomic random measure μ̃. Suppose that the covariances C_n and C̃_n satisfy the following two conditions: there exists a finite constant K > 0 such that

sup_{x,y∈T} |C_n(x, y) − C̃_n(x, y)| ≤ K for all n ≥ 1,

and

lim_{n→∞} sup_{d(x,y)>δ} |C_n(x, y) − C̃_n(x, y)| = 0 for all δ > 0.

Then also the measures μ_n converge in distribution to the same measure μ̃.

The role of the measures ρ_n in Theorem 4.4 is to allow for arbitrary deterministic normalizations, and in the case of the Seneta–Heyde normalization one can simply choose

dρ_n(x) = √(π/2) √(𝔼X_{ε_n}(x)²) dx,

where (ε_n)_{n=1}^∞ is some sequence tending to 0 from above.

There are still a number of situations where uniqueness results are not known, especially in the complex or non-Gaussian setting. For the real critical chaos of ⋆-scale invariant fields, the recent article by Ellen Powell [29] extends the uniqueness to the derivative normalization setting in the case of convolution approximations.
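The two covariance conditions of Theorem 4.4 are easy to check numerically for concrete approximation schemes. As a sketch (the two kernels below are standard toy regularizations of the log kernel, not the approximations treated in [I]), compare an exact truncation with a smoothed one: their difference stays uniformly bounded over all distances and all n, while off the diagonal it tends to 0 as n → ∞.

```python
import numpy as np

def C_exact(n, r):
    """Truncated log kernel: C_n = log(1 / max(|x - y|, 1/n))."""
    return np.log(1.0 / np.maximum(r, 1.0 / n))

def C_smooth(n, r):
    """Smoothly regularized kernel: log(1 / sqrt(|x - y|^2 + 1/n^2))."""
    return -0.5 * np.log(r**2 + 1.0 / n**2)

r = np.linspace(1e-6, 1.0, 200_001)   # grid of distances |x - y|
ns = (10, 100, 1000)

# condition 1: uniform bound over all distances and all n
sup_diff = [np.max(np.abs(C_exact(n, r) - C_smooth(n, r))) for n in ns]

# condition 2: off-diagonal difference vanishes as n grows
delta = 0.05
far = r > delta
far_diff = [np.max(np.abs(C_exact(n, r[far]) - C_smooth(n, r[far]))) for n in ns]
```

For this pair the uniform bound can even be computed by hand: the difference (1/2)log(1 + 1/(n²r²)) for r > 1/n and (1/2)log(n²r² + 1) for r ≤ 1/n is maximized at r = 1/n, where it equals (1/2)log 2.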

5 NON-GAUSSIAN CHAOS

5.1 History and applications

This final chapter concerns multiplicative chaos in a non-Gaussian setting. For multiplicative cascades the non-Gaussian situation was studied already by Kahane and Peyrière in [20], but for log-correlated non-Gaussian random fields in ℝ^d the theory is much less understood. In the latter case the research has mainly focused on infinitely divisible fields [3, 30] and on random multiplicative pulses [5, 6].

Non-Gaussian chaos appears naturally in various applications, since in many cases the model itself is not log-normal, even if it might converge to gmc in the scaling limit. For instance, in [III] we show that the scaling limit of the xor-Ising model is the real part of a purely imaginary chaos distribution, but the xor-Ising model itself is not log-normal. Another example is given in [33], where the model comes from the statistical behaviour of the Riemann ζ-function on the critical line, and in the limit one gets a certain gmc-type object times a smooth non-log-normal part. Further examples appear in the study of characteristic polynomials of random matrices [8, 23, 39].

In the above applications the approximations of the gmc are not log-normal, but the limit itself is still a gmc distribution (perhaps with some additional smooth random factor). In [II] we look at a more general situation, where the resulting chaos itself might not be log-normal even in the limit. A basic example is given by constructing chaos once again using the random Fourier series (1.1) from Chapter 1, but replacing the Gaussian random variables A_k and B_k by non-Gaussian ones. This and another example, related to a construction of Petteri Mannersalo, Ilkka Norros, and Rudolf Riedi [27], are discussed in [II] as applications of a more general theorem on the convergence of non-Gaussian chaos.

5.2 Our approach

In [II] we take a martingale approach to non-Gaussian multiplicative chaos, a bit like in the original work [19] by Kahane for the gmc. This way we do not have to care about the field itself, just its approximations. Our proof of the existence of non-trivial chaos in this setting is on the other hand inspired by Berestycki's proof in [7].

Our starting point is a sequence (X_k)_{k≥1} of real-valued, continuous, independent, and centered random fields on (let's say) the unit cube I := [0, 1]^d ⊂ ℝ^d. We assume that this sequence is log-correlated in the following sense.

Definition 5.1. The sequence (X_k)_{k≥1} has a locally log-correlated structure if the following conditions hold:

• sup_{x∈I} 𝔼X_k(x)² → 0 as k → ∞ and Σ_{k=1}^∞ 𝔼X_k(0)² = ∞.

• There exist constants δ > 0 and C > 0 such that for all n ≥ 1 and x, y ∈ I with |x − y| ≤ δ we have

|Σ_{k=1}^n 𝔼X_k(x)X_k(y) − min(log(1/|x − y|), Σ_{k=1}^n 𝔼X_k(0)²)| ≤ C.

The above definition is motivated by the Gaussian case, where the second point appears in the definition of the so-called standard approximation sequence [III, Definition 2.7]. As in the Gaussian case, our goal is to show that the sequence of distributions

μ_n(x) := e^{β Σ_{k=1}^n X_k(x)} / 𝔼e^{β Σ_{k=1}^n X_k(x)}

has a non-trivial limit when β ∈ (0, √(2d)). In order to prove this, we need to make some additional regularity assumptions on the fields. The first of these conditions ensures that for single points x ∈ I the sum Σ_{k=1}^n X_k(x) will obey the central limit theorem as n → ∞, so that it starts to appear Gaussian in a quantifiable way:

sup_{x∈I} Σ_{k=1}^∞ (𝔼|X_k(x)|^{3+ε})^{3/(3+ε)} < ∞ for some ε > 0.   (5.1)
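For the random Fourier series model one has 𝔼X_k(x)X_k(y) = cos(2πk(x − y))/k, and the second condition of Definition 5.1 can be checked numerically: the partial covariance sums stay within a bounded distance of min(log(1/|x − y|), Σ_{k≤n} 𝔼X_k(0)²), uniformly in n. The sketch below is an illustration only; the observed constant (of size roughly log 2π) is not claimed to be optimal.

```python
import numpy as np

def cov_sum(n, r):
    """sum_{k<=n} E X_k(x) X_k(y) for the Fourier series model,
    where E X_k(x) X_k(y) = cos(2 pi k (x - y)) / k and r = |x - y|."""
    k = np.arange(1, n + 1)
    return (np.cos(2 * np.pi * np.outer(r, k)) / k).sum(axis=1)

r = np.linspace(1e-4, 0.5, 2000)
errors = []
for n in (50, 400, 2000):
    H_n = np.sum(1.0 / np.arange(1, n + 1))   # sum_{k<=n} E X_k(0)^2
    target = np.minimum(np.log(1.0 / r), H_n)
    errors.append(np.max(np.abs(cov_sum(n, r) - target)))
```

The error does not grow with n, which is exactly the uniform-in-n boundedness that the definition demands.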

The second condition is used in the proof for a large-deviations estimate on the supremum of the field:

𝔼|Σ_{k=1}^n (X_k(x) − X_k(y))|^r ≤ C_r e^{r Σ_{k=1}^n 𝔼X_k(0)²} |x − y|^r for n, r ≥ 1.   (5.2)

Finally, the fields X_k should have pointwise exponential moments:

sup_{x∈I} sup_{k≥1} 𝔼e^{λX_k(x)} < ∞ for all λ ∈ ℝ.   (5.3)

The above conditions hold especially for the random Fourier series (1.1), when instead of Gaussianity one simply assumes that the variables A_k and B_k are i.i.d. and satisfy 𝔼e^{λA_1} < ∞ for all λ ∈ ℝ. The main result of [II] may now be stated as follows.

Theorem 5.2. Assume that (X_k)_{k≥1} is locally log-correlated as in Definition 5.1 and satisfies (5.1), (5.2), and (5.3). Then there exists an open U ⊂ ℂ with (0, √(2d)) ⊂ U such that for any compact K ⊂ U there exists p = p_K > 1 for which the martingale μ_n(f) converges in L^p(Ω) to a limit μ(f; β) for all β ∈ K and continuous f : I → ℂ.

In [II] we also show that the convergence takes place in a suitable Sobolev space, and that for a fixed f the map β ↦ μ(f; β) is almost surely analytic with respect to β. Moreover, the interval (0, √(2d)) is essentially optimal as in the Gaussian case, in the sense that if β > √(2d) then the resulting measure is almost surely zero.
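Conditions (5.1)–(5.3) hold, for example, when A_k and B_k are uniform on [−√3, √3] (unit variance and bounded, hence all exponential moments exist). For such coefficients the normalizing expectation in μ_n factorizes over the independent coefficients into one-dimensional moment generating functions, 𝔼e^{tA} = sinh(√3 t)/(√3 t), so the weight can be normalized in closed form. A Monte Carlo sketch (illustration only) confirms that the normalized weight has mean 1:

```python
import numpy as np

rng = np.random.default_rng(4)

n, beta, x = 40, 0.8, 0.3
k = np.arange(1, n + 1)
c = np.cos(2 * np.pi * k * x) / np.sqrt(k)
s = np.sin(2 * np.pi * k * x) / np.sqrt(k)

def mgf_uniform(t):
    """E exp(t A) for A uniform on [-sqrt(3), sqrt(3)] (unit variance)."""
    u = np.sqrt(3.0) * np.asarray(t, dtype=float)
    out = np.ones_like(u)
    nz = np.abs(u) > 1e-12
    out[nz] = np.sinh(u[nz]) / u[nz]
    return out

# E e^{beta Y_n(x)} factorizes over the independent coefficients
normalizer = np.prod(mgf_uniform(beta * c)) * np.prod(mgf_uniform(beta * s))

m = 100_000
A = rng.uniform(-np.sqrt(3), np.sqrt(3), (m, n))
B = rng.uniform(-np.sqrt(3), np.sqrt(3), (m, n))
Y = A @ c + B @ s                       # Y_n(x) = sum_{k<=n} X_k(x)
mu = np.exp(beta * Y) / normalizer      # mu_n(x), mean exactly 1
```

By independence in n, these mean-one weights form a positive martingale, which is the object whose L^p convergence Theorem 5.2 addresses.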

5.3 Proving convergence

Let us close this final chapter with a more detailed discussion of the proof of Theorem 5.2 in [II]. The same proof of course works also for standard gmc. Like the proof of Berestycki [7], it is based on separately looking at those points where the field is large and those where it is small. However, our refined method also works for complex β and it directly yields L^p-integrability. A small disclaimer is in order, though: the region of convergence in the complex plane is not optimal apart from what happens on the real axis, and the p one could extract from the proof is not optimal either.

The proof is based on partitioning the points x ∈ I into classes based on the last level on which the field is large at x. Assume for simplicity that 𝔼X_k(x)² = 1 for all k ≥ 1 and x ∈ I, and define Y_n(x) = Σ_{k=1}^n X_k(x). We say that the field is large on level l at the point x if Y_l(x) ≥ α𝔼Y_l(x)², where α > β is some fixed constant that is chosen during the proof. Thus if n is some large natural number, we say that a point x ∈ I belongs to the level l ≤ n if Y_l(x) ≥ α𝔼Y_l(x)² and Y_k(x) < α𝔼Y_k(x)² for l + 1 ≤ k ≤ n.

The event Y_l(x) ≥ α𝔼Y_l(x)² should be compared with Theorem 3.5. Since α > β, the points for which this happens infinitely often do not contribute to the limit. This is because the probability of the event happening is very small for large l. On the other hand, assuming that Y_k(x) < α𝔼Y_k(x)² for large enough k is enough to remove the extreme behaviour that makes the L²-norm of μ_n(x) blow up, thus opening the door to L²-arguments.

Assume that we wish to show that sup_{n≥1} 𝔼|μ_n(I)|^p < ∞. Roughly speaking, the idea is to handle the contribution of the points belonging to a fixed level l by dividing I into dyadic cubes of side length 2^{−l}. On such a cube J the field Y_l(x) will not vary too much, and indeed it turns out that for small enough p > 1 we have

𝔼 sup_{x∈J} |μ_l(x)|^p 𝟙{Y_l(x) ≥ α𝔼Y_l(x)²} ≲ e^{−εl}   (5.4)

for some ε > 0. The right-hand side is summable, so we would be done if we could somehow bound the contribution coming from the Y_n(x) − Y_l(x) part of the field. This can be done by computing the conditional second moment of

∫_J μ_n(x) 𝟙{Y_l(x) ≥ α𝔼Y_l(x)²} 𝟙{Y_k(x) < α𝔼Y_k(x)² for all l+1 ≤ k ≤ n} dx

with respect to the σ-algebra F_l generated by X_1, …, X_l and showing that it is bounded from above by

2^{−2ld} sup_{x∈J} |μ_l(x)|².   (5.5)
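The level decomposition can be illustrated with a toy simulation: take Y_l to be a standard Gaussian random walk (so that 𝔼Y_l(x)² = l, matching the normalization above at a fixed point x), and record the last level l at which Y_l ≥ αl. By the Gaussian tail bound P(Y_l ≥ αl) ≤ e^{−α²l/2}, large last levels are exponentially rare, which is what makes the level-by-level summation converge. The parameters below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)

alpha, n, m = 1.5, 60, 50_000
# Y_l = X_1 + ... + X_l with i.i.d. standard Gaussian increments, E Y_l^2 = l
Y = np.cumsum(rng.standard_normal((m, n)), axis=1)
levels = np.arange(1, n + 1)

large = Y >= alpha * levels             # "the field is large on level l"

# last level on which the walk is large (0 if it never is)
ever = large.any(axis=1)
last = np.where(ever, n - np.argmax(large[:, ::-1], axis=1), 0)

frac_never = np.mean(last == 0)         # most trajectories never see a large level
frac_deep = np.mean(last >= 20)         # deep last levels are very rare
```

In the actual proof the analogue of frac_deep is controlled by the estimate (5.4), and the trajectories in frac_never are exactly the ones amenable to the conditional L²-argument.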

5.4 Open questions

Many basic questions are open for our model of non-Gaussian chaos. For example, we do not know whether for complex β the subcritical phase is again the one illustrated in Figure 4.1, as it is in the Gaussian case. It would also be interesting to prove convergence at criticality (using, for example, the Seneta–Heyde normalization).

Another topic related to convergence is universality. As discussed earlier in this introduction, there exist various results in the Gaussian case showing that one obtains the same chaos when using different approximations of the log-correlated field. Similar results in the non-Gaussian case are missing; ideally we would like to have robust theorems that do not require the martingale structure to show convergence, and that also establish the uniqueness of the resulting chaos in some sense.

Finally, it would be interesting to look at the finer properties of non-Gaussian chaos distributions. Such properties include the optimal L^p-integrability, Sobolev regularity, and asymptotics for the tail probabilities of the total mass of the chaos, as well as the multifractal spectrum of the chaos measure on the real line.


BIBLIOGRAPHY

[1] R. Allez, R. Rhodes, and V. Vargas. Lognormal ⋆-scale invariant random measures. Probability Theory and Related Fields 155.3–4 (2013), 751–788.

[2] K. Astala, A. Kupiainen, E. Saksman, and P. Jones. Random conformal weldings. Acta Mathematica 207.2 (2011), 203–254.

[3] E. Bacry and J. F. Muzy. Log-infinitely divisible multifractal processes. Communications in Mathematical Physics 236.3 (2003), 449–475.

[4] J. Barral, A. Kupiainen, M. Nikula, E. Saksman, and C. Webb. Basic properties of critical lognormal multiplicative chaos. The Annals of Probability 43.5 (2015), 2205–2249.

[5] J. Barral and B. Mandelbrot. Multifractal products of cylindrical pulses. Probability Theory and Related Fields 124.3 (2002), 409–430.

[6] J. Barral and B. Mandelbrot. Random multiplicative multifractal measures. Fractal Geometry and Applications: A Jubilee of Benoît Mandelbrot. Proceedings of Symposia in Pure Mathematics. Providence, Rhode Island: American Mathematical Society, 2004.

[7] N. Berestycki. An elementary approach to Gaussian multiplicative chaos. Electronic Communications in Probability 22 (2017).

[8] N. Berestycki, C. Webb, and M. D. Wong. Random Hermitian matrices and Gaussian multiplicative chaos. Probability Theory and Related Fields (2017), 1–87.

[9] F. David, A. Kupiainen, R. Rhodes, and V. Vargas. Liouville quantum gravity on the Riemann sphere. Communications in Mathematical Physics 342.3 (2016), 869–907.

[10] B. Duplantier, R. Rhodes, S. Sheffield, and V. Vargas. Critical Gaussian multiplicative chaos: convergence of the derivative martingale. The Annals of Probability 42.5 (2014), 1769–1808.

[11] B. Duplantier, R. Rhodes, S. Sheffield, and V. Vargas. Renormalization of critical Gaussian multiplicative chaos and KPZ relation. Communications in Mathematical Physics 330.1 (2014), 283–330.

[12] B. Duplantier, R. Rhodes, S. Sheffield, and V. Vargas. Log-correlated Gaussian fields: an overview. Geometry, Analysis and Probability. Cham: Birkhäuser, 2017, 191–216.

[13] B. Duplantier and S. Sheffield. Duality and the Knizhnik–Polyakov–Zamolodchikov relation in Liouville quantum gravity. Physical Review Letters 102.15 (2009), 150603.

[14] B. Duplantier and S. Sheffield. Liouville quantum gravity and KPZ. Inventiones Mathematicae 185.2 (2011), 333–393.

[15] B. Duplantier and S. Sheffield. Schramm–Loewner evolution and Liouville quantum gravity. Physical Review Letters 107.13 (2011), 131305.

[16] A. J. Harper. Moments of random multiplicative functions, I: Low moments, better than squareroot cancellation, and critical multiplicative chaos. arXiv:1703.06654 (2017).

[17] R. Høegh-Krohn. A general class of quantum fields without cut-offs in two space-time dimensions. Communications in Mathematical Physics 21.3 (1971), 244–255.

[18] J. Junnila, E. Saksman, and C. Webb. Decompositions of log-correlated fields with applications. arXiv:1808.06838 (2018).

[19] J.-P. Kahane. Sur le chaos multiplicatif. Comptes rendus de l'Académie des sciences. Série 1, Mathématique 301.6 (1985), 329–332.

[20] J.-P. Kahane and J. Peyrière. Sur certaines martingales de Benoit Mandelbrot. Advances in Mathematics 22.2 (1976), 131–145.

[21] A. Kupiainen, R. Rhodes, and V. Vargas. Integrability of Liouville theory: proof of the DOZZ formula. arXiv:1707.08785 (2017).

[22] H. Lacoin, R. Rhodes, and V. Vargas. Complex Gaussian multiplicative chaos. Communications in Mathematical Physics 337.2 (2015), 569–632.

[23] G. Lambert, D. Ostrovsky, and N. Simm. Subcritical multiplicative chaos for regularized counting statistics from random matrix theory. Communications in Mathematical Physics 360.1 (2018), 1–54.

[24] B. Mandelbrot. Possible refinement of the lognormal hypothesis concerning the distribution of energy dissipation in intermittent turbulence. Statistical Models and Turbulence. Berlin, Heidelberg: Springer, 1972, 333–351.

[25] B. Mandelbrot. Intermittent turbulence in self-similar cascades: divergence of high moments and dimension of the carrier. Journal of Fluid Mechanics 62.2 (1974), 331–358.

[26] B. Mandelbrot. Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire. Comptes Rendus Mathématique. Académie des Sciences. Paris 278 (1974), 289–292, 355–358.

[27] P. Mannersalo, I. Norros, and R. H. Riedi. Multifractal products of stochastic processes: construction and some basic properties. Advances in Applied Probability 34.4 (2002), 888–903.

[28] G. M. Molchan. Scaling exponents and multifractal dimensions for independent random cascades. Communications in Mathematical Physics 179.3 (1996), 681–702.

[29] E. Powell. Critical Gaussian chaos: convergence and uniqueness in the derivative normalisation. Electronic Journal of Probability 23 (2018).

[30] R. Rhodes, J. Sohier, and V. Vargas. Lévy multiplicative chaos and star scale invariant random measures. The Annals of Probability 42.2 (2014), 689–724.

[31] R. Rhodes and V. Vargas. Gaussian multiplicative chaos and applications: a review. Probability Surveys 11 (2014), 315–392.

[32] R. Robert and V. Vargas. Gaussian multiplicative chaos revisited. The Annals of Probability 38.2 (2010), 605–631.

[33] E. Saksman and C. Webb. The Riemann zeta function and Gaussian multiplicative chaos: statistics on the critical line. arXiv:1609.00027 (2016).

[34] A. Shamov. On Gaussian multiplicative chaos. Journal of Functional Analysis 270.9 (2016), 3224–3261.

[35] S. Sheffield. Gaussian free fields for mathematicians. Probability Theory and Related Fields 139.3–4 (2007), 521–541.

[36] S. Sheffield. Conformal weldings of random surfaces: SLE and the quantum gravity zipper. The Annals of Probability 44.5 (2016), 3474–3545.

[37] B. Simon. The P(Φ)₂ Euclidean (Quantum) Field Theory. Princeton, New Jersey: Princeton University Press, 2015.

[38] N. Tecu. Random conformal weldings at criticality. arXiv:1205.3189 (2012).

[39] C. Webb. The characteristic polynomial of a random unitary matrix and Gaussian multiplicative chaos – the L²-phase. Electronic Journal of Probability 20 (2015).

