Subcritical phase of $d$-dimensional Poisson-Boolean percolation and its vacant set

We prove that Poisson-Boolean percolation on $\mathbb{R}^d$ undergoes a sharp phase transition in any dimension, under the assumption that the radius distribution has a finite moment of order $5d-3$ (in particular, we do not assume that the distribution is bounded). More precisely, we prove the following.
- In the whole subcritical regime, the expected size of the cluster of the origin is finite. Furthermore, we obtain bounds on the probability that the origin is connected to distance $n$: when the radius distribution has a finite exponential moment, this probability decays exponentially fast in $n$; when the radius distribution has heavy tails, it is equivalent to the probability that the origin is covered by a ball reaching to distance $n$.
- In the supercritical regime, the probability that the origin is connected to infinity satisfies a mean-field lower bound.
The same proof carries over to show that the vacant set of Poisson-Boolean percolation on $\mathbb{R}^d$ undergoes a sharp phase transition. This paper belongs to a series of papers using the theory of randomized algorithms to prove sharpness of phase transitions.


Hugo Duminil-Copin, Aran Raoufi, Vincent Tassion

Definition of the model
Bernoulli percolation was introduced in [BH57] by Broadbent and Hammersley to model the diffusion of a liquid in a porous medium. Originally defined on a lattice, the model was later generalized to a number of other contexts. Of particular interest is the development of continuum percolation (see [MR96] for a book on the subject), whose most classical example is provided by the Poisson-Boolean model (introduced by Gilbert [Gil61]). It is defined as follows.
Fix a positive integer $d \geq 2$ and let $\mathbb{R}^d$ be the $d$-dimensional Euclidean space endowed with the $\ell^2$ norm $\|\cdot\|$. For $r > 0$ and $x \in \mathbb{R}^d$, set $B_x^r := \{y \in \mathbb{R}^d : \|y - x\| \leq r\}$ and $\partial B_x^r := \{y \in \mathbb{R}^d : \|y - x\| = r\}$ for the ball and sphere of radius $r$ centered at $x$. When $x = 0$, we simply write $B_r$ and $\partial B_r$. For a subset $\eta$ of $\mathbb{R}^d \times \mathbb{R}_+$, set
$$\mathcal{O}(\eta) := \bigcup_{(z, r) \in \eta} B_z^r.$$
Let $\mu$ be a measure on $\mathbb{R}_+$ (below, we use the notation $\mu[a, b]$ to refer to $\mu([a, b])$, where $a, b \in \mathbb{R} \cup \{\infty\}$) and let $\lambda$ be a positive number. Let $\eta$ be a Poisson point process of intensity $\lambda \cdot \mathrm{d}z \otimes \mu$, where $\mathrm{d}z$ is the Lebesgue measure on $\mathbb{R}^d$. Write $\mathbb{P}_\lambda$ for the law of $\eta$ and $\mathbb{E}_\lambda$ for the expectation with respect to $\mathbb{P}_\lambda$. The random set $\mathcal{O}(\eta)$, where $\eta$ has law $\mathbb{P}_\lambda$, is called the Poisson-Boolean percolation of radius law $\mu$ and intensity $\lambda$. A natural hypothesis in the study of Poisson-Boolean percolation is to assume a finite $d$-th moment for the radius distribution:
$$\int_0^\infty r^d \, \mathrm{d}\mu(r) < \infty. \tag{1.1}$$
Indeed, as observed by Hall [Hal85], condition (1.1) is necessary in order to avoid the entire space being almost surely covered, regardless of the (positive) intensity of the Poisson point process.
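The definition above is easy to simulate. The sketch below is our illustration, not code from the paper: it samples the model in dimension $d = 2$ inside a padded finite window, with a Pareto radius law chosen purely as an example, and compares the empirical probability that the origin is covered with the exact void-probability value $1 - \exp(-\lambda \pi\, \mathbb{E}[R^2])$.

```python
import numpy as np

V2 = np.pi  # volume of the unit ball in dimension d = 2

def sample_boolean_model(lam, halfwidth, alpha, rng):
    """Sample the d = 2 Poisson-Boolean model with a Pareto(alpha) radius law.

    Centers are drawn in the box [-halfwidth - pad, halfwidth + pad]^2, where
    the padding is a high quantile of the radius law; balls centered further
    away are neglected, so this is only an approximation for unbounded radii.
    """
    pad = (1e-4) ** (-1.0 / alpha)                 # 99.99% radius quantile
    side = 2 * (halfwidth + pad)
    n = rng.poisson(lam * side ** 2)               # number of centers in the window
    centers = rng.uniform(-halfwidth - pad, halfwidth + pad, size=(n, 2))
    radii = rng.uniform(size=n) ** (-1.0 / alpha)  # Pareto(alpha) on [1, inf)
    return centers, radii

def origin_covered(centers, radii):
    """Is the origin in the occupied set O(eta)?"""
    return bool(np.any(np.linalg.norm(centers, axis=1) <= radii))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lam, alpha, trials = 0.05, 5.0, 2000
    hits = sum(origin_covered(*sample_boolean_model(lam, 0.0, alpha, rng))
               for _ in range(trials))
    # Number of balls covering 0 is Poisson with mean lam * pi * E[R^2],
    # and E[R^2] = alpha / (alpha - 2) for Pareto(alpha) with alpha > 2.
    exact = 1 - np.exp(-lam * V2 * alpha / (alpha - 2))
    print(f"empirical {hits / trials:.3f} vs exact {exact:.3f}")
```

The comparison with the exact value works because the number of balls covering the origin is itself a Poisson random variable; this is the same void-probability computation that underlies Hall's observation above.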

ANNALES HENRI LEBESGUE
Remark 1.1. -The techniques of this paper do not rely heavily on the round shape of the balls used to construct O(η). We think that the results below extend to general (even random) shapes under proper assumptions (such as non-empty interior and boundedness).

Main result
Two points $x$ and $y$ of $\mathbb{R}^d$ are said to be connected (by $\eta$) if $x = y$ or there exists a continuous path in $\mathcal{O}(\eta)$ connecting $x$ to $y$. This event is denoted by $x \longleftrightarrow y$. For $X, Y \subset \mathbb{R}^d$, the event $\{X \longleftrightarrow Y\}$ denotes the existence of $x \in X$ and $y \in Y$ such that $x$ is connected to $y$ (when $X = \{x\}$, we simply write $x \longleftrightarrow Y$). Define, for every $r > 0$, the two functions of $\lambda$
$$\theta_r(\lambda) := \mathbb{P}_\lambda[0 \longleftrightarrow \partial B_r] \qquad \text{and} \qquad \theta(\lambda) := \lim_{r \to \infty} \theta_r(\lambda),$$
and the critical parameter $\lambda_c = \lambda_c(d) := \inf\{\lambda \geq 0 : \theta(\lambda) > 0\}$.
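The one-arm probability $\theta_r(\lambda)$ defined above can be estimated by Monte Carlo. The sketch below is again our illustration (the window sampling, the Pareto radius law, and the padding are hypothetical choices): it builds the clusters of overlapping balls with a union-find structure and checks whether the cluster of the origin reaches the sphere $\partial B_r$.

```python
import numpy as np

class DSU:
    """Union-find over ball indices."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def sample(lam, halfwidth, alpha, rng):
    # Pareto(alpha) radii on [1, inf); window padded by a high radius quantile,
    # so farther centers are neglected (an approximation for unbounded radii).
    pad = (1e-4) ** (-1.0 / alpha)
    side = 2 * (halfwidth + pad)
    n = rng.poisson(lam * side ** 2)
    centers = rng.uniform(-halfwidth - pad, halfwidth + pad, size=(n, 2))
    radii = rng.uniform(size=n) ** (-1.0 / alpha)
    return centers, radii

def connected_to_sphere(centers, radii, r):
    """Does the cluster of the origin in O(eta) reach the sphere of radius r?"""
    n = len(radii)
    if n == 0:
        return False
    dsu = DSU(n)
    dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    # Two balls overlap iff the distance between centers is at most R_i + R_j.
    for i, j in np.argwhere(np.triu(dist <= radii[:, None] + radii[None, :], 1)):
        dsu.union(int(i), int(j))
    dist0 = np.linalg.norm(centers, axis=1)
    cover0 = {dsu.find(i) for i in range(n) if dist0[i] <= radii[i]}
    reach = {dsu.find(i) for i in range(n) if dist0[i] + radii[i] >= r}
    return bool(cover0 & reach)

def estimate_theta(lam, radii_r, alpha, trials, seed=0):
    rng = np.random.default_rng(seed)
    counts = dict.fromkeys(radii_r, 0)
    for _ in range(trials):
        centers, radii = sample(lam, max(radii_r), alpha, rng)
        for r in radii_r:  # same configuration reused for every r
            counts[r] += connected_to_sphere(centers, radii, r)
    return {r: counts[r] / trials for r in radii_r}
```

Since the same configurations are used for every $r$, the resulting estimates are automatically non-increasing in $r$, mirroring the monotonicity of $\theta_r(\lambda)$.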
Another critical parameter is often introduced to discuss Poisson-Boolean percolation. Define $\widetilde\lambda_c = \widetilde\lambda_c(d)$ by the formula
$$\widetilde\lambda_c := \sup\big\{\lambda \geq 0 : \lim_{r \to \infty} \mathbb{P}_\lambda[B_r \longleftrightarrow \partial B_{2r}] = 0\big\}.$$
This quantity is of great use as it enables one to initialize renormalization arguments; see e.g. [Gou08, GT18] and references therein. As a consequence, a lot is known for Poisson-Boolean percolation with intensity $\lambda < \widetilde\lambda_c$. We refer to Theorems 1.4 and 1.5 and to [MR96] for more details on the subject.
Under the minimal assumption (1.1), Gouéré [Gou08] proved that $\lambda_c$ and $\widetilde\lambda_c$ are nontrivial; more precisely, he proved that $0 < \widetilde\lambda_c \leq \lambda_c < \infty$. The main result of this paper is the following.
Theorem 1.2 (Sharpness for Poisson-Boolean percolation). — Fix $d \geq 2$ and assume that
$$\int_0^\infty r^{5d-3} \, \mathrm{d}\mu(r) < \infty. \tag{1.2}$$
Then $\widetilde\lambda_c = \lambda_c$; moreover, there exists $c > 0$ such that $\theta(\lambda) \geq c(\lambda - \lambda_c)$ for every $\lambda > \lambda_c$.
The case of bounded radii was already treated in [MRS94, ZS85] (see also [MR96]), and we refer the reader to [Zie16] for a new proof. Theorems stating sharpness of phase transitions for percolation models in general dimension $d$ were first proved in the eighties for Bernoulli percolation [AB87, Men86] and the Ising model [ABF87]. The proofs of sharpness for these models (even alternative proofs like [DCT16]) harvested the independence of Bernoulli percolation and special structures of the random-current representation of the Ising model. In particular, they were not applicable to other models of statistical mechanics. In recent years, new methods were developed to prove sharpness for a large variety of statistical physics models in two dimensions [ATT16, BDC12, BR06]. These methods rely on general sharp threshold theorems for Boolean functions, but also on planar properties of crossing events. In particular, the proofs use planarity in an essential way and do not extend to higher dimensions. Recently, the authors proved sharpness of the phase transition for the random-cluster model [DCRT17b] and for Voronoi percolation [DCRT17a] in arbitrary dimensions. The shared theme of these proofs is the use of randomized algorithms to derive differential inequalities for connection probabilities.
Here, we do not prove that connection probabilities decay exponentially fast in the distance below criticality, since this is simply not true in general for Poisson-Boolean percolation. Indeed, if the tail of the radius distribution is heavier than exponential, then the event that a single large ball covers two given points has probability larger than exponential in the distance between the two points. Instead, we show by contradiction that $\widetilde\lambda_c = \lambda_c$, without ever referring to exponential decay, by controlling the derivative of $\theta_r(\lambda)$ in terms of "pivotality events" for $\lambda$ in the "fictitious" regime $(\widetilde\lambda_c, \lambda_c)$, and by deriving a differential inequality on $\theta_r(\lambda)$ (using randomized algorithms) which is valid only in this regime. The regime $(\widetilde\lambda_c, \lambda_c)$ is referred to as the fictitious regime since one eventually concludes from our proof that this interval is empty.
We wish to highlight that while the proof below also harvests randomized algorithms, we consider that the main novelty of the paper lies in the comparison between the derivative of $\theta_r(\lambda)$ and pivotality events. In many percolation models constructed from disordered systems with long-range interactions, the pivotality events that typically appear in Russo's formula for Bernoulli percolation get replaced by more complicated quantities. For instance, in our model the probability of a large ball being pivotal also enters the expression of the derivative; in percolation models built out of Gaussian processes, such as the Gaussian Free Field or the Bargmann-Fock field, the relevant quantities are probabilities of pivotality conditioned on the field taking a given value; etc. These quantities are not comparable to the pivotality probabilities for arbitrary values of $\lambda$, but one may show with some work that they are in the fictitious regime. We believe that the introduction of this fictitious regime is therefore a powerful tool towards a better understanding of these models. For recent applications of similar techniques inspired by the present paper, we refer e.g. to [DCGRS19, DCGR+18].

Decay of $\theta_r(\lambda)$ when $\lambda < \widetilde\lambda_c$
For standard percolation, sharpness of the phase transition refers to the exponential decay of connection probabilities in the subcritical regime. In Poisson-Boolean percolation with arbitrary radius law µ, we mentioned above that one cannot expect such behavior to hold in full generality. In order to explain why the theorem above is still called "sharpness" in this article, we provide below some new results concerning the behavior of Poisson-Boolean percolation when λ < λ c .
Let us start with the following easy proposition.
Proposition 1.3. — The following properties hold:
- For every $\lambda < \widetilde\lambda_c$, $\lim_{r \to \infty} \mathbb{P}_\lambda[B_r \longleftrightarrow \partial B_{2r}] = 0$.
- There exists $c > 0$ such that for every $\lambda \geq \widetilde\lambda_c$ and $r \geq 1$, $\mathbb{P}_\lambda[B_r \longleftrightarrow \partial B_{2r}] \geq c$.
Notice that the proposition immediately implies that for every $\lambda < \widetilde\lambda_c$, the expected size of the cluster of the origin is finite (see Remark 4.3). Now, remark that for every $\lambda > 0$, $\theta_r(\lambda)$ is always bounded from below by
$$\varphi_r(\lambda) := \mathbb{P}_\lambda\big[\exists (z, R) \in \eta : 0 \in B_z^R \text{ and } B_z^R \cap \partial B_r \neq \emptyset\big],$$
whose decay may be arbitrarily slow. Nevertheless, one may expect the following phenomenology when $\lambda < \widetilde\lambda_c$:
- If $\mu[r, \infty]$ decays exponentially fast, then so does $\theta_r(\lambda)$ (but not necessarily at the same exponential rate).
- Otherwise, the decay of $\theta_r(\lambda)$ is governed by $\varphi_r(\lambda)$, in the sense that it is roughly equivalent to it.
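For concreteness, the lower bound $\varphi_r(\lambda)$ admits a closed expression: a ball $B_z^R$ covers the origin and meets $\partial B_r$ exactly when $r - R \leq \|z\| \leq R$ (nonempty only for $R \geq r/2$), so a void-probability computation gives $\varphi_r(\lambda) = 1 - \exp\!\big(-\lambda v_d \int_{r/2}^{\infty} (R^d - (r - R)_+^d)\, \mathrm{d}\mu(R)\big)$, with $v_d$ the volume of the unit ball. The sketch below (our illustration, with a hypothetical Pareto($\alpha$) radius law on $[1, \infty)$) evaluates this for $d = 2$ and exhibits the polynomial decay $\varphi_{2r}/\varphi_r \to 2^{d-\alpha}$.

```python
import numpy as np

def phi_r(lam, r, alpha, grid=20000):
    """phi_r(lambda) for d = 2 and a Pareto(alpha) radius law on [1, inf).

    Uses the void-probability formula: the centers z with r - R <= |z| <= R
    (nonempty only for R >= r/2) are exactly those whose ball covers 0 and
    meets the sphere of radius r.
    """
    lo = max(r / 2.0, 1.0)
    # Tail part: integral of R^2 against the Pareto density alpha * R^(-alpha-1).
    tail = alpha / (alpha - 2.0) * lo ** (2.0 - alpha)
    # Subtract the (r - R)_+^2 correction on [lo, r] by trapezoidal integration.
    R = np.linspace(lo, max(r, lo), grid)
    f = np.clip(r - R, 0.0, None) ** 2 * alpha * R ** (-alpha - 1.0)
    corr = float(np.sum((f[:-1] + f[1:]) / 2.0 * np.diff(R)))
    return 1.0 - np.exp(-lam * np.pi * (tail - corr))
```

For heavy tails the exponent scales like $r^{d-\alpha}$, which is the "single huge ball" mechanism discussed below Theorem 1.4.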
The first item above is formalized by the following theorem.
Theorem 1.4. — If there exists $c > 0$ such that $\mu[r, \infty] \leq \exp(-cr)$ for every $r \geq 1$, then for every $\lambda < \widetilde\lambda_c$ there exists $c_\lambda > 0$ such that for every $r \geq 1$,
$$\theta_r(\lambda) \leq \exp(-c_\lambda r).$$
Giving sense to the second item above is not easy in full generality, for instance when $\mu$ is very irregular (one can imagine distributions $\mu$ that do not decay exponentially fast, but which exclude large ranges of radii). In Section 4.5, we give a general condition under which a precise description of $\theta_r(\lambda)$ can be obtained. To avoid introducing technical notation here, we only give two applications of the results proved in Section 4.5; we believe that these applications already give a good idea of the general phenomenology. The proof mostly relies on new renormalization inequalities, which we believe can be of great use for other percolation models, such as long-range Bernoulli percolation. Theorem 1.5 below states that the cheapest way for 0 to be connected to distance $r$ is to have a single huge ball covering 0 and intersecting the boundary of $B_r$.
Then, for every $\lambda < \widetilde\lambda_c$, the one-arm probability $\theta_r(\lambda)$ is equivalent to $\varphi_r(\lambda)$ as $r \to \infty$. This theorem is new even in two dimensions. The proofs of Theorems 1.4 and 1.5 are independent of the proof of Theorem 1.2 and can be read separately.
TOME 3 (2020)

Vacant set of the Poisson-Boolean model
Another model of interest can also be studied using the same techniques. In this model, the connectivity of points is given by continuous paths in the complement of $\mathcal{O}(\eta)$. Write $x \overset{*}{\longleftrightarrow} y$ for the event that $x$ and $y$ are connected by a continuous path in $\mathbb{R}^d \setminus \mathcal{O}(\eta)$, and $X \overset{*}{\longleftrightarrow} Y$ if there exist $x \in X$ and $y \in Y$ such that $x$ and $y$ are connected in this sense. For $\lambda \geq 0$ and $r \geq 0$, define
$$\theta_r^*(\lambda) := \mathbb{P}_\lambda[0 \overset{*}{\longleftrightarrow} \partial B_r] \qquad \text{and} \qquad \theta^*(\lambda) := \lim_{r \to \infty} \theta_r^*(\lambda).$$
(Note that $\theta^*(\lambda)$ is decreasing in $\lambda$.) We define the critical parameter $\lambda_c^*$ (see e.g. [Pen17] or [ATT17] for the fact that it is positive) by the formula
$$\lambda_c^* := \sup\{\lambda \geq 0 : \theta^*(\lambda) > 0\} = \inf\{\lambda \geq 0 : \theta^*(\lambda) = 0\}.$$
We have the following theorem.
Theorem 1.6 (Sharpness for the vacant set of the Poisson-Boolean model). — Fix $d \geq 2$ and assume that the radius distribution $\mu$ is compactly supported. Then, for all $\lambda > \lambda_c^*$, there exists $c_\lambda > 0$ such that for every $r \geq 1$, $\theta_r^*(\lambda) \leq \exp(-c_\lambda r)$. Since the proof follows the same lines as that of Theorem 1.2, we omit it and leave it as an exercise for the reader.

Strategy of the proof of Theorem 1.2
Let us now turn to a brief description of the general strategy to prove our main theorem. Theorem 1.2 is a consequence of the following Lemma 1.7.
Lemma 1.7. — Assume the moment condition (1.2) on the radius distribution and fix $\lambda_0 > \widetilde\lambda_c$. There exists $c_1 > 0$ such that for every $r \geq 0$ and every $\widetilde\lambda_c < \lambda < \lambda_0$,
$$\theta_r'(\lambda) \geq \frac{c_1\, r}{\Sigma_r(\lambda)}\, \theta_r(\lambda)\big(1 - \theta_r(\lambda)\big), \tag{1.4}$$
where $\Sigma_r(\lambda) := \int_0^r \theta_s(\lambda)\, \mathrm{d}s$. The whole point of Section 3 will be to prove Lemma 1.7. Before that, let us mention how it implies Theorem 1.2.
A standard analysis of the differential inequality (1.4), as in [DCRT17a, DCRT17b], provides $\lambda_1 \in [\widetilde\lambda_c, \lambda_0]$ such that: (1) for every $\lambda > \lambda_1$, $\theta(\lambda) \geq c(\lambda - \lambda_1)$ for some constant $c > 0$; (2) for every $\lambda \in (\widetilde\lambda_c, \lambda_1)$, $\theta_r(\lambda)$ decays exponentially fast in $r$. The two items imply that $\lambda_1 = \lambda_c$. Yet, the second item implies that $\lambda_1 \leq \widetilde\lambda_c$, since exponential decay would imply that for $\lambda \in (\widetilde\lambda_c, \lambda_1)$, $\lim_{r \to \infty} \mathbb{P}_\lambda[B_r \longleftrightarrow \partial B_{2r}] = 0$, contradicting the definition of $\widetilde\lambda_c$. Since $\lambda_1 = \lambda_c \geq \widetilde\lambda_c$, we deduce that $\lambda_1 = \widetilde\lambda_c = \lambda_c$, and the proof of Theorem 1.2 is finished.
Remark 1.8. — Note that we did not deduce anything from Lemma 1.7 about exponential decay, since eventually $\lambda_1 = \widetilde\lambda_c$. It is therefore not contradictory with the cases in which $\mu[r, \infty]$ does not decay exponentially fast.
The proof of Lemma 1.7 relies on the OSSS inequality, first proved in [OSSS05], connecting randomized algorithms and influences in a product space. Let us briefly describe this inequality. Let $I$ be a finite set of coordinates, and let $\Omega = \prod_{i \in I} \Omega_i$ be a product space endowed with the product measure $\pi = \otimes_{i \in I} \pi_i$. An algorithm $\mathsf{T}$ determining $f : \Omega \to \{0, 1\}$ takes a configuration $\omega = (\omega_i)_{i \in I} \in \Omega$ as an input and reveals the values of $\omega$ at different $i \in I$ one by one. At each step, the coordinate revealed next depends on the values of $\omega$ revealed so far. The algorithm stops as soon as the value of $f$ is the same no matter the values of $\omega$ on the remaining coordinates. Define the functions $\delta_i(\mathsf{T})$ and $\mathrm{Inf}_i(f)$, respectively called the revealment and the influence of the $i$-th coordinate, by
$$\delta_i(\mathsf{T}) := \pi\big[\mathsf{T} \text{ reveals the value of } \omega_i\big] \qquad \text{and} \qquad \mathrm{Inf}_i(f) := \pi\big[f(\omega) \neq f(\widetilde\omega^{(i)})\big],$$
where $\widetilde\omega^{(i)}$ denotes the random element of $\Omega$ which coincides with $\omega$ in every coordinate except the $i$-th, which is resampled independently.
Theorem 1.9 (OSSS inequality [OSSS05]). — For any algorithm $\mathsf{T}$ determining $f$,
$$\mathrm{Var}_\pi(f) \leq \sum_{i \in I} \delta_i(\mathsf{T})\, \mathrm{Inf}_i(f),$$
where $\mathrm{Var}_\pi$ is the variance with respect to the measure $\pi$.
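The OSSS inequality can be checked by hand on toy examples. The computation below is ours and is unrelated to percolation: it takes $f$ to be the majority of three i.i.d. Bernoulli($p$) bits and the algorithm that reveals bits 1 and 2, then bit 3 only if they disagree; the variance, influences and revealments are computed exactly by enumeration.

```python
from itertools import product

p = 0.3                      # Bernoulli parameter of each coordinate
P = {0: 1 - p, 1: p}

def f(w):
    """Majority of three bits."""
    return int(sum(w) >= 2)

def prob(w):
    out = 1.0
    for b in w:
        out *= P[b]
    return out

# Variance of the indicator f under the product measure: q(1 - q).
q = sum(prob(w) for w in product((0, 1), repeat=3) if f(w) == 1)
variance = q * (1 - q)

def influence(i):
    """Probability that independently resampling bit i changes f."""
    total = 0.0
    for w in product((0, 1), repeat=3):
        for b in (0, 1):
            w2 = list(w)
            w2[i] = b
            if f(w) != f(tuple(w2)):
                total += prob(w) * P[b]
    return total

# Revealments: bits 0 and 1 are always revealed, bit 2 only when they
# disagree (otherwise the majority is already determined).
delta = [1.0, 1.0,
         sum(prob(w) for w in product((0, 1), repeat=3) if w[0] != w[1])]

rhs = sum(delta[i] * influence(i) for i in range(3))
print(f"Var = {variance:.6f} <= {rhs:.6f}")
```

The gain over the trivial bound $\mathrm{Var}(f) \leq \sum_i \mathrm{Inf}_i(f)$ comes entirely from the third revealment being strictly smaller than 1, which is exactly the mechanism exploited in the percolation proof.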
This inequality is used as follows. First, we write Poisson-Boolean percolation as a product space. Second, we exhibit an algorithm for the event $\{0 \longleftrightarrow \partial B_r\}$ for which we control the revealments. Then, we use the assumption $\lambda > \widetilde\lambda_c$ to relate the influences of the product space to the derivative of $\theta_r$. Altogether, these steps lead to (1.4).

Organization of the article
The next Section 2 contains some preliminaries. Section 3 contains the proof of Theorem 1.2, while the last Section 4 contains the proofs of Theorems 1.4 and 1.5.

Background
We introduce some notation and recall three properties of the Poisson-Boolean percolation that we will need in the proofs of the next sections.

Further notation
In order to apply the OSSS inequality, we wish to write our probability space as a product space. To do this, we introduce the following notation. For $x \in \mathbb{Z}^d$, let $S_x := x + [0, 1)^d$ be the unit box attached to $x$. For any integer $n \geq 1$ and $x \in \mathbb{Z}^d$, set
$$\eta^{(x,n)} := \big\{(z, R) \in \eta : z \in S_x \text{ and } R \in [n-1, n)\big\},$$
which corresponds to all the balls of $\eta$ centered at a point of $S_x$ with radius in $[n-1, n)$. All the constants $c_i$ below (in particular in the lemmata) are independent of all the parameters.
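The decomposition into the variables $\eta^{(x,n)}$ is straightforward to mirror in code. The sketch below is our illustration, with the conventions $S_x = x + [0,1)^d$ and radius bin $[n-1, n)$: it partitions a toy configuration and checks that nothing is lost.

```python
from collections import defaultdict
import math
import random

def discretize(eta):
    """Partition a configuration eta = [((z1, ..., zd), R), ...] into the
    independent variables eta^(x, n): x is the integer part of the center
    (so that z lies in S_x = x + [0,1)^d) and n satisfies n - 1 <= R < n."""
    bins = defaultdict(list)
    for z, R in eta:
        x = tuple(math.floor(c) for c in z)   # box S_x containing the center
        n = math.floor(R) + 1                 # radius bin [n - 1, n)
        bins[(x, n)].append((z, R))
    return bins

if __name__ == "__main__":
    rng = random.Random(0)
    # A toy configuration in dimension d = 2 with heavy-tailed radii.
    eta = [((rng.uniform(-5, 5), rng.uniform(-5, 5)),
            (1 - rng.random()) ** -0.25) for _ in range(200)]
    bins = discretize(eta)
    print(f"{len(eta)} balls split into {len(bins)} variables")
```

Because the boxes $S_x$ and the radius intervals $[n-1, n)$ tile $\mathbb{R}^d \times \mathbb{R}_+$ for radii at least the minimum of the law, the variables $\eta^{(x,n)}$ are independent Poisson processes, which is what makes the product-space formulation possible.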

Insertion tolerance
We will need the following insertion tolerance property. Consider $0 < r_* \leq r^*$ such that $\mu[r_*, r^*] > 0$, and set
$$c_{\mathrm{IT}} := \mathbb{P}_\lambda[D_0] > 0,$$
where $D_x$ is the event that there exists $(z, R) \in \eta$ with $z \in S_x$ and $B_x^{r_*} \subset B_z^R \subset B_x^{r^*}$. Without loss of generality (the radius distribution may be scaled by a constant factor), we further assume that $r_*$ and $r^*$ satisfy a few additional conditions (such as $r_* \geq \sqrt{d}$), which will be useful at different stages of the proof. While the quantity $c_{\mathrm{IT}}$ varies with $\lambda$, the dependency is continuous and therefore irrelevant for our arguments; we will not refer to this subtlety in the proofs, to avoid confusion.

FKG inequality
An increasing event $A$ is an event such that $\eta \in A$ and $\eta \subset \eta'$ imply $\eta' \in A$. The FKG inequality for Poisson point processes states that for every $\lambda > 0$ and every two increasing events $A$ and $B$,
$$\mathbb{P}_\lambda[A \cap B] \geq \mathbb{P}_\lambda[A]\, \mathbb{P}_\lambda[B]. \tag{FKG}$$

Russo's formula
For $x \in \mathbb{Z}^d$ and an increasing event $A$, define the random variable $\mathrm{Piv}_{x,A}(\eta)$. Russo's formula [LP17, Equation 19.2] then expresses the derivative $\frac{\mathrm{d}}{\mathrm{d}\lambda}\, \mathbb{P}_\lambda[A]$ in terms of the expectations $\mathbb{E}_\lambda[\mathrm{Piv}_{x,A}(\eta)]$, $x \in \mathbb{Z}^d$; we refer to this expression as (Russo).

Proof of Lemma 1.7
The next subsection contains the proof of Lemma 1.7, conditional on two lemmas, Lemmas 3.3 and 3.4, which are proved in the subsequent subsections.

Proof of Lemma 1.7
As mentioned above, the proof of Lemma 1.7 is obtained by applying the OSSS inequality to a truncated version of the probability space generated by the independent variables $(\eta^{(x,n)})_{x \in \mathbb{Z}^d, n \geq 1}$. In this section, we fix $L \geq 2r > 0$. Set $A := \{0 \longleftrightarrow \partial B_r\}$ and $f = \mathbb{1}_A$. Define
$$I_L := \big\{(x, n) \in \mathbb{Z}^d \times \mathbb{N} : \|x\| \leq L \text{ and } 1 \leq n \leq L\big\}.$$
Also, for $r \geq 0$, let $\eta_g$ denote the union of the $\eta^{(x,n)}$ for $(x, n) \notin I_L$. For $i$ being either $(x, n)$ or $g$, set $\Omega_i$ to be the space of possible values of $\eta_i$ and $\pi_i$ the law of $\eta_i$ under $\mathbb{P}_\lambda$.
For $0 \leq s \leq r$, apply the OSSS inequality (Theorem 1.9) to:
- the product space $\big(\prod_{i \in I} \Omega_i, \otimes_{i \in I} \pi_i\big)$, where $I := I_L \cup \{g\}$;
- the indicator function $f = \mathbb{1}_A$, considered as a function from $\prod_{i \in I} \Omega_i$ to $\{0, 1\}$;
- the algorithm $\mathsf{T}_{s,L}$ defined below.
Definition 3.1 (the algorithm $\mathsf{T}_{s,L}$). — First, reveal $\eta_g$. Then, proceed iteratively: at each step $t$,
- if there exists a not-yet-revealed $(x, n) \in I_L$ such that the distance between $S_x$ and the connected component of $\partial B_s$ in the occupied set revealed so far is smaller than $n$, then set $i_{t+1}$ to be the smallest such $(x, n)$ for some fixed ordering;
- if such an $(x, n)$ does not exist, halt the algorithm.
Remark 3.2. — Roughly speaking, the algorithm first checks $\mathcal{O}(\eta_g)$ and then discovers the connected component of $\partial B_s$. Theorem 1.9 implies that
$$\mathrm{Var}_\pi(f) \leq \sum_{i \in I} \delta_i(\mathsf{T}_{s,L})\, \mathrm{Inf}_i(f). \tag{3.1}$$
By construction, the random variable $\eta_g$ is automatically revealed, so its revealment is 1. Also, its influence tends to 0 as $L$ tends to infinity (thanks to the moment assumption (1.1) on $\mu$).
Let us now bound the revealments for $(x, n) \in I_L$. The random variable $\eta^{(x,n)}$ is revealed by $\mathsf{T}_{s,L}$ if the Euclidean distance between $S_x$ and the connected component of $\partial B_s$ is smaller than $n$. Let $S_x^n$ be the union of the boxes $S_y$ that contain a point at distance exactly $n$ from $S_x$. We deduce that
$$\delta_{(x,n)}(\mathsf{T}_{s,L}) \leq \mathbb{P}_\lambda[S_x^n \longleftrightarrow \partial B_s]. \tag{3.3}$$
Overall, plugging the previous bounds (3.2) and (3.3) on the revealments of the algorithms $\mathsf{T}_{s,L}$ into (3.1), we obtain
$$\theta_r(\lambda)\big(1 - \theta_r(\lambda)\big) \leq \sum_{(x,n) \in I_L} \mathbb{P}_\lambda[S_x^n \longleftrightarrow \partial B_s]\, \mathrm{Inf}_{(x,n)}(f) + o(1).$$
(Above, $o(1)$ denotes a quantity tending to 0 as $L$ tends to infinity.) Letting $L$ tend to infinity (note that the influence $\mathrm{Inf}_{(x,n)}$ does not depend on the algorithm $\mathsf{T}_{s,L}$), we find
$$\theta_r(\lambda)\big(1 - \theta_r(\lambda)\big) \leq \sum_{x, n} \mathbb{P}_\lambda[S_x^n \longleftrightarrow \partial B_s]\, \mathrm{Inf}_{(x,n)}(f). \tag{3.4}$$
In order to conclude the proof, we need the following two lemmata, 3.3 and 3.4. First, a simple union bound allows us to control $\mathbb{P}_\lambda[S_x^n \longleftrightarrow \partial B_s]$ in terms of the one-arm probability. More precisely, consider the following Lemma 3.3, which will be proved at the end of Section 3.1.
Lemma 3.3. — Fix $\lambda_0 > 0$. There exists $c_2 > 0$ such that for every $\lambda \leq \lambda_0$, $x \in \mathbb{Z}^d$ and $n \geq 1$,
$$\int_0^r \mathbb{P}_\lambda[S_x^n \longleftrightarrow \partial B_s]\, \mathrm{d}s \leq c_2\, n^{d-1}\, \Sigma_r(\lambda).$$
Integrating (3.4) for radii $s$ between 0 and $r$ and using Lemma 3.3 above gives
$$r\, \theta_r(\lambda)\big(1 - \theta_r(\lambda)\big) \leq c_2\, \Sigma_r(\lambda) \sum_{x, n} n^{d-1}\, \mathrm{Inf}_{(x,n)}(f). \tag{3.5}$$
The most delicate step is to relate the influences in the equation above to the pivotality probabilities appearing in the derivative formula (Russo). This is the content of the following Lemma 3.4, which will be proved in Section 3.3.
Lemma 3.4. — There exists $c_3 > 0$ such that for every $x \in \mathbb{Z}^d$, every $n \geq 1$ and every $\lambda > \widetilde\lambda_c$, the influence $\mathrm{Inf}_{(x,n)}(f)$ is controlled, up to a factor $c_3$ times a polynomial in $n$, by the pivotality probabilities entering the derivative formula (Russo).

Dividing (3.5) by $\Sigma_r(\lambda)$, then applying Lemma 3.4, and finally using $\lambda < \lambda_0$ gives the differential inequality (1.4). This implies Lemma 1.7. Before proving Lemmas 3.3 and 3.4, we would like to make several remarks concerning the proof above.
Remark 3.5. — The role of $L$ is the following: the interest of working with $\mathsf{T}_{s,L}$ is to have a finite number (depending on $L$) of coordinates for the algorithm. We could have stated an OSSS inequality valid for countably many coordinates and obtained (3.4) directly, but we believe the previous strategy to be shorter and thriftier. The role of $s$, on the other hand, is purely technical; one could avoid it by working with a randomized algorithm.
Remark 3.6. -Lemma 3.3 may a priori be improved. Indeed, the union bound in the first inequality is quite wasteful. Nonetheless, with the moment assumption on the radius distribution, the claim above is sufficient.
Remark 3.7. — We refer to $\lambda \leq \lambda_0$ in Lemma 3.3 instead of $\lambda > \widetilde\lambda_c$ (even though the lemma is anyway used in this context) to emphasize that the only place where $\lambda > \widetilde\lambda_c$ is used is Lemma 3.4.
We finish this section with the proof of Lemma 3.3.
Proof of Lemma 3.3. — Let $Y$ be the subset of points $y \in \mathbb{Z}^d$ such that $S_y \subset S_x^n$. We have that
$$\mathbb{P}_\lambda[S_x^n \longleftrightarrow \partial B_s] \leq \sum_{y \in Y} \mathbb{P}_\lambda[S_y \longleftrightarrow \partial B_s] \leq \frac{1}{c_{\mathrm{IT}}} \sum_{y \in Y} \mathbb{P}_\lambda[y \longleftrightarrow \partial B_s],$$
where in the last inequality we used the FKG inequality, (IT), and the fact that if $S_y \longleftrightarrow \partial B_s$ and the event $D_y$ defined below (IT) occur, then $y$ is connected to $\partial B_s$. Integrating over $s$ between 0 and $r$ and observing that $y$ is at distance $|s - \|y\||$ from $\partial B_s$, we deduce that
$$\int_0^r \mathbb{P}_\lambda[y \longleftrightarrow \partial B_s]\, \mathrm{d}s \leq \int_0^r \theta_{|s - \|y\||}(\lambda)\, \mathrm{d}s \leq 2\, \Sigma_r(\lambda).$$
Since the cardinality of $Y$ is bounded by a constant times $n^{d-1}$, the result follows.

A technical statement regarding connection probabilities above λ c
The following Lemma 3.8 will be instrumental in the proof of Lemma 3.4. It is the unique place where we use the assumption $\lambda > \widetilde\lambda_c$.
Below, $X \overset{Z}{\longleftrightarrow} Y$ means that $X$ is connected to $Y$ in $\mathcal{O}(\eta_Z)$, where $\eta_Z$ is the set of $(z, R) \in \eta$ such that $B_z^R \subset Z$. We highlight that this is not the same as the existence of a continuous path in $\mathcal{O}(\eta) \cap Z$ connecting $X$ and $Y$, since such a path could a priori pass through regions of $Z$ which are only covered by balls intersecting $\mathbb{R}^d \setminus Z$.
Lemma 3.8. — There exists a constant $c_4 > 0$ such that for every $\lambda \geq \widetilde\lambda_c$ and $r \geq r^*$, two balls of radius $r_*$ at mutual distance $r$ are connected, in the occupied set restricted to a ball of radius comparable to $r$ containing both, with probability at least $c_4/r^{2d-2}$.
Remark 3.9. — Before diving into the proof, let us first explain how we will use the assumption $\lambda \geq \widetilde\lambda_c$. Fix $r > 0$. If $B_r$ is connected to $\partial B_{2r}$, then there must exist $x \in \mathbb{Z}^d \cap B_{2r}$ such that $B_x^1$ is connected to distance $r$. Since, above $\widetilde\lambda_c$, the probability of the former event is bounded away from 0 uniformly in $r$ and $\lambda$ thanks to Proposition 1.3, and since the probabilities of the latter events are all smaller than $\mathbb{P}_\lambda[B_1 \longleftrightarrow \partial B_r]$ (up to a translation), the union bound gives the existence of $c_5 > 0$ such that for every $r \geq 1$ and $\lambda \geq \widetilde\lambda_c$,
$$\mathbb{P}_\lambda[B_1 \longleftrightarrow \partial B_r] \geq \frac{c_5}{r^d}. \tag{3.7}$$
The argument will rely on the following observation. As explained in (3.7), the probability that $B_1$ is connected to $\partial B_r$ does not decay quickly. This event implies the existence of a site $y$ such that $B_1$ is connected in $B_r$ to $B_y^{r^*}$, together with one ball $B_z^R$ (with $(z, R) \in \eta$) covering $B_y^{r^*}$ and intersecting the complement of $B_r$. The problem is that this site $y$ may be quite far from $\partial B_r$. Nonetheless, this is unlikely, since the cost (by the moment assumption on $\mu$) of having a large ball $B_z^R$ intersecting $B_y^{r^*}$ is overwhelmed by the probability that the latter is connected to $\partial B_r$ in $B_y^{r - \|y\|}$ by a path in $\mathcal{O}(\eta)$ using a priori smaller balls (and maybe even only balls included in $B_r$). The proof below will harvest this idea, though with some important variations due to the fact that all the balls under consideration must remain inside $B_r$. One of the key ideas is the introduction of a new scale $r^{**}$ and an induction on the probability that $B_{r_*}$ is connected to $B_x^{r^{**}}$ in $B_r$, where $x \in \partial B_r$.
Proof. — We will prove that there exist $r^{**} > 0$ and $c_6 > 0$ such that
$$\mathbb{P}_\lambda\big[B_{r_*} \overset{B_r}{\longleftrightarrow} B_x^{r^{**}}\big] \geq \frac{c_6}{r^{d-1}} \qquad \text{for every } x \in \partial B_r. \tag{3.8}$$
This is amply sufficient to prove Lemma 3.8 since, if $T$ denotes the set of $t \in \mathbb{Z}^d$ such that $B_t^{r_*} \cap B_x^{2r^{**}} \neq \emptyset$ and $B_t^{r_*} \subset B_r$, then on the events $D_t$, $t \in T$, any cluster reaching $B_x^{r^{**}}$ is connected further inside $B_r$; by (IT) and the FKG inequality this costs only a constant factor. We therefore focus on the proof of (3.8). We will fix $r^{**} \geq 2r^*$ sufficiently large (see the end of the proof).
Introduce $Y := \mathbb{Z}^d \cap B_{r - r^{**}}$ and a finite set $Z \subset \partial B_r$ such that the union of the balls $B_y^{r^*}$, $y \in Y$, and $B_z^{r^{**}}$, $z \in Z$, covers the ball $B_r$. Note that if $B_{r_*}$ is connected to $\partial B_r$, then either one of the $z \in Z$ is such that $B_{r_*}$ is connected to $B_z^{r^{**}}$ in $B_r$, or there exists $y \in Y$ such that the event
$$A(y) := \big\{B_{r_*} \overset{B_r}{\longleftrightarrow} B_y^{r^*}\big\} \cap \big\{\exists (u, R) \in \eta \text{ such that } B_u^R \text{ intersects both } B_y^{r^*} \text{ and } \partial B_r\big\}$$
occurs. The union bound therefore implies that
$$\mathbb{P}_\lambda\big[B_{r_*} \overset{B_r}{\longleftrightarrow} \partial B_r\big] \leq \sum_{z \in Z} \mathbb{P}_\lambda\big[B_{r_*} \overset{B_r}{\longleftrightarrow} B_z^{r^{**}}\big] + \sum_{y \in Y} \mathbb{P}_\lambda[A(y)]. \tag{3.9}$$
For $y \in Y$, introduce $x := (r/\|y\|)\, y \in \partial B_r$ and $r' := r - \|y\|$. The event on the right of the definition of $A(y)$ is independent of the event on the left. Since any point of $B_y^{r^*}$ is at distance at least $r' - r^*$ from $\partial B_r$, we deduce an upper bound on $\mathbb{P}_\lambda[A(y)]$; in the second inequality of that computation, we use the moment assumption on $\mu$ and the fact that $r' - r^* \geq r'/2$ (since $r^{**} \geq 2r^*$). Using this latter assumption one more time implies (3.10). From now on, define $U(r) := \mathbb{P}_\lambda\big[B_{r_*} \overset{B_r}{\longleftrightarrow} B_x^{r^{**}}\big]$ for $x \in \partial B_r$ (the choice of $x$ is irrelevant by invariance under rotations). Now, one may choose $Z$ in such a way that $|Z| \leq c_9\, r^{d-1}$. Plugging (3.10) into (3.9), we obtain a recursive inequality on $U$. Using that there exists $c_{11} > 0$ such that $U(r) \geq c_{11} > 0$ for every $r \leq r^{**}$, we deduce (3.8) by induction on $r \in \mathbb{Z}_+$, provided $r^{**}$ is chosen large enough to start with.

Proof of Lemma 3.4
We are now in a position to prove Lemma 3.4. Fix $x \in \mathbb{Z}^d$ and $r, n \geq 1$. Define $C$ and $C'$ to be the connected components of 0 and of $\partial B_r$ in $\mathcal{O}(\eta)$, respectively. Our first goal is to prove that, conditionally on the event $P_x(n)$ (that the distinct clusters $C$ and $C'$ both come within distance $n$ of $S_x$), the probability that $C$ and $C'$ are close to each other in $B_x^{3n}$ is not too small.
Claim 3.10. — There exists $c_{11} > 0$ such that
$$\mathbb{P}_\lambda[P_x(n)] \leq c_{11}\, n^{3d-2}\, \mathbb{P}_\lambda\big[P_x(n) \text{ and } d(C \cap B_x^{3n}, C') < r_*\big]. \tag{3.11}$$
Before proving this claim, let us conclude the proof of Lemma 3.4. If the event $\{P_x(n) \text{ and } d(C \cap B_x^{3n}, C') < r_*\}$ occurs, there must exist $y \in \mathbb{Z}^d$ within distance at most $r_*$ of both $C \cap B_x^{3n}$ and $C'$: simply pick a point $z \in \mathbb{R}^d$ at distance smaller than $r_*/2$ of both $C \cap B_x^{3n}$ and $C'$, and then take $y \in \mathbb{Z}^d$ within distance $\sqrt{d}$ of it (we used the inequality on the right in (2.1)). In particular, $P_y(r_*)$ must occur for this $y$, and we deduce (3.12). To conclude, observe that the definition of $r_*$ given by (IT) implies that for all $y$, $\mathrm{Piv}_{y,A}(\eta) \geq c_{\mathrm{IT}}\, \mathbb{1}_{\eta \in P_y(r_*)}$, which, after integration, gives (3.13). Now, one easily bounds $\mathrm{Inf}_{(x,n)}(f)$ in terms of $\mathbb{P}_\lambda[P_x(n + \sqrt{d})]$ (simply observe that $P_x(n + \sqrt{d})$ must occur in $\eta$, and that the resampled Poisson point process $\eta^{(x,n)}$ must contain at least one point). Plugging (3.11) (applied to $n + \sqrt{d}$) into (3.13), then using (3.12), and finally summing over every $x \in \mathbb{Z}^d$ gives Lemma 3.4.
Proof of Claim 3.10. — The proof consists in expressing the probability that $C$ and $C'$ come within distance $r_*$ of each other in terms of the probability that they remain at distance at least $r_*$ from each other.
Fix $y \in \mathbb{Z}^d$. For a non-empty subset $C$ of $\mathbb{R}^d$, let $u_C = u_C(y)$ be the point of $S_y$ furthest from $C$, and $v_C = v_C(y)$ the point of $C$ closest to $u_C$ (in case there is more than one, select one according to a predetermined rule). Consider the event
$$E_y := \big\{C \cap B_x^n \neq \emptyset \text{ and } d(u_C, C) \geq r_*\big\}.$$
Note that the event $E_y$ is measurable in terms of $C$. Let us study, on $E_y$, the conditional expectation with respect to $C$. Introduce the three events $F_y$, $G_y$ and $H_y$, where $D_y$ is extended to each $y \in \mathbb{R}^d$ by taking the translate by $y$ of $D_0$. See Figure 3.1 for an example of a simultaneous occurrence of all the above events. On the event $E_y$, conditioned on $C$, $\eta_{\mathbb{R}^d \setminus C}$ has the same law as $\eta'_{\mathbb{R}^d \setminus C}$ for some independent realization $\eta'$ of the Poisson point process (recall the definition of $\eta_Z$ from the previous Section 3.2). Also, the distance between $u_C$ and $v_C$ (or equivalently $C$) is at least $r_*$, so that $F_y$ is measurable with respect to $\eta_{\mathbb{R}^d \setminus C}$. Therefore, we may use (IT), Lemma 3.8 and the FKG inequality to obtain an inequality (3.14) between conditional expectations. Now, observe that if $E_y$ and $H_y$ occur simultaneously, then $P_x(n)$ occurs (we use $r_* \geq \sqrt{d}$). Furthermore, if $F_y$ and $G_y$ also occur simultaneously with them, then $B_{v_C}^{r_*} \cap C' \neq \emptyset$. Since by construction $v_C \in B_x^{3n}$, we deduce that $d(C \cap B_x^{3n}, C') < r_*$. Integrating (3.14) on $E_y$, we deduce a lower bound on the probability of this last event. Observe that if $P_x(n)$ and $d(C \cap B_x^n, C') \geq r_*$ occur, then there exists $y \in \mathbb{Z}^d$ such that the event on the right-hand side occurs. In particular, $S_y$ must intersect $B_x^n$ for $H_y$ to occur, so that there are at most $c_6\, n^d$ possible values of $y$. Summing over all these values, we therefore get that
$$c_6\, n^d\, \mathbb{P}_\lambda\big[P_x(n) \text{ and } d(C \cap B_x^{3n}, C') < r_*\big] \geq \frac{c_{\mathrm{IT}}\, c_4}{n^{2d-2}}\, \mathbb{P}_\lambda\big[P_x(n) \text{ and } d(C \cap B_x^n, C') \geq r_*\big],$$
which readily implies Claim 3.10.

Renormalization of crossing probabilities
In this section, we prove Theorems 1.4 and 1.5. The proof is quite different in the regime where $\mu[r, \infty]$ decays exponentially fast (light tail) and in the regime where it does not (heavy tail). Since, in the heavy-tail regime, the renormalization argument performed below may be of some use in the study of other models, or for different distributions $\mu$, we prove a quantitative lemma which we believe to be of independent interest.
In this section, we restart the count of the constants $c_i$ from $c_1$. The section is organized as follows. In Section 4.1, we prove a renormalization lemma, Lemma 4.2, which will enable us to derive the theorems. In Sections 4.2 and 4.5, we derive Theorems 1.4 and 1.5 respectively.

The renormalization lemma
Introduce, for every $\delta, \alpha \geq 0$, $\lambda \geq 0$ and $r \geq 1$, the two functions
$$\theta_r^\delta(\lambda) := \mathbb{P}_\lambda[B_{\delta r} \longleftrightarrow \partial B_r] \qquad \text{and} \qquad \pi_r^\delta(\lambda) := \mathbb{P}_\lambda\big[\exists (z, R) \in \eta : B_z^R \cap B_{\delta r} \neq \emptyset \text{ and } B_z^R \cap \partial B_r \neq \emptyset\big].$$
Note that $\pi_r^0(\lambda) = \varphi_r(\lambda)$ and $\theta_r^0(\lambda) = \theta_r(\lambda)$ by definition.
Remark 4.1. — The quantity $\pi_r^\delta(\lambda)$ is expressed in terms of $\mu$ as follows: letting $c_d$ be the area of the sphere of radius 1 in $\mathbb{R}^d$, one computes the Lebesgue measure of the set of centers $z$ for which $B_z^R$ intersects both $B_{\delta r}$ and $\partial B_r$, and integrates it against $\lambda\, \mathrm{d}\mu$.
The following Lemma 4.2 will be the key to the proofs of Theorems 1.4 and 1.5.
Lemma 4.2 (Renormalization inequality). — For every $0 < \alpha \leq \delta \leq 1/4$, there exists $c_1 > 0$ such that for every $\lambda, r > 0$, the probability $\theta_r^\alpha(\lambda)$ is bounded in terms of $\pi_r^\delta(\lambda)$ and of one-arm probabilities at smaller scales, with an entropic factor $c_1/(\delta^2 \alpha)^d$. Note that the smaller $\alpha$ and $\delta$ are, the larger this entropic factor. We will see that the choices of $\alpha$ and $\delta$ are important in the applications. Except for the exponential bound on $\theta_r(\lambda)$ in the proof of Theorem 1.4, we will always pick $\delta = \alpha$.
Proof. — Fix $0 < \alpha \leq \delta \leq 1/4$ and $\lambda, r > 0$. Set $E := \{s \in \alpha\delta r\, \mathbb{Z} : \alpha r \leq s \leq (1 - 2\delta) r\}$ and index the elements of $E$ in increasing order as $r_0 < r_1 < \cdots < r_\ell$. Recall the notation $\eta_Z$ for the set of $(z, R) \in \eta$ with $B_z^R \subset Z$, and introduce the three events $A$, $B_k$ and $C_k$. If $B_{\alpha r}$ is connected to $\partial B_r$ but $A$ does not occur, then $B_k \cap C_k$ must occur for some $0 \leq k < \ell$ (we use that $\delta \geq \alpha$). Furthermore, by construction, $B_k$ and $C_k$ are independent, since $C_k$ is measurable with respect to $\mathcal{O}(\eta \setminus \eta_{B_{r_k}})$. These two observations together imply (4.3). Fix $0 \leq k < \ell$. Let $X$ denote a set of points $x \in \partial B_{\alpha r}$ such that the union of the balls $B_x^{\alpha\delta r}$, $x \in X$, covers $\partial B_{\alpha r}$; choose $X$ such that $|X| \leq c_2\, \delta^{-(d-1)}$. Applying the union bound, we find (4.4). Similarly, using a covering of $\partial B_{r_{k+1}}$ with at most $c_2 (\alpha\delta)^{1-d}$ balls of radius $\alpha\delta r$ centered on $\partial B_{r_{k+1}}$, we obtain (4.5). Plugging (4.4) and (4.5) into (4.3), and using a lower bound on $\ell$ of order $1/(\alpha\delta)$, concludes the proof of Lemma 4.2.
The previous Lemma 4.2 leads to Proposition 1.3.

Proof of Theorem 1.4
Consider $\lambda < \widetilde\lambda_c$. Fix $\delta \in (0, \frac{1}{2})$ and observe that since $\mu[r, \infty] \leq \exp(-cr)$, (4.1) implies that there exists $c' > 0$ such that $\pi_r^\delta(\lambda) \leq \exp(-c' r)$ for every $r \geq 1$. We now proceed in two steps: we first show that $\theta_r(\lambda) \leq r^{-d-1}$ for $r$ large enough, and then improve this estimate to exponential decay.
(The second constraint is satisfied thanks to the polynomial bound derived in the first part of the proof.) By induction on $k$, one can show that for every $k \geq 0$,
$$\forall\, r \in \big[\delta r_0,\; r_0/(1-\delta)^k\big], \qquad c_7\, r^d\, \theta_r(\lambda) \leq \exp(-r/r_0).$$