Derivation of the stochastic Burgers equation with Dirichlet boundary conditions from the WASEP

We consider the weakly asymmetric simple exclusion process on the discrete space $\{1,\dots,n-1\}$, in contact with stochastic reservoirs of common density $\rho\in(0,1)$ at the two extremity points, and starting from the invariant state, namely the Bernoulli product measure of parameter $\rho$. Under the diffusive time scaling $tn^2$ and for $\rho=\frac12$, when the asymmetry parameter is of order $1/\sqrt{n}$, we prove that the stationary density fluctuations are macroscopically governed by the energy solution of the stochastic Burgers equation with Dirichlet boundary conditions, which is shown to be unique and different from the Cole-Hopf solution.


Context
Context
A vast number of physical phenomena were first described at the macroscopic scale, in terms of the classical partial differential equations (PDEs) of mathematical physics. Over the last decades, the scientific community has tried to give a precise understanding of their derivation from first principles at the microscopic level, in order to identify the limits of their validity. Typically, the microscopic systems are composed of a huge number of atoms, and one looks at a very large time scale with respect to the typical frequency of atom vibrations. Mathematically, this corresponds to a space-time scaling limit procedure.
The macroscopic laws that arise from microscopic systems can be either partial differential equations (PDEs) or stochastic PDEs (SPDEs), depending on whether one looks at the convergence to the mean or at the fluctuations around that mean. Among the classical SPDEs is the Kardar-Parisi-Zhang (KPZ) equation, first introduced more than thirty years ago in [KPZ86] as the universal law describing the fluctuations of randomly growing interfaces of one-dimensional stochastic dynamics close to a stationary state (as, for example, models of bacterial growth or fire propagation). Since then, it has generated an intense research activity in the physics and mathematics communities. In particular, the weak KPZ universality conjecture [Cor12, QS15, Spo17] states that the fluctuations of a large class of one-dimensional microscopic interface growth models are ruled at the macroscopic scale by solutions of the KPZ equation, which reads as follows: for a time variable t and a one-dimensional space variable u, the evolution of the height Z(t, u) of the randomly growing interface is described by
(1.1) ∂_t Z(t, u) = A ∆Z(t, u) + B (∇Z(t, u))² + √C ∂_t W(t, u),
where A, B, C are thermodynamic constants depending on the model and ∂_t W is a space-time white noise. Note that the non-linearity (∇Z(t, u))² makes the KPZ equation (1.1) ill-posed, essentially because the trajectory of the solution lacks space regularity (due to the presence of the white noise), and therefore the square of its distributional derivative is not defined. One possible way to solve this equation is to consider its Cole-Hopf transformation Φ, which solves a stochastic heat equation with multiplicative noise (SHE) and is related to Z through a log-transformation. Since the SHE is linear, its solution can easily be constructed and is unique, and the solution to the KPZ equation can then simply be defined as the inverse Cole-Hopf transformation of Φ.
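To see formally why the Cole-Hopf transformation linearizes the KPZ equation, one can carry out the following computation. It is a heuristic only: it ignores the Itô correction term, which is precisely where the renormalization of the equation enters.

```latex
% Formal Cole-Hopf computation (Ito corrections ignored).
% Set Phi = exp((B/A) Z). Then, by the chain rule,
\nabla \Phi = \tfrac{B}{A}\,\Phi\,\nabla Z, \qquad
\Delta \Phi = \tfrac{B}{A}\,\Phi\,\Delta Z
            + \tfrac{B^2}{A^2}\,\Phi\,(\nabla Z)^2 ,
% so that
A\,\Delta\Phi = \tfrac{B}{A}\,\Phi\,\bigl(A\,\Delta Z + B\,(\nabla Z)^2\bigr),
% and (1.1) formally becomes the linear SHE with multiplicative noise:
\partial_t \Phi = A\,\Delta\Phi + \tfrac{B}{A}\sqrt{C}\,\Phi\,\partial_t W .
```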
However, the solution to the SHE is too irregular to allow for a change of variable formula, and a priori there is no meaningful equation associated to its inverse Cole-Hopf transformation. Only recently did Hairer develop a meaningful notion of solution for the KPZ equation and prove existence and uniqueness of such solutions with periodic boundary conditions, see [Hai13]. His approach uses rough path integrals to construct the nonlinearity (∇Z(t, u))², and it has inspired the development of new technologies (regularity structures [Hai14] and paracontrolled distributions [GIP15]) for so-called singular stochastic partial differential equations (SPDEs).
The first breakthrough towards the weak KPZ universality conjecture is due to Bertini and Giacomin: in their seminal paper [BG97], they show that the Cole-Hopf solution can be obtained as a scaling limit of the weakly asymmetric exclusion process (WASEP) (which will be defined ahead). Their approach consists in performing the Cole-Hopf transformation at the microscopic level, following [Gär87], and then showing that this microscopic Cole-Hopf transformation solves a linear equation (similarly to what happens at the macroscopic level). Since then, this strategy has been used in more sophisticated models, see [CS18,CST18,CT17,DT16], however the applicability of the microscopic Cole-Hopf transformation is limited to a very specific class of particle systems.
Another way to look at the KPZ equation is via the stochastic Burgers equation (SBE), which is obtained from (1.1) by taking its space derivative: if Y_t = ∇Z_t, then Y_t satisfies
(1.2) dY(t, u) = A ∆Y(t, u) dt + B ∇(Y²(t, u)) dt + √C d∇W_t,
which of course has the same regularity issues as the KPZ equation. Nevertheless, this formulation is well adapted to deriving KPZ behavior from microscopic models. Indeed, the work initiated by Gonçalves and Jara in [GJ14] introduced a new tool, the second order Boltzmann-Gibbs principle (BGP), which makes the non-linear term ∇(Y²(t, u)) of the SBE emerge naturally from the underlying microscopic dynamics. The authors first proved the BGP for general weakly asymmetric simple exclusion processes, and shortly thereafter it was extended to a wider class of microscopic systems, such as zero-range models [GJS15], integrable Hamiltonian one-dimensional chains [GJS17], non-degenerate kinetically constrained models [BGS16], exclusion processes with defects [FGS16], non-simple exclusion processes [GJ17], semilinear SPDEs [GP16], and interacting diffusions [DGP17]. From the BGP it follows naturally that some suitably rescaled microscopic quantity, called the density fluctuation field (see below for a precise meaning), subsequentially converges, as the size of the microscopic system goes to infinity, to random fields which are solutions Y of a generalized martingale problem for (1.2), in which the singular non-linear drift ∇(Y²(t, u)) is a well-defined space-time distributional random field. Gonçalves and Jara in [GJ14] (see also [GJ13]) called them energy solutions. Recently, Gubinelli and Perkowski [GP18a] proved uniqueness of energy solutions to (1.2), and as a significant consequence the proof of the weak KPZ universality conjecture could be concluded for all the models mentioned above.
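The passage from (1.1) to (1.2) is the following one-line formal computation, applying ∇ to both sides and writing Y = ∇Z:

```latex
\partial_t Y = \nabla \partial_t Z
  = A\,\Delta(\nabla Z) + B\,\nabla\bigl((\nabla Z)^2\bigr)
    + \sqrt{C}\,\nabla\partial_t W
  = A\,\Delta Y + B\,\nabla(Y^2) + \sqrt{C}\,\partial_t \nabla W .
```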
We note that the energy solution approach, compared to the methods of [BG97] or [Hai14, GIP15], relies strongly on stationarity, and in particular weak universality has so far only been shown for stationary initial conditions or bounded entropy perturbations thereof. On the other hand, none of the models mentioned above admits a microscopic Cole-Hopf transformation, which prevents the use of the methods of [BG97], and in many cases they do not have the structure of a semilinear SPDE, which means that the pathwise approach of [Hai14, GIP15] does not apply either.

Purposes of this work
Our goal in this article is to go beyond the weak KPZ universality conjecture and to derive a new SPDE, namely the KPZ equation with boundary conditions, from an interacting particle system in contact with stochastic reservoirs. Indeed, the effect of boundary conditions in evolution equations is often poorly understood, from both the physical and the mathematical point of view. Here we intend to justify the choice made at the macroscopic level for the KPZ/SBE equation from the microscopic description of the system. For that purpose, we first prove the following (Theorem 3.17): we construct a microscopic model (inspired by the WASEP mentioned above) from which the energy solution naturally emerges as the macroscopic limit of its stationary density fluctuations.
This gives a physical justification for the Dirichlet boundary conditions in (1.2). We also introduce the notion of energy solutions to two related SPDEs: the KPZ equation with Neumann boundary conditions and the SHE with Robin boundary conditions; we prove their existence and uniqueness, and we rigorously establish the formal links between the equations discussed above. This is more subtle than expected, because the boundary conditions do not behave canonically: a formal computation suggests that the Cole-Hopf transform of the KPZ equation with Neumann boundary conditions should also satisfy Neumann boundary conditions, but we show that instead it satisfies Robin boundary conditions.
We also associate an interface growth model to our microscopic model, roughly speaking by integrating it in the space variable, and show that it converges to the energy solution of the KPZ equation, thereby giving a physical justification of the Neumann boundary conditions. In the remaining lines we go into further details and explain these results.
As for the KPZ/Burgers equation on the real line, our equation can be (and has been) solved also with the Cole-Hopf transform [CS18,Par19] or regularity structures [GH18]. However, just as on the real line there are many models with boundary conditions to which our approach applies and that are not accessible with either of these methods, for example kinetically constrained exclusion processes, exclusion processes with (finite variance) long jumps, or the interacting diffusions of [DGP17]. Moreover, since the solution cannot be evaluated pointwise the implementation of the Dirichlet boundary conditions is actually rather subtle, and it is one of the contributions of this work to sort out the links between the different formulations (which in general correspond to different processes).

WASEP with reservoirs
At the microscopic level, the model that we consider is the following: let a discrete system of particles evolve on the finite set Λ n = {1, . . . , n − 1} of size n ∈ N. For any site x ∈ Λ n there is at most one particle, and we denote by η(x) ∈ {0, 1} its occupation variable, namely: η(x) = 1 if there is a particle at site x, and 0 otherwise. We then define a continuous-time Markov process η t = {η t (x) ; x ∈ Λ n } on the state space {0, 1} Λn using the following possible moves: for any x / ∈ {1, n − 1}, a particle at site x can attempt to jump to its neighbouring sites x − 1 or x + 1, provided that they are empty. Similarly, a particle at site 1 can jump to its right neighbour 2 or it can leave the system, and the particle at site n − 1 can jump to its left neighbour n − 2 or it can leave the system. Moreover, attached to the extremities of Λ n there are two reservoirs of particles: one at site 0, which can send a particle to site 1 (if this site is empty), and the other one at site n, which can send a particle to site n − 1 (if this site is empty). Another interpretation of the boundary dynamics could be given as follows: particles can either be created (resp. annihilated) at the sites x = 1 or x = n − 1 if the site is empty (resp. occupied).
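The bulk and boundary moves described above can be sketched in a toy discrete-time simulation. The specific rates below (bulk exchanges with a right bias of order 1/√n, and reservoirs treated as virtual sites of density ρ) are our illustrative choice for this sketch only, not the paper's exact generator L_n, which is defined in Section 2:

```python
import random

def simulate_wasep(n=50, rho=0.5, E=1.0, steps=200_000, seed=7):
    """Toy WASEP on {1, ..., n-1} with reservoirs of density rho at both ends.

    Illustrative rates (an assumption of this sketch): across each bond a
    particle jumps right with probability 1/2 + E/(2 sqrt(n)) and left with
    1/2 - E/(2 sqrt(n)); the reservoirs at sites 0 and n act as virtual sites
    occupied with probability rho, so boundary exchanges implement the
    creation/annihilation moves described in the text.
    """
    rng = random.Random(seed)
    eps = E / n ** 0.5                        # weak asymmetry, of order 1/sqrt(n)
    eta = [0] + [int(rng.random() < rho) for _ in range(1, n)] + [0]
    for _ in range(steps):
        x = rng.randrange(0, n)               # pick a bond (x, x+1)
        left = eta[x] if 1 <= x <= n - 1 else int(rng.random() < rho)
        right = eta[x + 1] if 1 <= x + 1 <= n - 1 else int(rng.random() < rho)
        if left == right:                     # exclusion rule: nothing to do
            continue
        # exchange with weakly asymmetric rates (particles drift to the right)
        p = 0.5 + eps / 2 if left == 1 else 0.5 - eps / 2
        if rng.random() < p:
            if 1 <= x <= n - 1:
                eta[x] = right
            if 1 <= x + 1 <= n - 1:
                eta[x + 1] = left
    return eta[1:n]

config = simulate_wasep()
density = sum(config) / len(config)
```

Since both reservoirs have the same density ρ, the empirical density stays close to ρ, in line with the invariance of the Bernoulli product measure discussed below.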
An invariant measure for these dynamics is the Bernoulli product measure on {0, 1}^{Λ_n} of parameter ρ, which we denote by ν^n_ρ, and whose marginals satisfy ν^n_ρ(η(x) = 1) = 1 − ν^n_ρ(η(x) = 0) = ρ. We start the Markov process η_t under the invariant measure ν^n_ρ and we look at the evolution on the diffusive time scale tn², where t ≥ 0 is the macroscopic time. The microscopic density fluctuation field is then defined as
(1.7) Y^n_t = (1/√n) Σ_{x∈Λ_n} (η_{tn²}(x) − ρ) δ_{x/n},
where δ_{x/n} is the Dirac delta distribution, so that Y^n_t(·) acts on test functions. We prove in Theorem 3.17 two main results on the large n limit of that field, assuming that the initial density is ρ = 1/2 (this assumption, which aims at removing a transport phenomenon inside the system, is explained in detail in Remark 3.21):
• if ε_n = E/√n (for some E > 0), the sequence of processes Y^n converges, in a suitable space of test functions, towards the unique energy solution Y of (1.2) with Dirichlet boundary conditions, with parameters A = 1, B = E and C = 1/2;
• whenever √n ε_n → 0 as n → ∞, the sequence Y^n converges towards the unique Ornstein-Uhlenbeck (OU) process with Dirichlet boundary conditions, which is the unique energy solution of (1.2) with parameters A = 1, B = 0 and C = 1/2.
As we discuss in Remark 3.19 below, it is easy to generalize these convergence results to initial conditions that are bounded entropy perturbations of the invariant measure.
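As a quick sanity check of the 1/√n scaling, the variance of the field under the Bernoulli product measure can be computed exactly: writing (our assumption for this sketch) Y^n(ϕ) = n^{−1/2} Σ_x ϕ(x/n)(η(x) − ρ), independence of the occupation variables gives Var(Y^n(ϕ)) = ρ(1 − ρ) n^{−1} Σ_x ϕ(x/n)², which converges to ρ(1 − ρ) ∫₀¹ ϕ(u)² du:

```python
import math

def fluctuation_variance(n, rho, phi):
    """Exact variance of Y^n(phi) = n^{-1/2} sum_x (eta(x) - rho) phi(x/n)
    under the Bernoulli(rho) product measure (independent occupations)."""
    chi = rho * (1 - rho)                 # static compressibility
    return chi * sum(phi(x / n) ** 2 for x in range(1, n)) / n

phi = lambda u: math.sin(math.pi * u)     # a test function vanishing at 0 and 1
limit = 0.25 * 0.5                        # rho(1-rho) times integral of sin^2 = 1/8
approx = fluctuation_variance(10_000, 0.5, phi)
```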

The second order Boltzmann-Gibbs principle
The main ingredient that we use at the microscopic level is the BGP, that we have to reprove completely in our particular setting. Indeed, this is the first time that the BGP is established in a space with boundaries.
This tool, first introduced in [GJ14], allows us to investigate the space-time fluctuations of the microscopic density fluctuation field, and to prove that, when properly recentered, the latter is close in the macroscopic limit to a quadratic functional of the density field (1.7). It was originally proved for general weakly asymmetric simple exclusion processes by a multi-scale argument (also given in [Gon08]), which consists in replacing, at each step, a local function of the dynamics by another function whose support increases and whose variance decreases; that proof used a key spectral gap inequality which is uniform in the density of particles and is not available for many models. Later, in [GJS15], the models are only assumed to satisfy a spectral gap bound which does not need to be uniform in the density of particles, which allows more general jump rates. More recently, in [FGS16, GJS17], and then in [BGS16], a new proof of the BGP made it possible to extend the previous results to models which do not need to satisfy a spectral gap bound at all, such as symmetric exclusion processes with a slow bond, and microscopic dynamics with degenerate exchange rates. In this paper we adapt that strategy, which turns out to be quite robust, to our finite model with stochastic boundary reservoirs. This is the goal of Theorem 4.2.
1.2.3. Boundary behavior at the microscopic scale

As we already mentioned, the KPZ equation and the SBE are closely related, and this relation can also be seen at the microscopic level. There is a natural height function h(x) associated with the occupation variable η(x); in particular, its increments satisfy h(x + 1) − h(x) = η(x) − ρ. Of course, this only determines h up to an additive constant, and we will carefully define a microscopic height process {h_t(x) ; t ≥ 0, x = 1, . . . , n}, which satisfies h_t(x + 1) − h_t(x) = η_t(x) − ρ for the Markov process η_t from above, and which has nice path properties. We refer the reader to Section 2.4.2 for the rigorous definition. Similarly to (1.7), we are interested in the macroscopic limit of the height fluctuations, starting the evolution from ν^n_ρ and looking at the time scale tn². In this case the averaged local height at site x ∈ {1, . . . , n} and microscopic time tn² is equal to c_n t, where c_n = −n² ε_n ρ(1 − ρ) is determined by the initial density ρ and the strength of the asymmetry ε_n. The (renormalized) microscopic height fluctuation field Z^n_t is then defined so that, formally, Y^n_t = −∇Z^n_t. In the same spirit as Theorem 3.17, for ε_n = E/√n and ρ = 1/2, we prove:

Theorem 3.20. — The sequence of processes Z^n converges, in a suitable space of distributions, towards the unique energy solution Z of (1.1) with Neumann boundary conditions, with the same parameters A = 1, B = E and C = 1/2.

A closely related convergence result was recently shown by Corwin and Shen [CS18], who consider the height function associated to a variant of the WASEP in contact with reservoirs, apply Gärtner's microscopic Cole-Hopf transform, and show that in the scaling limit the process converges to a solution of the SHE with Robin boundary conditions. Corwin and Shen study a range of parameters and do not need to know the invariant measure of their system explicitly (in general it will not be a Bernoulli product measure).
However, the model we study here has parameters that are not admissible in [CS18], because in their formulation the parameters A and B have to be positive (see e.g. [CS18, Proposition 2.7 and p. 14]), while we would get A = B = −E 2 /8 (see Proposition 3.13 below and also Remark 3.11). The extension of the Cole-Hopf approach to general parameters A, B and thus also to the model that we consider here was completed only a couple of weeks after this paper by Parekh [Par19]. We stress however that our methods are very different from the ones used in [CS18,Par19] and we study the microscopic density fluctuation field without relying on the microscopic Cole-Hopf transform.
1.2.4. Uniqueness of energy solutions and boundary behavior at the macroscopic scale

The convergence proofs described above show relative compactness of the microscopic density fluctuation field (resp. the microscopic height fluctuation field) under rescaling, and that any limit point is an energy solution of the stochastic Burgers equation with Dirichlet boundary conditions (resp. of the KPZ equation with Neumann boundary conditions). To conclude the convergence, it remains to prove the uniqueness of energy solutions. We achieve this by following the strategy used in [GP18a] for proving the uniqueness of energy solutions on the real line: we mollify an energy solution Y of the SBE into Y^n, find a suitable antiderivative Z^n of Y^n, and let Φ^n = e^{Z^n}. We then remove the mollification and show that Φ = lim_n Φ^n solves (a version of) the SHE. Since uniqueness holds for solutions of the SHE, Y = ∇ log Φ must be unique. But while the strategy is the same as in [GP18a], the technical details are considerably more involved, because our space is no longer translation invariant and many of the tools of [GP18a] break down. Moreover, the dynamics of Φ^n contain a singular term that converges to Dirac deltas at the boundaries, a new effect which can be interpreted as a change of boundary conditions: formally we would expect ∇Φ_t(0) = e^{Z_t(0)} ∇Z_t(0) = 0, because ∇Z_t(0) = 0 by the Neumann boundary conditions for Z. However, the singular term in the dynamics changes the boundary conditions to Robin type, ∇Φ_t(0) = −DΦ_t(0), ∇Φ_t(1) = DΦ_t(1), for a constant D ∈ R which depends on the parameters A, B, C.
In that sense Z can be interpreted as a Cole-Hopf solution to the KPZ equation with inhomogeneous Neumann boundary conditions, and then the question arises which of the two formulations (inhomogeneous or homogeneous Neumann conditions) accurately describes the behavior of our stochastic process at the boundary. While of course the main difficulty with the KPZ equation is that its solutions are nondifferentiable and in particular we cannot evaluate ∇Z t (0) pointwise, we show in Proposition 3.14 that after averaging a bit in time there are canonical interpretations for ∇Z t (0) and ∇Z t (1), and both indeed vanish. We also show in Proposition 3.15 that the formal change of the boundary conditions from Neumann to Robin in the exponential transformation Φ = e Z is reflected in the "pointwise" boundary behavior of Φ (again after averaging in time).
This should be compared with the recent work of Gerencsér and Hairer [GH18], who consider the classical solution Z^ε to the mollified and renormalized KPZ equation
(1.9) ∂_t Z^ε = ∆Z^ε + (∇Z^ε)² − c_ε + ∂_t W^ε,
with Neumann boundary condition ∇Z^ε_t(0) = ∇Z^ε_t(1) = 0, where W^ε is a mollification of W and {c_ε}_{ε>0} is a sequence of diverging constants. They show that Z^ε may converge to different limits, satisfying different boundary conditions as ε → 0, depending on which mollifier was used for W^ε. But if the noise is only mollified in space and white in time, then the limit is always the same and it agrees with the Cole-Hopf solution with formal Neumann boundary condition ∇Z_t(0) = ∇Z_t(1) = 0.
Since our results show that the Cole-Hopf solution with formal boundary condition ∇Z_t(0) = −∇Z_t(1) = −E²/4 satisfies ∇Z_t(0) = ∇Z_t(1) = 0 in the "physical" sense, this suggests that, in order to obtain the correct limit, it is not only necessary to subtract a large diverging constant c_ε in (1.9), but additionally one should perform a boundary renormalization. Indeed, under boundary conditions the solution Z^ε to (1.9) is not spatially homogeneous, so there is no reason to renormalize it by subtracting a constant c_ε; instead, the renormalization might also be spatially inhomogeneous.

Outline of the paper
In Section 2 we give the precise definition of our microscopic dynamics and its invariant measures; we also introduce the spaces of test functions on which the microscopic fluctuation fields, namely the density and the height, will be defined, and we give the proper definition of these fields. Section 3 contains the rigorous definitions of solutions to the SPDEs that we obtain, namely the OU/SBE with Dirichlet boundary conditions, the KPZ equation with Neumann boundary conditions and the SHE with linear Robin boundary conditions, and we explain how these equations are linked. In Section 4 we prove the convergence of the microscopic fields to the solutions of the respective SPDEs, namely Theorems 3.17 and 3.20. In Section 5 we prove the second order Boltzmann-Gibbs principle, which is the main technical result that we need at the microscopic level in order to recognize the macroscopic limit of the density fluctuation field as an energy solution of the SBE. Finally, in Section 6 we give the proof of the uniqueness of solutions to the aforementioned SPDEs. The appendices contain some important auxiliary results that are needed along the paper, but removed from the main body of the text to ease the reading flow. In particular, Appendix E sketches how one could prove that the microscopic Cole-Hopf transformation of the microscopic density fluctuation field converges to the SHE; in particular, we show that already at the microscopic level the Cole-Hopf transformation changes the boundary conditions from Neumann to Robin type.
carried out while NP was employed at Humboldt-Universität zu Berlin and Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig.
We would like to thank the anonymous referees for their very precise and detailed comments, which helped improve the presentation of the paper. We are also grateful to Martin Hairer for pointing out the observation explained in Remark 3.16, and we would like to thank Ivan Corwin and Hao Shen for sending and discussing with us their work [CS18] when it was still a draft.

The microscopic dynamics: WASEP in contact with stochastic reservoirs
For n a positive integer, we define Λ_n = {1, . . . , n − 1} and Ω_n = {0, 1}^{Λ_n} as the state space of a Markov process {η^n_t ; t ≥ 0}, whose dynamics is entirely encoded in its infinitesimal generator, denoted below by L_n. More precisely, our process belongs to the family of well-known weakly asymmetric simple exclusion processes. Here we consider that the strength of the asymmetry is ruled by a parameter γ ≥ 1/2, and we put the system of particles in contact with two stochastic reservoirs, whose rates of injection/removal of particles from the bulk depend on a fixed parameter ρ ∈ (0, 1). To keep notation simple we omit the dependence on ρ in all the quantities defined ahead, and in order to facilitate future computations we write the generator as L_n = L^bulk_n + L^bnd_n, where L^bulk_n and L^bnd_n, given below, encode respectively the dynamics in the bulk and at the left/right boundaries. For any E > 0 and γ ≥ 1/2, the bulk generator acts on functions f : Ω_n → R as
(L^bulk_n f)(η) = Σ_{x=1}^{n−2} r_{x,x+1}(η) [f(σ^{x,x+1}η) − f(η)],
where σ^{x,x+1}η is the configuration obtained from η after exchanging η(x) with η(x + 1), namely (σ^{x,x+1}η)(y) = η(x + 1) if y = x, η(x) if y = x + 1, and η(y) otherwise. The generator L^bnd_n of the dynamics at the boundaries is given by the analogous sums over the boundary bonds {0, 1} and {n − 1, n}, with creation/annihilation rates depending on ρ. From now on, to simplify notation, we denote the rates that appear above by r_{x,x+1}(η), for any x ∈ {0, . . . , n − 1}, with the convention η(0) = η(n) = ρ. We refer to Figure 2.1 for an illustration of the dynamics. Throughout the paper, a time horizon T > 0 is fixed. We are interested in the evolution of this exclusion process on the diffusive time scale tn², so we denote by {η^n_{tn²} ; t ∈ [0, T ]} the Markov process on Ω_n associated to the accelerated generator n²L_n. The path space of càdlàg trajectories with values in Ω_n is denoted by D([0, T ], Ω_n). For any initial probability measure µ on Ω_n, we denote by P_µ the probability measure on D([0, T ], Ω_n) induced by µ and the Markov process {η^n_{tn²} ; t ∈ [0, T ]}.
The corresponding expectation is denoted by E µ .
Notation. — Throughout the paper, for any measurable space (U, ν) we denote by L²(ν) or L²(U) the usual L²-space with norm ‖·‖_{L²(ν)} and scalar product ⟨·, ·⟩_{L²(ν)}. Whenever the integration variable u may be unclear to the reader, we make it explicit by writing L²_u(ν) or L²_u(U). We also write f ≲ g or g ≳ f if there exists a constant C > 0, independent of the parameters involved in f and g, such that f ≤ Cg. If f ≲ g and g ≲ f, we write f ≃ g. Finally, we denote by N = {1, 2, . . .} the set of positive integers and by N_0 = {0} ∪ N the set of non-negative integers.

Invariant measures and Dirichlet form
For λ ∈ (0, 1), let ν^n_λ be the Bernoulli product measure on Ω_n with density λ: its marginals satisfy ν^n_λ(η : η(x) = 1) = 1 − ν^n_λ(η : η(x) = 0) = λ for each x ∈ Λ_n. When λ = ρ, the measure ν^n_ρ is invariant, but not reversible, with respect to the evolution of {η^n_t ; t ≥ 0}. To prove the invariance, it is enough to check that ∫ L_n f(η) ν^n_ρ(dη) = 0 for any function f : Ω_n → R. This is a long albeit elementary computation, which is omitted here.
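The invariance can at least be verified numerically for small n. The sketch below does so in the symmetric case, with bulk exchange rate 1 and boundary rates ρ (creation) and 1 − ρ (annihilation) — an illustrative choice consistent with the verbal description of the dynamics, not necessarily the exact rates r_{x,x+1} of the weakly asymmetric generator. It assembles the full generator matrix on {0,1}^{Λ_n} and checks that ν^n_ρ L_n = 0:

```python
import itertools
import math

def invariance_residual(n=4, rho=0.3):
    """Build the generator Q of a symmetric exclusion process on {1,...,n-1}
    with reservoirs (creation rate rho, annihilation rate 1-rho at both ends)
    and return the sup norm of the row vector nu Q, where nu is the
    Bernoulli(rho) product measure. A zero residual means nu is invariant."""
    states = list(itertools.product([0, 1], repeat=n - 1))
    idx = {s: i for i, s in enumerate(states)}
    m = len(states)
    Q = [[0.0] * m for _ in range(m)]
    for s in states:
        i = idx[s]
        def add(t, rate):
            j = idx[tuple(t)]
            Q[i][j] += rate
            Q[i][i] -= rate
        for x in range(n - 2):               # bulk bonds, exchange rate 1
            if s[x] != s[x + 1]:
                t = list(s)
                t[x], t[x + 1] = t[x + 1], t[x]
                add(t, 1.0)
        for b in (0, n - 2):                 # boundary creation/annihilation
            t = list(s)
            t[b] = 1 - t[b]
            add(t, rho if s[b] == 0 else 1 - rho)
    nu = [math.prod(rho if v == 1 else 1 - rho for v in s) for s in states]
    return max(abs(sum(nu[i] * Q[i][j] for i in range(m))) for j in range(m))
```

The residual vanishes (up to floating-point error) for any ρ, since both the exchange part and the boundary birth-death parts separately preserve the product Bernoulli measure.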
In what follows we consider that the initial measure of the Markov process {η n tn 2 ; t ∈ [0, T ]} is the invariant measure ν n ρ . For short, we denote the probability measure P ν n ρ by P ρ and the corresponding expectation by E ρ . The Dirichlet form D n is the functional acting on functions f : Ω n → R which belong to L 2 (ν n ρ ) as: Invoking a general result [KL99, Appendix 1, Proposition 10.1] we can rewrite D n as

The spaces of test functions
Let C ∞ ([0, 1]) be the space of real valued functions ϕ : [0, 1] → R such that ϕ is continuous in [0, 1] as well as all its derivatives. We denote by d k ϕ the k-th derivative of ϕ, and for k = 1 (resp. k = 2) we simply denote it by ∇ϕ (resp. ∆ϕ).
Before defining the fluctuation fields associated with our process, we first need to introduce the suitable space of test functions for each of the fields that we will analyze. First of all, let S_Dir and S_Neu be the following spaces of functions:
S_Dir = {ϕ ∈ C^∞([0, 1]) : d^{2k}ϕ(0) = d^{2k}ϕ(1) = 0 for all k ∈ N_0},
S_Neu = {ϕ ∈ C^∞([0, 1]) : d^{2k+1}ϕ(0) = d^{2k+1}ϕ(1) = 0 for all k ∈ N_0},
both equipped with the family of seminorms {sup_{u∈[0,1]} |d^k ϕ(u)|}_{k∈N_0}. Then S_Dir and S_Neu are Fréchet spaces, and we write S'_Dir and S'_Neu for their topological duals. Now, let −∆ be the closure of the Laplacian operator −∆ : S_Dir → L²([0, 1]) as an unbounded operator on the Hilbert space L²([0, 1]). It is a positive and self-adjoint operator whose eigenvalues and eigenfunctions are given, for any integer m ≥ 1, by λ_m = (mπ)² and e_m(u) = √2 sin(mπu), respectively. We note that the set {e_m ; m ≥ 1} forms an orthonormal basis of L²([0, 1]). We denote by {T^Dir_t ; t ≥ 0} the semigroup associated with ∆.
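The stated spectral data — that the e_m(u) = √2 sin(mπu) are L²([0, 1])-orthonormal and satisfy −∆e_m = (mπ)² e_m — admit a quick numerical sanity check, here via composite Simpson quadrature and a central finite-difference Laplacian (both our illustrative choices):

```python
import math

def inner(f, g, n=2000):
    """Composite Simpson approximation of the L^2([0,1]) inner product."""
    h = 1.0 / n
    s = f(0) * g(0) + f(1) * g(1)
    s += sum((4 if k % 2 else 2) * f(k * h) * g(k * h) for k in range(1, n))
    return s * h / 3

def e(m):
    return lambda u: math.sqrt(2) * math.sin(m * math.pi * u)

# orthonormality: <e_m, e_k> = delta_{mk}
gram = [[inner(e(m), e(k)) for k in (1, 2, 3)] for m in (1, 2, 3)]

# eigenvalue relation: -e_m''(u) ~ (m pi)^2 e_m(u), via central differences
def neg_laplacian(f, u, h=1e-4):
    return -(f(u + h) - 2 * f(u) + f(u - h)) / h ** 2

ratio = neg_laplacian(e(2), 0.3) / e(2)(0.3)   # close to (2 pi)^2 = lambda_2
```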
For k ∈ N, let us define H k Dir = Dom((−∆) k/2 ), endowed with the inner product

ANNALES HENRI LEBESGUE
By the spectral theorem for self-adjoint operators, H^k_Dir equals
H^k_Dir = {ϕ ∈ L²([0, 1]) : Σ_{m≥1} λ_m^k ⟨ϕ, e_m⟩²_{L²} < ∞},
where ‖ϕ‖²_k = (ϕ, ϕ)_k, and for ϕ, ψ ∈ L²([0, 1]) we also have
(ϕ, ψ)_k = Σ_{m≥1} λ_m^k ⟨ϕ, e_m⟩_{L²} ⟨ψ, e_m⟩_{L²}.
Moreover, if H^{−k}_Dir denotes the topological dual space of H^k_Dir, then
H^{−k}_Dir = {Y ∈ S'_Dir : Σ_{m≥1} λ_m^{−k} Y(e_m)² < ∞},
where ‖Y‖²_{−k} = (Y, Y)_{−k} and the inner product (·, ·)_{−k} is defined as
(Y, Y')_{−k} = Σ_{m≥1} λ_m^{−k} Y(e_m) Y'(e_m),
with Y(e_m) denoting the action of the distribution Y on e_m. Finally, we define ∆_Neu (resp. ∆_Dir) as the Neumann (resp. Dirichlet) Laplacian acting on Y ∈ S'_Neu (resp. Y ∈ S'_Dir) by duality against test functions ϕ ∈ S_Neu (resp. ϕ ∈ S_Dir), via ∆_Neu Y(ϕ) = Y(∆ϕ) (resp. ∆_Dir Y(ϕ) = Y(∆ϕ)). Let H^{−k}_Neu be the topological dual space of H^k_Neu, both defined similarly to H^k_Dir and H^{−k}_Dir but replacing the basis e_m(u) = √2 sin(mπu) by e_m(u) = √2 cos(mπu). In the next sections we will also need one last operator, denoted by ∇_Dir and defined as follows: given k > 0, Y ∈ S'_Neu and ϕ ∈ S_Dir, we set
(2.7) ∇_Dir Y(ϕ) = −Y(∇ϕ).

Fluctuation fields
Now we introduce the microscopic fluctuation fields for which we will prove convergence to some infinite-dimensional stochastic partial differential equations (SPDEs). In the following, for any space S of distributions, we denote by D([0, T ], S) (resp. C([0, T ], S)) the set of càdlàg (resp. continuous) trajectories taking values in S, endowed with the Skorohod topology.
2.4.1. Density fluctuation field

Since the particle system is stationary, the microscopic average E_ρ[η^n_{tn²}(x)] is constantly equal to ρ. We therefore look at the fluctuations of the microscopic configurations around their mean, namely:

Definition 2.1. — For any t > 0, let Y^n_t be the density fluctuation field which acts on functions ϕ ∈ S_Dir as
Y^n_t(ϕ) = (1/√n) Σ_{x∈Λ_n} ϕ(x/n) (η^n_{tn²}(x) − ρ).

In the following we will see that the right space in which to prove the convergence of the field Y^n_t is H^{−k}_Dir, with k > 5/2. Note that, for any t > 0 and any k > 5/2, Y^n_t is indeed an element of H^{−k}_Dir, and Y^n_· ∈ D([0, T ], H^{−k}_Dir).

Height function
Alternatively, instead of working with the density fluctuation field of the particle system, we may also consider the height function associated with it. Roughly speaking, the height function integrates the density fluctuation field in the space variable, i.e. it describes a curve h from {1, . . . , n} to R which satisfies h(x + 1) − h(x) = η(x) − ρ. A natural candidate would be the partial sum of the centered occupation variables, which however has the disadvantage that if a new particle enters at site 1, then h(x) increases by 1 for all x. Therefore, we set
h^n_t(x) = h^n_t(1) + Σ_{y=1}^{x−1} (η^n_{tn²}(y) − ρ), x ∈ {1, . . . , n},
where, by definition, h^n_0(1) = 0 and
• h^n_t(1) increases by 1 whenever a particle at site 1 leaves the system to the left,
• h^n_t(1) decreases by 1 whenever a new particle enters the system at site 1.
In other words, h^n_t(1) is exactly the net number of particles removed from the system at the left boundary until time tn².
The weak asymmetry in the particle system causes the height function to decrease slowly because E > 0 (it would grow if E < 0), and at first order the decrease is of order c_n t for
(2.9) c_n = −n^{2−γ} ρ(1 − ρ)E.
For instance, with our stationary dynamics, one can easily see that E_ρ[h^n_t(x)] = c_n t for any x ∈ {1, . . . , n} (see (4.24) for the case x = 1; for the other values of x there is a similar expression).
In the case γ = 1/2 (where we will see KPZ behavior), note that c_n = −n^{3/2} ρ(1 − ρ)E. So while on the microscopic scale the decrease is negligible, it becomes important on time scales of order n². We want to investigate the fluctuations around the flat interface c_n t, namely:

Definition 2.2. — For any t > 0, let Z^n_t be the height fluctuation field, obtained by testing the recentered and rescaled height h^n_t(·) − c_n t against functions ϕ ∈ S_Neu.

Remark 2.3. — We will see, as expected, that Z^n_t and Y^n_t are related. Note that if ϕ ∈ S_Dir, then ∇ϕ ∈ S_Neu, and a simple computation based on a summation by parts shows that Z^n_t(∇_n ϕ) can be written as −Y^n_t(ϕ) plus boundary terms; here ∇_n ϕ is essentially a discrete gradient, see (4.25) below. Since ϕ ∈ S_Dir, these boundary terms vanish in L²(P_ρ) as n → ∞, as a consequence of Lemma 4.5 given ahead.
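The summation by parts of Remark 2.3 can be checked numerically. In the sketch below we take the normalizations Y^n(ϕ) = n^{−1/2} Σ_x ϕ(x/n)(η(x) − ρ) and Z^n(ψ) = n^{−3/2} Σ_{x=1}^{n} ψ(x/n) h(x), with the one-sided discrete gradient ∇_nϕ(x/n) = n(ϕ(x/n) − ϕ((x−1)/n)); all three conventions are our illustrative choices. For a test function vanishing at 0 and 1, Abel summation makes the identity Z^n(∇_nϕ) = −Y^n(ϕ) exact for this discretization; with other discrete gradients (such as the paper's (4.25)), boundary terms appear but vanish in the limit:

```python
import math
import random

n = 64
rho = 0.5
rng = random.Random(3)
eta = [rng.randint(0, 1) for _ in range(n)]   # eta[1..n-1] used; eta[0] ignored

# height: h(1) = 0 (no boundary crossings yet), h(x+1) - h(x) = eta(x) - rho
h = [0.0] * (n + 1)
for x in range(1, n):
    h[x + 1] = h[x] + (eta[x] - rho)

phi = lambda u: math.sin(math.pi * u)         # phi(0) = phi(1) = 0
grad_n = lambda x: n * (phi(x / n) - phi((x - 1) / n))

Y = sum(phi(x / n) * (eta[x] - rho) for x in range(1, n)) / math.sqrt(n)
Z_of_grad = sum(grad_n(x) * h[x] for x in range(1, n + 1)) / n ** 1.5
```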
Below, we will prove that the field Z^n_t converges in the space H^{−k}_Neu, with k > 5/2, and we note that Z^n_· ∈ D([0, T ], H^{−k}_Neu).

Solutions to non-linear SPDEs and statement of the results
In this section, we first define properly the notion of solutions for four stochastic partial differential equations, all with boundary conditions (Sections 3.1, 3.2, 3.3 and 3.4), and we connect them to each other, especially through their boundary conditions (Section 3.5). These SPDEs are going to describe the macroscopic behavior of the fluctuation fields of our system, for different values of the parameter γ ruling the strength of the asymmetry: the precise statements are then given in Section 3.6. For the sake of clarity, the proofs will be postponed to further sections.

Ornstein-Uhlenbeck process with Dirichlet boundary condition
Let us start with the generalized Ornstein-Uhlenbeck process described by the formal SPDE (3.1), with A, D > 0, and where {W_t ; t ∈ [0, T]} is a standard S_Neu-valued Brownian motion with covariance (3.2). The following proposition aims at defining in a unique way the stochastic process solution to (3.1):

Proposition 3.1. - There exists a unique (in law) stochastic process {Y_t ; t ∈ [0, T]}, with trajectories in C([0, T], S_Dir), called Ornstein-Uhlenbeck process solution of (3.1), such that:
(1) the process {Y_t ; t ∈ [0, T]} is "stationary", in the following sense: for any t ∈ [0, T], the random variable Y_t is an S_Dir-valued space white noise, with covariance specified on test functions ϕ, ψ ∈ S_Dir;
(2) for any ϕ ∈ S_Dir, the corresponding process is a continuous martingale with respect to the natural filtration (3.3) associated to Y_·.
Moreover, for every function ϕ ∈ S Dir , the stochastic process dr.
Proof of Proposition 3.1. -The proof of this fact is standard, and we refer the interested reader to [KL99,LMO08].
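For intuition, the martingale problem of Proposition 3.1 can be probed in a single Fourier mode. This is a sketch, not the paper's construction: we assume that pairing (3.1) with a Dirichlet eigenfunction sin(mπu) yields the scalar dynamics dy = −Aλ_m y dt + √(Dλ_m) dB_t with λ_m = (mπ)², whose stationary variance is D/(2A); for A = 1 and D = 1/2 this equals χ(1/2) = 1/4, consistent with the white-noise covariance in item (1).

```python
import numpy as np

rng = np.random.default_rng(0)
A, D, m = 1.0, 0.5, 3
lam = (m * np.pi) ** 2          # Dirichlet eigenvalue of -d^2/du^2 for sin(m*pi*u)
dt, nsteps, nsamples = 0.01, 200, 20000

# exact one-step OU transition: y' = e^{-A*lam*dt} y + Gaussian noise whose
# variance keeps the stationary value D/(2A) invariant
decay = np.exp(-A * lam * dt)
noise_std = np.sqrt(D / (2 * A) * (1 - decay**2))

y = rng.normal(0.0, np.sqrt(D / (2 * A)), size=nsamples)  # start from stationarity
for _ in range(nsteps):
    y = decay * y + noise_std * rng.normal(size=nsamples)

print(y.var())  # remains close to D/(2A) = 0.25 = chi(1/2)
```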

Stochastic Burgers equation with Dirichlet boundary condition
Now we define the notion of stationary energy solutions for the stochastic Burgers equation with Dirichlet boundary condition, written as (3.4), with A, D > 0 and E ∈ R, and where {W_t ; t ∈ [0, T]} is an S_Neu-valued Brownian motion with covariance (3.2). Note that ∇_Dir Y²_t has so far no precise meaning; this will be given rigorously below in Theorem 3.3. More precisely, we are going to adapt to our case (i.e. adding boundary conditions) the notion of energy solutions as given for the first time in [GJ14, GJ13].
The following theorem, which we prove in this paper, aims at defining uniquely the stochastic process solution to (3.4).

Theorem 3.3. - There exists a unique (in law) stochastic process {Y_t ; t ∈ [0, T]}, called stationary energy solution of (3.4), such that:
(1) the process {Y_t ; t ∈ [0, T]} is "stationary", in the following sense: for any t ∈ [0, T], the random variable Y_t is an S_Dir-valued space white noise, with covariance specified on test functions ϕ, ψ ∈ S_Dir;
(2) the process {Y_t ; t ∈ [0, T]} satisfies the following energy estimate: there exists κ > 0 such that the bound (3.5) holds for any ϕ ∈ S_Dir and any δ, ε ∈ (0, 1) with δ < ε;
(3) for any ϕ ∈ S_Dir and any t ∈ [0, T], the corresponding process is a continuous martingale with respect to the natural filtration (3.3) associated to Y_·, whose quadratic variation involves the process {A_t ; t ∈ [0, T]} obtained through the limit (3.7), which holds in the L²-sense;
(4) the reversed process {Y_{T−t} ; t ∈ [0, T]} satisfies item (3) with E replaced by −E.

Proof of Theorem 3.3. - The proof is somewhat lengthy and we give it in Section 6.3.
Remark 3.4. - Note that there is a way to make sense of Y_s(ι_ε(u)) (which is a priori not obvious, since ι_ε(u) is not a test function), as explained in [GJ14, Remark 4].
Remark 3.5. - Note that the definition of the Ornstein-Uhlenbeck process Y given in Proposition 3.1 and that of the energy solution to the stochastic Burgers equation (3.4) with E = 0 are equivalent. Indeed, the only part which is not obvious is that the Ornstein-Uhlenbeck process Y satisfies conditions (2) and (4) of Theorem 3.3, but both of these will follow from our convergence result, Theorem 3.17, with E = 0.

KPZ equation with Neumann boundary condition
Here we define the notion of solution for the KPZ equation with Neumann boundary condition, which is formally given by (3.8), with A, D > 0 and E ∈ R, where {W_t ; t ∈ [0, T]} is a standard S_Neu-valued Brownian motion with covariance (3.2), and where (∇Z_t)² denotes a renormalized square which will be given a precise meaning in Theorem 3.7 below.
Definition 3.6. - For ε > 0 and Z ∈ C([0, 1]), let us define the regularized gradient ∇_ε Z as in (3.9).

The following theorem aims at defining uniquely the stochastic process solution to (3.8):

Theorem 3.7. - There exists a unique (in law) stochastic process {Z_t ; t ∈ [0, T]}, called almost stationary energy solution of (3.8), such that:
(1) law(Z_0) = law(Z);
(2) there exists a stationary energy solution Y of the stochastic Burgers equation (3.4), as defined in Theorem 3.3, which plays the role of the gradient of Z;
(3) for any ϕ ∈ S_Neu and any t ∈ [0, T], the corresponding process is a continuous martingale with respect to the natural filtration associated to Z_·, defined as in (3.3) but with ϕ ∈ S_Neu and with Z_s in the place of Y_s, whose quadratic variation involves the process {B_t ; t ∈ [0, T]}, obtained through an L²-limit in which ∇_ε Z_s(u), defined in (3.9), appears.
Proof of Theorem 3.7. -The proof of that result is also contained in Section 6.3.
Remark 3.8. - Note that we did not require Z to be a continuous function of u, so it is not obvious that ∇_ε Z(u), as defined in (3.9), is well defined. But of course ∇_ε Z(u) = Y(ι_ε(u)), and as discussed before, the right-hand side can be made sense of with the arguments of [GJ14, Remark 4].
Remark 3.9. - By "almost stationary" we mean that the derivative of Z is stationary: it is a (multiple of the) white noise at all times, so that in the space variable Z_t is a Brownian motion at all times. But the starting point Z_t(0) of this Brownian motion changes with time.

Stochastic heat equation with Robin boundary conditions
Finally we define the notion of solutions to the stochastic heat equation with Robin boundary conditions, written as (3.11), with covariance (3.2), and where we want to impose (formally) the boundary conditions (3.12). To see how we should implement these boundary conditions, consider f ∈ C²([0, 1]) and ϕ ∈ S_Neu, and assume that f satisfies the Robin boundary conditions ∇f(0) = αf(0) and ∇f(1) = βf(1). Integrating by parts twice and using the Neumann boundary conditions for ϕ and the Robin boundary conditions for f, we obtain an identity which suggests to define the corresponding Robin operator, and in principle this leads to a definition of solutions to (3.11). But for technical reasons we do not want to assume a priori that our solution is continuous in (t, u), and this means that we could change the values of Φ_t(0) and Φ_t(1) without changing Φ_t(ϕ), so we cannot hope to get uniqueness without further assumptions. Let us introduce a suitable class of processes for which the boundary term is well defined: we write Φ ∈ L²_C([0, T]) if for all t > 0 the process u ↦ Φ_t(u) is continuous in L²(P), where P is the law of the process Φ_· and E denotes the expectation w.r.t. P. For Φ ∈ L²_C([0, T]) we cannot change the value of Φ_t(0) without changing Φ_t in a neighborhood of 0, and this would also change Φ_t(ϕ) for some test functions ϕ.
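The double integration by parts alluded to above can be spelled out; under the stated Neumann conditions for ϕ and Robin conditions for f, it reads:

```latex
\begin{aligned}
\int_0^1 f''(u)\,\varphi(u)\,\mathrm{d}u
  &= \int_0^1 f(u)\,\varphi''(u)\,\mathrm{d}u
     + \big[f'(u)\varphi(u) - f(u)\varphi'(u)\big]_0^1 \\
  &= \int_0^1 f(u)\,\varphi''(u)\,\mathrm{d}u
     + f'(1)\varphi(1) - f'(0)\varphi(0)
     \qquad (\nabla\varphi(0)=\nabla\varphi(1)=0) \\
  &= \int_0^1 f(u)\,\varphi''(u)\,\mathrm{d}u
     + \beta f(1)\varphi(1) - \alpha f(0)\varphi(0).
\end{aligned}
```

The boundary contributions thus survive only through the Robin data α, β, which is exactly what the operator alluded to above has to encode.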
Proposition 3.10. - Then there exists at most one (law of a) process {Φ_t ; t ∈ [0, T]}, called weak solution to (3.11) with boundary condition (3.12), such that:
(1) law(Φ_0) = law(F);
(2) for any ϕ ∈ S_Neu and any t ∈ [0, T], the corresponding process is a continuous martingale of a prescribed quadratic variation.

Proof. - The proof of this proposition is given in Appendix D.
Remark 3.11. - In [CS18, Proposition 2.7] the authors show uniqueness for (the mild solution of) the same equation, but they require α > 0 and β < 0, while here we will need to take α = −E²/2 and β = E²/2 (where E > 0 is the parameter ruling the weak asymmetry). A couple of weeks after this paper first appeared, Parekh [Par19] extended the approach of [CS18] to general parameters α, β. As the probabilistic representation given in [Fre85, Pap90] shows, the key difference is that our boundary conditions correspond to sources at the boundary, while the case of [CS18] corresponds to sinks. Such sink terms make the Robin Laplacian negative, and in particular its spectrum is negative, while for the Robin Laplacian with a source term some eigenvalues may be positive. In particular, the method used in Section 4.1 of [CS18] for proving heat kernel estimates breaks down for our choice of parameters. Theorem 3.4 of [Pap90] gives L¹ and L^∞ bounds for the heat kernel that are sufficient for our purposes, but it is not quite obvious whether the heat kernel is in C^{1,2}((0, ∞) × [0, 1]) and satisfies the Robin boundary condition in the strong sense, which is what we would need here. While we expect this to be true, we avoid the problem by formulating the equation slightly differently: we do not work with the Robin heat kernel but rather with the Neumann heat kernel, and we deal with the resulting boundary corrections by hand.
Remark 3.12. - It would be possible to weaken the assumptions on the initial condition (say, to allow for a Dirac delta initial condition), and of course we would also be able to prove existence and not just uniqueness. But given the computations in Appendix D, these results follow from standard arguments as in [Qua12, Wal86], and since we do not need them here, we choose not to include the proofs.
We conclude this section by making the link between the KPZ equation (3.8) and the stochastic heat equation (3.11), with their respective boundary conditions. This link is, as expected, done through the Cole-Hopf transformation, although the boundary conditions are linked in a more complicated way than one might naively guess:

Proposition 3.13. - Let Z be the almost stationary energy solution of the KPZ equation (3.8) with Neumann boundary condition, as defined in Theorem 3.7. Then, for any t ∈ [0, T] we have the identity relating Z to Φ, where Φ solves the stochastic heat equation with the Robin boundary condition, and where N_t has been defined in item (3) of Theorem 3.7.
Proof of Proposition 3.13. -This proof is contained in Section 6 (see in particular Subsection 6.3).

Boundary behavior for singular SPDEs
The stochastic Burgers equation is an important example of a singular SPDE, a class of equations for which tremendous progress has been made in recent years [Hai14, GIP15]. The vast majority of papers on singular SPDEs treat domains without boundaries (mostly the torus, sometimes Euclidean space), and only quite recently have some articles appeared that deal with boundaries [CS18, GH18].
But in some cases it is not quite obvious how to formulate the boundary conditions. For example, the solution to the stochastic Burgers equation is almost surely distribution-valued, and therefore we cannot evaluate it at the boundary. In Theorem 3.3 we proposed a formulation of the notion of solutions to the stochastic Burgers equation with Dirichlet boundary condition Y_t(0) ≡ Y_t(1) ≡ 0 (for any t ∈ [0, T]) that seems natural to us. But as we saw in Proposition 3.13, our solution differs from the Cole-Hopf solution of [CS18]: approximations in which the noise is regularized, with W^ε a mollification of W, may converge to different limits satisfying different boundary conditions as ε → 0, depending on which mollifier was used for W^ε. But if the noise is only mollified in space and white in time, then the limit is always the same and it agrees with the Cole-Hopf solution of [CS18].
So it is not obvious whether there is a "canonical" way of formulating singular SPDEs with boundary conditions, and in the case of the stochastic Burgers equation with Dirichlet boundary conditions it may seem that the Cole-Hopf solution is the most canonical one. But below, in Proposition 3.14, we will see that our solution Y indeed satisfies Y_t(0) ≡ Y_t(1) ≡ 0, as long as we do not try to evaluate Y_t(0) at a fixed time but allow for a bit of averaging in time instead. We also show, in Lemma 3.15, that the (approximate) Cole-Hopf transformation Ψ of Y satisfies the Robin boundary condition, again after averaging in time. This sheds some light on the actual boundary behavior of solutions to singular SPDEs, and it indicates that our formulation of the equation is maybe more natural than the Cole-Hopf formulation of [CS18]. Although the approximation result in (3.14) looks natural at first sight, note that not renormalizing ∇_Dir(Y^ε_t)² means that we renormalize (Y^ε_t)² with a constant, which is killed by ∇_Dir and therefore does not appear in the equation. But of course Y^ε is not spatially homogeneous, so there is no reason why the renormalization should be spatially constant; and as our results show, taking it constant actually leads to an unnatural boundary behavior in the limit.
We prove Proposition 3.14 in Appendix B.
This is an integral kernel which will be used in Section 6.3 to map the energy solution Y to an approximate solution of the stochastic heat equation (3.11). We set Z_t(u) = Y_t(Θ_u), so that ∇_Dir Z = Y, and then Ψ_t(u) = exp((E/A) Z_t(u)). One could then formally compute the boundary behavior of Ψ. But we will now show that this formal computation is incorrect, and that the boundary condition for Ψ is not of Neumann type (i.e. ∇Ψ_t(0) ≡ ∇Ψ_t(1) ≡ 0), but rather of Robin type; this is the content of Lemma 3.15.

Proof of Lemma 3.15. - The proof will be given in Appendix B.
Remark 3.16. - When we naively apply the chain rule in the formal derivation "∇Z_t(1) = 1 and therefore ∇exp((E/A) Z_t)(1) = (E/A) exp((E/A) Z_t(1))", we implicitly assume that Z_t(·) allows for a Stratonovich-type calculus. But in the space variable Z_t(·) is D/(2A) times a Brownian motion, so maybe it is more natural to apply Itô's formula (as Martin Hairer kindly pointed out to us), which produces an additional correction term: this is exactly the shift to Robin boundary conditions that we see in Proposition 3.13. At the left boundary the Robin condition has a negative sign, which means that there we rather have to apply the backward Itô formula to get the correct transformation of the boundary conditions. See Remark 6.15 for the precise point where the correction shows up in our computations.
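The Itô correction invoked in Remark 3.16 can be illustrated by a one-line Monte Carlo check of the identity E[exp(σB_t)] = exp(σ²t/2) for a Brownian motion B; the σ²/2 term in the exponent is the kind of correction that shifts a naive Neumann condition to a Robin one. The constants below are illustrative and not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, t, nsamples = 0.8, 1.0, 400000

B = rng.normal(0.0, np.sqrt(t), size=nsamples)  # B_t ~ N(0, t)
mc = np.exp(sigma * B).mean()                   # Monte Carlo E[exp(sigma * B_t)]
exact = np.exp(sigma**2 * t / 2)                # Ito correction: sigma^2/2 per unit time
print(mc, exact)                                # the two values agree up to Monte Carlo error
```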

Statement of the convergence theorems
From now on and for the rest of the paper we assume ρ = 1/2. We are now ready to state all the convergence results for the different fields that we defined previously in Section 2.4.

Theorem 3.17 (Convergence of the density field). - Fix T > 0, k > 5/2 and ρ = 1/2. Then, the sequence of processes {Y^n_t ; t ∈ [0, T]}_{n∈N} converges in distribution in D([0, T], H^{−k}_Dir), as n → ∞, to:
(1) if γ > 1/2, the Ornstein-Uhlenbeck process solution of (3.1) with Dirichlet boundary conditions (as defined in Proposition 3.1) with A = 1 and D = 1/2;
(2) if γ = 1/2, the unique stationary energy solution of the stochastic Burgers equation (3.4) with Dirichlet boundary conditions (as defined in Theorem 3.3) with A = 1, E = E and D = 1/2.

Remark 3.18. - We note that the previous theorem, when the strength of the asymmetry is taken with γ = 1, has already been proved in [GLM17] by a different strategy, namely considering the microscopic Cole-Hopf transformation (which avoids the derivation of a Boltzmann-Gibbs principle), and in a wider scenario, since there the initial measure is quite general.
Remark 3.19. - It is possible to weaken the assumptions on the initial conditions and to start the dynamics in bounded entropy perturbations of the invariant measure. More precisely, we could assume that Y^n_0 converges weakly in H^{−k}_Dir to a random variable Y_0 and that the relative entropy of ν_n = law(η^n_0) with respect to ν^n_{1/2} is (asymptotically) bounded in n. Indeed, the methods of [GJS15] (essentially the entropy inequality) apply here as well, and under this assumption they allow us to show the convergence of the sequence of processes {Y^n_t ; t ∈ [0, T]}_{n∈N} to a non-stationary energy solution, which however has finite entropy with respect to a stationary energy solution (as a measure on the Borel sets of C([0, T], H^{−k}_Dir)). The uniqueness of such bounded entropy perturbations is established in [GP18b] on the real line, but the proof in that paper easily carries over to our situation.
Theorem 3.20 (Convergence of the height field). - Fix T > 0, k > 5/2 and ρ = 1/2. Then, the sequence of processes {Z^n_t ; t ∈ [0, T]}_{n∈N} converges in distribution in D([0, T], H^{−k}_Neu), as n → ∞, to:
(1) if γ > 1/2, the unique almost stationary energy solution of the KPZ equation (3.8) with Neumann boundary conditions (as defined in Theorem 3.7) with A = 1, E = 0 and D = 1/2;
(2) if γ = 1/2, the unique almost stationary energy solution of the KPZ equation (3.8) with Neumann boundary conditions (as defined in Theorem 3.7) with A = 1, E = E and D = 1/2.

The strategy of the proof of Theorem 3.17 is quite well known and has been widely used in the literature. Let us recall the main steps:
(1) first, prove that the sequence of probability measures {Q_n}_{n∈N}, where Q_n is induced by the density fluctuation field Y^n and the initial measure ν^n_ρ, is tight. Note that Q_n is a measure on the Skorokhod space D([0, T], H^{−k}_Dir). This is the purpose of Section 4.3 below;
(2) second, write down the approximate martingale problem satisfied by Y^n_t in the large n limit, and prove that it coincides with the martingale characterization of the solutions of the SPDEs given in Theorem 3.17. The closure of the martingale problem is explained in Section 4.1. In the case γ = 1/2, we need to prove an additional important tool, the so-called second order Boltzmann-Gibbs principle, which is stated in Section 4.2 below and proved in Section 5.
The strategy of the proof of Theorem 3.20 is completely similar to the one described above.
Remark 3.21. - Most of our arguments work for any ρ ∈ (0, 1). However, the restriction ρ = 1/2 is not just for convenience of notation: otherwise we would pick up an additional diverging transport term in the martingale decomposition for Y^n(ϕ), roughly speaking E(1 − 2ρ)n^{3/2−γ} Y^n(∇ϕ). In the periodic case, or if the underlying lattice is Z, we can kill that term by observing our system in a moving frame, see [GJ14] for instance, but of course this does not work in finite volume with boundary conditions. Therefore, we need to assume either γ ≥ 3/2 or ρ = 1/2. Since we are mostly interested in the case γ = 1/2, we take ρ = 1/2.
From now on and up to the end of the paper we will mainly assume γ = 1/2, but we will point out the differences with respect to the case γ > 1/2. We also essentially focus on the convergence of Y^n, since the convergence of Z^n follows by very similar arguments.

Proof of Theorem 3.17 and Theorem 3.20
We start by giving all the details on the proof of Theorem 3.17, and at the end of this section we present the only necessary steps which need to be adapted for the proof of Theorem 3.20. They mainly concern the control of boundary terms for the height fluctuation field.
The section is split in the following way. In Section 4.1 we write down the martingale decomposition associated to the density fluctuation field. In Section 4.2 we state the second order Boltzmann-Gibbs principle, whose proof is given later in Section 5. This principle is needed to control the term in the martingale decomposition which, in the regime γ = 1/2, gives rise to the Burgers non-linearity. In Section 4.3 we prove tightness of the density fluctuation field Y^n, and in Section 4.4 we characterize the limit points as solutions of the corresponding SPDE. Finally, in Section 4.5 we give the martingale decomposition for the field Z^n, and we present the estimate needed to control the extra terms that appear at the boundary.
For the sake of clarity, from now on we denote η_{tn²} := η^n_{tn²}.

Martingale decomposition for the density fluctuation field Y
Fix a test function ϕ ∈ S_Dir, so that ϕ(0) = ϕ(1) = 0. By Dynkin's formula, we know that the processes in (4.1) and (4.2) are martingales. The computations from Appendix A.1 show that the integral term in the first martingale (4.1) rewrites as I^n_t(ϕ) + A^n_t(ϕ), where A^n_t(ϕ) is, up to a multiplicative factor depending on n, γ and E, the time integral

  ∫₀ᵗ Σ_x ∇⁺_n ϕ(x/n) η̄_{sn²}(x) η̄_{sn²}(x + 1) ds,   (4.5)

and where ∇⁺_n ϕ and ∆_n ϕ are the two functions that approximate, on the discrete line, the gradient and the Laplacian of ϕ respectively. They are defined for x ∈ Λ_n by:

  ∇⁺_n ϕ(x/n) = n(ϕ((x+1)/n) − ϕ(x/n)),   ∆_n ϕ(x/n) = n²(ϕ((x+1)/n) + ϕ((x−1)/n) − 2ϕ(x/n)).

Moreover, in (4.5) we have used a short notation for the centered variable, defined as η̄(x) := η(x) − ρ. It is quite easy to see that, in the macroscopic limit, the integral term I^n_t corresponds to the diffusive macroscopic term ∆_Dir Y_t. Moreover, when γ = 1/2, A^n_t gives rise to the non-linear term in the stochastic Burgers equation, as explained below, and it disappears when γ > 1/2. We also note that, since ϕ ∈ S_Dir, a simple computation shows that the integral term in the second martingale (4.2) admits a similar rewriting.
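The discrete operators can be probed numerically. This is a minimal sketch, assuming the standard finite-difference definitions ∇⁺_n ϕ(x/n) = n(ϕ((x+1)/n) − ϕ(x/n)) and ∆_n ϕ(x/n) = n²(ϕ((x+1)/n) + ϕ((x−1)/n) − 2ϕ(x/n)); it checks the approximation property on a smooth test function vanishing at the boundary.

```python
import numpy as np

def grad_plus(phi, n, x):
    """Discrete gradient at x/n: n * (phi((x+1)/n) - phi(x/n))."""
    return n * (phi((x + 1) / n) - phi(x / n))

def lap_n(phi, n, x):
    """Discrete Laplacian at x/n: n^2 * (phi((x+1)/n) + phi((x-1)/n) - 2*phi(x/n))."""
    return n**2 * (phi((x + 1) / n) + phi((x - 1) / n) - 2 * phi(x / n))

phi = lambda u: np.sin(np.pi * u)     # smooth test function with phi(0) = phi(1) = 0
n, x = 1000, 300
u = x / n
err_grad = abs(grad_plus(phi, n, x) - np.pi * np.cos(np.pi * u))   # O(1/n)
err_lap = abs(lap_n(phi, n, x) + np.pi**2 * np.sin(np.pi * u))     # O(1/n^2)
print(err_grad, err_lap)
```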

Case γ = 1/2: second order Boltzmann-Gibbs principle
In this section we state another important result of this work, essential to the proof of Theorem 3.17, since it allows us to treat the term A^n_t(ϕ) given in (4.5). We focus on the case γ = 1/2, but below we make some comments on the case γ > 1/2. Before proceeding, we need to introduce some notation.

Definition 4.1. - For any x ∈ Λ_n and ℓ₁ ∈ N such that x + ℓ₁ ∈ Λ_n (resp. ℓ₂ ∈ N such that x − ℓ₂ ∈ Λ_n), we denote by →η_{ℓ₁}(x) (resp. ←η_{ℓ₂}(x)) the average centered configuration on a box of size ℓ₁ (resp. ℓ₂) situated to the right (resp. left) of the site x ∈ Λ_n:

  →η_{ℓ₁}(x) = (1/ℓ₁) Σ_{y=x+1}^{x+ℓ₁} η̄(y),   ←η_{ℓ₂}(x) = (1/ℓ₂) Σ_{y=x−ℓ₂}^{x−1} η̄(y).

For any measurable function v : Λ_n → R, let us define ‖v‖²_{2,n} = (1/n) Σ_{x∈Λ_n} v²(x). From now on and up to the end, C > 0 is a constant that does not depend on t > 0, nor on n, ℓ ∈ N, and that may change from line to line. Finally, we denote the static compressibility of the system by χ(ρ) = ∫ (η(1) − ρ)² ν^n_ρ(dη) = ρ(1 − ρ).
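The block averages of Definition 4.1 are easy to probe numerically. In this sketch we take →η_ℓ(x) and ←η_ℓ(x) to be the plain centered averages over the boxes of size ℓ to the right and to the left of x (an assumed reading of Definition 4.1), and we check that under the Bernoulli product measure the block average is centered with variance χ(ρ)/ℓ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho, ell = 200, 0.5, 16

def avg_right(eta, x, ell, rho):
    """Centered average over the box {x+1, ..., x+ell} to the right of x."""
    return (eta[x + 1 : x + ell + 1] - rho).mean()

def avg_left(eta, x, ell, rho):
    """Centered average over the box {x-ell, ..., x-1} to the left of x."""
    return (eta[x - ell : x] - rho).mean()

# under the Bernoulli(rho) product measure the block average is centered, with
# variance chi(rho)/ell, where chi(rho) = rho*(1-rho) is the static compressibility
configs = rng.binomial(1, rho, size=(50000, n + 1))
samples = (configs[:, 51 : 51 + ell] - rho).mean(axis=1)  # avg_right at x = 50
print(samples.mean(), samples.var(), rho * (1 - rho) / ell)
```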
Theorem 4.2 (Second order Boltzmann-Gibbs principle). - Let v : Λ_n → R be a measurable function. There exists C > 0 such that the bound (4.8) holds for any n, ℓ ∈ N with ℓ < n/4 and any t > 0.

Remark 4.3. - Notice that the assumption ℓ < n/4 ensures that one of the two conditions in (4.9) is always satisfied, so that →η_ℓ(x) and ←η_ℓ(x) are always well defined.
Moreover, the function Q(x, ℓ, ·) is centered, namely ∫ Q(x, ℓ, η) ν^n_ρ(dη) = 0.

Proof of Theorem 4.2. - The proof is quite involved and is postponed to Section 5.

Now, let us apply Theorem 4.2 with ℓ = εn (which actually means ⌊εn⌋, with some abuse of notation): this choice makes the right-hand side of (4.8) vanish when we let first n → ∞ and then ε → 0.
As a consequence, when γ = 1/2, A^n_t(ϕ) is well approximated in L²(P_ρ) by the time integral of the quantity (4.10). Since ϕ(0) = ϕ(1) = 0, the second term of (4.10) is of order 1/n and therefore vanishes when we let n → ∞. We also note that in the case γ > 1/2 the previous term carries an extra factor √n/n^γ = n^{1/2−γ}, and for that reason it vanishes as n → ∞. Finally, the computation above (and more precisely the first term of (4.10)) motivates the definition of energy solutions given in Theorem 3.3, that is, the definition of the macroscopic field A_t as given in (3.7). Indeed, putting all these considerations together in the case γ = 1/2, we see that (4.6) rewrites as a martingale decomposition with an error o^n_t(1), which is deterministic and satisfies sup_{t∈[0,T]} |o^n_t(1)| → 0 as n → ∞. This computation will be useful to characterize the limit points of the density fluctuation field (see Section 4.4 below). Before that, let us prove tightness.

Tightness of the density fluctuation field
In this section, for the sake of completeness, we show tightness of the sequence {Y^n_t ; t ∈ [0, T]}_{n∈N}, following closely [KL99]. The main difference is the presence of the extra term A^n_t in the martingale decomposition. Tightness is a consequence of the following lemma.
where the norm ‖·‖_{−k} has been defined in (2.6).
Proof of Lemma 4.4. - We split the proof of this lemma into two steps.

To prove (1), by Markov's inequality it is enough to bound the expectation on the right-hand side of (4.12), which we compute using the martingale decomposition (4.6) for ϕ = e_m; this makes sense since e_m ∈ S_Dir. First, the static term is handled by independence under ν^n_ρ. The martingale term can be easily estimated by Doob's inequality. Finally, in order to estimate the remaining term A^n_t(e_m), we add and subtract the term Q(x, ℓ, η) to η̄(x)η̄(x + 1), and from the elementary inequality (x + y)² ≤ 2x² + 2y², it remains to bound the two expectations (4.15) and (4.16).

Therefore, to conclude the proof we just have to show (4.17) for any j ∈ N and ε > 0. In fact we prove the corresponding estimate for every m ≥ 1 and ε > 0, from which (4.17) follows. Now, as before, we recall (4.6), so that the previous result is accomplished if we derive the same estimate for each term in the martingale decomposition of Y^n_t(e_m). We start with the most demanding one, namely the term that involves the martingales: we show that the corresponding probability vanishes for every m ≥ 1 and ε > 0, where T_T denotes the family of all stopping times bounded by T. Using Tchebychev's inequality together with the optional stopping theorem, the last probability is bounded from above by a quantity which, by the definition of the quadratic variation of the martingale, can be bounded from above by ε^{−2} C m² δ, and therefore vanishes as δ → 0. Now, we treat the remaining term that involves I^n_t(e_m): we have to show that the corresponding probability vanishes for every ε > 0. By Tchebychev's inequality and the Cauchy-Schwarz inequality, the last probability is bounded by T δ m⁴/ε², which vanishes as δ → 0. For the last term, involving A^n_t(e_m), we can repeat the computations done above: we add and subtract the term Q(x, ℓ, η) as in (4.15) and (4.16), then we choose ℓ = Cn, C > 0, and we prove that each contribution is of order T δ m²/ε², and therefore goes to zero as δ → 0, which finishes the proof.

Characterization of limiting points
In this section we prove that any limit point of the tight sequence {Y^n_t ; t ∈ [0, T]}_{n∈N} concentrates on stationary energy solutions of (3.4), as defined in Theorem 3.3. Up to extraction, one can assume that the four sequences appearing in the martingale decomposition converge, respectively. First, one can repeat the argument of [GJ14, Section 5.3] to prove that the limit point {Y_t ; t ∈ [0, T]} has continuous trajectories and is stationary in the sense of item (1) of Theorem 3.3. The characterization will be complete if we prove that this limit process also satisfies the remaining three items of Theorem 3.3. This is what we briefly explain in the next paragraphs, since the argument is now standard and is given for example in [FGS16, GJ14, GJS15].

Proof of item (2)
We give a few elements of the proof for the sake of completeness. Since Y^n_· converges to Y_· and since we can approximate the function ι_ε(x/n) by suitable functions in the space S_Dir, we obtain the corresponding convergence. Now, rewriting the Boltzmann-Gibbs principle stated in Theorem 4.2 in terms of A^n_t(ϕ), we see that there exists C > 0 such that the estimate (4.19) holds for any ϕ ∈ S_Dir. The last claim is proved as follows: in A^n_t(ϕ), given in (4.5), we add and subtract ∇_n ϕ(x/n) Q(x, εn, η_{sn²}) inside the sum, and we use a standard convexity inequality in order to treat the two terms separately. The first one is handled using the Boltzmann-Gibbs principle; the second one is estimated thanks to the computation (4.10). Then, since L²-bounds are stable under convergence in law, from the previous estimate we conclude.
Finally, the energy estimate (3.5) is a trivial consequence of (4.19), since it follows from adding and subtracting the quantity (A t (ϕ) − A s (ϕ)) inside the square.

Proof of item (3)
This point is now a straightforward consequence of the martingale decomposition given in (4.11) and in (4.7), in which one can pass to the limit n → ∞, together with the previous paragraph.

Proof of item (4)
This last property can be obtained easily by considering the reversed dynamics, governed by the adjoint of the infinitesimal generator L_n with respect to the Bernoulli product measure ν^n_ρ, and by repeating exactly the same arguments as above.

Sketch of the proof of Theorem 3.20
As mentioned previously, the proof of Theorem 3.20 is essentially the same as the one for Y^n. Let us give here some hints, following the sketch of the previous paragraphs.
First, let us note that the analogue of the martingale decomposition (4.3) also contains a boundary term. Indeed, fix a test function ϕ ∈ S_Neu, and let n² L^⊗_n denote the generator of the joint process {(η_{tn²}(x), h^n_{tn²}(1)) ; x ∈ Λ_n, t ≥ 0}. This generator acts on functions f : Ω_n × Z → R through the rates r_{x,x+1} already defined in (2.1). From the computations in Appendix A.2 we get that the process in (4.21) is a martingale, where R^n_t and B^n_t denote the boundary and non-linear contributions, respectively. Moreover, in (4.21), o^n_t(1) is a deterministic sequence of real numbers that vanishes as n → ∞, uniformly in t ∈ [0, T], and we have also used the notation ∆_n ϕ to denote a suitable approximation of the Laplacian.

ANNALES HENRI LEBESGUE
Note that, if ϕ ∈ S_Neu, and therefore satisfies ∇ϕ(1) = 0, then ∆_n ϕ is indeed an approximation of the usual Laplacian as n → ∞. Let us start with B^n_t. Note that, for any test function ϕ, it can be expressed in terms of ∇⁻_n, which is defined similarly to ∇⁺_n except that the discrete gradient is shifted, namely: ∇⁻_n ϕ(x/n) = n(ϕ(x/n) − ϕ((x−1)/n)). As a consequence, this term can be treated as A^n_t, using the Boltzmann-Gibbs principle (Theorem 4.2): it gives rise to the KPZ non-linearity as soon as γ = 1/2, and it vanishes when γ > 1/2. Next, in (4.21), the term R^n_t (which does not depend on γ) comes from boundary effects, but does not contribute to the limit if ϕ ∈ S_Neu; this is a consequence of the following lemma.

Lemma 4.5. - The quantities n^{−3/2} h^n_{tn²}(1) − c_n t and n^{−3/2} h^n_{tn²}(n) − c_n t vanish in L²(P_ρ) as n → ∞, locally uniformly in time; in particular, for any ϕ ∈ S_Neu, the term R^n(ϕ) defined in (4.22) converges to 0 in L²(P_ρ), locally uniformly in time.
Proof of Lemma 4.5. - Since h^n(1) increases by 1 whenever a particle leaves the system and decreases by 1 whenever a particle enters, one can easily write the decomposition (4.24), where c_n = −n^{2−γ}E/4 has been defined in (2.9) and M^n is a martingale with predictable quadratic variation. The Burkholder-Davis-Gundy inequality implies, for all p ≥ 1, the existence of a constant C > 0 controlling the p-th moment of M^n in terms of its quadratic variation and its jumps, where ∆_t M^n, the jump of M^n at time t, is bounded by 1. In the integrand we can bound −η^n_t(1) from above by 0, and therefore n^{−3/2} M^n_{tn²} vanishes in the limit. From (4.24) we get

  n^{−3/2} h^n_{tn²}(1) − c_n t = ∫₀ᵗ √n (1 + E/(2n^γ)) η_{sn²}(1) ds + n^{−3/2} M^n_{tn²},

and therefore we are left with bounding √n ∫₀ᵗ η_{sn²}(1) ds. With the Kipnis-Varadhan lemma, given for instance in [KL99, Proposition A.1.6.1], we estimate this time integral by a variational formula involving the Dirichlet form D_n(f) defined in (2.2). From the decomposition (2.3) we easily obtain a bound for any ε > 0, which we now choose such that (4ε)^{−1} = (ρ ∧ (1 − ρ)) n². This yields the desired estimate. The bound for h^n(n) is shown with the same arguments. To conclude the proof, take ϕ ∈ S_Neu, so that ∇ϕ(0) = ∇ϕ(1) = 0; in that case the boundary contributions in R^n(ϕ) are controlled by the two bounds above.

There are two remaining steps, the first one being tightness of the sequence {Z^n_t ; t ∈ [0, T]}_{n∈N}. We let the reader repeat the proof of Lemma 4.4, noting that, since the height fluctuation field is now defined in H^{−k}_Neu, the basis that one has to use is the corresponding one given in Section 2.3. The arguments remain unchanged; we only note that the restriction k > 5/2 comes from the analogue of (4.14). Finally, for the characterization of limit points we essentially use the relation (4.25) between Z^n_t and Y^n_t, valid for any ϕ ∈ S_Dir, where ∇_n ϕ(x/n) = 1_{{1,...,n−1}}(x) ∇⁺_n ϕ(x/n); in particular ∇_n ϕ(n/n) = 0. Since ϕ ∈ S_Dir, Lemma 4.5 implies that the last two terms in (4.25) vanish in L²(P_ρ) as n → ∞, uniformly in t ∈ [0, T].
If Z is a limit point of Z^n, then passing to the limit in (4.25) we get, for any ϕ ∈ S_Dir, the corresponding identity in the limit, where we used the definition of ∇_Dir given in (2.7). From this, we deduce item (1) of Theorem 3.7. The last item (3) is obtained similarly, combining (4.25) with the two martingale decompositions (4.6) and (4.21). For the quadratic variation, we observe that by Dynkin's formula the associated process is a martingale. Here we used that the drift −c_n t gives rise to a first-order differential operator G_n which satisfies the Leibniz rule, so that the difference G_n(Z^n_s(ϕ))² − 2Z^n_s(ϕ) G_n Z^n_s(ϕ) vanishes. A simple computation shows that the last integral term can be conveniently rewritten; by taking the expectation w.r.t. ν^n_ρ and sending n → ∞ we conclude item (3).

Proof of the second order Boltzmann-Gibbs principle
In this section we prove the Boltzmann-Gibbs principle stated in Theorem 4.2. The proof uses arguments similar to the original ones [GJS17, BGS16, FGS16]. The main difference is the separation of the lattice sites into two sets, {1, . . . , n − 2ℓ − 1} and {n − 2ℓ, . . . , n − 1}, which are treated separately. This technical novelty is necessary in order to take into account the presence of the fixed endpoints.
Let us illustrate how the proof of this principle works. Choose a site x which is not too close to the right boundary, in the sense that there are at least 2ℓ sites between x and n − 1; then we can replace the local function η̄(x)η̄(x + 1) by the square of the average to its right, (→η_ℓ(x))² (see Figure 5.1). The main reason to keep at least 2ℓ sites between x and n − 1 is that the proof makes use of the sites situated between x + ℓ + 1 and x + 2ℓ, as explained in Section 5.1 below. Otherwise, when x + 2ℓ > n − 1, we replace the same local function by the square of the average to its left, (←η_ℓ(x))² (see Figure 5.2). Before going into the details of the proof, let us introduce some notation: for a function g : Ω_n → R, we denote by ‖g‖² its L²(ν^n_ρ)-norm, ‖g‖² = ∫_{Ω_n} g²(η) ν^n_ρ(dη).
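The replacement step can be illustrated at the level of second moments: under the Bernoulli product measure, η̄(x)η̄(x + 1) has mean zero by independence, while (→η_ℓ(x))² has mean χ(ρ)/ℓ, so both means vanish as ℓ → ∞; the Boltzmann-Gibbs principle upgrades this static observation to a quantitative bound on time integrals. A quick Monte Carlo sanity check, with →η_ℓ(x) taken as the plain centered box average (an assumed reading of Definition 4.1):

```python
import numpy as np

rng = np.random.default_rng(3)
rho, ell, nsamples = 0.5, 64, 200000

eta = rng.binomial(1, rho, size=(nsamples, ell + 1)).astype(float)
etabar = eta - rho                                  # centered variables

local = etabar[:, 0] * etabar[:, 1]                 # eta_bar(x) * eta_bar(x+1), at x = 0
block = etabar[:, 1 : ell + 1].mean(axis=1) ** 2    # (-> eta_ell(x))^2, box {x+1,...,x+ell}

print(local.mean())   # ~ 0 by independence under nu_rho
print(block.mean())   # ~ chi(rho)/ell = 0.25/64
```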
In the following, $C = C(\rho)$ denotes a constant that depends neither on $n$, nor on $t$, nor on the sizes of the boxes involved, and that may change from line to line. We fix once and for all a measurable function $v : \Lambda_n \to \mathbb{R}$ for which $\|v\|^2_{2,n} < \infty$. We say that a function $g : \{0,1\}^{\mathbb{Z}} \to \mathbb{R}$ has its support (denoted below by $\mathrm{Supp}(g)$) included in some subset $\Lambda \subset \mathbb{Z}$ if $g$ depends on the configuration $\eta$ only through the variables $\{\eta(x) \,;\, x \in \Lambda\}$. We denote by $\tau_x$ the usual shift operator, which acts on functions $g : \Omega_n \to \mathbb{R}$ as $\tau_x g(\eta) = g(\tau_x \eta)$; this is well defined for any $x$ such that $\mathrm{Supp}(g) \subset \{1, \dots, n-1-x\}$. To keep the presentation as clear as possible, we define two quantities that are needed in due course: Definition 5.1. – Let $m \in \Lambda_n$ be an integer such that $m < \frac{n}{2}$, and let • $g^{\to} : \{0,1\}^{\mathbb{Z}} \to \mathbb{R}$ be a function whose support is included in $\{0, \dots, m\}$, • $g^{\leftarrow} : \{0,1\}^{\mathbb{Z}} \to \mathbb{R}$ be a function whose support is included in $\{-m, \dots, 1\}$.

Let us define
With this definition, (4.8) follows from showing that, for any $n, \ell \in \mathbb{N}$ such that $\ell < \frac{n}{4}$ and any $t > 0$, the estimates (5.1) and (5.2) below hold, where the two local functions $g^{\to}$ and $g^{\leftarrow}$ are given by

Strategy of the proof
We prove (5.1) and (5.2) separately. For both of them, we need to decompose $g^{\to}$ and $g^{\leftarrow}$ into sums of several local functions, for which the estimates are simpler. With a small abuse of language, we say that, at each step, we replace a local function with another one. More precisely, let $\ell_0 \in \mathbb{N}$ and assume first (to simplify) that $\ell = 2^M \ell_0$ for some integer $M \in \mathbb{N}$. Denote $\ell_k = 2^k \ell_0$ for $k \in \{0, \dots, M\}$. One can easily check the decomposition (5.5)–(5.11). For example, in (5.5) we say that we replace $\eta(1)$ by $\vec{\eta}^{\,\ell_0}(0)$, while $\eta(0)$ is considered as fixed. Seven terms appear from (5.5) to (5.11); let us denote them, in order of appearance, by $g^{\to}_{\mathrm{I}}, \dots, g^{\to}_{\mathrm{VII}}$. The decomposition above can naturally be written for $\tau_x g^{\to}$ ($x \in \Lambda_n$) by translating every term. Let us now illustrate the first steps of the decomposition: in Figure 5.3 below, we use arrows as symbols for the replacements we perform, and we illustrate the consecutive replacements from (5.5) to (5.7), the latter corresponding to $\vec{\eta}^{\,\ell_k}(x + \ell_k) \to \vec{\eta}^{\,2\ell_k}(x + 2\ell_k)$.
Simultaneously, one can see in (5.8) that $\vec{\eta}^{\,\ell_k}(x)$ is replaced with $\vec{\eta}^{\,2\ell_k}(x)$, as we illustrate in Figure 5.4. The role of the pre-factors $\vec{\eta}^{\,\ell_k}(x)$ in (5.7) and $\vec{\eta}^{\,2\ell_k}(x+2\ell_k)$ in (5.8) can be roughly understood as follows: these local functions have a variance of order $\ell_k^{-1}$ under $\nu^n_\rho$, which compensates the price to pay when one tries to replace $\vec{\eta}^{\,\ell_k}(x+\ell_k)$ by $\vec{\eta}^{\,2\ell_k}(x+2\ell_k)$, and $\vec{\eta}^{\,\ell_k}(x)$ by $\vec{\eta}^{\,2\ell_k}(x)$. More precisely, this compensation is optimal if the support of the pre-factor does not intersect the set of sites which are used in the replacement: for example, the support of $\vec{\eta}^{\,\ell_k}(x)$ is $\{x+1, \dots, x+\ell_k\}$, and it does not intersect $\{x + \ell_k + 1, \dots, x + 4\ell_k\}$, which corresponds to the sites used in the replacement $\vec{\eta}^{\,\ell_k}(x+\ell_k) \to \vec{\eta}^{\,2\ell_k}(x+2\ell_k)$, see (5.7). More details are given below. The decomposition which works for $g^{\leftarrow}$ is very similar, and there is no difficulty in finding it by following closely (5.5)–(5.11).
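The dyadic doubling of the box size underlying this multiscale scheme can be sketched in a few lines (an illustrative snippet; the function name is ours):

```python
# Sketch of the dyadic doubling scheme behind the decomposition (5.5)-(5.11):
# the box size is doubled at each step, ell_k = 2**k * ell_0, until the final
# size ell = 2**M * ell_0 is reached. Names are ours.

def doubling_steps(ell0, M):
    """Telescoping sequence of box sizes ell_0, 2*ell_0, ..., 2**M * ell_0."""
    return [2 ** k * ell0 for k in range(M + 1)]

sizes = doubling_steps(2, 4)  # ell_0 = 2, final box ell = 32
print(sizes)                  # [2, 4, 8, 16, 32]

# Each consecutive pair (ell_k, 2*ell_k) corresponds to one replacement of an
# average over ell_k sites by an average over 2*ell_k sites; the costs of the
# M steps sum like a geometric series, uniformly in the final size ell.
assert all(b == 2 * a for a, b in zip(sizes, sizes[1:]))
```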
Let us go back to our goal estimate (5.1). From the standard convexity inequality $(a_1 + \cdots + a_p)^2 \leqslant p\,(a_1^2 + \cdots + a_p^2)$, one can see that (5.1) follows from seven independent estimates. More precisely, it is enough to prove that
(5.12) $\quad I^{\mathrm{left}}_{t,n}(g^{\to}_w) \leqslant C\Big(\frac{t\ell}{n} + \frac{t^2 n}{\ell^2}\Big)\|v\|^2_{2,n}$, for any $w \in \{\mathrm{I}, \mathrm{II}, \mathrm{III}, \mathrm{IV}, \mathrm{V}, \mathrm{VI}, \mathrm{VII}\}$.
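The convexity inequality used in this reduction can be spot-checked numerically (a throwaway sketch, not part of the argument; it is just the Cauchy–Schwarz bound on a sum of $p$ terms):

```python
# Numerical check of (a_1 + ... + a_p)^2 <= p * (a_1^2 + ... + a_p^2),
# the inequality reducing (5.1) to seven independent estimates.
import random

random.seed(0)
for _ in range(1000):
    p = random.randint(1, 7)
    a = [random.uniform(-10.0, 10.0) for _ in range(p)]
    # small slack for floating-point rounding
    assert sum(a) ** 2 <= p * sum(x * x for x in a) + 1e-9
print("inequality verified on random samples")
```

Equality holds exactly when all the $a_i$ coincide, which is why the factor $p$ (here $p = 7$) cannot be improved.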

ANNALES HENRI LEBESGUE
There is one further step in two particular cases: for $w = \mathrm{III}$ and $w = \mathrm{IV}$, we also use Minkowski's inequality, so that we can bound as in (5.13) and (5.14). Finally, the proof of (5.12) can essentially be summarized in one general statement, which we are going to apply several times. Let us state here our main estimate: Proposition 5.2. – Let $A, B$ be two subsets of $\Lambda_n$, and let us denote by $\#B$ the cardinality of $B$. We assume that, for all $x \in A$, the translated set satisfies $\tau_x B \subset \Lambda_n$. Consider $g : \Omega_n \to \mathbb{R}$ a local function whose support does not intersect $B$, namely $\mathrm{Supp}(g) \cap B = \emptyset$, and which has mean zero with respect to $\nu^n_\rho$. Then, there exists $C > 0$ such that the bound (5.15) holds for any $n \in \mathbb{N}$ and $t > 0$. Proof of Proposition 5.2. – We prove it in Section 5.3.
Before that, let us apply it to our purposes.

End of the proof of Theorem 4.2
First, let us prove that we can apply Proposition 5.2 in order to estimate $I^{\mathrm{left}}_{t,n}(g^{\to}_w)$ for $w \in \{\mathrm{I}, \dots, \mathrm{VI}\}$; the only estimate that has to be considered separately is the one involving $g^{\to}_{\mathrm{VII}}$. We prove that the assumptions of Proposition 5.2 are satisfied for $g^{\to}_{\mathrm{III}}$ and $g^{\to}_{\mathrm{IV}}$ (see also (5.13) and (5.14)), and we let the reader check the other ones. First, recall (5.13) and notice that the corresponding identity can be easily obtained by splitting each average $\vec{\eta}^{\,\ell_k}(x + \ell_k)$ and $\vec{\eta}^{\,2\ell_k}(x + 2\ell_k)$ in two parts, and subtracting the sums one by one. Let us first deal with (5.16): we can use Proposition 5.2 with the corresponding choice of $A$ and $B$. Note that $\|g\|_2^2 = C/\ell_k^3$ and $\#B = 2\ell_k$, and remember that $\ell_k = 2^k \ell_0$. Next, we deal with (5.17): we only need to change $B$, which now reads $B = \bigcup_{y=1}^{\ell_k} \{y + \ell_k, \dots, y + 3\ell_k - 1\}$, hence $\#B = 2\ell_k^2$.
Therefore, one can see from (5.13) and Proposition 5.2 that the desired bound holds. Let us treat $g^{\to}_{\mathrm{IV}}$ similarly: recall (5.14) and write the analogous decomposition. Here we choose $B = \{y, \dots, y + \ell_k - 1\}$ and then apply Proposition 5.2, which gives the same bound as before. Performing similar arguments, and using Proposition 5.2 together with Minkowski's inequality, we get that, for any $\ell, n \in \mathbb{N}$ such that $\ell < \frac{n}{4}$, and any $t > 0$, $I^{\mathrm{left}}_{t,n}(g^{\to}_w) \leqslant \frac{Ct\ell}{n}\|v\|^2_{2,n}$ for any $w \in \{\mathrm{III}, \mathrm{IV}, \mathrm{V}, \mathrm{VI}\}$.
Finally, we have to estimate the last remaining term, involving $g^{\to}_{\mathrm{VII}}$, which is treated separately. More precisely, in Section 5.4 we will prove the following: Proposition 5.3. – For any $\ell, n \in \mathbb{N}$ such that $\ell < \frac{n}{4}$, and any $t > 0$, the corresponding bound holds. Putting together the previous estimates into our decomposition (5.5)–(5.11) of $g^{\to}$, we obtain straightforwardly the final bound (5.1). We let the reader repeat all the arguments above to obtain the second part, namely (5.2). Theorem 4.2 then easily follows.
The next two sections are devoted to the proofs of Proposition 5.2 and Proposition 5.3.

Proof of Proposition 5.2
Take $A, B$ two subsets of $\Lambda_n$ such that, for all $x \in A$, $\tau_x B \subset \Lambda_n$, and take $g : \Omega_n \to \mathbb{R}$ a mean-zero function with respect to $\nu^n_\rho$ such that $\mathrm{Supp}(g) \cap B = \emptyset$. From [KLO12, Lemma 2.4], we bound the left hand side of (5.15) by a variational expression involving the Dirichlet form $D_n$ introduced in (2.2). We write the previous expectation as twice its half, and in one of the integrals we perform the change of variables $\eta \mapsto \sigma^{z,z+1}\eta$ to rewrite it accordingly. With our assumption, we have $\mathrm{Supp}(\tau_x g) \cap \tau_x B = \emptyset$ for every $x \in A$; therefore $\tau_x g(\sigma^{z,z+1}\eta) = \tau_x g(\eta)$ for all $z \in \tau_x B$, and as a consequence the last expression simplifies. For any $x \in A$ and $z \in \tau_x B$, we use Young's inequality with $\varepsilon_x > 0$, and we bound the previous expression from above by the corresponding sum over $x \in A$ and $z \in \tau_x B$. Now, since $\nu^n_\rho$ is invariant under translations and since $A \subset \Lambda_n$, it is easy to see that, if we choose $2\varepsilon_x = \#B/n^2$, then (5.21) is bounded by $n^2 D_n(f)$, and (5.20) is bounded by the right hand side of (5.15). This ends the proof.

Proof of Proposition 5.3
We have $I^{\mathrm{left}}_{t,n}(g^{\to}_{\mathrm{VII}}) \leqslant 2 I^{\mathrm{left}}_{t,n}(g_1) + 2 I^{\mathrm{left}}_{t,n}(g_2)$, where $g_1$ and $g_2$ are defined below. Let us start with the easiest term to estimate, namely $I^{\mathrm{left}}_{t,n}(g_2)$. From the Cauchy–Schwarz inequality, together with the independence of $\eta(x)$ and $\eta(y)$ under the invariant measure $\nu^n_\rho$ (as soon as $x \neq y$), one can easily obtain the desired bound. Now let us look at the term with $g_1$. From [KLO12, Lemma 2.4], we bound $I^{\mathrm{left}}_{t,n}(g_1)$ by a variational expression. For each term of the last sum above, we do the following procedure: we write it as twice its half, and in one of the integrals we make the change of variables $\eta \mapsto \sigma^{z,z+1}\eta$ (for some suitable $z$), for which the measure $\nu^n_\rho$ is invariant. After doing this, one can check that the last expression equals the sum of (5.24), (5.25) and (5.26); note that the last term (5.26) comes from the change of variables $\eta \mapsto \sigma^{x,x+1}\eta$ in the first term (5.24) above, as well as (5.25). The whole sum can be rewritten as in (5.27)–(5.28). The integral in (5.22) is exactly equal to the sum of (5.23) and (5.28); therefore it is bounded by the first term in the previous expression, namely (5.27). Now, we use the same arguments as in the proof of Proposition 5.2, namely Young's inequality with $2\varepsilon_x = \ell/n^2$, and we bound it by $\frac{C(\rho)\,\ell}{n}\|v\|^2_{2,n} + n^2 D_n(f)$, which proves that $I^{\mathrm{left}}_{t,n}(g_1) \leqslant \frac{Ct\ell}{n}\|v\|^2_{2,n}$, and the proof ends.

Uniqueness of energy solutions
In this section we give all the details for the proof of Theorem 3.3 and we show how the same arguments also apply to the proof of Theorem 3.7.
Recall that we are interested in energy solutions to the stochastic Burgers equation (6.2) with Dirichlet boundary conditions. From now on we assume, without loss of generality, that $A = 1$ and $D = 2$, and to simplify notation we simply write $E$ for the asymmetry constant; we show in this section that the equation has a unique energy solution for any $E \in \mathbb{R}$, where the notion of Dirichlet boundary conditions has been properly defined in Theorem 3.3.
To prove the uniqueness of energy solutions we will use the exact same strategy as in [GP18a] and we will sometimes refer to that paper for additional details. The main idea consists in: first, mollifying an energy solution, then mapping the mollified process through the Cole-Hopf transform to a new process, and then taking the mollification away in order to show that the transformed process solves in the limit the linear multiplicative stochastic heat equation with certain boundary conditions. However, even using the strategy of [GP18a], we have to redo all computations because our setting is somewhat different, and the boundary condition of the stochastic heat equation actually changes as we pass to the limit: in particular, it is not equal to the one we would naively guess.
Remark 6.1. -Let us notice that the definition given in [GP18a] is not exactly the same as the one we adopted in Theorem 3.3, but it is not difficult to check that they are indeed equivalent (see [GJS17,Proposition 4]), and therefore the same strategy can be implemented.
This section is split as follows: in Section 6.1 we give some tools that will be used in Section 6.2 in order to show that, in the definition of the Burgers non-linearity (namely the process $\mathcal{A}$ of Theorem 3.3), we can replace $\iota_\varepsilon$ by different approximations of the identity. We conclude in Section 6.3 with the proof of uniqueness of the energy solution $\mathcal{Y}$, by developing the strategy explained above. In the following we always denote by $\mu$ the law of the standard white noise on $\mathcal{S}_{\mathrm{Dir}}$. If $f : \mathbb{R}^n \to \mathbb{R}$ is multidimensional, then we denote by $\partial^\alpha f$ its derivative of order $\alpha \in \mathbb{N}_0^n$.

Preliminaries
In this section we give two ways to handle functionals of the form $\int_0^{\cdot} F(\mathcal{Y}_s)\,\mathrm{d}s$, where $\mathcal{Y}$ is the energy solution to (6.2) as defined in Theorem 3.3, and $F$ belongs to some general class of functions.

Itô trick and Kipnis-Varadhan Lemma
We write $\mathcal{C}$ for the space of cylinder functions $F : \mathcal{S}_{\mathrm{Dir}} \to \mathbb{R}$, i.e. such that there exist $d \in \mathbb{N}$ and $\varphi_i \in \mathcal{S}_{\mathrm{Dir}}$ ($i = 1, \dots, d$) with $F(\mathcal{Y}) = f(\mathcal{Y}(\varphi_1), \dots, \mathcal{Y}(\varphi_d))$, where $f \in C^2(\mathbb{R}^d)$ has partial derivatives of polynomial growth up to order 2. For $F \in \mathcal{C}$ we define the operator $L_0$, and its domain $\mathrm{Dom}(L_0)$ is defined as the closure in $L^2(\mu)$ of $\mathcal{C}$ with respect to the associated graph norm. First, let us take $\mathcal{Y}$ to be the Ornstein–Uhlenbeck process with Dirichlet boundary conditions, as defined in Proposition 3.1 (with $A = 1$, $D = 2$), or, equivalently, an energy solution of (6.2) with $E = 0$ (as defined in Theorem 3.3). Then, for every $F \in \mathcal{C}$ we have a martingale decomposition, where $M^F$ is a continuous martingale with explicit quadratic variation, $\nabla_u$ is the usual derivative with respect to $u$, and $D_u F$ denotes the Malliavin derivative defined in terms of the law of the white noise, i.e.
In the following, for any $\mathcal{Y} \in \mathcal{S}_{\mathrm{Dir}}$ we use the notation introduced above. Now, let $\mathcal{Y}$ be an energy solution to (6.2), the stochastic Burgers equation with Dirichlet boundary conditions, as defined in Theorem 3.3. Recall that $\mathcal{Y}_0$ is an $\mathcal{S}_{\mathrm{Dir}}$-valued white noise, hence it has law $\mu$. Since (3.5) implies that $\mathcal{A}$ has zero quadratic variation (see [GJS17, Proposition 4] for a proof), the Itô trick for additive functionals of the form $\int_0^{\cdot} L_0 F(\mathcal{Y}_s)\,\mathrm{d}s$ follows by the same arguments as in [GP18a, Proposition 3.2], i.e. by applying Itô's formula to the forward and the backward process, and adding up the resulting expressions to obtain a sort of "Lyons–Zheng decomposition" of the additive functional $\int_0^{\cdot} L_0 F(\mathcal{Y}_s)\,\mathrm{d}s$ as the sum of a forward and a backward martingale. Thus, we can prove the corresponding bound for all $F \in \mathcal{C}$ and $p \geqslant 1$; for $p = 2$ we get in particular the $L^2$ estimate. For the sake of clarity, let us define from now on the energy $\mathcal{E}(F)$, where the second equality in its definition follows from the Gaussian integration by parts rule, see [Nua06, Lemma 1.2.1].
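For orientation, here is a schematic version of the forward–backward decomposition just invoked (our sketch, written for the reversible case $E = 0$ under stationarity; symbols as in the text, constants and filtrations suppressed):

```latex
% Schematic Lyons--Zheng decomposition (a sketch, not the paper's display).
% M is the forward Dynkin martingale, \widehat M the backward one:
\begin{aligned}
M_t &:= F(\mathcal Y_t) - F(\mathcal Y_0)
        - \int_0^t L_0 F(\mathcal Y_s)\,\mathrm ds, \\
\widehat M_t &:= F(\mathcal Y_{T-t}) - F(\mathcal Y_T)
        - \int_0^t L_0 F(\mathcal Y_{T-s})\,\mathrm ds,
\end{aligned}
\qquad
\int_0^t L_0 F(\mathcal Y_s)\,\mathrm ds
  = -\tfrac12\bigl(M_t + \widehat M_T - \widehat M_{T-t}\bigr).
```

Since both sides of the decomposition are martingales (in the forward and backward filtrations respectively) with quadratic variations controlled by the energy $\mathcal{E}(F)$, the Burkholder–Davis–Gundy inequality turns this identity into the moment bounds stated above.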
From this, let us now define two Hilbert spaces which will be useful in controlling additive functionals of Y.
Definition 6.2. – Let us introduce an equivalence relation on $\mathcal{C}$ by identifying $F$ and $G$ whenever $\|F - G\|_{1,0} = 0$, so that $\|\cdot\|_{1,0}$ becomes a norm on the equivalence classes. We write $\mathcal{H}^1_0$ for the completion of the equivalence classes with respect to $\|\cdot\|_{1,0}$. For $F \in \mathcal{C}$ we define $\|F\|_{-1,0}$, and we identify $F$ and $G$ if $\|F - G\|_{-1,0} = 0$ and $\|F\|_{-1,0} < \infty$. We write $\mathcal{H}^{-1}_0$ for the completion of the equivalence classes with respect to $\|\cdot\|_{-1,0}$. Remark 6.3. – It is possible to show that $\|F\|_{1,0} = 0$ if and only if $F$ is constant, but we do not need this.
Let us now extend the Itô trick to the entire domain of $L_0$: Lemma 6.4 (Itô trick). – Let $F \in \mathrm{Dom}(L_0)$. Then $F \in \mathcal{H}^1_0$ and the corresponding bound holds. If moreover $\mathcal{E}(F) \in L^{p/2}(\mu)$ for $p \geqslant 1$, then the $p$-th moment bound holds as well. Proof of Lemma 6.4. – Since $F \in \mathrm{Dom}(L_0)$, there exists a sequence of cylinder functions $\{F_n\}_{n \in \mathbb{N}}$ in $\mathcal{C}$ such that $\|F_n - F\|_{L^2} + \|L_0(F_n - F)\|_{L^2}$ converges to zero. But then also
Remark 6.5. – If moreover $F(\mathcal{Y}_0)$ has a finite chaos expansion of length $d$ (see Subsection 6.1.2 below for the definition of the chaos expansion), then $\mathcal{E}(F)(\mathcal{Y}_0)$ has a chaos expansion of length $2(d-1)$; therefore all its moments are comparable, and we can estimate accordingly. Corollary 6.6 (Kipnis–Varadhan inequality). –
where the $p$-variation norm $\|f\|_{p\text{-var}}$ of a function $f : [a,b] \to \mathbb{R}$ is defined in the standard way. Proof of Corollary 6.6. – The usual proof by duality works. The statement about the $p$-variation is shown as in [GP18a, Corollary 3].
As a result, with Lemma 6.4 and Corollary 6.6 we are provided with two important tools, which allow us to control in some sense $\int_0^{\cdot} F(\mathcal{Y}_s)\,\mathrm{d}s$. Note that Lemma 6.4 is convenient only if one is able to write $F$ as $L_0 G$, which may not be easy. If one cannot solve the Poisson equation $F = L_0 G$, then one relies on the variational norm of $F$ given by Corollary 6.6.
The next paragraph is devoted to constructing solutions to the Poisson equation $L_0 G = F$ using the Gaussian structure of $L^2(\mu)$; this is by now standard and fully detailed in [GP18a, Section 3.2].

Gaussian analysis
In the following we develop some Gaussian analysis that is helpful for estimating the $\|\cdot\|_{1,0}$ and $\|\cdot\|_{-1,0}$ norms from above. We refer the reader to [Nua06, Jan97] for details on Malliavin calculus and chaos decompositions.
Let $\mathcal{Y}$ be a white noise on $\mathcal{S}_{\mathrm{Dir}}$ and write $\sigma(\mathcal{Y})$ for the sigma algebra generated by $\mathcal{Y}$. Then we can define a chaos expansion in $L^2(\sigma(\mathcal{Y}))$ via the multiple stochastic integrals
$$W_d(f_d) = \int f_d(y_1, \dots, y_d)\,\mathcal{Y}(y_1)\cdots\mathcal{Y}(y_d)\,\mathrm{d}y_1\cdots\mathrm{d}y_d,$$
see [Nua06, Section 1.1.2] for the construction; occasionally we will simply write $W_d(f_d)$ for the symmetrized version, where $S_d$ is the symmetric group on $\{1, \dots, d\}$.
TOME 3 (2020) The chaos expansion of F ∈ L 2 (σ(Y)) is then given by Sym ([0, 1] d )} is also the closure of the span of all random variables of the form Y → H d (Y(ϕ)), where H d is the d-th Hermite polynomial and ϕ ∈ S Dir with ϕ L 2 ([0,1]) = 1, it will be convenient to write down the action of L 0 onto these random variables. Indeed, it is well-known that From here let us define Then, from (6.9), the same arguments as in [GP18a,Lemma 3.7] show that for all symmetric functions f d ∈ S Dir ([0, 1] d ) we have (6.10) with ∆ := d k=1 ∂ kk . Therefore, the operator L 0 leaves the d-th chaos invariant, and this will be useful to solve the Poisson equation. It only remains to compute its norm W d (f d ) 1,0 , which is the goal of the remainder of this section. To do so, let us introduce other notations: similarly to our definitions in Section 2.3, we denote by H 1 Dir ([0, 1] d ) the completion of S Dir ([0, 1] d ) with respect to the norm .
Moreover, for general (not necessarily symmetric) $f_d \in \mathcal{S}_{\mathrm{Dir}}([0,1]^d)$ we obtain the inequality (6.11).
Proof of Lemma 6.7. – Integration by parts works without boundary terms because $f_d \in \mathcal{S}_{\mathrm{Dir}}([0,1]^d)$.
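Since the Hermite polynomials $H_d$ and their orthogonality under the Gaussian measure underlie the chaos decomposition above, the following minimal numerical sketch may help fix conventions (probabilists' normalization, $\mathbb{E}[H_m(X)H_n(X)] = \delta_{mn}\,m!$ for $X \sim N(0,1)$; function names are ours):

```python
# Sketch: probabilists' Hermite polynomials and their Gaussian orthogonality,
# the algebraic backbone of the chaos expansion. Names are illustrative.
import math

def hermite(d, x):
    """Probabilists' Hermite polynomial He_d via the three-term recurrence
    He_{d+1}(x) = x*He_d(x) - d*He_{d-1}(x), with He_0 = 1, He_1 = x."""
    h0, h1 = 1.0, x
    if d == 0:
        return h0
    for k in range(1, d):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def gaussian_inner(m, n, grid=20001, lim=10.0):
    """Numerical E[He_m(X) He_n(X)] for X ~ N(0,1), by the trapezoid rule."""
    dx = 2 * lim / (grid - 1)
    total = 0.0
    for i in range(grid):
        x = -lim + i * dx
        w = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        total += hermite(m, x) * hermite(n, x) * w * dx
    return total

print(round(gaussian_inner(3, 3)))  # -> 6 = 3!
print(round(gaussian_inner(2, 3)))  # -> 0 (orthogonality)
```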

Burgers/KPZ non-linearity and existence of the process A/B
All along this section, the integration spaces denoted by $L^p$, if not made precise, are in fact $L^p([0,1])$.
Lemma 6.10. – There exists a unique process $\int_0^{\cdot} \mathcal{Y}^2_s\,\mathrm{d}s \in C(\mathbb{R}_+, \mathcal{S}_{\mathrm{Neu}})$ such that, for all $\psi \in \mathcal{S}_{\mathrm{Neu}}$, all $p \geqslant 1$ and all $\rho \in L^\infty([0,1]^2)$, the bound (6.12) holds. Proof of Lemma 6.10. – The strategy to define the process $(\int_0^{\cdot} \mathcal{Y}^2_s\,\mathrm{d}s)(\psi)$ is to obtain it as the limit of
(6.13) $\quad \int_0^{\cdot}\int_0^1 \mathcal{Y}^2_s(\rho_m(u,\cdot))\,\psi(u)\,\mathrm{d}u\,\mathrm{d}s$
for some suitable sequence $\{\rho_m\}_{m \in \mathbb{N}}$. First, observe (6.14). So if $\{\rho_m\}_{m \in \mathbb{N}}$ is a sequence of functions in $L^\infty([0,1]^2)$, we can use Corollary 6.9 to solve the Poisson equation with (6.14) and $\rho = \rho_m$, and then estimate the norm of the solution with the Itô trick given in Lemma 6.4. This gives, for any $m, n \in \mathbb{N}$, a bound on the expectation of the supremum of the difference of the corresponding approximations.
To bound the norm on the right hand side we argue by duality and apply (6.11): let $f$ be a symmetric function in $\mathcal{S}_{\mathrm{Dir}}([0,1]^2)$ and consider the corresponding pairing, where $\delta_a(\cdot)$ denotes the Dirac delta function at the point $a$. Both terms on the right hand side are of the same form, so we argue for the first one only. For this purpose we decompose it further, and again only treat the first contribution. In the following chain of inequalities, each step is labelled with a symbol of the form $(*)$ over the inequality sign, in order to explain where the inequality comes from. We are going to: add and subtract $f(u, v_2)$ and use the triangle inequality (denoted by $\pm f(u,v_2)$); use the Cauchy–Schwarz inequality (C–S); use $L^\infty$ bounds ($L^\infty$); and finally use the fact that $\|\varphi\|_{L^\infty} \leqslant \|\nabla\varphi\|_{L^2}$ for any $\varphi \in \mathcal{S}_{\mathrm{Dir}}$. In the last step of the resulting bound we use the definition of the norm $\|\cdot\|_{\mathcal{H}^1}$. The other properties of $p^{\mathrm{Dir}}$ are given in Appendix C: from Lemma C.1 we obtain the required kernel estimates. Therefore, the sequence (6.13) is Cauchy, and there exists a limit in $C(\mathbb{R}_+, \mathbb{R})$ which we denote by $(\int_0^{\cdot} \mathcal{Y}^2_s\,\mathrm{d}s)(\psi)$. Making use of the bound in terms of $\psi$, similar arguments as in Section 4.3 show that in fact $\psi \mapsto \int_0^{\cdot}\int_0^1 \mathcal{Y}^2_s(\rho_m(u,\cdot))\,\psi(u)\,\mathrm{d}u\,\mathrm{d}s$ converges in $C(\mathbb{R}_+, \mathcal{S}_{\mathrm{Neu}})$. By the computation above, the limit satisfies (6.12), and that estimate clearly identifies $\int_0^{\cdot} \mathcal{Y}^2_s\,\mathrm{d}s$ uniquely as the limit of $\int_0^{\cdot} \mathcal{Y}^2_s(\rho_m(u,\cdot))\,\mathrm{d}s$. Corollary 6.11. – We have $-(\int_0^{\cdot} \mathcal{Y}^2_s\,\mathrm{d}s)(\nabla\varphi) = \mathcal{A}_{\cdot}(\varphi)$ for all $\varphi \in \mathcal{S}_{\mathrm{Dir}}$, where $\mathcal{A}$ denotes the process defined in (3.7) from the statement of Theorem 3.3.
By a similar interpolation argument as in the proof of [GP18a, Corollary 3.17] we get the following result. In particular, together with Corollary 6.11, it follows that $\mathcal{B}(\varphi)$ has zero quadratic variation.

Mapping to the stochastic heat equation and conclusion
To prove the uniqueness of our energy solution $\mathcal{Y}$ to (6.2), we would like to apply the Cole–Hopf transform, mapping $\mathcal{Y}$ to a solution of the well-posed stochastic heat equation. To do so, we should integrate $\mathcal{Y}$ in the space variable and then exponentiate the resulting process. But since we only have an explicit description of the dynamics of $\mathcal{Y}$ after testing against a test function $\varphi \in \mathcal{S}_{\mathrm{Dir}}$, we should first mollify $\mathcal{Y}$ with a kernel in $\mathcal{S}_{\mathrm{Dir}}$ before carrying out this program.
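For orientation, the deterministic computation behind this program can be sketched as follows (a formal chain-rule calculation for smooth functions with unit diffusion coefficient, ignoring the noise, the Itô correction and the boundary terms that the rigorous argument must track):

```latex
% Formal Cole--Hopf computation: a sketch, not the paper's display.
% If \Phi = e^{E Z} for smooth Z, then
\partial_u \Phi = E\,\Phi\,\partial_u Z, \qquad
\partial_u^2 \Phi = E\,\Phi\,\partial_u^2 Z + E^2\,\Phi\,(\partial_u Z)^2,
\qquad\Longrightarrow\qquad
\partial_t \Phi - \partial_u^2 \Phi
  = E\,\Phi\,\bigl(\partial_t Z - \partial_u^2 Z - E\,(\partial_u Z)^2\bigr).
```

At this formal level, $\Phi = e^{EZ}$ solves a (multiplicative) heat equation precisely when $Z$ solves a KPZ-type equation with nonlinearity $E(\partial_u Z)^2$; in the stochastic setting the Itô correction produces the renormalization constants, and, as emphasized below, the boundary condition changes as the mollification is removed.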
We find it convenient to mollify $\mathcal{Y}$ with the Dirichlet heat kernel $p^{\mathrm{Dir}}$ which was defined in (6.15), and we set $\mathcal{Y}^\varepsilon$ accordingly. Note that $\Theta^\varepsilon_u \in \mathcal{S}_{\mathrm{Dir}}$ for all $u \in [0,1]$ and $\varepsilon > 0$, so to integrate $\mathcal{Y}^\varepsilon$ in the space variable we set $Z^\varepsilon_t(u) = \mathcal{Y}_t(\Theta^\varepsilon_u)$, which is a smooth function with $\nabla Z^\varepsilon = \mathcal{Y}^\varepsilon$. Then we obtain a decomposition in which $\mathcal{A}$ is the non-linearity that has been previously defined, and $p^{\mathrm{Neu}}$ is the heat kernel with Neumann boundary conditions, for $u, v \in [0,1]$ and $\varepsilon > 0$. To shorten the notation we write $K^\varepsilon_u$ for the corresponding deterministic function. In the next section we will show the following results concerning the three additional terms $R^\varepsilon_t(\varphi)$, $Q^\varepsilon_t(\varphi)$ and $K^\varepsilon_u$: Lemma 6.13. – We have, for all $p > 2$ and $\varphi \in C([0,1])$, the convergence to zero of $R^\varepsilon(\varphi)$. To control the term $Q^\varepsilon(\varphi)$, we need to assume that $\varphi$ satisfies suitable boundary conditions. But this will not be a problem, because the boundary condition is compatible with our formulation of the stochastic heat equation with Robin boundary conditions, as defined in Section 3.4.
Remark 6.15. – In the definition of $Q^\varepsilon_t(\varphi)$ we have to subtract the Dirac deltas at $0$ and $1$ in order to see the convergence to zero. Since these terms are not present in the formula for $\mathrm{d}\Psi^\varepsilon_t(\varphi)$, we have to add them back in (times the prefactor $E^2/2$). This is exactly the correction to the boundary condition that we discussed in Remark 3.16.
This uniform continuity in $u$ also gives $\Psi_t(u) = \lim_{\varepsilon \to 0} \Psi^\varepsilon_t(u)$. Given these lemmas, the same arguments as in the proof of [GP18a, Theorem 2.4] (see more precisely [GP18a, Section 4.1]) show that the process $\Psi$ solves the limiting equation. Therefore, setting $\Phi$ as in (6.21), we have $\Phi_0 = \Psi_0$ and $\mathrm{d}X_t = \frac{E^2}{2}\,\mathrm{d}t$ (recall that $\mathcal{B}(1)$ has zero quadratic variation). Moreover, for $\varphi \in \mathcal{S}_{\mathrm{Neu}}$, Itô's formula gives the corresponding dynamics. Moreover, $\Phi$ is locally uniformly bounded in $L^2(\mathbb{P})$; more precisely, $\Phi \in L^2_C([0,T])$, as given by the following: Lemma 6.18. – Consider the process $\Phi$ defined in (6.21). There exists $T > 0$ such that the supremum in the statement is finite. Proof of Lemma 6.18. – The proof is basically the same as the one given for [GP18a, Lemma B.1], and therefore we omit it.
In other words, $\{\Phi_t \,;\, t \in [0,T]\}$ is a weak solution to the stochastic heat equation with Robin boundary conditions, as defined in Proposition 3.10. By the uniqueness property given in that proposition, on $[0,T]$ the process $\Phi$ is equal to the unique weak solution of (3.11). Then Lemma D.
which can be easily verified for $\mathcal{Y}^\varepsilon$ and $\Psi^\varepsilon$ and then carries over to the limit $\varepsilon \to 0$; the uniqueness of $\mathcal{Y}$ follows from that of $\Phi$.
As in the proof of [GP18a, Theorem 2.10] we also obtain the uniqueness of the almost stationary energy solution to the KPZ equation, i.e. Theorem 3.7.

Convergence of the remainders
In this section we prove successively Lemma 6.13, Lemma 6.14 and Lemma 6.16.
6.4.1. Proof of Lemma 6.13
Recall that $R^\varepsilon(\varphi)$ was defined in (6.17) and that $\mathcal{B}_t(\psi) = (\int_0^t \mathcal{Y}^2_s\,\mathrm{d}s)(\psi)$. Given $\delta > 0$ we approximate $R^\varepsilon$ by $R^{\varepsilon,\delta}$, where $K^{\varepsilon,\delta}$ will be defined in equation (6.25) below. Provided that $K^{\varepsilon,\delta}$ converges to $K^\varepsilon$ in $L^2([0,1])$ as $\delta \to 0$, we obtain the required convergence from Lemma 6.12, which holds because we can control the integral as a Young integral: indeed, by Lemma 6.12 the integrator converges in $\alpha$-Hölder norm for any $\alpha < \frac34$, and the integrand is almost surely $\beta$-Hölder continuous for any $\beta < \frac12$, so that $\alpha + \beta > 1$; see [You36] or [LCL07, Section 1.3] for details on the Young integral. So, to prove the convergence claimed in Lemma 6.13, it suffices to show that $R^{\varepsilon,\delta}(\varphi)$ vanishes in the same sense as in (6.20), as first $\delta \to 0$ and then $\varepsilon \to 0$.
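For orientation, the Young integral invoked here is the limit of left-point Riemann–Stieltjes sums, well defined as soon as the Hölder exponents of integrand and integrator sum to more than $1$ [You36]. A minimal numerical sketch with smooth toy data (function name is ours):

```python
# Sketch: a Young integral int_0^1 f dg as a limit of left-point
# Riemann-Stieltjes sums. It converges whenever f is beta-Holder and g is
# alpha-Holder with alpha + beta > 1; smooth toy functions for illustration.

def young_integral(f, g, n):
    """Left-point Riemann-Stieltjes sum of f dg over [0, 1] with n steps."""
    ts = [k / n for k in range(n + 1)]
    return sum(f(ts[k]) * (g(ts[k + 1]) - g(ts[k])) for k in range(n))

approx = young_integral(lambda t: t, lambda t: t * t, 100000)
print(approx)  # close to int_0^1 t d(t^2) = int_0^1 2 t^2 dt = 2/3
```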
By Corollary 6.6 it suffices to show that $\lim_{\varepsilon \to 0}\lim_{\delta \to 0} \|r^{\varepsilon,\delta}(\cdot)\|_{-1,0} = 0$, where the random variable $r^{\varepsilon,\delta}$ was defined in (6.23). Note that $r^{\varepsilon,\delta}$ satisfies the assumption of Corollary 6.6: it is clearly in $L^2(\mu)$, because all the kernels appearing in its definition are bounded continuous functions. The fact that $r^{\varepsilon,\delta}$ is in $\mathcal{H}^{-1}_{\mathrm{Dir}}$ is not obvious, but it will be a consequence of our estimates below.
The following lemma, which is a simple consequence of the estimates we derived so far, will be useful for bounding both constants $A^{\varepsilon,\delta}$ and $B^{\varepsilon,\delta}$ of Lemma 6.19: Lemma 6.21. – The function $g^{\varepsilon,\delta}_u \in \mathcal{H}^1_{\mathrm{Dir}}$ is uniformly bounded in $(\delta, u) \in (0,1] \times [0,1]$ and satisfies, for all $\varepsilon \in (0,1]$, the estimate below. Proof of Lemma 6.21. – From Lemma 6.20 (note that $g^{\varepsilon,\delta}_u$ is symmetric in its two arguments) we obtain the claimed bound, where the last estimate comes from Lemma C.1 and Lemma C.2 of the Appendix.
Finally we are now able to state and prove that $A^{\varepsilon,\delta}$ and $B^{\varepsilon,\delta}$ vanish. Proof of Lemma 6.22. – Recall that $\Psi^\varepsilon_0(u) = e^{E Z^\varepsilon_0(u)} = e^{E \mathcal{Y}_0(\Theta^\varepsilon_u)}$, the exponential of a Gaussian variable, so its moments are explicit. As a result, from Lemma 6.21 together with the dominated convergence theorem we get the convergence of $B^{\varepsilon,\delta}$. To complete the proof of Lemma 6.13 we need to control $A^{\varepsilon,\delta}$, which is achieved in the next lemma. Proof of Lemma 6.23. – We expand (6.26), and as in the proof of [GP18a, Lemma 4.7] an integration by parts gives (6.27).
For the second term on the right hand side we simply use the Cauchy–Schwarz inequality, where in the last step we use Lemma 6.21 and the symmetry of $g^{\varepsilon,\delta}_u$. So when we plug this contribution into (6.26), inside the integration with respect to $u$ and $u'$, one easily shows that it vanishes as $\varepsilon \to 0$. We are left with bounding the first contribution coming from (6.27). We set $V^\varepsilon(u) = |\varphi(u)|\,\mathbb{E}[\Psi^\varepsilon_0(u)^2]^{1/2}$ and obtain, from Lemma 6.20 and Lemma C.2 as well as Lemma C.1, a bound which also vanishes as $\varepsilon \to 0$.
Now, let us take $\varphi \in \mathcal{S}_{\mathrm{Neu}}$ and, as before, denote $\varphi_x = \varphi(\frac{x}{n})$. We start by treating the first term in (A.7), and more precisely $\Delta h(x)$. Since our goal is to see the height fluctuation field appear, as soon as $h(x)$ is replaced with the time dependent configuration $h^n_{sn^2}(x)$ we have to recenter the heights, as follows: using the fact that $\Delta c = 0$ for any constant $c$, a simple computation (discrete integration by parts) shows the identity in which $\Delta_n \varphi_x$ has been defined in (4.23). Note that the term (A.8) corresponds to $\mathcal{Z}^n_s(\Delta_n \varphi)$, while the terms (A.9) and (A.10), when integrated in time between $0$ and $t$, will give the contribution $R^n_t(\varphi)$ (see (4.22)). We are left with the second term in (A.7), which we put directly into the martingale decomposition as follows: from the computation above and from Dynkin's formula, the corresponding compensated quantity is a martingale, where the last term (A.12) cancels out with the term carrying the factor $-\frac14$ in (A.11) (since $c_n = -E n^{2-\gamma}/4$), and then (4.21) follows.

Appendix B. Proofs of Proposition 3.14 and Proposition 3.15: boundary behavior
As explained in Section 6, we may assume without loss of generality that $A = 1$ and $D = 2$, and we simply write $E$ for the asymmetry constant; therefore, in Proposition 3.15 we have $\frac{D E^2}{4 A^3} = \frac{E^2}{2}$. Proof of Proposition 3.14. – Let $\mathcal{Y}$ and $\rho_\varepsilon$ be as in the assumptions of Proposition 3.14. The map $\mathcal{Y}(\rho_\varepsilon) = W_1(\rho_\varepsilon)(\mathcal{Y})$ is in the first chaos, so Lemma 6.4 applies, and as in the proof of Proposition 3.14 we see that the right hand side converges to zero as $\varepsilon \to 0$. To treat the second term in (B.1), recall that we showed in the proof of Lemma 6.24 that $\lim_{\delta \to 0} \Theta^{2\delta}_u(u) = u - \frac12$ for all $u \in (0,1)$. Therefore, using the reflection principle for Brownian motion and standard tail estimates for the normal distribution, we get $|\langle p^{\mathrm{Dir}}_t(u,\cdot), 1\rangle - 1| \lesssim \frac{t^{1/2}}{u(1-u)} \wedge 1$, which leads to the desired bound. For the Neumann heat kernel we have $\langle p^{\mathrm{Neu}}_t(u,\cdot), 1\rangle - 1 \equiv 0$, so the last bound is trivial. The remaining bounds for the Neumann kernel follow once we know that $p^{\mathrm{Neu}}_t(u,v) \lesssim t^{-1/2} e^{-(u-v)^2/4t}$ uniformly in $t \in (0,1]$, which is basically (3.7) in [Wal86].
Lemma C.2. – The difference between the Neumann and Dirichlet heat kernels is bounded in $L^1$. We know that $p^{\mathrm{Neu}}$ (resp. $p^{\mathrm{Dir}}$) is the transition density of a Brownian motion that is reflected (resp. killed) at $0$ and $1$, both with speed $2$. We write $\mathbb{P}^{\mathrm{Neu}}_u$ (resp. $\mathbb{P}^{\mathrm{Dir}}_u$) for the law of the reflected (resp. killed) Brownian motion with speed $2$, both started at $u$, while $\mathbb{P}_u$ is the law of the (usual) Brownian motion with speed $2$, started at $u$. Then we have, for all Borel sets $A \in \mathcal{B}(\mathbb{R})$, a comparison of the two laws, and therefore an estimate where the last step follows as in the proof of Lemma C.1. We now take the supremum over $A \in \mathcal{B}([0,1])$ and get a bound for the total variation norm, and thus for the $L^1$-norm of the difference of the densities, where the last step follows from Lemma C.1. Since $p > 6$ we have $q = p/(p-2) < \frac32$, and therefore $(t-s)^{1/2-q}$ is integrable on $[0,t]$ and the integral is uniformly bounded in $t \in [0,T]$. Thus, we have shown that $V_t = \sup_{u \in [0,1]} \mathbb{E}[|\Phi_t(u)|^p]$ satisfies a Gronwall-type inequality on $[0,T]$ with constant $M$, where the last step is a simple computation: after expanding the product, the most complicated integrand is $(t-r)^{-1/2}(s-r)^{-1/2}$, for which the integral is computed in [Wal86, p. 315]. Now the claim follows from Gronwall's Lemma.
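Both kernels can be written via the method of images, which also makes the comparison between the killed and the reflected kernel easy to check numerically. A minimal sketch (using the standard generator $\frac12 \partial_u^2$ rather than the paper's speed-$2$ normalization, which does not affect the comparison; names are ours):

```python
# Sketch: Dirichlet (killed) and Neumann (reflected) heat kernels on [0, 1]
# via the method of images, for Brownian motion with generator (1/2) d^2/du^2.
import math

def gauss(t, x):
    """Centered Gaussian density with variance t."""
    return math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)

def p_dir(t, u, v, terms=50):
    """Killed-at-{0, 1} kernel: alternating image charges."""
    return sum(gauss(t, v - u + 2 * k) - gauss(t, v + u + 2 * k)
               for k in range(-terms, terms + 1))

def p_neu(t, u, v, terms=50):
    """Reflected-at-{0, 1} kernel: all image charges positive."""
    return sum(gauss(t, v - u + 2 * k) + gauss(t, v + u + 2 * k)
               for k in range(-terms, terms + 1))

t, u = 0.1, 0.3
mass_neu = sum(p_neu(t, u, (j + 0.5) / 1000) for j in range(1000)) / 1000
print(mass_neu)           # ~1: reflection conserves probability mass
print(p_dir(t, u, 0.0))   # ~0: the killed kernel vanishes at the boundary
# Pointwise comparison: killing only removes mass, so p_dir <= p_neu.
assert all(p_dir(t, u, v / 20) <= p_neu(t, u, v / 20) + 1e-12
           for v in range(21))
```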
• If, however, $\varphi \in \mathcal{S}_{\mathrm{Neu}}$, then $n(\varphi_1 - \varphi_0) \to 0$ as $n \to \infty$, and therefore only one term remains in the second summand of (E.12). In the macroscopic limit, this last term will correspond to

E.3. Exponential moments and quadratic variation
Finally, one might check that the quadratic variation (E.13) converges to the expected limit, where $\Phi_{\cdot}$ is the limit of the current field $J^n_{\cdot}$ in $D([0,T], C([0,1]))$. Heuristically, this is indeed the case if one is able to replace $\xi^n_{sn^2}(x)^2$ in (E.13) with $(\mathbb{E}[\xi^n_{sn^2}(x)])^2$. This could be proved by using some ideas taken from [GLM17, Lemma 4.3], which permit one to control all the exponential moments, via bounds on $\sup_{x \in \{1,\dots,n\}} \mathbb{E}_\rho\big[\big|\xi^n_{sn^2}(x) - \mathbb{E}[\xi^n_{sn^2}(x)]\big|^k\big]$ with $k \in \mathbb{N}$.