Probabilistic cellular automata with memory two: invariant laws and multidirectional reversibility

We focus on a family of one-dimensional probabilistic cellular automata with memory two: the dynamics is such that the value of a given cell at time $t+1$ is drawn according to a distribution which is a function of the states of its two nearest neighbours at time $t$, and of its own state at time $t-1$. Such PCA naturally arise in the study of some models coming from statistical physics ($8$-vertex model, directed animals and gas models, TASEP, etc.). We give conditions under which the invariant measure has a product form or a Markovian form, and we prove an ergodicity result holding in that context. The stationary space-time diagrams of these PCA present different forms of reversibility. We describe and study this phenomenon extensively; it provides families of Gibbs random fields on the square lattice having nice geometric and combinatorial properties.

We review from a PCA approach some results on the $8$-vertex model and on the enumeration of directed animals, and we also show that our methods allow us to obtain new results for an extension of the classical TASEP model. As another original result, we describe some families of PCA for which the invariant measure can be explicitly computed, although it does not have a simple product or Markovian form.

Introduction
Probabilistic cellular automata (PCA) are a class of random discrete dynamical systems. They can be seen both as the synchronous counterparts of finite-range interacting particle systems, and as a generalization of deterministic cellular automata: time is discrete and at each time step, all the cells are updated independently in a random fashion, according to a distribution depending only on the states of a finite number of their neighbours.
In this article, we focus on a family of one-dimensional probabilistic cellular automata with memory two (or order two): the value of a given cell at time t + 1 is drawn according to a distribution which is a function of the states of its two nearest neighbours at time t, and of its own state at time t − 1. The space-time diagrams describing the evolution of the states can thus be represented on a two-dimensional grid.
We study the invariant measures of these PCA with memory two. In particular, we give necessary and sufficient conditions for which the invariant measure has a product form or a Markovian form, and we prove an ergodicity result holding in that context. We also show that when the parameters of the PCA satisfy some conditions, the stationary space-time diagram presents some multidirectional (quasi-)reversibility property: the random field has the same distribution as if we had iterated a PCA with memory two in another direction (the same PCA in the reversible case, or another PCA in the quasi-reversible case). This can be seen as a probabilistic extension of the notion of expansivity for deterministic CA. For expansive CA, one can indeed reconstruct the whole space-time diagram from the knowledge of only one column. In the context of PCA with memory two, the criteria of quasi-reversibility that we obtain are reminiscent of the notion of permutivity for deterministic CA. Stationary space-time diagrams of PCA are known to be Gibbs random fields [GKLM89, LMS90]. The family of PCA that we will describe thus provides examples of Gibbs fields with i.i.d. lines in many directions and nice combinatorial and geometric properties.
The first theoretical results on PCA and their invariant measures go back to the seventies [BGM69, KV80, Vas78], and were then gathered in a survey which is still a reference book today [TVS+90]. In particular, it contains a detailed study of binary PCA of memory one with only two neighbours, including a presentation of the necessary and sufficient conditions that the four parameters defining the PCA must satisfy in order to have an invariant measure with a product form or a Markovian form. Some extensions and alternative proofs were proposed by Mairesse and Marcovici in a later article [MM14b], together with a study of some properties of the random fields given by stationary space-time diagrams of PCA having a product form invariant measure (see also the survey on PCA by the same authors [MM14a]). The novelty was to highlight that these space-time diagrams are i.i.d. along many directions, and present a directional reversibility: they can also be seen as being obtained by iterating some PCA in another direction. Soon after, Casse and Marckert proposed an in-depth study of the Markovian case [Cas16, CM15]. Motivated by the study of the $8$-vertex model, Casse was then led to introduce a class of one-dimensional PCA with memory two, called triangular PCA [Cas18].
In the present article, we propose a comprehensive study of PCA with memory two having an invariant measure with a product form, and we show that their stationary space-time diagrams share some specificities. We first extend the notion of reversibility and quasi-reversibility to take into account other symmetries than the time reversal, and we then characterize PCA with an invariant product measure that are reversible or quasi-reversible. Even if most one-dimensional positive-rates PCA are usually expected to be ergodic, the ergodicity of PCA is known to be a difficult problem, algorithmically undecidable [BMM13, TVS+90]. In Section 3, after characterizing positive-rates PCA having a product invariant measure, we prove that these PCA are ergodic (Theorem 3.3). A novelty of our work is also to display some PCA for which the invariant measure has neither a product form nor a Markovian one, but for which the finite-dimensional marginals can be exactly computed (Corollaries 4.10 and 5.7). In Section 5, we study PCA having Markov invariant measures. Section 6 is then devoted to the presentation of some applications of our models and results to statistical physics ($8$-vertex model, enumeration of directed animals, TASEP). In particular, we introduce an extension of the TASEP model, in which the probability for a particle to move depends on the distance to the previous particle and on its speed. It can also be seen as a traffic flow model, more realistic than the classical TASEP model. When describing the families of PCA presenting a given directional reversibility or quasi-reversibility property, we give, for each family of PCA involved, the conditions that the parameters of the PCA must satisfy in order to present that behaviour, and we provide the dimension of the corresponding submanifold of the parameter space, see Table 2.1.
Our purpose is to show that despite their specificity, these PCA build up rich classes, and we set out the detail of the computations of the dimensions in the last section.

Introductory example
In this paragraph, we give a first introduction to PCA with memory two, using an example motivated by the study of the 8-vertex model [Cas18]. We present some properties of the stationary space-time diagram of this PCA: although it is a nontrivial random field, it is made of lines of i.i.d. random variables, and it is reversible. In the rest of the article, we will study exhaustively the families of PCA having an analogous behaviour. We will also come back to the 8-vertex model in Section 6.
Let us set $\mathbb{Z}^2_e = \{(i, t) \in \mathbb{Z}^2 : i + t \equiv 0 \bmod 2\}$, and introduce the notations: $\mathbb{Z}_t = 2\mathbb{Z}$ if $t \in 2\mathbb{Z}$, and $\mathbb{Z}_t = 2\mathbb{Z} + 1$ if $t \in 2\mathbb{Z} + 1$, so that the grid $\mathbb{Z}^2_e$ can be seen as the union over $t \in \mathbb{Z}$ of the points $\{(i, t) : i \in \mathbb{Z}_t\}$, which will contain the information on the state of the system at time $t$. Note that one can scroll the positions corresponding to two consecutive time steps along a horizontal zigzag line: $\ldots, (i, t), (i + 1, t + 1), (i + 2, t), (i + 3, t + 1), \ldots$ This will explain the terminology introduced later.
Let us assume that initially, $(\eta_0, \eta_1)$ is distributed according to the uniform product measure $\lambda = \mathcal{B}(1/2)^{\otimes \mathbb{Z}_0} \otimes \mathcal{B}(1/2)^{\otimes \mathbb{Z}_1}$. Then, we can show that for any $t \in \mathbb{N}$, $(\eta_t, \eta_{t+1})$ is also distributed according to $\lambda$. We will say that the PCA has an invariant Horizontal Zigzag Product Measure. By stationarity, we can then extend the space-time diagram to a random field with values in $S^{\mathbb{Z}^2_e}$. The study of the space-time diagram shows that it has some peculiar properties, which we will make precise in the next sections. In particular, it is quasi-reversible: if we reverse the direction of time, the random field corresponds to the stationary space-time diagram of another PCA. Furthermore, the PCA is ergodic: whatever the distribution of $(\eta_0, \eta_1)$, the distribution of $(\eta_t, \eta_{t+1})$ converges weakly to $\lambda$ (meaning that for any $n \in \mathbb{N}$, the restriction of $(\eta_t, \eta_{t+1})$ to the cells of abscissa ranging between $-n$ and $n$ converges to a uniform product measure). For $q = r$, the stationary space-time diagram presents even more symmetries and directional reversibilities: it has the same distribution as if we had iterated the PCA in any of the four cardinal directions. In addition, any straight line drawn along the space-time diagram is made of i.i.d. random variables, see Figure 2.2 for an illustration. We will show that this PCA belongs to a more general class of PCA that are all ergodic and whose stationary space-time diagrams share specific properties (independence, directional reversibility).

PCA with memory two and their invariant measures
In this article, we only consider PCA with memory two for which the value of a given cell at time t + 1 is drawn according to a distribution which is a function of the states of its two nearest neighbours at time t, and of its own state at time t − 1. We thus introduce the following definition of transition kernel and of PCA with memory two.
Definition 2.1. - Let $S$ be a finite set, called the alphabet. A transition kernel is a function $T$ that maps any $(a, b, c) \in S^3$ to a probability distribution on $S$. We denote by $T(a, b, c; \cdot)$ the distribution on $S$ which is the image of the triplet $(a, b, c) \in S^3$, so that: $\forall (a, b, c) \in S^3$, $\sum_{d \in S} T(a, b, c; d) = 1$. A probabilistic cellular automaton (PCA) with memory two of transition kernel $T$ is a Markov chain of order two $(\eta_t)_{t \geqslant 0}$ such that $\eta_t$ has values in $S^{\mathbb{Z}_t}$, and conditionally on $\eta_t$ and $\eta_{t-1}$, for any $i \in \mathbb{Z}_{t+1}$, $\eta_{t+1}(i)$ is distributed according to $T(\eta_t(i-1), \eta_{t-1}(i), \eta_t(i+1); \cdot)$, independently for different cells; see Figure 2.1 for an illustration. We say that a PCA has positive rates if its transition kernel $T$ is such that $\forall\, a, b, c, d \in S$, $T(a, b, c; d) > 0$. One can consider a PCA of order 2 on $S$ as a PCA of order 1 on $S^2$, but the resulting PCA then does not have positive rates, which leads to significant difficulties. We thus introduce specific tools for the study of PCA of order 2.
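The local rule of Definition 2.1 is easy to sketch in code. The following minimal simulation runs on a finite ring of $n$ cells, a simplification of ours: on a ring we index the cells by $0, \ldots, n-1$ at every time step instead of using the two-colour lattice $\mathbb{Z}^2_e$. The function name and the dictionary encoding of the kernel are our own choices, not notation from the text.

```python
import random

def pca_step(T, prev, curr, rng=random):
    """One synchronous update of a PCA with memory two on a ring.

    `prev`, `curr`: configurations at times t-1 and t (lists of equal length).
    `T`: dict mapping a triplet (left neighbour at t, own state at t-1,
    right neighbour at t) to a dict {state: probability}.
    Each cell is updated independently, as in Definition 2.1.
    """
    n = len(curr)
    nxt = []
    for i in range(n):
        dist = T[(curr[(i - 1) % n], prev[i], curr[(i + 1) % n])]
        states = list(dist)
        nxt.append(rng.choices(states, weights=[dist[s] for s in states])[0])
    return nxt

# Example: a binary kernel ignoring its inputs (every cell is a fair coin flip).
T = {(a, b, c): {0: 0.5, 1: 0.5} for a in (0, 1) for b in (0, 1) for c in (0, 1)}
eta2 = pca_step(T, [0] * 8, [1] * 8)
```

Iterating `pca_step` with the pair `(prev, curr)` shifted at each step produces a finite-window approximation of a space-time diagram.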
Let $\mu$ be a distribution on $S^{\mathbb{Z}_{t-1}} \times S^{\mathbb{Z}_t}$, and let us introduce the two basis vectors $u = (-1, 1)$ and $v = (1, 1)$ of $\mathbb{Z}^2_e$. We denote by $\sigma_v(\mu)$ the distribution on $S^{\mathbb{Z}_t} \times S^{\mathbb{Z}_{t+1}}$ which is the image of $\mu$ by the translation of vector $v$. When considering the distribution $\mu$ as living on two consecutive horizontal lines of the lattice $\mathbb{Z}^2_e$, corresponding to times $t - 1$ and $t$, the distribution $\sigma_v(\mu)$ thus corresponds to shifting $\mu$ by the vector $v = (1, 1)$. Similarly, we denote by $\sigma_{v-u}(\mu)$ the distribution on $S^{\mathbb{Z}_{t-1}} \times S^{\mathbb{Z}_t}$ which is the image of $\mu$ by the horizontal translation of vector $v - u = (2, 0)$. For our specific context of PCA with memory two, we introduce the following definitions.
• The distribution $\mu$ on $S^{\mathbb{Z}_0} \times S^{\mathbb{Z}_1}$ is an invariant distribution of a PCA with memory two if the PCA dynamics is such that: $(\eta_0, \eta_1) \sim \mu \implies (\eta_1, \eta_2) \sim \sigma_v(\mu)$. By a standard compactness argument, one can prove that any PCA has at least one invariant distribution which is shift-invariant. In this article, we will focus on such invariant distributions. Note that if $\mu$ is both a shift-invariant measure and an invariant distribution of a PCA, then we also have $(\eta_0, \eta_1) \sim \mu \implies (\eta_1, \eta_2) \sim \sigma_u(\mu)$.
Observe that we do not specify $t$ in the notation, since there will be no possible confusion. By Definition 2.2, $\pi_p$ is invariant for a PCA if: $(\eta_0, \eta_1) \sim \pi_p \implies (\eta_1, \eta_2) \sim \pi_p$.

Stationary space-time diagrams and directional (quasi-)reversibility
Let $A$ be a PCA and $\mu$ one of its invariant measures. Let $G_k = (\eta_t(i) : t \in \{-k, \ldots, k\}, i \in \mathbb{Z}_t)$ be a space-time diagram of $A$, started at time $t = -k$ under its invariant measure $\mu$, until time $t = k$. Then $(G_k)_{k \geqslant 0}$ induces a sequence of compatible measures and, by Kolmogorov's extension theorem, defines a unique measure on $S^{\mathbb{Z}^2_e}$, which we denote by $L(A, \mu)$.
We will use the notation G(A, µ) to represent a stationary space-time diagram of A taken under µ, that is, a random field with distribution L(A, µ).
We denote by $D_4$ the dihedral group of order 8, that is, the group of symmetries of the square. We denote by $r$ the rotation of angle $\pi/2$ and by $h$ the horizontal reflection. We denote the vertical reflection by $v = r^2 \circ h$, and the identity by $\mathrm{id}$. For a subset $E$ of $D_4$, we denote by $\langle E \rangle$ the subgroup of $D_4$ generated by the elements of $E$.
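As a quick sanity check of these group-theoretic facts, including the identity $v = r^2 \circ h$ used again in Remark 2.8, one can represent the elements of $D_4$ as $2 \times 2$ integer matrices and close a generating set under multiplication. The matrix encoding below is our own choice.

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices stored as tuples of tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def generate(gens):
    """Subgroup of GL(2, Z) generated by `gens` (closure under products;
    in a finite group this recovers the whole generated subgroup)."""
    elems = set(gens)
    changed = True
    while changed:
        changed = False
        for A in list(elems):
            for B in list(elems):
                C = mat_mul(A, B)
                if C not in elems:
                    elems.add(C)
                    changed = True
    return elems

r = ((0, -1), (1, 0))          # rotation of angle pi/2
h = ((1, 0), (0, -1))          # horizontal reflection (x, y) -> (x, -y)
v = mat_mul(mat_mul(r, r), h)  # vertical reflection v = r^2 . h

assert v == ((-1, 0), (0, 1))          # (x, y) -> (-x, y)
assert len(generate({r, h})) == 8      # D4 has order 8
assert len(generate({r, v})) == 8      # <r, v> = D4, as used in Remark 2.8
```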
Definition 2.5. - Let $A$ be a positive-rates PCA, and let $\mu$ be an invariant measure of $A$ which is shift-invariant. For $g \in D_4$, we say that $(A, \mu)$ is $g$-quasi-reversible if there exists a PCA $A_g$ and a measure $\mu_g$ such that the associated stationary space-time diagrams satisfy $g(G(A, \mu)) \sim G(A_g, \mu_g)$. In this case, the pair $(A_g, \mu_g)$ is the $g$-reverse of $(A, \mu)$. If, moreover, $(A_g, \mu_g) = (A, \mu)$, then $(A, \mu)$ is said to be $g$-reversible.
For a subset E of D 4 , we say that A is E-quasi-reversible (resp. E-reversible) if it is g-quasi-reversible (resp. g-reversible) for any g ∈ E.
Classical definitions of quasi-reversibility and reversibility of PCA correspond to time reversal, that is, $h$-quasi-reversibility and $h$-reversibility. Geometrically, $(A, \mu)$ is $g$-quasi-reversible if, after the action of the isometry $g$, the stationary space-time diagram has the same distribution as if we had iterated another PCA $A_g$ (or the same PCA $A$, in the reversible case). In particular, if $(A, \mu)$ is $r$-quasi-reversible (resp. $r^2$-, $r^3$-quasi-reversible), it means that even if the space-time diagram is originally defined by an iteration of the PCA $A$ towards the North, it can also be described as the stationary space-time diagram of another PCA directed to the East (resp. to the South, to the West).
The stationary space-time diagram of a PCA (see Definition 2.4) is a random field indexed by $\mathbb{Z}^2_e$. For a point $x = (i, t) \in \mathbb{Z}^2_e$, we will also use the notation $\eta(x) = \eta(i, t) = \eta_t(i)$, and for a family $L \subset \mathbb{Z}^2_e$, we define $\eta(L) = (\eta(x))_{x \in L}$. The following Lemma 2.6 proves that the space-time diagram of a positive-rates PCA characterizes its dynamics. Precisely, if two positive-rates PCA $A$ and $A'$ have the same space-time diagram (in law) taken under their respective invariant measures $\mu$ and $\mu'$, then $A = A'$ and $\mu = \mu'$.
Proof. - Let $G = (\eta_t(i) : t \in \mathbb{Z}, i \in \mathbb{Z}_t)$ and $G' = (\eta'_t(i) : t \in \mathbb{Z}, i \in \mathbb{Z}_t)$ be two space-time diagrams of law $L(A, \mu)$ and $L(A', \mu')$. As a consequence of the fact that $A$ has positive rates, we have $\forall (a, b, c) \in S^3$, $\mu(a, b, c) > 0$. Thus, for any $a, b, c, d \in S$, the transition probability $T(a, b, c; d)$ can be recovered from the law of $G$ by conditioning. The same relation holds for $A'$ and, since $G \sim G'$, we obtain $A = A'$ and $\mu = \mu'$.

By Lemma 2.6, if a PCA is $g$-quasi-reversible (see Definition 2.5), its $g$-reverse is thus unique. Let us now enumerate some easy results on quasi-reversible PCA and reversible PCA. Although the following proposition is quite straightforward, we are not aware of any reference formalizing the notion of directional reversibility and the properties below.
Proposition 2.7. -Let A be a positive-rates PCA and let µ be one of its invariant measures.
(2) $(A, \mu)$ is $v$-quasi-reversible and the $v$-reverse PCA is defined by the transition kernel $T_v(c, b, a; d) = T(a, b, c; d)$.
(3) For any $g \in D_4$, if $(A, \mu)$ is $g$-quasi-reversible, then its $g$-reverse $(A_g, \mu_g)$ is $g^{-1}$-quasi-reversible and $(A, \mu)$ is the $g^{-1}$-reverse of $(A_g, \mu_g)$. (4) If $(A, \mu)$ is $g$-quasi-reversible and $(A_g, \mu_g)$ is its $g$-reverse, and if $(A_g, \mu_g)$ is $g'$-quasi-reversible and $(A_{g'g}, \mu_{g'g})$ is its $g'$-reverse, then $(A, \mu)$ is $g'g$-quasi-reversible and $(A_{g'g}, \mu_{g'g})$ is its $g'g$-reverse.
Remark 2.8. - Since $\langle r, v \rangle = D_4$, a consequence of the last point of Proposition 2.7 is that if $(A, \mu)$ is $r$- and $v$-reversible, then it is $D_4$-reversible. Table 2.1 presents a summary of the results that will be proven in the next sections, concerning PCA having a $p$-HZPM invariant distribution and their stationary space-time diagrams. For each possible (quasi-)reversibility behaviour, we give the conditions that the parameters of the PCA must satisfy (see Section 4 for details), and provide the number of degrees of freedom left by these equations, that is, the dimension of the corresponding submanifold of the parameter space (see Section 9). In a similar fashion, Table 2.2 synthesizes the main results about PCA having a Markovian invariant measure (see Section 5).

Invariant product measures and ergodicity
In this section, we lay the groundwork for the study of PCA with memory two having an invariant measure with a product form. First, Theorem 3.1 gives the necessary and sufficient condition for a PCA to have a $p$-HZPM invariant distribution. This condition will be extensively used in the continuation of the article. We then prove that any PCA satisfying this condition is ergodic, meaning that whatever the initial distribution for times $t = 0$ and $t = 1$, when iterating the dynamics, the PCA converges to the product measure of parameter $p$. Finally, we show that the stationary space-time diagrams of PCA having a $p$-HZPM invariant distribution share the following property: not only all the horizontal zigzag lines are distributed according to the product measure of parameter $p$, but also more general zigzag lines (in the sense of Definition 3.4).
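The ergodicity statement can be illustrated numerically. Below we simulate a binary PCA whose kernel is a noisy XOR, $T(a, b, c; d) = 1 - \varepsilon$ if $d = a \oplus b \oplus c$ and $\varepsilon$ otherwise; this kernel is our own toy example, not one from the text, and one checks directly that it satisfies the invariance condition of Theorem 3.1 for the uniform vector $p = (1/2, 1/2)$. Started from the all-zero configuration, the marginal of a fixed cell approaches $p$:

```python
import random

def noisy_xor_step(prev, curr, eps, rng):
    """One update of the binary PCA T(a,b,c; d) = 1-eps if d = a XOR b XOR c,
    eps otherwise, on a ring (a, c: neighbours at time t; b: own state at
    time t-1). This toy kernel preserves the uniform HZPM p = (1/2, 1/2)."""
    n = len(curr)
    return [(curr[i - 1] ^ prev[i] ^ curr[(i + 1) % n]) ^ (rng.random() < eps)
            for i in range(n)]

rng = random.Random(42)
hits, trials = 0, 2000
for _ in range(trials):
    prev, curr = [0] * 16, [0] * 16      # deterministic initial condition
    for _ in range(30):
        prev, curr = curr, noisy_xor_step(prev, curr, 0.3, rng)
    hits += curr[0]
freq = hits / trials
# The empirical frequency of state 1 should be close to p(1) = 1/2,
# illustrating convergence to the product measure from a non-random start.
assert abs(freq - 0.5) < 0.05
```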

Conditions for having an invariant HZPM
To start with, the next Theorem 3.1 gives a characterization of PCA with memory two having a $p$-HZPM invariant distribution.
Theorem 3.1. - Let $A$ be a positive-rates PCA with transition kernel $T$, and let $p$ be a probability vector on $S$. The $p$-HZPM distribution $\pi_p$ is invariant for $A$ if and only if:

(Cond. 1) $\quad \forall\, a, c, d \in S, \quad \sum_{b \in S} p(b)\, T(a, b, c; d) = p(d).$

Note that since $A$ has positive rates, if $\pi_p$ is invariant for $A$, then the vector $p$ has to be positive. For a positive probability vector $p$ on $S$, we define $T_S(p)$ as the set of positive-rates PCA for which the measure $\pi_p$ is invariant. We denote by $T_S$ the set of all positive-rates PCA with alphabet $S$ having an invariant $p$-HZPM, for some positive probability vector $p$ on $S$.
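On a finite alphabet, the condition of Theorem 3.1 can be verified mechanically. Reading the condition off Corollary 3.2 below, for every pair $(a, c)$ the vector $p$ must be a left eigenvector, i.e. $\sum_b p(b)\, T(a, b, c; d) = p(d)$ for all $d$, a direct check can be sketched as follows; the function name and the dictionary encoding are our own.

```python
def satisfies_cond1(T, p, tol=1e-12):
    """Check that sum_b p(b) T(a,b,c; d) = p(d) for all a, c, d in S.

    `T` maps each triplet (a, b, c) to a dict {d: probability};
    `p` is a dict {state: probability}.
    """
    S = list(p)
    return all(
        abs(sum(p[b] * T[(a, b, c)].get(d, 0.0) for b in S) - p[d]) < tol
        for a in S for c in S for d in S
    )

# Toy check: the binary 'noisy XOR' kernel (our example, not the paper's)
# preserves the uniform horizontal zigzag product measure.
eps = 0.2
T = {(a, b, c): {d: (1 - eps if d == a ^ b ^ c else eps) for d in (0, 1)}
     for a in (0, 1) for b in (0, 1) for c in (0, 1)}
assert satisfies_cond1(T, {0: 0.5, 1: 0.5})
```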
As an immediate consequence of Theorem 3.1, one obtains the following result.
Corollary 3.2. - Let $A$ be a positive-rates PCA with transition kernel $T$. Then, $A \in T_S$ if and only if for any $a, c \in S$, the left eigenspace $E_{a,c}$ of the matrix $(T(a, b, c; d))_{b,d \in S}$ related to the eigenvalue 1 is the same. In that case, the invariant HZPM is unique: it is the measure $\pi_p$ defined by the unique vector $p$ such that $E_{a,c} = \mathrm{Vect}(p)$ for all $a, c \in S$ and $\sum_{b \in S} p(b) = 1$.
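Corollary 3.2 also suggests a procedure for finding the candidate vector $p$: compute, for each pair $(a, c)$, the invariant probability vector of the stochastic matrix $(T(a, b, c; d))_{b,d}$, and check that all pairs give the same vector. A pure-Python sketch using power iteration, an implementation choice of ours, which for positive stochastic matrices converges to the Perron left eigenvector:

```python
def invariant_vector(M, iters=1000):
    """Left eigenvector (eigenvalue 1) of a positive stochastic matrix,
    i.e. its invariant probability distribution, by power iteration."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(v[b] * M[b][d] for b in range(n)) for d in range(n)]
    return v

def find_hzpm_vector(T, states, tol=1e-9):
    """Following Corollary 3.2: return the common left 1-eigenvector p of the
    matrices (T(a,b,c; d))_{b,d} if all pairs (a, c) share it, else None."""
    p = None
    for a in states:
        for c in states:
            M = [[T[(a, b, c)][d] for d in states] for b in states]
            v = invariant_vector(M)
            if p is None:
                p = v
            elif any(abs(x - y) > tol for x, y in zip(p, v)):
                return None
    return p

# Example: for the binary noisy-XOR kernel (our toy example), all matrices
# are doubly stochastic, so p is the uniform vector.
eps = 0.2
T = {(a, b, c): {d: (1 - eps if d == a ^ b ^ c else eps) for d in (0, 1)}
     for a in (0, 1) for b in (0, 1) for c in (0, 1)}
p = find_hzpm_vector(T, [0, 1])   # approximately [0.5, 0.5]
```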
Proof of Theorem 3.1. - Let $p$ be a positive vector such that $\pi_p$ is invariant by $A$ and assume that $(\eta_{t-1}, \eta_t) \sim \pi_p$. Then, on the one hand, since $\pi_p$ is invariant by $A$, for any $i \in \mathbb{Z}_{t+1}$ and any $a, c, d \in S$, we have $\mathbb{P}(\eta_{t+1}(i) = d \mid \eta_t(i-1) = a, \eta_t(i+1) = c) = p(d)$. And on the other hand, by definition of the PCA, this conditional probability is equal to $\sum_{b \in S} p(b)\, T(a, b, c; d)$. Cond. 1 follows.
Conversely, assume that Cond. 1 is satisfied, and that $(\eta_{t-1}, \eta_t) \sim \pi_p$. For some given choice of $n \in \mathbb{Z}_t$, let us denote: $X_i = \eta_{t-1}(n + 1 + 2i)$, $Y_i = \eta_t(n + 2i)$, $Z_i = \eta_{t+1}(n + 1 + 2i)$, for $i \in \mathbb{Z}$, see Figure 3.1 for an illustration. Then, for any $k \geqslant 1$, a direct computation using Cond. 1 shows that the joint distribution of the variables $(Y_i)$ and $(Z_i)$ along the horizontal zigzag at times $t$ and $t+1$ is again a product measure of marginal $p$, which proves the invariance of $\pi_p$.

As a consequence of Corollary 3.2, $A \in T_S$ if and only if $A \in T_S(p)$ for the unique $p$ given by the corollary.

First properties of the space-time diagram
Let us now focus on the space-time diagram $G(A, \pi_p)$ of a PCA $A \in T_S(p)$, taken under its unique invariant measure $\pi_p$. By definition, any horizontal line of that space-time diagram is i.i.d. The following proposition extends this result to other types of lines. Observe that Proposition 3.5 implies that (bi-)infinite zigzag polylines are also made of i.i.d. random variables with distribution $p$.
Proof. - The proof is done by induction on $T = \max(t_i) - \min(t_i)$. If $T = 1$, then the zigzag polyline is a horizontal zigzag, and since $A \in T_S(p)$, the result is true. Now, suppose that the result is true for any zigzag polyline such that $\max(t_i) - \min(t_i) = T$, and consider a zigzag polyline $(i, t_i)_{m_1 \leqslant i \leqslant m_2}$ such that $\max(t_i) - \min(t_i) = T + 1$. Then, there exists $t$ such that $\min(t_i) = t$ and $\max(t_i) = t + T + 1$. Let $M = \{i \in \{m_1, \ldots, m_2\} : t_i = t + T + 1\}$. For any $i \in M$, we have $t_{i \pm 1} = t + T$ (we assume that $m_1, m_2 \notin M$, even if it means extending the line). So, by induction, the claim follows.

Directional (quasi-)reversibility of PCA having an invariant product measure
We now explore the directional reversibility and quasi-reversibility properties of PCA having an invariant p-HZPM. First, for any transformation g of the dihedral group D 4 , we give the necessary and sufficient condition under which the PCA is g-quasi-reversible or g-reversible, thus proving the results that were announced in Table 2.1. We then present some further properties of the stationary space-time diagrams that hold in this context, beyond Proposition 3.5. Finally, these results will allow us to exhibit a family of PCA having an invariant measure that can be computed explicitly, although it does not have a product form or a Markovian one.

(Quasi-)reversible PCA with invariant p-HZPM
In this section, we characterize PCA of $T_S(p)$ that are $g$-quasi-reversible, for each possible $g \in D_4$. Let $A \in T_S(p)$, and $g \in D_4$. From the transition kernel $T$ of $A$, we define a map $T_g$, where we use some abuse of notation when denoting by $g(a, b, c)$ and $g(d)$ the images of the vertices by the permutation induced by the transformation $g \in D_4$ (see Figure 2.1). For example, in the case where $g = r$, we have $g(a, b, c) = (d, a, b)$ and $g(d) = c$. The expressions of $T_{r^2}$, $T_h$, and $T_{r^{-1}}$ can be found in Table 2.1. Observe that $T_g$ is not necessarily a transition kernel. For example, $T_r$ is a transition kernel if and only if Cond. 2 holds, see again Table 2.1. Analogously, $T_{r^{-1}}$ is a transition kernel if and only if Cond. 3 is satisfied. And it appears that as soon as a PCA belongs to $T_S(p)$, $T_{r^2}$ and $T_h$ are transition kernels, since Cond. 1 is satisfied.
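Since the displayed formula for $T_g$ is missing from this copy, the following is a reconstructed convention, consistent with the surrounding claims but not necessarily the paper's exact normalisation: setting $T_r(d, a, b; c) = p(c)\, T(a, b, c; d)/p(d)$, the map $T_r$ is a genuine transition kernel exactly when $\sum_c p(c)\, T(a, b, c; d) = p(d)$, which matches the role played by Cond. 2 in the text. A small checker under this convention:

```python
def rotated_kernel(T, p, states):
    """Candidate r-rotated kernel under the reconstructed convention
    T_r(d, a, b; c) = p(c) * T(a, b, c; d) / p(d).

    Returns (T_r, ok) where `ok` records whether every T_r(d, a, b; .) sums
    to 1, i.e. whether T_r is a genuine transition kernel.
    """
    Tr = {}
    for a in states:
        for b in states:
            for c in states:
                for d in states:
                    Tr.setdefault((d, a, b), {})[c] = \
                        p[c] * T[(a, b, c)][d] / p[d]
    ok = all(abs(sum(dist.values()) - 1.0) < 1e-9 for dist in Tr.values())
    return Tr, ok

# The binary noisy-XOR kernel (our toy example) passes the check: for fixed
# (a, b, d), exactly one c satisfies d = a XOR b XOR c, so the sum is 1.
eps = 0.2
T = {(a, b, c): {d: (1 - eps if d == a ^ b ^ c else eps) for d in (0, 1)}
     for a in (0, 1) for b in (0, 1) for c in (0, 1)}
Tr, ok = rotated_kernel(T, {0: 0.5, 1: 0.5}, [0, 1])
assert ok
```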
Before proving Theorem 4.1, let us present two corollaries that derive from it. First, the next result is a direct consequence of Theorem 4.1.
For any stationary Markov chain, we can define a time-reversed chain, which still has the Markov property. But in general, the time-reversed chain of a stationary PCA is no longer a PCA. Indeed, nothing ensures that for the time-reversed process, the states of different cells are independently distributed, conditionally on the states of their neighbours at the previous time step. So, it is restrictive to ask that the time reversal of the stationary Markov chain associated to a PCA of transition kernel $T$ is still described by some transition kernel $T'$. Here, Corollary 4.2 shows that any PCA in $T_S(p)$ is $r^2$-quasi-reversible, which means that the time-reversed chain of a PCA of $T_S(p)$ is still a PCA with memory two, which furthermore belongs to $T_S(p)$, since it preserves the measure $\pi_p$.
We also have the following characterization of reversible PCA.
As a direct consequence, we obtain easily all the conditions for reversibility that are given in Table 2.1.
Let us now prove Theorem 4.1. We begin the proof with the case $g = r^2$. The cases $g = \mathrm{id}$ and $g = v$ are obvious by Proposition 2.7, and the case $g = h$ will directly follow from the case $g = r^2$.
Proof of Theorem 4.1, case $g = r^2$. - If $A \in T_S(p)$, its transition kernel $T$ satisfies Cond. 1, and thus, $T_{r^2}$ is a transition kernel. It only remains to prove that $A$ is indeed $r^2$-quasi-reversible, and that $T_{r^2}$ is the transition kernel of the $r^2$-reverse of $A$.
For some given choice of $n \in \mathbb{Z}_t$, let us denote again: $X_i = \eta_{t-1}(n + 1 + 2i)$, $Y_i = \eta_t(n + 2i)$, $Z_i = \eta_{t+1}(n + 1 + 2i)$, for $i \in \mathbb{Z}$. The following computation proves the desired result.
Now, we will characterize PCA in T S (p) that are r-quasi-reversible.
We now give the proof of Theorem 4.1 in the case $g = r$. The case $g = r^{-1}$ is similar, and the cases $g = r \circ v$, $g = r^{-1} \circ v$ then follow.
Proof of Theorem 4.1, case $g = r$. - Let us first prove that if $A$ is $r$-quasi-reversible, then Cond. 2 is satisfied and the transition kernel $T'$ of the $r$-reverse $A_r$ of $A$ is equal to $T_r$. Let us recall the notations $u = (-1, 1)$ and $v = (1, 1)$. For some $x \in \mathbb{Z}^2_e$, let us introduce the pattern $L = (x, x + u, x + v, x + 2u, x + u + v, x + 2u + v)$, see Figure 4.1. For $a_0, b_0, b_1, c_0, c_1, d_0 \in S$, we are interested in the probability of observing these states on the pattern $L$. On the one hand, this quantity can be computed using the fact that we have a portion of the space-time diagram $G(A, \pi_p)$, together with Proposition 3.5. On the other hand, it can be evaluated using the fact that $A$ is $r$-quasi-reversible, in terms of the transition kernel $T'$ of $A_r$.

It follows that the two expressions coincide. After replacing in (4.2), then summing over $d_0 \in S$ on both sides and simplifying, we obtain that $T' = T_r$ and that Cond. 2 holds. Let us now assume, conversely, that $T_r$ is a transition kernel, meaning that Cond. 2 holds.
For some $x \in \mathbb{Z}^2_e$ and $m \in \mathbb{N}$, let us define the pattern $M = (x + iu + jv)_{0 \leqslant i, j \leqslant m}$. Using Proposition 3.5, we can compute the probability of observing given values $(a_{i,j})_{0 \leqslant i, j \leqslant m} \in S^{\{0, 1, \ldots, m\}^2}$ on the pattern $M$. This computation is represented on Figure 4.3 (a). The points for which $p(a_{i,j})$ appears in the product are marked by black dots, while the black vertical arrows represent the values that are computed through the transition kernel $T$. Now, by definition of $T_r$, it means that in the product above, we can perform flips as represented in Figure 4.2, where an arrow to the right now represents a computation made with the transition kernel $T_r$. We say that such a use of (4.3) is a flip of $(c, d)$. By flipping successively the cells from right to left and bottom to top: first $(a_{0,m}, a_{1,m})$, then $(a_{0,m-1}, a_{1,m-1})$, $(a_{1,m}, a_{2,m})$, and $(a_{0,m-2}, a_{1,m-2})$, $(a_{1,m-1}, a_{2,m-1})$, $(a_{2,m}, a_{3,m})$, etc., we finally obtain (see Figure 4.3 for an illustration) the corresponding expression in terms of $T_r$. Let us define the vertical lines:

ANNALES HENRI LEBESGUE
From (4.4), we deduce that the corresponding marginals coincide. Since this is true for any $x \in \mathbb{Z}^2_e$ and any $m \in \mathbb{N}$, it follows that the image of $G(A, \pi_p)$, the space-time diagram of $A$, by the rotation $r$ is a space-time diagram of $A_r$, whose transition kernel is $T_r$, under one of its invariant measures, which we denote by $\mu$ (observe that we do not specify the dependence on $A$ in that last notation, although the measure depends on $A$). Furthermore, we can express explicitly the finite-dimensional marginals of $\mu$, for any $m \in \mathbb{N}$. Note that in Theorem 4.1 (case $g = r$), the reverse PCA is not necessarily an element of $T_S(p)$. In the space-time diagram $G(A, \pi_p)$, the points $x, x + u, x + 2u, \ldots, x + mu, x + mu + v, \ldots, x + mu + mv$ consist of independent random variables with distribution $p$. But if we now consider only the three points $x$, $x + v$, $x + u + v$, they have no reason to be independent, so that $\mu$ can be different from $\pi_p$. Proposition 4.4 below specifies the cases for which the reverse PCA $A_r$ is an element of $T_S(p)$, meaning that $\mu = \pi_p$.
Proposition 4.4. - Let $A \in T_S(p)$. The following properties are equivalent:

Thus, Cond. 1 holds for $T_r$ and, by Theorem 3.1, $A_r \in T_S(p)$.

Independence properties of the space-time diagram
Theorem 4.5. - Let us consider a PCA $A \in T_S(p)$ and its stationary space-time diagram $G = G(A, \pi_p)$. Then, for any $|a| \leqslant 1$, the points of $G$ indexed by the discrete line of slope $a$ are i.i.d.

Proof. - This is a consequence of Proposition 3.5. We can assume without loss of generality that $b = 0$ and that $0 < a \leqslant 1$, and we assume that the discrete line is non-empty. Let $(x, y) \in \mathbb{Z}^2_e$ be the first point with positive coordinates belonging to the integer line, so that we have in particular $0 < y \leqslant x$. Let us define the sequence $(t_i)_{i \in \mathbb{Z}}$ by $t_{i+kx} = i + ky$ for $i \in \{0, \ldots, y - 1\}$ and $t_{i+kx} = y + \frac{(-1)^{i-y} - 1}{2} + ky$ for $i \in \{y, \ldots, x - 1\}$, and any $k \in \mathbb{Z}$. This sequence satisfies the conditions of Proposition 3.5, and the result follows.

Theorem 4.6. - Let us consider a PCA $A \in T_S(p)$ satisfying Cond. 2 or Cond. 3. Then, for any line of its stationary space-time diagram $G = G(A, \pi_p)$, the nodes on that line are i.i.d.
Proof. -We prove the result for a PCA A ∈ T S (p) satisfying Cond. 2. In that case, A is r-quasi-reversible. Now, take any line L in G.
Let us now consider an equation of the form $y = ax + b$ with $|a| > 1$, or $x = c$. We can assume without loss of generality that $b = 0$. Let $(x, y) \in \mathbb{Z}^2_e$ be the first point with positive coordinates belonging to the integer line, so that we have in particular $0 < |x| < y$. Then we can perform flips, similar to the ones done in the proof of Theorem 4.1, case $g = r$ (see Figure 4.3), to obtain that, for any $m$, the points under consideration are i.i.d.
Hence, Cond. 1 holds and, by Theorem 3.1, A ∈ T S (p), and µ = π p . The reverse statement is trivial.
using the fact that A is top and left (resp. right) i.i.d. But these are respectively Cond. 2 and Cond. 3 and, so, by Theorem 4.4, A is D 4 -quasi-reversible. The reverse statement is trivial.
Note that $G = G(A, \mu)$ is top, bottom, and left (resp. right) i.i.d. if and only if $A$ is an $r$-quasi-reversible (resp. $r^{-1}$-quasi-reversible) PCA of $T_S$ and $\mu$ is its invariant HZPM.
Connection with previous results for PCA with memory one

In the special case when the PCA has memory one, meaning that the transition probabilities $T(a, b, c; d)$ do not depend on $b \in S$, Cond. 1 reduces to: $\forall\, a, c, d \in S$, $T(a, c; d) = p(d)$. So, the only PCA having an invariant HZPM are trivial ones (no time dependence at all). In that context, it is in fact more relevant to study PCA having simply an invariant horizontal product measure, as done in [MM14b]. Observe that when there is no dependence on $b$, we recover the two sufficient conditions for having a horizontal product measure, as described in Theorem 5.6 of [MM14b]. In that article, the space-time diagrams are represented on a regular triangular lattice, which is more adapted to the models that are considered. The authors show that under one or the other of these two conditions, there exists a transversal PCA, so that after an appropriate rotation of the triangular lattice, the stationary space-time diagram can also be described as the one of another PCA. With our terminology, this corresponds to a quasi-reversibility property.

PCA with an explicit invariant law that is not HZPM
As already mentioned, if a PCA $A \in T_S(p)$ satisfies Cond. 2 and not Cond. 3 (or the reverse), we get a PCA $C = A_r$ (or $A_{r^{-1}}$) for which we can compute exactly the marginals of an invariant measure $\mu$, see equations (4.5) and (4.6). Using the previous results, we can describe conditions on the transitions of a PCA $C$ for being of the form $C = A_r$, with $A$ having an invariant $p$-HZPM. We thus obtain the next corollary, highlighting a family of PCA having an invariant measure that can be computed explicitly, although it does not have a well-identified form.
Corollary 4.10. - Let $C$ be a PCA with transition kernel $U$. If there exists a probability distribution $p$ such that Cond. 2 and Cond. 3 hold, then there exists a unique probability distribution $\mu$ such that: its $r$-reverse is $(C_r, \pi_p)$ with $C_r \in T_S(p)$ and transition kernel equal to $U_r$, and the transition kernel of $C_{r^{-1}}$ is equal to $U_{r^{-1}}$.
In Section 7, Example 7.7 provides an example of a PCA satisfying only Cond. 2, so that its r-reverse A^r satisfies the conditions of Corollary 4.10 above.

Horizontal zigzag Markov chains
After having extensively studied PCA with memory two having an invariant measure with a product form, we now focus on PCA having an invariant measure with a Markovian form, with a similar approach. We characterize such PCA, and among them, those having a quasi-reversibility property, thus justifying the results that were announced in Table 2.2. As in the previous section, our results will also allow us to exhibit a new family of PCA having an invariant measure that can be computed explicitly, although it does not have a simple form.

Conditions for having an invariant HZMC
In this section, we recall some previous results obtained in [Cas18] about PCA with memory two having an invariant measure which is a Horizontal Zigzag Markov Chain. Our purpose is to keep the present article as self-contained as possible.
First, let us recall what an (F, B)-HZMC distribution is. This is the same notion as the (D, U)-HZMC of [Cas18], but to be consistent with the orientation chosen here for the space-time diagrams, we prefer using the notations F for forward in time and B for backward in time (rather than D for down and U for up). The definition we give below relies on the following Lemma 5.1.
Lemma 5.1. — Let F and B be two positive transition matrices from S to S such that FB = BF. We denote by ρ_B (resp. ρ_F) the invariant probability distribution of B (resp. F), that is, the normalised left-eigenvector of B (resp. F) associated to the eigenvalue 1. Then, ρ_B = ρ_F.
Proof. — Note that by Perron–Frobenius, B and F each have a unique invariant probability distribution, satisfying respectively ρ_B B = ρ_B and ρ_F F = ρ_F. Since FB = BF, we have ρ_B F B = ρ_B B F = ρ_B F, so that the vector ρ_B F is an invariant probability distribution of B. By uniqueness, we obtain ρ_B F = ρ_B. Since the invariant probability distribution of F is also unique, we obtain ρ_B = ρ_F.
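Lemma 5.1 can be checked numerically. In the sketch below (a toy example of ours, not taken from the text: B is chosen as a polynomial in F, so that the commutation FB = BF holds by construction), the invariant laws of the two matrices coincide.

```python
import numpy as np

def stationary(M, iters=2000):
    """Invariant probability distribution of a positive stochastic matrix,
    computed by left power iteration."""
    rho = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(iters):
        rho = rho @ M
    return rho / rho.sum()

rng = np.random.default_rng(0)
F = rng.random((3, 3)) + 0.1
F /= F.sum(axis=1, keepdims=True)   # positive stochastic matrix
B = 0.5 * F + 0.5 * (F @ F)         # stochastic, and commutes with F

rho_F = stationary(F)
rho_B = stationary(B)               # Lemma 5.1: rho_B should equal rho_F
```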
Definition 5.2. — Let S be a finite set, and let F and B be two positive transition matrices from S to S, such that FB = BF. We denote by ρ their (common) left-eigenvector associated to the eigenvalue 1. The (F, B)-HZMC (for Horizontal Zigzag Markov Chain) on S^{Z_t} × S^{Z_{t+1}} is the distribution ζ_{F,B} such that, for any n ≥ 0, for any a_{−n}, a_{−n+2}, ..., a_n ∈ S and b_{−n+1}, b_{−n+3}, ..., b_{n−1} ∈ S,
ζ_{F,B}(a_{−n}, b_{−n+1}, a_{−n+2}, ..., b_{n−1}, a_n) = ρ(a_{−n}) F(a_{−n}; b_{−n+1}) B(b_{−n+1}; a_{−n+2}) ⋯ F(a_{n−2}; b_{n−1}) B(b_{n−1}; a_n).
We give a simple necessary and sufficient condition, depending on both T and (F, B), for an (F, B)-HZMC to be an invariant measure of a PCA with transition kernel T (in order to remove any ambiguity, let us make precise that the definition of a transition kernel is exactly the same as in the previous sections). We denote by T_S(F, B) the subset of positive-rates PCA with set of symbols S having an invariant (F, B)-HZMC.
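As a sanity check of the definition, the sketch below (ours; it assumes the alternating product form recalled above, one F step up and one B step down along the zigzag) verifies that ζ_{F,B} sums to 1 over all zigzags of a fixed length.

```python
import itertools
import numpy as np

def zigzag_prob(rho, F, B, bottom, top):
    """(F, B)-HZMC weight of a zigzag: bottom = (a_{-n}, a_{-n+2}, ..., a_n)
    at time t, top = (b_{-n+1}, ..., b_{n-1}) at time t+1."""
    w = rho[bottom[0]]
    for i, b in enumerate(top):
        w *= F[bottom[i], b] * B[b, bottom[i + 1]]
    return w

rng = np.random.default_rng(1)
F = rng.random((2, 2)) + 0.1
F /= F.sum(axis=1, keepdims=True)
B = rng.random((2, 2)) + 0.1
B /= B.sum(axis=1, keepdims=True)
rho = np.array([0.4, 0.6])          # any probability vector works for this check

# Sum over all zigzags with 3 bottom cells and 2 top cells.
total = sum(zigzag_prob(rho, F, B, a, b)
            for a in itertools.product(range(2), repeat=3)
            for b in itertools.product(range(2), repeat=2))
```

The total telescopes to 1 because each row of F and B sums to 1, regardless of the commutation assumption.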
In the context of PCA having an invariant (F, B)-HZMC, Proposition 3.5 can be extended as follows. The proof being similar, we omit it.
For any zigzag polyline (i, t_i)_{m ≤ i ≤ n} and any (a_i)_{m ≤ i ≤ n} ∈ S^{n−m+1}, the probability P(η(i, t_i) = a_i, m ≤ i ≤ n) factorizes as a product of transitions of F and B along the edges of the polyline, starting from ρ(a_m). In general, the knowledge of the transition kernel T alone is not sufficient to tell whether or not the PCA A admits an invariant (F, B)-HZMC.

Quasi-reversible PCA with invariant HZMC
This section is devoted to PCA having an invariant HZMC measure and that are (quasi-)reversible.
Let A ∈ T_S(F, B). We introduce the notations M_{−u} = B and M_v = F (recall that u = (−1, 1) and v = (1, 1)). For g ∈ D_4 and a, b, c ∈ S, we define
p^g_{FB}(g(a, b, c)) = ρ(g(a)) M_{g(−u)}(g(a), g(b)) M_{g(v)}(g(b), g(c)),
where g(−u) corresponds to the vector g(a)g(b), and g(v) to g(b)g(c) (note again the small abuse of notation for the images of the vertices by the permutation induced by g). If g is the identity, we have p_{FB}(a, b, c) = ρ(a)B(a, b)F(b, c). And, for example, in the case where g = r, we have p^r_{FB}(r(a, b, c)) = ρ(r(a)) M_{r(−u)}(r(a), r(b)) M_{r(v)}(r(b), r(c)). From the transition kernel T of A, we define a map T^g : S^3 × S → R_+ by:
T^g(g(a, b, c); g(d)) p^g_{FB}(g(a, b, c)) = p_{FB}(a, b, c) T(a, b, c; d).
For example, in the case where g = r, this gives T^r(r(a, b, c); r(d)) p^r_{FB}(r(a, b, c)) = ρ(a)B(a, b)F(b, c) T(a, b, c; d).
The expressions of T^{r^2}, T^h, and T^{r^{-1}} can be found in Table 2.2.
Observe that T^g is not necessarily a transition kernel. For example, T^r is a transition kernel if and only if a normalisation condition on T holds, which is equivalent to Cond. 5, see again Table 2.2. Analogously, T^{r^{-1}} is a transition kernel if and only if Cond. 6 is satisfied. And it appears that as soon as a PCA belongs to T_S(F, B), T^{r^2} and T^h are transition kernels, since Cond. 4 is satisfied.
Theorem 5.5. — A PCA A ∈ T_S(F, B) of transition kernel T is g-quasi-reversible if and only if T^g is a transition kernel. In that case, T^g is the transition kernel of the g-reverse of A. The next result is a direct consequence of Theorem 5.5.
Proof of Theorem 5.5. — The cases g = id and g = v are obvious by Proposition 2.7, and the cases g = h and g = r^2 can be handled exactly as in the proof of Theorem 4.1. We thus focus on the case g = r, for which the proof also follows the same idea as in Theorem 4.1. Suppose that A is r-quasi-reversible, and denote by T̃ the transition kernel of A^r. Then, for any a, b, c, d ∈ S and any x ∈ Z^2_e, we compute the probability of the corresponding local event in two ways.
On the one hand, we have

ANNALES HENRI LEBESGUE
On the other hand, we have a second expression of the same probability. Using the expressions of T(d_0, c_0, b_0; c_1) and T(c_1, b_0, a_0; b_1) given by (5.2) and simplifying, we obtain an identity which, after summing over d_0 ∈ S, yields Cond. 5 for any b_0, c_1 ∈ S. Conversely, suppose that A is a PCA having an invariant (F, B)-HZMC and that Cond. 5 holds. Then, we can perform flips thanks to (5.1) as in Figures 4.2 and 4.3 to obtain the result.

PCA with an explicit invariant law that is not HZMC
As mentioned in Section 4.3, there exist PCA A ∈ T_S(p) that are r-quasi-reversible and for which the r-reverse A^r does not belong to T_S(p). In that case, A^r has an invariant measure µ which is not a product measure, and for which we know formulas allowing to compute exactly all the marginals, see equations (4.5) and (4.6). Let us point out that the measure µ cannot be an (F, B)-HZMC. Indeed, consider the stationary space-time diagram G(A, π_p) = (η(i, t) : (i, t) ∈ Z^2_e), and assume that µ is an (F, B)-HZMC measure. Then, the marginal of size one of µ is equal to ρ = p, and comparing, for any x ∈ Z^2_e, with the marginals given by (4.5) and (4.6), we would obtain, for any a, b, c ∈ S, that the (F, B)-HZMC is in fact a p-HZPM, which is not possible since A^r does not belong to T_S(p).
So, the PCA A^r has an invariant measure that we can compute, and that has neither a product form nor a Markovian one. That was a real surprise of this work. We give an explicit example of such a PCA in Section 7, see Example 7.7.
Similarly, if a PCA A with an invariant (F, B)-HZMC is r-quasi-reversible, and is such that its r-reverse A^r does not have an invariant HZMC, then we can compute exactly the invariant measure of A^r, although it does not have a well-known form. This provides an analogue of Corollary 4.10 in the Markovian case. Precisely, the next Corollary 5.7 gives conditions on the transitions of a PCA C for being of the form C = A^r, with A having an invariant (F, B)-HZMC. To the best knowledge of the authors, this is the first time that one can compute, from the transition kernel, an invariant law that is not Markovian.

TOME 3 (2020)

Corollary 5.7. — Let C be a PCA of positive transition kernel U. If there exist two positive transition matrices F and B from S to S such that FB = BF, and such that the two following conditions hold:
Cond. 7. — for any a, b, c ∈ S, F(b; c) = Σ_{d∈S} F(a; d) U(d, a, b; c);
Cond. 8. — for any a, c, d ∈ S, B(d; c) = Σ_{b∈S} B(a; b) U(d, a, b; c);
then there exists a probability measure µ invariant for C, such that the r^{-1}-reverse of (C, µ) is (C^{r^{-1}}, ζ_{F,B}), the transition kernel of C^{r^{-1}} being a kernel T determined by U, F and B. Similarly, its r-reverse is C^r, whose transition kernel is given by T^{r^2}. Moreover, we have explicit formulas for the marginals of µ.
Note that in the above Corollary 5.7, the transition kernel T, which is the transition kernel of C^{r^{-1}}, corresponds to the transition kernel of the PCA A (having an invariant (F, B)-HZMC) such that C = A^r. The transition kernels T of C^{r^{-1}} and T^{r^2} of C^r can be written as being equal respectively to U^{r^{-1}} and U^r, but with (F, B) replaced by (B, M_{−v}) when applying the formulas given in Table 2.2.
Remark 5.8. — In Cond. 7 (resp. Cond. 8), by putting a = b (resp. a = d), we find that for any a ∈ S, the vector (F(a; d) : d ∈ S) (resp. (B(a; b) : b ∈ S)) is the left eigenvector of (U(d, a, a; c))_{d,c∈S} (resp. (U(a, a, b; c))_{b,c∈S}) associated to the eigenvalue 1. In particular, this allows to determine F and B, knowing only U. This is in contrast with Proposition 5.3, where in general, the knowledge of T does not allow to find F and B.
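Remark 5.8 translates into a small computation. In the sketch below (with a hypothetical positive kernel U of ours, not required to satisfy Cond. 7 and 8 globally), each candidate row F(a; ·) is recovered as the invariant law of the stochastic matrix (U(d, a, a; c))_{d,c}.

```python
import numpy as np

def recover_F(U, iters=2000):
    """Each candidate row (F(a; d))_d is the left eigenvector, associated to
    the eigenvalue 1, of the stochastic matrix M_a = (U(d, a, a; c))_{d,c}."""
    n = U.shape[0]
    F = np.zeros((n, n))
    for a in range(n):
        M = U[:, a, a, :]               # rows sum to 1 since U is a kernel
        w = np.full(n, 1.0 / n)
        for _ in range(iters):          # left power iteration
            w = w @ M
        F[a] = w / w.sum()
    return F

rng = np.random.default_rng(2)
U = rng.random((2, 2, 2, 2)) + 0.1      # U[d, a, b, c] stands for U(d, a, b; c)
U /= U.sum(axis=3, keepdims=True)       # normalise: a positive transition kernel
F = recover_F(U)
```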
Proof. — Let us define the kernel T from U, F and B as announced. Then, as Cond. 7 and Cond. 8 hold, we can check that T is a transition kernel and satisfies Cond. 4, so ζ_{F,B} is an invariant measure of the PCA whose transition kernel is T. Moreover, this kernel satisfies Cond. 5, and we conclude by applying Theorem 5.5 (case g = r). Finally, the multidimensional laws of µ are deduced from the space-time diagram (C, µ). Indeed, after action of the rotation r^{-1}, it has the same distribution as (C^{r^{-1}}, ζ_{F,B}), whose transition kernel is T. So, we can compute all the finite-dimensional marginals of the space-time diagram, and in particular the multidimensional laws of µ.

Applications to statistical physics
In many models of statistical physics (6 and 8-vertex models, enumeration of directed animals, TASEP model, First Passage Percolation / Last Passage Percolation, Eden growth model, Box-Ball Systems. . . ), one can find a PCA of order 1 or 2 that provides an alternative description of the system. In most cases, the original models are integrable only on a submanifold of their parameter space, and this corresponds to choices of parameters for which the corresponding PCA has a simple invariant measure, with a product or a Markovian structure.
In Sections 6.1 and 6.2, we show that using this PCA approach, the results of the previous sections provide new proofs of known results about the 8-vertex model and about the enumeration of directed animals, after associating suitable PCA with these two models. This gives a simpler alternative to transfer-matrix methods, which does not require tedious computations.
The PCA approach also often suggests relevant ways to extend the initial models of statistical physics. Let us indeed assume that, when studying the PCA involved, it appears that the family of parameters for which they have a closed-form invariant measure belongs to a larger submanifold, defined with more parameters. Then, one can try to backtrack this observation to the original model, by introducing new parameters preserving the physical meaning of the model, with a larger domain of parameters on which the model is integrable. This is what we illustrate in Section 6.3, where we define a new TASEP model with variable speed that extends the classical parallel TASEP. The results of the previous sections (Theorem 3.1 and Proposition 5.3) do not apply directly, because the PCA does not have positive rates. Nevertheless, our techniques can still be used to compute some invariant measures of this new model.

The 8-vertex model
For some k ∈ 2Z, we consider the graph G_k whose set of vertices is V_k = Z^2_e ∩ [−k, k]^2, the restriction of the even lattice to a finite box, and whose set of edges is denoted by E_k. For each edge of G_k, we choose an orientation. This defines an orientation O of G_k, and we denote by O_k the set of orientations of G_k. Around each vertex x ∈ V_k \ ∂V_k, there are 4 oriented edges, giving a total of 16 possible local configurations, defining the type of the vertex x. In the 8-vertex model, we consider only the orientations O such that around each vertex x ∈ V_k \ ∂V_k, there is an even number (0, 2 or 4) of incoming edges, so that only 8 local configurations remain, see Figure 6.1. To each local configuration i among these 8, we associate a local weight w_i > 0. This allows to define a global weight W on the set O_k of admissible orientations, as the product of the local weights of the vertices. Thanks to these weights, we finally define a probability distribution P_W on O_k, proportional to W. In the following, we assume that the parameters satisfy w_1 = w_2 = a, w_3 = w_4 = b, w_5 = w_6 = c and w_7 = w_8 = d, which corresponds to the most studied 8-vertex model. We furthermore assume that a + c = b + d. Let us briefly comment on this condition. Among 6- and 8-vertex models, a different case that has attracted much attention is the case when a = b, which corresponds to the XXZ model [DCGH+18]. But from a physical point of view, this condition is quite restrictive, since it amounts to assuming that the oxygen atoms are arranged on a square lattice. We can relax this condition by assuming that the oxygen atoms are arranged on a diamond-shaped lattice, and within this framework, an interesting specific case for some ranges of temperature and pressure consists in assuming that a + c = b + d. As we will see, from a mathematical point of view, this choice of parameters also has the advantage of allowing simple explicit computations.
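The admissibility constraint is easy to check exhaustively. In the minimal sketch below (our encoding: a local configuration is four booleans, 1 if the corresponding edge points towards the vertex), exactly 8 of the 16 configurations have an even number of incoming edges.

```python
from itertools import product

# A local configuration = orientation of the 4 edges around a vertex,
# encoded as 1 if the edge is incoming, 0 if outgoing.
configs = list(product((0, 1), repeat=4))

# The 8-vertex model keeps configurations with 0, 2 or 4 incoming edges.
admissible = [c for c in configs if sum(c) % 2 == 0]
```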
For more information on the 8-vertex model, we refer the interested reader to [Bax82, Cas16, DCGH+18, Mel18], and references therein. In [BCG16], the authors study the 6-vertex model under an analogous condition, using transfer matrices. Understanding the links between these two techniques would be a very interesting subject, but the authors of the present article are not yet sufficiently familiar with them to achieve this. Following Baxter [Bax82, Section 10.2], let us define a "two-to-one" map C_8 between 2-colorings of the faces of G_k and admissible orientations of the 8-vertex model on G_k. Let F_k be the set of faces of G_k, that is, the set of quadruples (x, x + u, x + u + v, x + v) ∈ (Z^2_e)^4 for which at least 3 of the 4 vertices belong to G_k. Let C ∈ {0, 1}^{F_k} be a 2-coloring of the faces of G_k, and take any edge e ∈ E_k, with adjacent faces f_e and f'_e: the orientation of e under C_8(C) is determined by the pair (C(f_e), C(f'_e)). It is a "two-to-one" map because from an admissible orientation O, we obtain two 2-colorings in C_8^{-1}(O) = {C_0, C_1}. These two colorings satisfy C_0(f) = 1 − C_1(f) for any f ∈ F_k, see Figure 6.2.
Let us set q = a/(a + c) and r = b/(b + d), and consider the PCA A_8 defined in Section 2.1. We denote by F_k the law of its space-time diagram G(A_8, π_p) under its invariant measure π_p, restricted to G_k. This proposition is the first application of PCA with memory two in the literature. One can check that the PCA A_8 satisfies Cond. 1 with p(0) = p(1) = 1/2. Theorem 3.3 implies that this PCA is ergodic. When n → ∞, the center of the square has the same behaviour whatever the boundary conditions are [Cas18, Proposition 1.6].
Note that in what precedes, we have assumed that the weights satisfy the relation a + c = b + d. If we now assume that they rather satisfy a + d = b + c, we can design a PCA that, when iterated from left to right (or equivalently, from right to left), generates configurations distributed according to the required distribution P_W. When a = b and c = d, so that both relations are satisfied, we obtain q = r, and the dynamics is D_4-reversible.

Enumeration of directed animals
In this section, we give a new proof of a known result (Lemma 6.3) about the enumeration of directed animals. Our new proof is based on the fact that, to any PCA with memory two having an HZMC invariant law, one can associate a PCA with memory one with the same invariant law.
A directed animal on the square lattice (resp. on the triangular lattice) is a set A ⊂ Z^2_e such that (0, 0) ∈ A and, for any z ∈ A, there exists a directed path w = ((0, 0) = x_0, x_1, ..., x_{m−1}, x_m = z) in A such that, for any 1 ≤ k ≤ m, x_k − x_{k−1} ∈ {u, v} (resp. x_k − x_{k−1} ∈ {u, v, u + v}); see Figure 6.2 for an illustration on the triangular lattice. Let us denote by A_S (resp. A_T) the set of directed animals on the square (resp. triangular) lattice.
The area of an animal A is the cardinal of A, and the perimeter of an animal A is the cardinal of P(A) = {x : x ∉ A, {x} ∪ A is a directed animal}. Let us introduce the generating functions G_S and G_T of directed animals enumerated according to their area, on the square lattice and on the triangular lattice respectively. The computation of these series was done by Dhar in 1982 via the study of a hard-particle model [Dha82]. Here, we will present this work using PCA, see also [MM14a] for details. Let us introduce the binary-state PCA B_S with memory one, and the binary-state PCA B_T with memory two, defined by their respective transition kernels. The following Theorem 6.2 connects the generating functions G_S and G_T with these PCA. In the case of directed animals on the square lattice, it is well known that B_S has a simple (F, B)-HZMC invariant measure, that can be computed explicitly, see [BM98, CM15, Dha82, LBM07]. Note also that the ergodicity of B_S, for any choice of p_S ∈ (0, 1), was proven in [HMM19]. Using Proposition 5.3, one obtains that if p_S = p_T/(1 + p_T), then Cond. 4 holds, so that the same (F, B)-HZMC is also an invariant measure of B_T. We thus obtain Lemma 6.3 (although B_S and B_T do not have positive rates, we can easily extend our sufficient conditions to them). This allows to obtain the expression of the area generating function of directed animals on the triangular lattice from the one on the square lattice. We give below the two expressions of G_S and G_T.
Theorem 6.4 ([BM98, Dha82]). — The area generating functions of directed animals on the square lattice and on the triangular lattice are respectively given by:
G_S(x) = (1/2) (√((1 + x)/(1 − 3x)) − 1) and G_T(x) = (1/2) (1/√(1 − 4x) − 1).
The same questions can be asked for the enumeration according to area and perimeter, but in this case, we have no explicit description of the invariant measures of the PCA involved. For more references on the question, see [LBM07, MM14a].
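The first coefficients of the two area generating functions (1, 2, 5, 13, ... on the square lattice; 1, 3, 10, 35, ... on the triangular lattice) can be corroborated by brute force. The sketch below (ours, small areas only) takes the directed steps to be (1, 0) and (0, 1) on the square lattice, plus the diagonal (1, 1) on the triangular one, a standard choice equivalent to the steps u, v, u + v up to a change of coordinates.

```python
from itertools import combinations

SQUARE = ((1, 0), (0, 1))
TRIANGULAR = ((1, 0), (0, 1), (1, 1))

def is_directed_animal(cells, steps):
    """Every cell must be reachable from (0, 0) by a directed path in the set."""
    seen, todo = {(0, 0)}, [(0, 0)]
    while todo:
        x, y = todo.pop()
        for dx, dy in steps:
            z = (x + dx, y + dy)
            if z in cells and z not in seen:
                seen.add(z)
                todo.append(z)
    return seen == cells

def count_animals(n, steps):
    """Number of directed animals of area n (brute force, small n only)."""
    # Any animal of area n fits in {i, j >= 0, i + j <= 2(n - 1)}.
    region = [(i, j) for i in range(2 * n) for j in range(2 * n)
              if 0 < i + j <= 2 * (n - 1)]
    return sum(is_directed_animal({(0, 0), *extra}, steps)
               for extra in combinations(region, n - 1))

square = [count_animals(n, SQUARE) for n in range(1, 5)]
triangular = [count_animals(n, TRIANGULAR) for n in range(1, 5)]
```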

Parallel TASEP of order two
The TASEP (Totally ASymmetric Exclusion Process) describes the evolution of particles that move from left to right on a line without overtaking. There are various kinds of TASEP models, with discrete or continuous time and space, and one or more types of particles. We refer the interested reader to the articles [BC15, BF05, GG01] for the description of some models with discrete time and space. Here, we present a new (to the best knowledge of the authors) generalization of the TASEP, called TASEP of order two, on the real line and in discrete time.
The TASEP presented here models the behaviour of an infinite number of particles (indexed by Z) on the real line, that move to the right, and that do not bypass each other and do not overlap. For i, t ∈ Z, we denote by x_i(t) ∈ R the position of particle i at time t. Time is discrete, and at time t, each particle i ∈ Z moves with a random speed v_i(t), independently of the others. The random speed v_i(t) depends on the distance x_{i+1}(t) − x_i(t) between the particle i and the particle i + 1 in front of it, and on the speed v_{i+1}(t − 1) = x_{i+1}(t) − x_{i+1}(t − 1) of the particle i + 1 at time t − 1. Formally, the evolution of (x_i(t))_{i∈Z} is defined by:
x_i(t + 1) = x_i(t) + v_i(t),
where v_i(t) is random and distributed according to µ_{(x_{i+1}(t)−x_i(t), v_{i+1}(t−1))}, a probability distribution on R_+, and the (v_i(t))_{i∈Z} are independent, knowing (x_i(t))_{i∈Z} and (x_i(t − 1))_{i∈Z}.
It is known that TASEP with discrete time can be represented by PCA [Cas16, MM14a]. We adopt the approach presented in [Cas16] to show that the TASEP of order two can be represented by a PCA with memory two: take η(i, t) = x_i(t); then (η(i, t) : i ∈ Z, t ∈ N) is the space-time diagram of a PCA with memory two whose transition kernel T is, for any a ∈ R and x, y, v ∈ R_+,
T(a, a + x, a + x + y; a + v) = µ_{(x+y, y)}(v).
Now, we will focus as an example on the simplest case where v ∈ {0, 1} a.s. and particles move on the integer line (for any i ∈ Z, t ∈ N, x_i(t) ∈ Z). The constraints we have on T are the following:
• T(a, a + 1, a + 1; a) = 1 for any a ∈ Z (the values of T(a, b, b + i; c) being irrelevant on configurations that cannot occur);
• T(a, a + k, a + k + i; c) = 0 for any a ∈ Z, k ≥ 0, i ∈ {0, 1} and c ∉ {a, a + 1};
• T(a + 1, b + 1, c + 1; d + 1) = T(a, b, c; d) for any a, b, c, d ∈ Z.
The first two points mean that the next position has to be empty for a particle to move, and that a particle can only move one unit forward. The last point is a hypothesis of translation invariance. Hence, the PCA can be described by the transitions (T(0, k, k; 0))_{k ≥ 2} and (T(0, k, k + 1; 0))_{k ≥ 1}.

Figure 6.4. On the left, the classical representation of a TASEP: a white square is an empty square; a black square is a square that contains a particle, the white number being the label of this particle. On the right, the PCA that represents this TASEP; each column represents the trajectory of a particle.

Although this PCA does not have positive rates and does not admit an invariant measure, it has some interesting properties that permit to find invariant measures for the parallel TASEP. The model above is reminiscent of the q-TASEP models defined in [BC15]. In particular, the discrete-time Bernoulli q-TASEP can be interpreted as a TASEP of order two with a time change. In contrast to our model, the number of particles is assumed to be finite, and at each time step, the positions of the particles are updated successively from right to left: each particle can move by one unit, with a probability that depends both on the length of the gap with the foregoing particle and on whether the jump of the foregoing particle occurred or not. The Poisson q-TASEP is a continuous-time TASEP, but it also shares the property that the rate at which particles jump forward is modulated by the distance to the next particle, which is also the case of the discrete-time geometric q-TASEP. In this last model, particles can move by several units in one time step. It follows that the geometric q-TASEP is tightly related to TAZRP models, and as we may detail in a future work, the PCA approach can be extended to these models.
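To fix ideas, here is a toy simulation of the dynamics (the speed law µ below is a hypothetical choice of ours, not one from the text: a particle facing an empty site jumps with probability q if the particle in front has just moved, and q/2 otherwise; the rightmost particle is free). The exclusion constraints of the kernel guarantee that particles never overtake.

```python
import random

def step(x, v_prev, q, rng):
    """One parallel update of the order-two TASEP with finitely many particles.
    x[i] is the (integer) position of particle i, v_prev[i] its last speed."""
    n = len(x)
    v = []
    for i in range(n):
        if i < n - 1 and x[i + 1] - x[i] <= 1:
            p = 0.0                              # target site occupied: no move
        elif i == n - 1:
            p = q                                # no particle in front
        else:
            p = q if v_prev[i + 1] == 1 else q / 2
        v.append(1 if rng.random() < p else 0)
    return [xi + vi for xi, vi in zip(x, v)], v

rng = random.Random(3)
x, v = list(range(0, 40, 2)), [0] * 20           # 20 particles, gaps of 2
for _ in range(500):
    x, v = step(x, v, q=0.7, rng=rng)
```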
The first result of this section states that there exists a family of (F, B)-HZMC that is stable by this PCA. First, let us define a (q, p)-HZMC for any probability distributions q and p on Z: a (q, p)-HZMC is an (F, B)-HZMC such that, for any a, k ∈ Z, F(a; a + k) = q(k) and B(a; a + k) = p(k).
We can remark that q is the law of the speed of a particle under the stationary regime, and p the law of the distance between two successive particles (to be precise, the left one at the current time t and the right one at the previous time t − 1).
Theorem 6.6. — For any T, for any distribution q on {0, 1} such that (6.5) holds, there exists a unique distribution p on N* such that (6.2) holds. Moreover, this distribution p is given, for any k ≥ 1, by (6.6).
Proof. — Let q be a probability measure on {0, 1}. The result then follows from (6.4).
With Theorem 6.6, we recover the invariant measures of the classical parallel TASEP and can express the one of the generalized TASEP as a renewal process.
Corollary 6.7. — For any distribution q on {0, 1} such that (6.5) holds and such that p (given by (6.6), with p(0) = 0) has a finite mean, the TASEP of order 2 has an invariant measure on {0, 1}^Z × {0, 1}^Z such that, on the same line, the distances between two consecutive 1s are i.i.d. and distributed according to an explicit law γ.
This example goes beyond our previous framework for several reasons. First, the PCA does not have positive rates. Second, studying the invariant measures of this PCA is not interesting, because they only correspond to states where nobody moves. That is why we have focused here on families of distributions that are stable by the PCA, rather than on only one invariant distribution. And by studying carefully the eigenvectors on the good subspaces, we are able to solve the algebraic issues. This provides some interesting results on a PCA with an infinite alphabet, involving a family of (F, B)-HZMC laws (with F ≠ B), whereas our main explicit results on PCA involve general (F, B)-HZMC measures but on an alphabet of size 2, or PCA with a general alphabet but with an invariant (F, F)-HZMC (see the end of Section 5.1 and Section 8).
Let p be a probability measure on S = {0, 1}. If we specialise some results of Table 2.1 to the case |S| = 2, we obtain the dimensions of the corresponding sets of PCA. In this section, we describe more precisely these different sets, which gives an alternative proof of the value of their dimensions in the binary case. First, the next result shows that, in the binary case, the sets above having the same dimension are equal.
Proposition 7.2. -Let p be any positive probability on S = {0, 1}, and let A ∈ T S (p). Then, we have the following properties.
(1) A is h-reversible.
Proof. (1). — Since A is in T_S(p), A is h-quasi-reversible, and the transition kernel T^h of its h-reverse can be written explicitly in terms of T(a, b, c; d) for any a, b, c, d ∈ S; for b = d, Cond. 9 provides the result.
(2). — It is a corollary of (1). Indeed, if A is in T_S(p) and v-reversible, then it is h- and v-reversible, and so also r^2 = v ∘ h-reversible. And conversely, if it is r^2-reversible, then it is v = r^2 ∘ h-reversible.
(3). — This will be a consequence of Proposition 7.5.
As a consequence of Proposition 7.1, we obtain the following descriptions of binary PCA having an invariant HZPM. More explicitly, this is equivalent to the following condition.
Cond. 13. — there exists k ∈ ]0, ∞[ such that the transitions of A satisfy the corresponding relations. In that case, the invariant p-HZPM is given by the probability (p(0), p(1)) with p(0)/p(1) = k.
Proof. — The PCA A has an invariant HZPM iff there exists a probability p on S such that Cond. 9 is satisfied, which can easily be shown to be equivalent to the above conditions.
Conversely, let A be such that Cond. 14 holds. Then Cond. 9 and 10 hold, so A ∈ T S (p) and A is r-quasi-reversible.
Proposition 7.5. -Let p be a positive probability on S, and let k = p(0)/p(1).
Moreover, in that case, A is D 4 -reversible.
Proof. — The PCA A is an {r, r^{-1}}-quasi-reversible PCA of T_S(p) iff Cond. 9, 10 and 11 are satisfied, which can easily be shown to be equivalent to the above condition. Now, we prove the D_4-reversibility of A. First, A is symmetric, so A is v-reversible. Second, it is easy to check that T^r = T, so that A is r- and v-reversible. By (5) of Proposition 2.7, A is D_4-reversible.
Example 7.6. — Let us consider the special case when p is the uniform distribution on S, meaning that p(0) = p(1) = 1/2. Then, k = 1, and the family of PCA above corresponds to:
∀ a, b, c, d ∈ S, T(a, b, c; d) = q_0 if d = a + b + c mod 2, and 1 − q_0 otherwise.
In the deterministic case (q_0 = 1), we get a linear CA. Such CA have been intensively studied. Here, in the probabilistic setting, the PCA we obtain can be seen as noisy versions of that linear CA (with a probability 1 − q_0 of making an error, independently for different cells). This is a special case of the 8-vertex PCA, with p = r.
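For illustration, the sketch below simulates a noisy memory-two dynamics of this kind, assuming the local rule d = a + b + c mod 2 (a natural choice of linear rule; an assumption of ours) applied correctly with probability q_0 and flipped otherwise. Starting from two i.i.d. uniform rows, the uniform measure is preserved, which shows in the empirical frequency of 1s.

```python
import random

def noisy_linear_step(prev, cur, q0, rng):
    """Memory-two update on a ring: each cell computes the linear value
    cur[i-1] + prev[i] + cur[i+1] mod 2, then makes an error w.p. 1 - q0."""
    n = len(cur)
    out = []
    for i in range(n):
        d = (cur[(i - 1) % n] + prev[i] + cur[(i + 1) % n]) % 2
        if rng.random() >= q0:
            d ^= 1                       # independent error at this cell
        out.append(d)
    return out

rng = random.Random(7)
prev = [rng.randint(0, 1) for _ in range(1000)]
cur = [rng.randint(0, 1) for _ in range(1000)]
for _ in range(50):
    prev, cur = cur, noisy_linear_step(prev, cur, q0=0.9, rng=rng)
freq = sum(cur) / len(cur)               # should stay close to 1/2
```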

Extension to general alphabet
We now present some extensions of our methods and results to general sets of symbols. First of all, we extend the definition of PCA to any Polish space S, as was done in [Cas16] for PCA with memory one. The transition kernel T of a PCA with memory two must now satisfy:
• for any Borel set D ∈ B(S), the map (a, b, c) ↦ T(a, b, c; D) is B(S^3)-measurable;
• for any a, b, c ∈ S, the map D ↦ T(a, b, c; D) is a probability measure on (S, B(S)).
For any σ-finite measure µ on S, the transition kernel T is said to be µ-positive if, for µ^3-almost every (a, b, c) ∈ S^3, T(a, b, c; ·) is absolutely continuous with respect to µ, and µ is absolutely continuous with respect to T(a, b, c; ·). In that case, thanks to the Radon–Nikodym theorem, we can define the density of T with respect to µ, that is, a µ^4-measurable positive function t such that, for µ^3-almost every (a, b, c) ∈ S^3, t(a, b, c; ·) is the Radon–Nikodym derivative of T(a, b, c; ·) with respect to µ.
where t is the µ-density of T .
Then, the P-HZPM is invariant by A, where p(·)/µ(p) is the µ-density of P.
Proof. -The proof follows the same idea as that of Theorem 3.1, except that we are now on a Polish space S. Let A be a µ-positive triangular PCA with alphabet S.
Suppose that A has an invariant µ-positive P-HZPM and that (η_t, η_{t+1}) follows a P-HZPM distribution. Then, for any A, B, C, D ∈ B(S), we can compute the probability of the corresponding cylinder event in two ways: on the one hand from the dynamics, and on the other hand from the P-HZPM form.
Then, for any k ≥ 1 and for any Borel sets B_0, B_1, ..., B_k, C_0, ..., C_{k−1}, the same computation applies; thus, the P-HZPM distribution is invariant by A. Now, the problem is reduced to finding an eigenfunction associated to the eigenvalue 1 of some integral operator. While this problem can be solved by Gaussian elimination in the case of a finite space, it is more complicated in the general case. Indeed, such a function does not always exist; but when it does, the solution is unique (up to a multiplicative constant), see the following lemma.
Lemma 8.2 ([Dur10, Theorem 6.8.7]). — Let A be an integral operator with kernel m. If m is the µ-density of a µ-positive transition kernel M from S to S, then A possesses at most one positive eigenfunction in L^1(µ) (up to a multiplicative constant).
Moreover, the previous results concerning the characterization of reversible and quasi-reversible PCA extend to PCA with a general alphabet. The difference is that we now consider µ-positive PCA, and that Cond. 2 and 3 are respectively replaced by two conditions, Cond. 18 and 19, adapted to the µ-density setting. Following the same idea as in [Cas16], many results on PCA with an invariant (F, B)-HZMC can also be generalized to PCA on general alphabets.

Dimensions of the manifolds
In this section, we give the dimensions of T S (p) and of its subsets of (quasi-) reversible PCA, as functions of the cardinal n of the alphabet S (see Table 2.1). But first, we need some results about dimensions of sets of matrices.

Preliminaries: dimensions of sets of matrices with a given eigenvector
Let S be a finite set of size n, and let u, v be two positive probabilities on S. We consider the set M_S(u, v) of positive stochastic matrices M = (m_ij)_{i,j∈S} such that uM = v. The constraints (each row of M sums to 1, and uM = v) give 2n − 1 independent linear equations on the n^2 variables (m_ij)_{i,j∈S}, one of the 2n equations being redundant. So dim M_S(u, v) ≤ n^2 − (2n − 1) = (n − 1)^2.
We do not have the equality yet, because we have the additional condition: ∀ i, j ∈ S, m_ij > 0. Hence, we have to ensure that M_S(u, v) is not empty, and that we are not in any other degenerate case for which the dimension would be strictly smaller than (n − 1)^2. For that, we first exhibit a solution of the system such that m_ij > 0, and then find a neighbourhood around this solution having the dimension we want. First, the matrix M = (v(j))_{i,j∈S} is in M_S(u, v). Now, let s ∈ S be a distinguished element of S, and set S* = S \ {s}. One can check that there exists a neighbourhood V_0 of 0 in R^{(S*)^2} such that, for any (ε_ij : i, j ∈ S*) ∈ V_0, the matrix M = (m_ij)_{i,j∈S} obtained by perturbing the entries indexed by S* × S* by the ε_ij, and adjusting the entries of the row and the column indexed by s, is positive, stochastic, and satisfies uM = v. So, dim M_S(u, v) ≥ |S*|^2 = (n − 1)^2.
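The two steps of this proof can be mirrored numerically (a sketch of ours; the index conventions are an assumption): the constant matrix with all rows equal to v lies in M_S(u, v), and perturbing the block S* × S* while adjusting the last column and row stays inside the set.

```python
import numpy as np

def perturbed_member(u, v, eps):
    """A matrix of M_S(u, v): start from rows equal to v, add a free
    perturbation eps on the (n-1) x (n-1) block, then restore the row sums
    (last column) and the constraint u M = v (last row)."""
    n = len(v)
    M = np.tile(v.astype(float), (n, 1))
    M[:n - 1, :n - 1] += eps                                 # free parameters
    M[:n - 1, n - 1] = 1 - M[:n - 1, :n - 1].sum(axis=1)     # rows sum to 1
    M[n - 1, :] = (v - u[:n - 1] @ M[:n - 1, :]) / u[n - 1]  # u M = v
    return M

u = np.array([0.2, 0.3, 0.5])
v = np.array([0.6, 0.1, 0.3])
M0 = perturbed_member(u, v, np.zeros((2, 2)))     # the constant solution
M1 = perturbed_member(u, v, np.full((2, 2), 0.01))
```

The last row of the output sums to 1 automatically, which reflects the redundancy of one linear equation in the count 2n − 1.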
In the particular case when u = v = p, we write M_S(p) = M_S(p, p), and we obtain the following. Let s ∈ S and define S* = S \ {s}. If, for any k ∈ S*, M_k ∈ M_S(p), then M_s ∈ M_S(p).
Proof. — By (9.1), the coefficients of the matrix M_s can be expressed, for any i, j ∈ S, in terms of those of the matrices (M_k)_{k∈S*}. First, one proves that M_s is stochastic, by summing this relation over j for any i; then, that p is a left-eigenvector of M_s, by summing it against p(i) for any j.
Lemma 9.4. — Let p be a positive probability on S and M = (m_ij) ∈ M_S(p). Then the matrix M̃ = (m̃_ij) defined by m̃_ij = (p(j)/p(i)) m_ji also belongs to M_S(p).
Proof. — First, let us prove that M̃ is stochastic: for any i ∈ S, Σ_{j∈S} m̃_ij = Σ_{j∈S} (p(j)/p(i)) m_ji = 1; then, that p is a left-eigenvector of M̃: for any j ∈ S, Σ_{i∈S} p(i) m̃_ij = Σ_{i∈S} p(j) m_ji = p(j).
Finally, we get the dimension of M^sym_S(p), the subset of matrices of M_S(p) satisfying the reversibility condition p(i) m_ij = p(j) m_ji.
Lemma 9.5. — Let S be a finite set of size n, and let p be a positive probability on S. Then, dim M^sym_S(p) = n(n − 1)/2.
Proof. — First, dim M^sym_S(p) ≤ n(n − 1)/2, because we know by the proof of Lemma 9.1 that we can describe a matrix of the manifold M_S(p) by knowing (m_ij : i, j ∈ S*). But, with the new constraint p(i) m_ij = p(j) m_ji for any i, j ∈ S, it is sufficient to know only (m_ij : i ≤ j, i, j ∈ S*).
Conversely, let us take (m_ij : i ≤ j, i, j ∈ S*) in a neighbourhood V of the point (m_ij = p(j) : i ≤ j, i, j ∈ S*) in R^{n(n−1)/2}. Let us set:
• for any i, j ∈ S*, i > j: m_ij = (p(j)/p(i)) m_ji;
• for any i ∈ S*: m_is = 1 − Σ_{j∈S*} m_ij;
• for any j ∈ S*: m_sj = (p(j) − Σ_{i∈S*} p(i) m_ij)/p(s);
• m_ss = 1 − Σ_{j∈S*} m_sj.
By the same argument as in the proof of Lemma 9.1, there exists a neighbourhood V of dimension n(n − 1)/2 such that, for any point of V, the matrix M defined in this way belongs to M^sym_S(p). These preliminary results will be useful to establish the dimensions of the sets of (quasi-)reversible PCA.
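The recipe above can be implemented directly (a sketch of ours, with S = {0, 1, 2} and s = 2): choose the entries (m_ij : i ≤ j, i, j ∈ S*) freely near p(j), and let detailed balance p(i) m_ij = p(j) m_ji together with stochasticity force the rest.

```python
import numpy as np

def reversible_member(p, free):
    """Build M in M^sym_S(p) from the free entries free[(i, j)], i <= j in S*;
    the remaining coefficients follow the recipe of Lemma 9.5."""
    n = len(p)
    s = n - 1                                 # the distinguished element
    M = np.zeros((n, n))
    for i in range(s):
        for j in range(i, s):
            M[i, j] = free[(i, j)]
            M[j, i] = p[i] * M[i, j] / p[j]   # detailed balance on S* x S*
    M[:s, s] = 1 - M[:s, :s].sum(axis=1)      # rows of S* sum to 1
    M[s, :s] = p[:s] * M[:s, s] / p[s]        # detailed balance with s
    M[s, s] = 1 - M[s, :s].sum()
    return M

p = np.array([0.2, 0.3, 0.5])
M = reversible_member(p, {(0, 0): 0.2, (0, 1): 0.3, (1, 1): 0.3})
```

Stochasticity and the left-eigenvector property then come for free from detailed balance, as in the proof of Lemma 9.4.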

Dimensions of T_S(p) and its subsets
Theorem 9.6. -Let S be a set of size n, and p be a positive probability on S.
(2). - By Theorem 3.1, as A ∈ T_S(p), for any a, c ∈ S, (T(a, b, c; d))_{b,d∈S} ∈ M_S(p).

(3). - The proof is similar to the proof of Lemma 9.5.
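As a one-instance sanity check of this slice property (not a proof of Theorem 3.1), one can verify it on the simplest kernel, the product kernel T(a, b, c; d) = p(d), which also serves later as the base point of the lower-bound arguments:

```python
import numpy as np

# For T(a, b, c; d) = p(d), every slice (T(a, b, c; d))_{b, d} with a, c
# fixed is a positive stochastic matrix with left-eigenvector p, i.e. an
# element of M_S(p).
p = np.array([0.2, 0.3, 0.5])
n = len(p)
T = np.broadcast_to(p, (n, n, n, n)).copy()   # T[a, b, c, d] = p[d]

for a in range(n):
    for c in range(n):
        slab = T[a, :, c, :]                  # matrix indexed by (b, d)
        assert np.all(slab > 0)
        assert np.allclose(slab.sum(axis=1), 1)   # stochastic in d
        assert np.allclose(p @ slab, p)           # p is a left-eigenvector
```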
(1) OEIS A006528. (2) OEIS A037270. (3) OEIS A002817.

(4). - As before, for any a, c ∈ S, (T(a, b, c; d))_{b,d∈S} ∈ M_S(p). Hence, we can choose freely a collection of |S*|² matrices ((T(a, b, c; d))_{b,d∈S} ∈ M_S(p) : a, c ∈ S*). Then, by Lemma 9.3, the matrices ((T(s, b, c; d))_{b,d∈S} : c ∈ S*) and ((T(a, b, s; d))_{b,d∈S} : a ∈ S*) are uniquely defined and belong to M_S(p). Finally, the last matrix (T(s, b, s; d))_{b,d∈S} can be obtained by two different methods, which define the same matrix in the end (the proof is similar to the one of Lemma 9.1).

Hence, T(a, b, c; d) = T(c, b, a; d). So, once a matrix (T(a, b, c; d))_{b,d∈S} is known, the matrix (T(c, b, a; d))_{b,d∈S} is known and belongs to M_S(p) by Lemma 9.4. So, we can just choose freely the matrices (T(a, b, c; d))_{b,d∈S} ∈ M_S(p) with a ≤ c. That is why the dimension is the same as for v-reversible matrices.

Then, for any a, c ∈ S, (T(a, b, c; d))_{b,d∈S} ∈ M^sym_S(p) and, moreover, these matrices can be chosen freely; so the dimension follows from Lemma 9.5. To conclude, we just have to say that, for any a, c ∈ S, we can take freely (T(a, b, c; d))_{b,d∈S}.

Let us however mention that getting the dimension of the union of these sets requires more work.

We recall that S is a set of size n, s a given element of S, and p a positive probability measure on S. We denote S* = S\{s}.
The proofs of the last three points of Theorem 9.6 are long, because they consist in reducing an affine system of |S|⁴ equations in |S|⁴ variables (containing some redundant equations) into one containing only independent equations and describing the same manifold. Furthermore, we must ensure that there exists a solution with positive coefficients, and that we are not in a degenerate case (see the discussion in the middle of the proof of Lemma 9.1). Since the proofs of the three points are similar, but not exactly the same, we first define some conditions that are useful in the three cases; then we detail the proof of Point (9); and finally, we focus on the differences between the two other cases and Point (9).

Preliminary results
This section is technical and should be seen as a reference for the sections that follow; it can thus be skipped on a first reading.
First, we define some conditions on the transition kernel T .
⇐. - Now, suppose that Cond. 20 and Cond. 24* hold. Cond. 21 holds by Lemma 9.9. We will prove that Cond. 24 holds. It is obvious when a, b, c, d ∈ S*, by Cond. 24*. Furthermore, we have the following properties.
For any a, b, c ∈ S*, Cond. 24 also holds for the quadruples involving s, in particular for (a, b, c; s) and (c, s, a; b).

Proof. - This proof is half algebraic and half combinatorial. The goal is to use Cond. 24* to split the set {T(a, b, c; d) : a, b, c, d ∈ S*} into subsets such that the variables in each subset depend on only one free parameter. The partition is described in Table 9.1. One can check that, in each subset of this partition, there is exactly one free variable according to Cond. 24*; see Table 9.1 for the equations that connect them. Now, the dimension is just the size of this partition, whose enumeration is done in Table 9.1. By adding the fourth column, we find

dim{(T(a, b, c; d) : a, b, c, d ∈ S*) : Cond. 24*} = n(n − 1)(n² − 3n + 4)/4.

(Table 9.1: partition of {T(a, b, c; d) : a, b, c, d ∈ S*} according to Cond. 24*. Each line details one type of subset involved in the partition: the first column is the subset type; the second gives the equations that connect the variables in the subset, obtained by specialisations of Cond. 24*; the third column gives conditions on the arguments to get distinct subsets when we enumerate them; the fourth column is the number of subsets of that type.)

To get the lower bound for dim({A ∈ T_S(p) : A is r-reversible}), we use a trick similar to the one in the proof of Lemma 9.1. We first remark that T(a, b, c; d) = p(d) is a solution; then, by all the previous equations, it is not difficult to construct a neighbourhood whose dimension is dim{(T(a, b, c; d) : a, b, c, d ∈ S*) : Cond. 24*} and on which we do not lose the positivity of T(a, b, c; d) for any a, b, c, d ∈ S. We thus obtain dim({A ∈ T_S(p) : A is r-reversible}) ≥ dim{(T(a, b, c; d) : a, b, c, d ∈ S*) : Cond. 24*}.

TOME 3 (2020)
That ends the proof of Point (9) of Theorem 9.6.

9.3.3. Proof of (10) of Theorem 9.6 (r ∘ v-reversible)

The proof of (10) is similar to the one of (9). Hence, we omit the parts of the proof that are the same, and only detail the partition in Lemma 9.14, because it differs from the one of Lemma 9.12.
The conditions we will need here are the two following ones.

Proof. - The proof is similar to the one of Lemma 9.12, except that the variable space is not partitioned in the same way. The new partition (based on Cond. 25*) and its enumeration are given in Table 9.2. Thus, the size of this partition is (n − 1)²(n² − 2n + 2)/2. The end of the proof is like those of Lemmas 9.1 and 9.12: it consists in checking that there exists a neighbourhood of the point (T(a, b, c; d) = p(d) : a, b, c, d ∈ S), of the right dimension, such that any point of this neighbourhood satisfies the required conditions.

9.3.4. Proof of (11) of Theorem 9.6 (D_4-reversible)

The proof of Point (11) is similar to the two previous ones. We begin by introducing the two new following conditions.
Proof. - As before, the main argument is to find the partition of T based on Cond. 24* and Cond. 26*. This partition and its enumeration are given in Table 9.3, and the size of this partition is n(n − 1)(n² − n + 2).

(Table 9.2: partition of {T(a, b, c; d) : a, b, c, d ∈ S*} according to Cond. 25*.)