Matrix Hermite polynomials, random determinants and the geometry of Gaussian fields

We study generalized Hermite polynomials with rectangular matrix arguments arising in multivariate statistical analysis and the theory of zonal polynomials. We show that these are well suited for expressing the Wiener-Itô chaos expansion of functionals of the spectral measure associated with Gaussian matrices. In particular, we obtain the Wiener chaos expansion of Gaussian determinants of the form $\det(XX^T)^{1/2}$ and prove that, in the setting where the rows of $X$ are i.i.d. centred Gaussian vectors with a given covariance matrix, its projection coefficients admit a geometric interpretation in terms of intrinsic volumes of ellipsoids, thus extending the results of Kabluchko and Zaporozhets (2012) to arbitrary chaotic projection coefficients. Our proofs are based on a crucial relation between generalized Hermite polynomials and generalized Laguerre polynomials. In a second part, we introduce the matrix analog of the classical Mehler formula for the Ornstein-Uhlenbeck semigroup and prove that matrix-variate Hermite polynomials are eigenfunctions of these operators. As a byproduct, we derive an orthogonality relation for Hermite polynomials evaluated at correlated Gaussian matrices. We apply our results to vectors of independent arithmetic random waves on the three-torus, proving in particular a CLT in the high-energy regime for a generalized notion of total variation on the full torus.

We denote by R^{ℓ×n} the vector space of ℓ × n matrices with entries in R, with Id_n denoting the identity matrix of dimension n. We write P_n(R) for the space of positive-definite symmetric matrices of dimension n. For X ∈ R^{ℓ×n}, we denote by Vec(X) its vectorisation, that is, the vector in R^{ℓn} obtained from X by stacking its columns, and set etr(X) := e^{tr(X)}, where tr(X) is the trace of X. We write φ^{(ℓ,n)} for the probability density function of X ∈ R^{ℓ×n} with i.i.d. standard normal entries, given by $\varphi^{(\ell,n)}(X) = (2\pi)^{-n\ell/2}\,\mathrm{etr}\left(-\tfrac{1}{2}XX^T\right)$.
In this case, we write X ∼ N ℓ×n (0, Id ℓ ⊗ Id n ) and refer to it as the standard normal matrix distribution. Here, ⊗ denotes the usual Kronecker product of matrices. When ℓ = n = 1, we write φ (1,1) =: φ for the standard Gaussian density on R.
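As a numerical sanity check (not part of the original text), the following Python sketch verifies that the matrix-normal density above coincides with the product of univariate standard Gaussian densities over the entries, since tr(XX^T) is the sum of the squared entries:

```python
import numpy as np

rng = np.random.default_rng(0)
l, n = 2, 3  # matrix dimensions (illustrative choice)

def phi_matrix(X):
    """Density of the standard normal matrix distribution N_{l x n}(0, Id_l (x) Id_n)."""
    l, n = X.shape
    return (2 * np.pi) ** (-n * l / 2) * np.exp(-0.5 * np.trace(X @ X.T))

def phi_entrywise(X):
    """Product of univariate standard normal densities over all entries."""
    v = X.flatten()
    return np.prod(np.exp(-v**2 / 2) / np.sqrt(2 * np.pi))

X = rng.standard_normal((l, n))
assert np.isclose(phi_matrix(X), phi_entrywise(X))
```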
For numerical sequences {a_n}, {b_n}, we write a_n = O(b_n) or a_n ≪ b_n to indicate that there exists an absolute constant C > 0 such that |a_n| ≤ C|b_n|, and a_n = o(b_n) to indicate that a_n/b_n → 0 as n → ∞. Throughout this paper, we assume that every random object is defined on a common probability space (Ω, F, P) and write E[·] and Var[·] for the mathematical expectation and variance with respect to P, respectively.

Introduction
In applications to stochastic geometry dealing with the asymptotic analysis of local geometric quantities associated with Gaussian random fields on manifolds, one is often confronted with expressions involving quantities of the type F(X), where X is a rectangular centred Gaussian matrix and F is a certain spectral function, that is, F only depends on the spectral measure associated with the matrix XX^T. For instance, if Z = {Z(x) : x ∈ M} is an ℓ-dimensional stationary Gaussian field on a manifold M of dimension n (with 1 ≤ ℓ ≤ n), the nodal volume of Z over a region R ⊂ M typically has the form $\int_R \delta_0(Z(x))\, F(J_Z(x))\, \mathrm{vol}_M(dx)$, where δ_0 indicates the Dirac mass at zero, J_Z(x) stands for the Jacobian matrix of Z computed at x and F(X) = det(XX^T)^{1/2}; see for instance [Wig10, MPRW16, MRW20, NPR19, Not21] for some distinguished examples. While objects of this type are amenable to analysis by Wiener-Itô chaos expansions (which involve in particular the decomposition of F(J_Z(x)) into Hermite polynomials having the entries of J_Z(x) as arguments, see the references above), it is to be expected that such a technique will generate combinatorially intractable expressions for large values of the dimensions ℓ and n (see for instance [DEL21] and [Not21]). The aim of this paper is to tackle this difficulty directly by initiating a systematic study of chaotic expansions for spectral random variables F(X) as above by means of matrix-variate Hermite polynomials, that is, a collection of orthogonal polynomials with matrix entries which are indexed by partitions of integers, obtained by orthogonalizing matrix monomials of the type tr([XX^T]^s) with respect to the law of a Gaussian matrix. We will see that matrix-variate Hermite polynomials inherit the rich combinatorial structure of zonal polynomials, introduced in [Jam61], in terms of which they can in fact be defined, thus allowing one to deduce explicit formulae in any dimension.
We now describe the principal achievements of the present work.
(a) In [Tha93] (see also the related work [Koc96]), the author studies Hermite expansions of functions of the form F(x) = f_0(‖x‖)P(x) on R^n, where f_0 is a function depending only on the norm ‖x‖ and P is a harmonic polynomial. In particular, in that work, the author provides explicit formulae for the projection coefficients associated with the Wiener-Itô chaos expansion of functionals F as above in terms of Laguerre polynomials on the real line. In Theorem 3.2, we extend this framework by studying matrix-Hermite expansions of radial functionals of the type F(X) = f_0(XX^T) on matrix spaces. Our results involve generalized Laguerre polynomials with matrix argument, thus yielding a natural counterpart to the work by Thangavelu [Tha93] in higher dimensions.
(b) In [ZK12] (see Theorem 1.1 therein), Kabluchko and Zaporozhets establish a formula for the expected value of Gaussian determinants of the form F(X) = det(XX^T)^{1/2} in terms of mixed volumes and intrinsic volumes of ellipsoids associated with the covariance matrices of the underlying Gaussian vectors, yielding in particular an expression for the projection of F onto the Gaussian Wiener chaos of order zero associated with X (which corresponds to the expectation). In Theorem 3.5 and Theorem 3.6 of the present paper, we substantially extend their framework by considering arbitrary projection coefficients of the form $E[F(X)\, H^{(\ell,n)}_\kappa(X)]$ (where X is a centred Gaussian matrix of dimension ℓ × n) associated with such random determinants. Our results can be formulated using integration on the so-called Stiefel manifold (see Theorem 3.5), which can subsequently be interpreted in terms of mixed and intrinsic volumes (see Theorem 3.6).
(c) In Section 3.3, we introduce a collection of operators on matrix spaces via a Mehler-type formula, whose definition parallels that of the classical Ornstein-Uhlenbeck semigroup on the Euclidean space R^n. In Theorem 3.10, we provide a characterization of matrix-Hermite polynomials as eigenfunctions of these operators, thus yielding a direct analog of the action of the Ornstein-Uhlenbeck semigroup on classical Hermite polynomials on the real line. We subsequently use Theorem 3.10 in order to deduce an intrinsic orthogonality relation between two matrix-Hermite polynomials evaluated at correlated Gaussian matrices. Such a result extends the classical orthogonality relation for matrix-Hermite polynomials as well as the case of Hermite polynomials on the real line. Conjecturally, the objects and techniques introduced in Section 3.3 provide the basis for a special Malliavin calculus on matrix spaces via the introduction of further operators, such as Malliavin derivatives, their adjoints and generators of the Ornstein-Uhlenbeck semigroup (see e.g. [Nua95, NP12]). Such a program of study, however, largely falls outside the scope of the present paper and is left open for further research.
(d) In Section 3.4, we apply our results to the study of the generalized total variation of multi-dimensional Gaussian random fields, defined as the integral of the square root of the Gramian determinant of the normalized Jacobian matrix. More specifically, we study the high-energy behaviour of the generalized total variation of multiple independent Arithmetic Random Waves on the three-torus. In particular, in Theorem 3.17 we compute its expectation, derive an asymptotic law for its variance, and establish a Central Limit Theorem for the suitably normalized total variation. Our arguments rely on the expansion of the total variation in matrix-Hermite polynomials, allowing us to prove that its probabilistic fluctuations are entirely characterized by its projection on the second Wiener chaos. Throughout this application, we also make use of variance expansions of radial functionals by means of their projection coefficients (see Proposition 3.3). Our findings are to be compared with Theorem 1 of [PR18], where the authors prove a CLT for the Leray measure of Arithmetic Random Waves on the two-torus.
The organization of the paper is as follows: in Section 2, we present preliminary notions that will be used in our proofs, notably on zonal polynomials and generalized Laguerre polynomials (Section 2.1), polar matrix factorizations (Section 2.2) and tools from integral geometry such as mixed volumes, intrinsic volumes of convex bodies and general facts about ellipsoids (Section 2.3). Our main contributions are presented in Section 3. Finally, Section 4 is devoted to the proofs of our results.

Zonal polynomials and generalized Laguerre polynomials

The reader is referred for instance to the books [MPH95] and [Chi03] for a thorough introduction to zonal polynomials. Let us now fix integers ℓ ≥ 1 and k ≥ 0. We write κ ⊢ k to denote a partition κ of k into no more than ℓ integer parts (note that such a notation does not involve the integer ℓ, whose role should be understood from the context), that is, $\kappa = (k_1, \dots, k_\ell)$ with $k_1 \ge k_2 \ge \dots \ge k_\ell \ge 0$ and $k_1 + \dots + k_\ell = k$.
Let S ∈ R^{ℓ×ℓ} be a symmetric matrix with eigenvalues s_1, …, s_ℓ. For an integer k ≥ 1, we denote by Pol_k(S) the space of homogeneous polynomials of degree k in the ℓ(ℓ+1)/2 distinct entries of S. For an invertible matrix L ∈ R^{ℓ×ℓ}, the transformation S ↦ LSL^T induces a representation π of GL_ℓ(R) into the vector space GL(Pol_k(S)) of isomorphisms from Pol_k(S) to itself ([Chi03, Eq.(A.2.1)]). It can be shown that Pol_k(S) decomposes as the direct sum ([Chi03, p.297])
$$\mathrm{Pol}_k(S) = \bigoplus_{\kappa \vdash k} V_\kappa(S),$$
where the subspaces {V_κ(S) : κ ⊢ k} are irreducible and π-invariant. Since tr(S)^k is a homogeneous symmetric polynomial of degree k in the eigenvalues of S, it can accordingly be decomposed along the spaces V_κ(S) as ([MPH95, Eq.(4.3.38)])
$$\mathrm{tr}(S)^k = \sum_{\kappa \vdash k} C_\kappa(S), \qquad (2.2)$$
where C_κ(S) denotes the zonal polynomial associated with the partition κ of k, that is, C_κ(S) is the projection of tr(S)^k onto the space V_κ(S). Applying (2.2) with ℓ = 1 gives C_{(k)}(s) = s^k, so that zonal polynomials can be interpreted as a generalization of classical monomials; in particular, evaluating at s = 1 yields C_{(k)}(1) = 1. Zonal polynomials satisfy a generalized binomial formula ([MPH95, Eq.(4.5.1)])
$$\frac{C_\kappa(\mathrm{Id}_\ell + S)}{C_\kappa(\mathrm{Id}_\ell)} = \sum_{s=0}^{k} \sum_{\sigma \vdash s} \binom{\kappa}{\sigma} \frac{C_\sigma(S)}{C_\sigma(\mathrm{Id}_\ell)}. \qquad (2.3)$$
This relation in particular defines the generalized binomial coefficients $\binom{\kappa}{\sigma}$. Taking S = a Id_ℓ for a ∈ R in (2.3) and using the homogeneity property of zonal polynomials gives
$$(a+1)^k = \sum_{s=0}^{k} \sum_{\sigma \vdash s} a^s \binom{\kappa}{\sigma}.$$
In particular, using the usual binomial formula for real numbers on the left-hand side, one deduces a relation linking classical and generalized binomial coefficients ([MPH95, Eq.(4.5.2)]):
$$\sum_{\sigma \vdash s} \binom{\kappa}{\sigma} = \binom{k}{s}.$$
A table of generalized binomial coefficients up to order 5 can be found in Table 4.4.1 of [MPH95]. For X ∈ R^{ℓ×n}, writing $t_j := t_j(X) := \mathrm{tr}([XX^T]^j)$ (2.5), zonal polynomials associated with a partition κ ⊢ k and matrix argument XX^T can be decomposed into monomials in the power sums t_j with numerical coefficients ([MPH95, Theorem 4.3.6]) (2.4). For partitions up to order 2, one has (see e.g. [MPH95, Table 4.3.1], which lists the zonal polynomials up to order 3)
$$C_{(1)}(XX^T) = t_1, \qquad C_{(2)}(XX^T) = \tfrac{1}{3}\left(t_1^2 + 2t_2\right), \qquad C_{(1,1)}(XX^T) = \tfrac{2}{3}\left(t_1^2 - t_2\right).$$
In particular, since for every j ∈ [k], t_j(X)^{ν_j} is a homogeneous polynomial of degree 2jν_j in the entries of X, it follows from (2.4) that C_κ(XX^T) is a homogeneous polynomial of degree 2k in the entries of X. Zonal polynomials evaluated at the identity matrix Id_ℓ admit an explicit product evaluation ([Chi03, Eq.(A.2.7)]) in terms of the generalized Pochhammer symbol
$$(a)_\kappa := \prod_{i=1}^{p} \left(a - \frac{i-1}{2}\right)_{k_i}, \qquad (a)_n = a(a+1)\cdots(a+n-1), \qquad (2.7)$$
where p = p(κ) is the number of non-zero parts in κ, a ∈ C, and (a)_n stands for the classical Pochhammer symbol. The product of two zonal polynomials associated with partitions τ ⊢ t and σ ⊢ s, respectively, can be expanded as a linear combination of zonal polynomials of order t + s ([MPH95, Eq.(4.3.65)]); we write a^κ_{τ,σ} for the coefficients of this expansion (2.8).

Generalized Laguerre polynomials. For a symmetric matrix S ∈ R^{ℓ×ℓ}, the generalized Laguerre polynomial of order γ > −1 associated with a partition κ of k and matrix variable S is defined through a binomial-type expansion into zonal polynomials ([MPH95, Eq.(4.6.5)]) (2.10).
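As a numerical illustration (not part of the original text), the decomposition tr(S)^k = Σ_{κ⊢k} C_κ(S) can be checked for k = 2, using the standard expressions of the order-2 zonal polynomials in the power sums t_j = tr(S^j) recalled from [MPH95, Table 4.3.1]:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = A + A.T  # a random symmetric matrix

t1 = np.trace(S)
t2 = np.trace(S @ S)

# Zonal polynomials of order 2, written in the power sums t_j = tr(S^j)
C_2  = (t1**2 + 2 * t2) / 3
C_11 = 2 * (t1**2 - t2) / 3

# Decomposition tr(S)^k = sum over partitions of k of C_kappa(S), here k = 2
assert np.isclose(C_2 + C_11, t1**2)
```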

The first Laguerre polynomials associated with partitions up to order three are listed in (4.6.8) of [MPH95]. The generalized Laguerre polynomials define a class of orthogonal polynomials on P_ℓ(R) with respect to the weight function etr(−R) det(R)^γ: for all integers k, l ≥ 0 and all partitions κ ⊢ k, σ ⊢ l, the polynomials L^γ_κ and L^γ_σ are orthogonal with respect to this weight unless κ = σ ([MPH95, Theorem 4.6.4]), where ν(dR) denotes the Lebesgue measure on P_ℓ(R). Here, for a ∈ R, Γ_ℓ(a) denotes the multivariate Gamma function, defined by
$$\Gamma_\ell(a) = \pi^{\ell(\ell-1)/4} \prod_{j=1}^{\ell} \Gamma\left(a - \frac{j-1}{2}\right),$$
where Γ(·) is the usual Gamma function. A useful formula that we will use on several occasions is the following Laplace-transform identity (see e.g. [MPH95, Theorem 4.4.1]):
$$\int_{P_\ell(\mathbb{R})} \mathrm{etr}(-AR)\, \det(R)^{t-(\ell+1)/2}\, C_\kappa(BR)\, \nu(dR) = (t)_\kappa\, \Gamma_\ell(t)\, \det(A)^{-t}\, C_\kappa(BA^{-1}),$$
where A ∈ C^{ℓ×ℓ} is a complex symmetric matrix with positive-definite real part, B ∈ C^{ℓ×ℓ} is a complex symmetric matrix and t is such that ℜ(t) > (ℓ−1)/2.
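The multivariate Gamma function admits a direct implementation; as a sanity check (not part of the original text), the sketch below verifies that ℓ = 1 recovers the classical Gamma function, and that Γ_2(3/2) = √π · Γ(3/2) · Γ(1) = π/2:

```python
import math

def gamma_ell(ell, a):
    """Multivariate Gamma function: pi^{l(l-1)/4} * prod_{j=1}^{l} Gamma(a - (j-1)/2)."""
    return math.pi ** (ell * (ell - 1) / 4) * math.prod(
        math.gamma(a - (j - 1) / 2) for j in range(1, ell + 1)
    )

# l = 1 recovers the classical Gamma function
assert math.isclose(gamma_ell(1, 2.7), math.gamma(2.7))
# l = 2: Gamma_2(a) = sqrt(pi) * Gamma(a) * Gamma(a - 1/2), so Gamma_2(3/2) = pi/2
assert math.isclose(gamma_ell(2, 1.5), math.pi / 2)
```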

Polar decomposition for matrices
Let 1 ≤ ℓ ≤ n be integers. For X = (X_ij) ∈ R^{ℓ×n}, we denote by dX := (dX_ij) its associated differential matrix. We endow the spaces R^{ℓ×n} and P_ℓ(R) with the measures $(dX) := \prod_{i=1}^{\ell}\prod_{j=1}^{n} dX_{ij}$ and $\nu(dR) := \prod_{1 \le i \le j \le \ell} dR_{ij}$, respectively. Assuming that the rows of X are linearly independent, the polar decomposition of X is uniquely given by (see for instance [Dow72])
$$X = R^{1/2}U, \qquad R := XX^T \in P_\ell(\mathbb{R}), \quad U := R^{-1/2}X \in O(n,\ell), \qquad (2.13)$$
where R^{1/2} denotes the positive square root of R, that is, the unique matrix B ∈ P_ℓ(R) such that B² = R; we also define R^{−1/2} := (R^{1/2})^{−1}. The space O(n, ℓ) in (2.13) denotes the so-called Stiefel manifold of matrices Y ∈ R^{ℓ×n} such that YY^T = Id_ℓ, that is, Y has orthonormal rows. An element of O(n, ℓ) is called an ℓ-frame in R^n, see for instance [Chi03, p.8]. The matrices R and U in (2.13) are, respectively, the radial part and the orientation of X, and hence the decomposition X = R^{1/2}U generalizes the standard polar factorization for vectors (obtained for ℓ = 1).
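The polar factorization X = R^{1/2}U can be computed numerically; the following sketch (not part of the original text) verifies that U is an ℓ-frame and that the factorization reproduces X:

```python
import numpy as np

rng = np.random.default_rng(2)
l, n = 2, 4
X = rng.standard_normal((l, n))  # rows are a.s. linearly independent

R = X @ X.T                       # radial part, an element of P_l(R)
w, V = np.linalg.eigh(R)          # spectral decomposition of R
R_half = V @ np.diag(np.sqrt(w)) @ V.T        # positive square root R^{1/2}
R_minus_half = V @ np.diag(1 / np.sqrt(w)) @ V.T
U = R_minus_half @ X              # orientation, an l-frame in R^n

# U lies on the Stiefel manifold O(n, l), and X = R^{1/2} U
assert np.allclose(U @ U.T, np.eye(l))
assert np.allclose(R_half @ U, X)
```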
Haar measure on the Stiefel manifold. The family of Stiefel manifolds O(n, ℓ) contains as special cases the unit sphere O(n, 1) = S^{n−1} in R^n and the orthogonal group O(n, n) = O(n). The space O(n, ℓ) is a compact manifold of dimension nℓ − ℓ − ℓ(ℓ−1)/2, realized as the homogeneous space O(n)/O(n−ℓ). The Stiefel manifold is endowed with a left- and right-invariant Haar measure µ, that is, for every P ∈ O(n) and every Q ∈ O(ℓ), the transformation U ↦ PUQ leaves µ invariant for U ∈ O(n, ℓ). Remark that our notation for µ does not display ℓ and n, which should be understood from the context. We refer the reader for instance to [Chi03] or [Mui82] for more details. The total mass of µ is given by
$$\int_{O(n,\ell)} \mu(dU) = \frac{2^\ell \pi^{n\ell/2}}{\Gamma_\ell(n/2)}.$$
The normalised measure μ̄ hence defines a left- and right-invariant probability measure on O(n, ℓ). We call it the Haar probability measure on O(n, ℓ).
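As a check of the total-mass formula (not part of the original text), the sketch below evaluates 2^ℓ π^{nℓ/2}/Γ_ℓ(n/2) in the special cases O(3, 1) = S² (surface area 4π) and O(1, 1) = {−1, +1} (total mass 2); the helper `gamma_ell` implements the standard product formula for the multivariate Gamma function:

```python
import math

def gamma_ell(ell, a):
    """Multivariate Gamma function (standard product formula)."""
    return math.pi ** (ell * (ell - 1) / 4) * math.prod(
        math.gamma(a - (j - 1) / 2) for j in range(1, ell + 1)
    )

def stiefel_volume(n, ell):
    """Total mass 2^l * pi^{nl/2} / Gamma_l(n/2) of the invariant measure on O(n, l)."""
    return 2 ** ell * math.pi ** (n * ell / 2) / gamma_ell(ell, n / 2)

# l = 1: O(n, 1) = S^{n-1}, with surface area 2 pi^{n/2} / Gamma(n/2)
assert math.isclose(stiefel_volume(3, 1), 4 * math.pi)   # area of S^2
# l = n = 1: O(1, 1) = {-1, +1}, total mass 2
assert math.isclose(stiefel_volume(1, 1), 2.0)
```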

Intrinsic volumes, mixed volumes and ellipsoids
Intrinsic volumes and mixed volumes. We present two important notions from integral geometry: intrinsic and mixed volumes. We mainly follow [SW08] for this part (see in particular Section 14.2 therein). For an integer n ≥ 1, we denote by K^n the set of convex bodies in R^n. We write B^n for the unit ball in R^n and vol_n for the n-dimensional volume measure on R^n. For K ∈ K^n and ε > 0, we write K + εB^n for the parallel body of K at distance ε. Steiner's formula ([SW08, Eq.(14.5)]) asserts that its volume is a polynomial of degree n in ε,
$$\mathrm{vol}_n(K + \varepsilon B^n) = \sum_{j=0}^{n} \varepsilon^{n-j}\, \kappa_{n-j}\, V_j(K), \qquad (2.15)$$
where the coefficients {V_j(K) : j = 0, …, n} denote the intrinsic volumes of K; we set V_j(∅) := 0. For instance, when n = 2, V_2(K) is the area, V_1(K) is half the boundary length and V_0(K) is the Euler characteristic of K. Moreover, for every n ≥ 1, we have V_n(K) = vol_n(K), that is, the n-th intrinsic volume coincides with the n-dimensional volume measure. The intrinsic volumes of the unit ball B^n are given by ([SW08, Eq.(14.8)])
$$V_j(B^n) = \binom{n}{j}\frac{\kappa_n}{\kappa_{n-j}}, \qquad \kappa_n := \mathrm{vol}_n(B^n) = \frac{\pi^{n/2}}{\Gamma(1+n/2)}.$$
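Steiner's formula can be checked directly for K = B^n, where vol_n(B^n + εB^n) = (1+ε)^n κ_n; the following sketch (not part of the original text) verifies this against the intrinsic volumes of the ball stated above:

```python
import math

def kappa(n):
    """Volume kappa_n = pi^{n/2} / Gamma(1 + n/2) of the unit ball B^n."""
    return math.pi ** (n / 2) / math.gamma(1 + n / 2)

def V(j, n):
    """Intrinsic volumes of the unit ball: V_j(B^n) = C(n, j) kappa_n / kappa_{n-j}."""
    return math.comb(n, j) * kappa(n) / kappa(n - j)

n, eps = 4, 0.3
# Steiner's formula for K = B^n: vol_n(B^n + eps B^n) = (1 + eps)^n kappa_n
steiner = sum(eps ** (n - j) * kappa(n - j) * V(j, n) for j in range(n + 1))
assert math.isclose(steiner, (1 + eps) ** n * kappa(n))
```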
For an integer 1 ≤ j ≤ n, we denote by G(n, j) the Grassmannian of j-dimensional linear subspaces of R^n. It carries a unique invariant Haar probability measure ν_{n,j}. One possible way to realize the Grassmannian is as the quotient space G(n, j) = O(n, j)/O(j), where two elements U_1, U_2 ∈ O(n, j) are equivalent if and only if there exists an orthogonal matrix Q ∈ O(j) such that U_1 = QU_2, see for instance [Chi03]. Intrinsic volumes admit a useful integral representation, known as Kubota's formula ([SW08, Eq.(6.11)]),
$$V_j(K) = \binom{n}{j}\frac{\kappa_n}{\kappa_j \kappa_{n-j}} \int_{G(n,j)} \mathrm{vol}_j(K|L)\, \nu_{n,j}(dL),$$
where K|L stands for the image of the orthogonal projection of K onto L ∈ G(n, j), and integration is with respect to the Haar probability measure on G(n, j). Let m ≥ 1 and consider m convex bodies K_1, …, K_m ∈ K^n. Then, for real numbers λ_1, …, λ_m ≥ 0, the n-dimensional volume of the Minkowski sum λ_1K_1 + … + λ_mK_m is a homogeneous polynomial of degree n in the variables λ_1, …, λ_m ([SW08, Eq.(14.7)]),
$$\mathrm{vol}_n(\lambda_1 K_1 + \dots + \lambda_m K_m) = \sum_{i_1, \dots, i_n = 1}^{m} \lambda_{i_1} \cdots \lambda_{i_n}\, V(K_{i_1}, \dots, K_{i_n}),$$
for uniquely determined symmetric coefficients V(K_{i_1}, …, K_{i_n}). These coefficients are called the mixed volumes of the convex bodies K_{i_1}, …, K_{i_n}. This formula is a generalization of Steiner's formula (2.15). Whenever we have mixed volumes involving only two distinct convex bodies K_1 and K_2, we use the short-hand notation (2.17) in which the multiplicities of K_1 and K_2 are indicated in brackets.

General facts about ellipsoids. We will need some preliminaries about a particular type of convex bodies, namely ellipsoids, see e.g. [ZK12]. Let Σ ∈ R^{n×n} be a positive-definite symmetric matrix. We define the ellipsoid E_Σ of R^n represented by the matrix Σ, obtained as an affinity of the unit n-dimensional ball B^n, that is, E_Σ = {Σ^{1/2}y : y ∈ B^n}.
In particular, its n-dimensional volume is given by
$$\mathrm{vol}_n(E_\Sigma) = \kappa_n \det(\Sigma)^{1/2}.$$
For any non-degenerate linear transformation represented by a matrix A ∈ R^{n×n}, the ellipsoid AE_Σ = {Ax : x ∈ E_Σ} is represented by the matrix AΣA^T, that is, AE_Σ = E_{AΣA^T}. Let L ∈ G(n, ℓ) be an ℓ-dimensional linear subspace of R^n and denote by L ∈ O(n, ℓ) any matrix whose rows form an orthonormal basis of L. Then, the image of the orthogonal projection of E_Σ onto L, written E_Σ|L, is an ellipsoid in R^ℓ that is represented by the matrix LΣL^T ∈ R^{ℓ×ℓ}. In particular, combining this fact with Kubota's formula yields integral representations of the intrinsic volumes of E_Σ.
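The volume formula vol_n(E_Σ) = κ_n det(Σ)^{1/2} can be checked by Monte Carlo integration; the following sketch (not part of the original text, with arbitrary test parameters) samples points in a bounding box and compares the hit fraction against the exact value:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)     # a random positive-definite matrix

kappa_n = math.pi ** (n / 2) / math.gamma(1 + n / 2)
exact = kappa_n * math.sqrt(np.linalg.det(Sigma))

# E_Sigma = {Sigma^{1/2} y : y in B^n} = {x : x^T Sigma^{-1} x <= 1}
half_width = math.sqrt(np.max(np.linalg.eigvalsh(Sigma)))  # bounding box half-width
N = 200_000
pts = rng.uniform(-half_width, half_width, size=(N, n))
inside = np.einsum('ij,jk,ik->i', pts, np.linalg.inv(Sigma), pts) <= 1
estimate = inside.mean() * (2 * half_width) ** n
assert abs(estimate - exact) / exact < 0.05
```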

Wiener-chaos expansion of matrix-variate functions
Hermite polynomials on the real line. Let m ≥ 1 be an integer and X = (X_1, …, X_m) be a standard m-dimensional Gaussian vector. For α = (α_1, …, α_m) ∈ N^m, we write α! := α_1! ⋯ α_m! and |α| := α_1 + … + α_m, and define the multivariate Hermite polynomial associated with the vector (X_1, …, X_m) as the tensor product of univariate Hermite polynomials,
$$H_\alpha(X) := \prod_{l=1}^{m} H_{\alpha_l}(X_l),$$
where H_{α_l} denotes the Hermite polynomial of order α_l on the real line. It is well known that the normalised Hermite polynomials {(k!)^{−1/2} H_k : k ≥ 0} form a complete orthonormal system of L²(φ) := L²(R, φ(z)dz) (see e.g. [NP12]). This implies that the collection of normalised multivariate Hermite polynomials forms a complete orthonormal system of L²(φ^{⊗m}), where φ^{⊗m} stands for the standard m-dimensional Gaussian measure. In particular, every random variable F ∈ L²(φ^{⊗m}) admits the unique decomposition (3.2)
$$F = \sum_{\alpha \in \mathbb{N}^m} \frac{F(\alpha)}{\alpha!} H_\alpha(X), \qquad F(\alpha) := E\left[F\, H_\alpha(X)\right],$$
where F(α) denotes the Fourier-Hermite coefficient of F associated with the multi-index α. For k ≥ 0, we write C^X_k for the closed linear subspace of L²(P) generated by multivariate Hermite polynomials of cumulative degree |α| = k. The space C^X_k is the so-called k-th Wiener chaos associated with the vector X = (X_1, …, X_m); we have C^X_0 = R. For F ∈ L²(φ^{⊗m}), we denote by proj(F|C^X_k) the projection of F onto C^X_k, so that (3.2) can be rewritten as the L²(P)-converging series
$$F = \sum_{k \ge 0} \mathrm{proj}\left(F \,\middle|\, C^X_k\right).$$
This decomposition is known as the Wiener-Itô chaos expansion of F.
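The orthogonality relation E[H_j(Z)H_k(Z)] = δ_{jk} k! underlying the above expansion can be verified numerically with Gauss-Hermite quadrature for the probabilists' Hermite polynomials (a sketch, not part of the original text):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, HermiteE

# Quadrature for the weight exp(-x^2/2); dividing the weights by sqrt(2 pi)
# turns the quadrature sum into an expectation with respect to N(0, 1).
x, w = hermegauss(20)
w = w / math.sqrt(2 * math.pi)

def He(k, t):
    """Probabilists' Hermite polynomial H_k evaluated at t."""
    return HermiteE.basis(k)(t)

for j in range(5):
    for k in range(5):
        inner = np.sum(w * He(j, x) * He(k, x))   # E[H_j(Z) H_k(Z)], exact by quadrature
        expected = math.factorial(j) if j == k else 0.0
        assert math.isclose(inner, expected, abs_tol=1e-8)
```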
where the coefficients a^κ_{τ,σ} are defined by the relation (2.8) and (ℓ/2)_σ denotes the generalized Pochhammer symbol, formally defined in (2.7). Zonal polynomials being generalizations of monomials, the expansion in (3.5) is to be compared with the classical expansion of univariate Hermite polynomials in the basis of monomials (see e.g. [NP12, p.19]). Alternatively, H^{(ℓ,n)}_κ admits a generalized Rodrigues-type representation (3.6), where, for X = (X_ij) ∈ R^{ℓ×n}, the differential matrix ∂X is given by ∂X = (∂/∂X_ij). We note that (3.6) is a generalization of the classical well-known Rodrigues formula for univariate Hermite polynomials (see for instance [NP12, Proposition 1.4.2]).
Moreover, matrix-variate Hermite polynomials are orthogonal on R^{ℓ×n} with respect to the matrix-normal density function φ^{(ℓ,n)}: for all integers k, l ≥ 0 and all partitions κ ⊢ k, σ ⊢ l, the polynomials H^{(ℓ,n)}_κ and H^{(ℓ,n)}_σ are orthogonal in L²(φ^{(ℓ,n)}) unless κ = σ (see e.g. [Hay69, Corollary 3]); we refer to this relation as (3.9). Let now X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n) and write s_1, …, s_ℓ for the eigenvalues of XX^T. The spectral measure of XX^T associated with the matrix X is the measure µ_X supported on the spectrum of XX^T,
$$\mu_X := \sum_{i=1}^{\ell} \delta_{s_i},$$
where δ_y is the Dirac mass at y. We write L²(µ_X) := L²(Ω, σ(µ_X), P) to indicate the subspace of L²(φ^{(ℓ,n)}) := L²(R^{ℓ×n}, φ^{(ℓ,n)}(X)(dX)) consisting of those random variables that are measurable with respect to the sigma-algebra generated by µ_X. By this, we mean the subspace of L²(φ^{(ℓ,n)}) of random variables generated by elements of the type $\int_{\mathbb{R}} f(t)\, \mu_X(dt) =: \mu_X(f)$ (3.10). Since matrix-variate Hermite polynomials admit the expansion (3.5) into zonal polynomials, they are themselves symmetric functionals of the eigenvalues s_1, …, s_ℓ. This fact, together with the orthogonality relation (3.9), implies that the family of normalised matrix-variate Hermite polynomials forms an orthonormal system in L²(µ_X). In Appendix A, we prove the following proposition, stating that this system is also complete in L²(µ_X).
Therefore, every F ∈ L²(µ_X) admits a unique decomposition in the basis (3.11), in which F(κ) is the Fourier-Hermite coefficient of F associated with the partition κ (3.13) and c(κ) is the normalising constant appearing in (3.11). To state our result, we introduce some further notation. For an integer s ≥ 0 and X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n), we recall the notation t_s(X) := tr([XX^T]^s) introduced in (2.5) and define the spaces U^X_k spanned by monomials in the variables t_s(X) of total order at most k, where the closure is taken with respect to L²(µ_X). By construction, we have U^X_k ⊂ U^X_{k+1}. We then consider the space of those random variables in U^X_k that are orthogonal in L²(P) to the elements of U^X_{k−1}. Expanding matrix-Hermite polynomials into zonal polynomials by (3.5), and subsequently zonal polynomials into monomials of the type t_s(X) by (2.4), shows that Hermite polynomials admit an expansion into the monomials t_s(X). In particular, since Hermite polynomials are orthogonal in view of (3.9), it follows that H^{(ℓ,n)}_κ belongs to this orthogonal complement for κ ⊢ k. The following result links matrix-variate Hermite polynomials with the classical Wiener-Itô decomposition in (3.2). In particular, we establish an explicit formula for the projection coefficients associated with radial functionals of the form F(X) = f_0(XX^T) ∈ L²(µ_X), where X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n), in terms of generalized Laguerre polynomials (see Section 2.1 for definitions). Such a formula is to be compared with [Tha93, Koc96], where the authors study Hermite expansions of functions of the form F(x) = f_0(‖x‖)P(x) on R^n, with P a harmonic polynomial.
Theorem 3.2. For integers 1 ≤ ℓ ≤ n, let X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n) and write X = Vec(X). Then, for every integer k ≥ 0 and every partition κ ⊢ k, the random variable H^{(ℓ,n)}_κ(X) is an element of C^X_{2k}, and for every F ∈ L²(µ_X) the projections of F onto the even chaoses C^X_{2k} are determined by the coefficients F(κ) of (3.13). In particular, these coefficients can be expressed as integrals against generalized Laguerre polynomials, where L^γ_κ denotes the generalized Laguerre polynomial of order γ > −1 associated with the partition κ, defined in (2.10), and ν(dR) is the Lebesgue measure on P_ℓ(R).
Our proof of Theorem 3.2 shows that, combining the generalized Rodrigues formula (3.6) with the univariate Rodrigues formula (3.7), matrix-variate Hermite polynomials can be expressed in terms of multivariate Hermite polynomials. For instance, combining (3.6) with (3.7) in the case ℓ = n = 1 (so that, for every integer k ≥ 0, κ = (k) is the only admissible partition of k) and writing φ = φ^{(1,1)} for the standard Gaussian density yields, for every k ≥ 0, an identity relating H^{(1,1)}_{(k)} to the classical Hermite polynomial of degree 2k; here we used that (n/2)_{(k)} = (n/2)_k, that C_{(k)}(a) = a^k for a ∈ R, and the Rodrigues formula for classical Hermite polynomials in (3.7). Proceeding similarly for arbitrary dimensions ℓ and n, we compute in (3.16) the first matrix-variate Hermite polynomials associated with partitions of order up to 2. Combining the content of Theorem 3.2 with the orthogonality relation (3.9) allows one to derive variance expansions of spectral random variables F(X) ∈ L²(µ_X), where X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n), as a converging series in terms of the Fourier-Hermite coefficients.
where the convergence of the series is part of the conclusion.

Fourier-Hermite coefficients of Gaussian determinants as intrinsic volumes of ellipsoids
In this section, we consider rectangular Gaussian matrices X and provide the Wiener chaos expansion of determinants of the form det(XX^T)^{1/2}. In [ZK12], Kabluchko and Zaporozhets consider the case where X ∈ R^{ℓ×n} has centred independent rows with respective covariance matrices Σ_1, …, Σ_ℓ, and prove that E[det(XX^T)^{1/2}] equals an explicit constant multiple of the mixed volume V(E_{Σ_1}, …, E_{Σ_ℓ}, B^n, …, B^n) (see in particular [ZK12, Theorem 1.1]), where E_{Σ_i}, i = 1, …, ℓ, are the ellipsoids associated with the matrices Σ_i and B^n denotes the unit ball in R^n with volume κ_n = π^{n/2}/Γ(1+n/2). We also refer the reader to [Vit91, Theorem 3.2], where the author proves a similar formula linking the expected absolute determinant of a matrix built from i.i.d. copies of a random vector to the volume of the zonoid associated with the underlying distribution.
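In the i.i.d. standard case, the expectation E[det(XX^T)^{1/2}] can be sanity-checked numerically against the classical moment formula for Wishart determinants, E[det(XX^T)^s] = 2^{ℓs} Γ_ℓ(n/2 + s)/Γ_ℓ(n/2) (a standard fact, not derived in the text); for ℓ = 2, n = 3, s = 1/2 this evaluates to 2. A Monte Carlo sketch:

```python
import math
import numpy as np

def gamma_ell(ell, a):
    """Multivariate Gamma function (standard product formula)."""
    return math.pi ** (ell * (ell - 1) / 4) * math.prod(
        math.gamma(a - (j - 1) / 2) for j in range(1, ell + 1)
    )

l, n = 2, 3
# Wishart moment formula with s = 1/2: E[det(XX^T)^{1/2}] = 2^{l/2} Gamma_l((n+1)/2) / Gamma_l(n/2)
exact = 2 ** (l / 2) * gamma_ell(l, (n + 1) / 2) / gamma_ell(l, n / 2)

rng = np.random.default_rng(4)
N = 100_000
X = rng.standard_normal((N, l, n))
dets = np.linalg.det(X @ np.swapaxes(X, 1, 2))  # batched det(XX^T)
estimate = np.sqrt(dets).mean()
assert abs(estimate - exact) / exact < 0.03
```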
In Theorem 3.6 below, we substantially extend the framework of Kabluchko and Zaporozhets to arbitrary projection coefficients associated with the Wiener chaos expansion of such Gaussian determinants, in the case where the rows of X are i.i.d. centred Gaussian vectors with common covariance matrix Σ.
Let Σ ∈ R^{n×n} be a symmetric positive-definite matrix and {X^{(1)}, …, X^{(ℓ)}} a collection of independent centred Gaussian vectors with covariance matrix Σ. We write X ∈ R^{ℓ×n} for the matrix whose i-th row is X^{(i)}; it follows that X has distribution N_{ℓ×n}(0, Id_ℓ ⊗ Σ). As a consequence, the matrix XΣ^{−1/2} has the N_{ℓ×n}(0, Id_ℓ ⊗ Id_n) distribution (see e.g. [GN00, Theorem 2.3.10]). Based on the matrix-variate Hermite polynomials H^{(ℓ,n)}_κ and their orthogonality relation (3.9) with respect to φ^{(ℓ,n)}, we define the polynomials H^{(ℓ,n)}_κ(·; Σ) in (3.19); in particular, we note that H^{(ℓ,n)}_κ(·; Id_n) = H^{(ℓ,n)}_κ.

Proposition 3.4. For all integers k, l ≥ 0 and all partitions κ ⊢ k, σ ⊢ l, the polynomials H^{(ℓ,n)}_κ(·; Σ) satisfy the orthogonality relation analogous to (3.9). Therefore, the family of normalized polynomials forms a complete orthonormal system of L²(µ_X), where µ_X denotes the spectral measure of XX^T associated with X. Hence, for every F ∈ L²(µ_X), one has an expansion along these polynomials with projection coefficients F(κ; Σ). The next result provides an explicit formula for the projection coefficients F(κ; Σ) in the special case where F(X) = det(XX^T)^{1/2}.
Theorem 3.5. For integers 1 ≤ ℓ ≤ n and Σ ∈ R^{n×n} positive-definite symmetric, let X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Σ). Then F(X) = det(XX^T)^{1/2} is an element of L²(µ_X), and one has the decomposition (3.22), where the Fourier-Hermite coefficients of F are given by an explicit formula involving an integral over the Stiefel manifold. Here, $\binom{\kappa}{\sigma}$ denote the generalized binomial coefficients defined by (2.3), and μ̄ stands for the Haar probability measure on the Stiefel manifold O(n, ℓ) of ℓ-frames in R^n.
As anticipated, our next result yields a geometric interpretation of the projection coefficients F (κ; Σ) appearing in (3.22) in terms of mixed volumes and intrinsic volumes of ellipsoids (see Section 2.3 for preliminaries on these notions and in particular notation (2.17)).
(b) Our proof of Theorem 3.6 suggests the following new relation for intrinsic volumes of ellipsoids, in which μ̄ indicates the Haar probability measure on the Stiefel manifold O(n, ℓ).
(c) In Section 4.2.1, we sketch an attempt to further generalize the findings of Kabluchko and Zaporozhets to the more general setting where the rows of X are independent with respective covariance matrices Σ_1, …, Σ_ℓ. As we will explain, we have not been able to adapt the techniques employed in the proofs of Theorems 3.5 and 3.6 to this more general framework. Such a difficulty may be explained by the fact that the polynomials defined in (4.17) are not easily amenable to matrix calculus, as one has to deal with each row separately.
The following corollary is obtained by applying Theorem 3.5 with Σ = Id_n, that is, when X has independent rows with independent coordinates. In this case, we have H^{(ℓ,n)}_κ(X; Id_n) = H^{(ℓ,n)}_κ(X) and F(κ; Id_n) = F(κ) as in (3.13).
Corollary 3.8. For integers 1 ≤ ℓ ≤ n, let X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n). Then F(X) = det(XX^T)^{1/2} is an element of L²(µ_X), and one has the decomposition in which the Fourier-Hermite coefficients of F are given by the formula (3.26); an explicit evaluation is recorded in (3.27).

Remark 3.9. (a) Combining the contents of Corollary 3.8 and Theorem 3.2, we see that (3.26) provides the chaotic projection coefficients associated with the Wiener-chaos decomposition of det(XX^T)^{1/2}. In Section 3.4, we consider functionals of multi-dimensional Gaussian fields arising in stochastic geometry that admit a certain integral representation in terms of Jacobian determinants, and effectively use formula (3.26) to obtain a compact expression for their Wiener-Itô chaos expansions.
(b) The combinatorial constants appearing in (3.26) also appear in the Gaussian kinematic formula (see for instance Chapter 13 in [AT07]).

A Mehler-type representation
In this section, we provide the counterpart on matrix spaces of the classical Ornstein-Uhlenbeck semigroup. For an integer d ≥ 1 and f : R^d → R integrable with respect to the standard Gaussian measure, the Ornstein-Uhlenbeck semigroup in dimension d is given by the Mehler formula
$$P^{(d)}_t f(x) := E\left[f\left(e^{-t}x + \sqrt{1 - e^{-2t}}\, Z\right)\right], \qquad t \ge 0,$$
where Z is a standard Gaussian vector in R^d, in such a way that P_t = P^{(1)}_t. We fix integers 1 ≤ ℓ ≤ n and define the space
$$\Pi(\ell, n) := \left\{f : \mathbb{R}^{\ell \times n} \to \mathbb{R} \,:\, f(XO) = f(X) \text{ for every } O \in O(n)\right\},$$
that is, an element of Π(ℓ, n) is a matrix-variate function that is right-invariant under orthogonal transformations. For a diagonal matrix A = diag(a_1, …, a_n) ∈ R^{n×n} with a_1, …, a_n ≥ 0 and f ∈ Π(ℓ, n), we introduce the operator (3.31)
$$O^{(\ell,n)}_{t;A} f(X) := E\left[f\left(Xe^{-tA} + X_0\left(\mathrm{Id}_n - e^{-2tA}\right)^{1/2}\right)\right],$$
where the expectation is taken with respect to X_0 ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n) and, for a matrix M ∈ R^{n×n}, e^M denotes the matrix exponential.

Theorem 3.10. For every diagonal matrix A = diag(a_1, …, a_n) ∈ R^{n×n} such that a_1, …, a_n ≥ 0, every integer k ≥ 0 and every partition κ ⊢ k, we have that H^{(ℓ,n)}_κ is an eigenfunction of O^{(ℓ,n)}_{t;A} (3.32). In particular, the family of matrix-variate Hermite polynomials diagonalizes the operators O^{(ℓ,n)}_{t;A}.
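The scalar prototype of this eigenfunction property, P_t H_k = e^{-kt} H_k for the classical one-dimensional Ornstein-Uhlenbeck semigroup, can be verified numerically with Gauss-Hermite quadrature (a sketch, not part of the original text):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, HermiteE

# Quadrature nodes/weights for an expectation over Z ~ N(0, 1)
z, w = hermegauss(30)
w = w / math.sqrt(2 * math.pi)

def P_t(f, t, x):
    """Mehler representation of the OU semigroup on R:
    P_t f(x) = E[f(e^{-t} x + sqrt(1 - e^{-2t}) Z)]."""
    return np.sum(w * f(math.exp(-t) * x + math.sqrt(1 - math.exp(-2 * t)) * z))

t, x = 0.7, 1.3  # arbitrary test values
for k in range(6):
    He_k = HermiteE.basis(k)  # probabilists' Hermite polynomial of degree k
    # Hermite polynomials are eigenfunctions: P_t H_k = e^{-kt} H_k
    assert math.isclose(P_t(He_k, t, x), math.exp(-k * t) * He_k(x), abs_tol=1e-9)
```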

From (3.32) it becomes clear that the polynomials H^{(ℓ,n)}_κ are eigenfunctions of the operators O^{(ℓ,n)}_{t;A}.
Let us make some remarks about Theorem 3.10.
Remark 3.11. (a) Using the fact that H^{(ℓ,n)}_κ is an element of the class Π(ℓ, n) (as can be seen for instance from (3.8)), we deduce from (3.32) applied with A = Id_n that (3.33)
$$O^{(\ell,n)}_{t;\mathrm{Id}_n} H^{(\ell,n)}_\kappa(X) = e^{-2kt}\, H^{(\ell,n)}_\kappa(X),$$
where we used that C_κ(e^{−2t} Id_n) = e^{−2kt} C_κ(Id_n) by homogeneity. Recalling that H^{(ℓ,n)}_κ(X) is an element of the 2k-th Wiener chaos associated with Vec(X), it is clear that the classical Ornstein-Uhlenbeck semigroup acts on it as multiplication by e^{−2kt}, which is consistent with (3.33). (b) Assume that A = diag(a, …, a) with a ≥ 0. Then the relation in (3.32) reduces to an eigenvalue relation with scalar eigenvalue. In particular, from this identity, one can directly verify the semigroup property satisfied by O^{(ℓ,n)}_{t;A} on matrix-Hermite polynomials, for every s, t ≥ 0. Combining this relation with (3.32) in particular suggests the identity O^{(ℓ,n)}_{t;A} O^{(ℓ,n)}_{s;A} = O^{(ℓ,n)}_{t+s;A}, which fails to hold when the diagonal entries of A are not all equal: a direct computation in the case ℓ = n = 2, κ = (1), a_1 = 1, a_2 = 2 shows that the left- and right-hand sides of the above relation differ.

An extension of the orthogonality relation for matrix-variate Hermite polynomials
Exploiting the action of the operator O^{(ℓ,n)}_{t;A} on matrix-variate Hermite polynomials derived in Theorem 3.10 allows us to establish the matrix counterpart of the orthogonality relation (3.34), in the setting where the correlation between the Gaussian matrices X and Y is encoded by a matrix R. This is the content of the following theorem.
Theorem 3.12. Let X, X_0 ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n) be independent and let R be a deterministic matrix of dimension n × n. Let Y = XR + X_0(Id_n − R²)^{1/2} in distribution. Then, for all integers k, l ≥ 0 and all partitions κ ⊢ k, σ ⊢ l, the orthogonality relation (3.35) holds. Some remarks concerning Theorem 3.12 are in order.
Remark 3.13. (a) By independence of X and X_0 and the distributional identity defining Y, one computes the covariance between the entries of X and Y, where we used that X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n); this yields that |R_{jj′}| ≤ 1 for every j, j′ = 1, . . . , n, by virtue of the Cauchy-Schwarz inequality. The above observation implies that R, being a covariance matrix, is necessarily symmetric and positive-semidefinite, and therefore has non-negative eigenvalues r_1, . . . , r_n. Note that if R = ∆ = diag(r_1, . . . , r_n) is diagonal, we necessarily have |r_j| ≤ 1 for every j = 1, . . . , n, so that (Id_n − R²)^{1/2} is well-defined. Our arguments to prove Theorem 3.12 are based on the following general reduction: for f, g ∈ Π(ℓ, n), writing R = O∆O^T with O ∈ O(n) and ∆ = diag(r_1, . . . , r_n), we may reduce to the diagonal case, where we used the fact that f(X) = f(XO) and g(XO^T) = g(X) since f, g ∈ Π(ℓ, n), as well as the fact that (XO, X_0O) has the same distribution as (X, X_0); this shows in particular that |r_j| ≤ 1 for every j = 1, . . . , n.
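In the scalar case ℓ = n = 1 with R = ρ ∈ [−1, 1], Theorem 3.12 is the matrix analogue of the classical univariate fact E[H_k(X)H_l(Y)] = δ_{kl} k! ρ^k for jointly standard Gaussian (X, Y) with correlation ρ (classical normalisation here, not necessarily that of (3.35)). A Monte Carlo sketch of this scalar reduction (sample size and seed are arbitrary):

```python
import math
import random

def hermite(k, x):
    # probabilists' Hermite polynomials via H_{k+1} = x H_k - k H_{k-1}
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def cross_moment(k, l, rho, n_samples=300_000, seed=2):
    # E[H_k(X) H_l(Y)] with Y = rho X + sqrt(1 - rho^2) X_0, X_0 independent of X
    rng = random.Random(seed)
    c = math.sqrt(1.0 - rho * rho)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + c * rng.gauss(0.0, 1.0)
        total += hermite(k, x) * hermite(l, y)
    return total / n_samples

rho = 0.6
same = cross_moment(2, 2, rho)   # theory: 2! * rho^2 = 0.72
cross = cross_moment(1, 2, rho)  # theory: 0 (different orders are orthogonal)
```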

Applications to geometric functionals of Gaussian random fields
In this section, we apply our main results of Sections 3.1, 3.2 and 3.3 to the study of geometric functionals of multidimensional Gaussian fields.
In Section 3.4.1, we consider random variables admitting an integral representation in terms of Jacobian determinants associated with multi-dimensional Gaussian fields. We argue that such a definition can be interpreted as the total variation of vector-valued functions, generalizing the classical definition of total variation of multi-variate functions. More specifically, in the setting of a certain matrix correlation structure between two Jacobian matrices, appearing notably in the study of Gaussian Laplace eigenfunctions, we exploit the findings of Theorem 3.12 to obtain a precise expression for the variance of the total variation in terms of integrals of zonal polynomials.
In Section 3.4.2, we apply the general framework of Section 3.4.1 to vectors of independent arithmetic random waves with the same eigenvalue on the three-dimensional torus, and prove a CLT in the high-energy regime for their generalized total variation on the full torus.
In Section 3.4.3, we consider the nodal volumes associated with vectors of independent arithmetic random waves on the three-torus. In particular, we provide their Wiener-Itô chaos expansions in terms of both multivariate and matrix-variate Hermite polynomials, and give some insight into variance estimates of the chaotic components.

Generalized total variation of vector-valued functions
Let n ≥ 1 be an integer and consider a centred smooth Gaussian field f = {f(z) : z ∈ R^n} on R^n. For 1 ≤ ℓ ≤ n, we consider ℓ i.i.d. copies f^{(1)}, . . . , f^{(ℓ)} of f and are interested in the ℓ-dimensional Gaussian field f_ℓ := (f^{(1)}, . . . , f^{(ℓ)}). We denote by f′_ℓ(z) ∈ R^{ℓ×n} the Jacobian matrix of f_ℓ evaluated at z ∈ R^n. Moreover, we assume that (i) for every z ∈ R^n, the distribution of f_ℓ(z) is non-degenerate and (ii) for every z ∈ R^n, f′_ℓ(z) ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Id_n). We then define the random variable V_{ℓ,n}(f_ℓ; U) in (3.39) as the integral over U of det(f′_ℓ(z)f′_ℓ(z)^T)^{1/2}. We note that this integral is well-defined since U is compact and det(f′_ℓ(z)f′_ℓ(z)^T) is a multivariate polynomial in the entries of f′_ℓ(z). We remark that the random variable V_{ℓ,n}(f_ℓ; U) can be seen as a generalization of the total variation of vector-valued functions. Indeed, for ℓ = 1, (3.39) coincides with the definition of the total variation for functions R^n → R. For ℓ = n, [FFM04, DP12] consider a relaxed total variation of the Jacobian which, by the Area formula (see e.g. [AW09, Proposition 6.1]), equals the integral over y ∈ R^n of N_y(f_n; U) = card({z ∈ U : f_n(z) = y}). Using the Co-area formula ([AW09, Proposition 6.13]) in (3.39) shows that V_{ℓ,n}(f_ℓ; U) is the integral over y ∈ R^ℓ of σ_y(f_ℓ; U), where σ_y(f_ℓ; U) denotes the (n − ℓ)-dimensional Hausdorff measure of the level set {z ∈ U : f_ℓ(z) = y}. Thus, the definition (3.39) generalizes the above setting to functions R^n → R^ℓ with ℓ < n.
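For ℓ = 1 the integrand in (3.39) is simply the Euclidean norm of the gradient, whose Gaussian mean is the chi-distribution mean √2 Γ((n+1)/2)/Γ(n/2); this is a standard fact, used here only as a sanity check of the scalar building block (our illustration, not part of the proofs; sample size and seed are arbitrary):

```python
import math
import random

def mean_gradient_norm_mc(n, n_samples=100_000, seed=3):
    # Monte Carlo estimate of E||X|| for X ~ N(0, Id_n)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += math.sqrt(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(n)))
    return total / n_samples

n = 3
mc = mean_gradient_norm_mc(n)
# chi-distribution mean: sqrt(2) * Gamma((n + 1) / 2) / Gamma(n / 2)
exact = math.sqrt(2.0) * math.gamma((n + 1) / 2) / math.gamma(n / 2)
```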
From now on, 1 ≤ ℓ ≤ n are fixed and we write V(f_ℓ; U) = V_{ℓ,n}(f_ℓ; U). The random variable V(f_ℓ; U) can be expanded in matrix-variate Hermite polynomials by means of Corollary 3.8, yielding its Wiener chaos expansion (3.40), where Φ(κ) is as in (3.26) and V(f_ℓ; U)[2k] denotes the projection of V(f_ℓ; U) onto the Wiener chaos of order 2k associated with f_ℓ. In the following proposition, we compute the variance of the total variation of f_ℓ on U in the specific framework where the matrices f′_ℓ(z) and f′_ℓ(z′) satisfy a certain matrix correlation structure for every z, z′ ∈ R^n (see (3.41) below).
Proposition 3.16. Let the above notation prevail. Assume furthermore that, for every z, z′ ∈ R^n, f′_ℓ(z′) = f′_ℓ(z)R(z, z′) + X_0(Id_n − R(z, z′)²)^{1/2} in distribution (3.41), where X_0 = X_0(z, z′) is an independent copy of f′_ℓ(z) and R(z, z′) is a deterministic matrix. Then, the variance identity (3.42) holds, where Φ(κ) is as in (3.26).

Applications to Arithmetic Random Waves on the three-torus
The study of local and non-local features of zero and non-zero level sets of Gaussian Laplace eigenfunctions on manifolds in the high-energy regime has attracted considerable attention in recent years, and various models have been taken into consideration. In this section, we apply the general framework presented in Section 3.4.1 to the setting of vectors of independent arithmetic random waves on the three-torus, T³.
For an integer d ≥ 1, we write S_d for the set of eigenvalues, that is, n ∈ S_d is an integer expressible as a sum of d integer squares. The set of frequencies associated with n ∈ S_d is Λ_n := {λ ∈ Z^d : |λ|² = n}, and we write |Λ_n| =: N_n for its cardinality; that is, N_n is the number of ways in which n is represented as a sum of squares. An L²(T^d)-basis of eigenfunctions is given by complex exponentials of the form {e_λ(·) := exp(2πi⟨λ, ·⟩) : λ ∈ Λ_n}, where ⟨·, ·⟩ denotes the standard Euclidean inner product on R^d. For n ∈ S_d, the ARW with eigenvalue E_n is defined as a random linear combination of complex exponentials, where the coefficients {a_λ : λ ∈ Λ_n} are independent standard complex Gaussian random variables, save for the relation a_{−λ} = \overline{a_λ}, which makes T_n real-valued. Alternatively, ARWs are defined as the Gaussian process {T_n(z) : z ∈ T^d} on T^d with covariance function r^{(n)}(z − z′) = (1/N_n) Σ_{λ∈Λ_n} cos(2π⟨λ, z − z′⟩). Note that r^{(n)} only depends on the difference z − z′, meaning that the random field {T_n(z) : z ∈ T^d} is stationary. Moreover, the fact that r^{(n)}(0) = 1 implies that, for every z ∈ T^d, T_n(z) has unit variance.
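The construction above can be illustrated numerically. The sketch below (our illustration; brute-force frequency search, arbitrary seed and sample size) builds a real-valued wave on T³ from coefficients satisfying a_{−λ} = conj(a_λ) and checks its covariance against r^{(n)} for n = 2:

```python
import math
import random

def frequencies(n):
    # all lambda in Z^3 with |lambda|^2 = n, by brute force
    r = math.isqrt(n)
    return [(a, b, c)
            for a in range(-r, r + 1)
            for b in range(-r, r + 1)
            for c in range(-r, r + 1)
            if a * a + b * b + c * c == n]

def half_set(lam):
    # keep exactly one representative of each pair {lambda, -lambda}
    return [v for v in lam if v > tuple(-x for x in v)]

def sample_wave(n, rng):
    # real-valued wave with unit variance, built from a_{-lambda} = conj(a_lambda)
    lam_plus = half_set(frequencies(n))
    N = 2 * len(lam_plus)
    coeffs = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0), v) for v in lam_plus]
    def T(z):
        s = 0.0
        for g1, g2, v in coeffs:
            th = 2.0 * math.pi * sum(vi * zi for vi, zi in zip(v, z))
            s += math.sqrt(2.0) * (g1 * math.cos(th) - g2 * math.sin(th))
        return s / math.sqrt(N)
    return T

def covariance(n, z, zp):
    # r^{(n)}(z - z') = (1/N_n) sum_lambda cos(2 pi <lambda, z - z'>)
    lam = frequencies(n)
    return sum(math.cos(2.0 * math.pi * sum(vi * (zi - zpi)
               for vi, zi, zpi in zip(v, z, zp))) for v in lam) / len(lam)

rng = random.Random(4)
z, zp = (0.1, 0.2, 0.3), (0.25, 0.05, 0.4)
est = 0.0
for _ in range(20_000):
    T = sample_wave(2, rng)
    est += T(z) * T(zp)
est /= 20_000
```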
Total variation of vectors of ARW on T³. For an integer 1 ≤ ℓ ≤ 3 and n ∈ S₃, we consider ℓ i.i.d. copies of T_n and write T^{(ℓ)}_n for the associated ℓ-dimensional field as in (3.45). Our specific goal is to study the high-energy behaviour of the total variation V(T^{(ℓ)}_n; T³) (as defined in (3.39)) of T^{(ℓ)}_n on the full torus, that is, when N_n → ∞. Since the partial derivatives of T_n(z) have a common variance for every z ∈ T³, we introduce the normalised partial derivatives with unit variance and write Ṫ^{(ℓ)}_n(z) ∈ R^{ℓ×3} for the normalised Jacobian matrix of T^{(ℓ)}_n. According to (3.39), we use the homogeneity of the determinant in order to rewrite the total variation in terms of Φ(Ṫ^{(ℓ)}_n(z)), where Φ(M) = det(MM^T)^{1/2}. Differentiating (3.43) and using the fact that r^{(n)}(0) = 1 implies that Ṫ^{(ℓ)}_n(z) ∼ N_{ℓ×3}(0, Id_ℓ ⊗ Id_3) and is stochastically independent of T^{(ℓ)}_n(z) for every z ∈ T³. We furthermore adopt the notation (3.48). The statement of our result is divided into three parts: (i) gives the expected total variation of vector-valued ARWs on the full torus, (ii) is an exact variance asymptotic and (iii) is a Central Limit Theorem in the high-energy regime for the normalised total variation. Theorem 3.17. Let the above notation prevail.

Digression: Comparison with [Not21]
In this section, we compare our findings with [Not21], where we study nodal volumes L^{(ℓ)}_n := H^{3−ℓ}(Z(T^{(ℓ)}_n)) of the nodal sets Z(T^{(ℓ)}_n) associated with T^{(ℓ)}_n in (3.45) (here, H^k denotes the k-dimensional Hausdorff measure, with H^0 indicating the counting measure). That work is in particular based on the asymptotic study of the fourth chaotic projection associated with the Wiener-Itô chaos expansion of L^{(ℓ)}_n. Recall that, in view of the Co-Area formula, the nodal volume L^{(ℓ)}_n is defined P-almost surely and in L²(P) by the co-area representation, where Φ is as in (3.47), Ṫ^{(ℓ)}_n is the normalised Jacobian matrix of T^{(ℓ)}_n, and δ_{(0,...,0)}(x) := δ_0(x_1) · · · δ_0(x_ℓ), x = (x_1, . . . , x_ℓ), denotes the multiple Dirac mass at the origin. Using the matrix-Hermite polynomials studied in the present article, the chaotic projection of L^{(ℓ)}_n on the Wiener chaos of order 2q is obtained by means of Corollary 3.8 as in (3.54), where, for a multi-index α ∈ N^ℓ, β_α denotes the projection coefficient of the Dirac mass and Φ(κ) can be computed from (3.26) (applied with n = 3); see (3.55). Writing out the explicit values of the projection coefficients in (3.55) for partitions κ ∈ {(2), (1, 1)} and using the expressions for matrix-variate Hermite polynomials in (3.16), we eventually recover the projection coefficients associated with Φ appearing in [Not21, Proposition B.5]. We remark that, unlike the Wiener chaos expansion of the generalized total variation in (3.40), the presence of the multiple Dirac mass leads to an expression containing both multivariate and matrix-variate Hermite polynomials. Specifying (3.54) to q = 2 yields that the projection of L^{(ℓ)}_n on the fourth Wiener chaos can be written compactly as a sum of five terms S^{(ℓ)}_1(n), . . . , S^{(ℓ)}_5(n), where the last identity follows by stationarity. Moreover, since the terms S^{(ℓ)}_4(n) and S^{(ℓ)}_5(n) involve different partitions of the integer 2, Theorem 3.12 implies that the random variables S^{(ℓ)}_4(n) and S^{(ℓ)}_5(n) are orthogonal in L²(P).
It should be remarked that the asymptotic analysis of the terms S^{(ℓ)}_1(n), . . . , S^{(ℓ)}_5(n) leads to expressions involving covariances between univariate and matrix-variate Hermite polynomials. In order to deal with such expressions, one can expand matrix-Hermite polynomials into univariate Hermite polynomials and rely on the classical diagram formulae for the latter.

Proofs of Section 3.1
Proof of Theorem 3.2
Since F ∈ L²(µ_X) ⊂ L²(φ^{(ℓ,n)}), we can expand it in the two orthonormal systems H[ℓn] and H[ℓ×n] defined in (3.1) and (3.11) respectively, yielding (4.1). Using the representation of zonal polynomials in (2.6), we write C_κ(XX^T) as a homogeneous polynomial of degree 2k in the entries of X = (X_{ij}), with coefficients z^κ_α indexed by multi-indices α ∈ N^{ℓ×n} such that |α| = 2k, where each z^κ_α is an explicit constant depending on α and κ. Using this representation of zonal polynomials in the generalized Rodrigues formula (3.6), and then the classical Rodrigues formula for Hermite polynomials on the real line (3.7) for every i ∈ [ℓ], j ∈ [n], we infer, using the fact that |α| = 2k, that H^{(ℓ,n)}_κ(X) expands into a linear combination of products of univariate Hermite polynomials in the entries X_{11}, . . . , X_{ℓn}.
The above expression yields the expansion of H^{(ℓ,n)}_κ(X) into multivariate Hermite polynomials and implies in particular that H^{(ℓ,n)}_κ(X) is an element of the Wiener chaos of order 2k associated with the vector X = Vec(X). The formula for the projection of F onto C^X_{2k} in (3.14) then follows by summing over all partitions of k. The fact that the projection of F onto Wiener chaoses of odd order is zero follows from the fact that the RHS of (4.1) does not involve any multivariate Hermite polynomials of odd cumulative order, since H^{(ℓ,n)}_κ(X) ∈ C^X_{2k}. In order to prove formula (3.15), we use the identity (3.8) and subsequently apply the polar decomposition X = R^{1/2}U according to (2.13), under which (dX) = (π^{nℓ/2}/Γ_ℓ(n/2)) det(R)^{(n−ℓ−1)/2} ν(dR) µ̃(dU) (see e.g. [Chi03, Theorem 1.5.2]). Therefore, (3.15) follows from (3.13), where we used that µ̃ is a probability measure on O(n, ℓ), together with the definitions of c(κ) and γ_κ in (3.11) and (3.8), respectively. This finishes the proof.
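The classical Rodrigues formula (3.7) invoked in this proof, H_k(x) = (−1)^k e^{x²/2}(d/dx)^k e^{−x²/2}, is equivalent to the recursion H_{k+1} = xH_k − H_k′ obtained by differentiating it once. A small symbolic check in pure Python (coefficient lists; our illustration) compares this Rodrigues-derived recursion with the standard three-term recurrence:

```python
def x_times(p):
    # multiply a coefficient list (p[i] = coefficient of x^i) by x
    return [0.0] + p

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def sub(p, q):
    m = max(len(p), len(q))
    p = p + [0.0] * (m - len(p))
    q = q + [0.0] * (m - len(q))
    return [a - b for a, b in zip(p, q)]

def hermite_rodrigues(kmax):
    # H_{k+1} = x H_k - H_k', obtained by differentiating Rodrigues' formula
    hs = [[1.0]]
    for _ in range(kmax):
        hs.append(sub(x_times(hs[-1]), deriv(hs[-1])))
    return hs

def hermite_recurrence(kmax):
    # standard three-term recurrence H_{k+1} = x H_k - k H_{k-1}
    hs = [[1.0], [0.0, 1.0]]
    for k in range(1, kmax):
        hs.append(sub(x_times(hs[-1]), [k * c for c in hs[-2]]))
    return hs[:kmax + 1]
```

Both constructions produce the same polynomials; for instance H_4(x) = x⁴ − 6x² + 3.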

Proof of Proposition 3.3
By Theorem 3.2, the Wiener-Itô chaos expansion of F(X) is given by (4.2), where F(κ) is as in (3.13). Computing the L²(P)-norm on both sides of (4.2) and using the orthogonality relation (3.9) then yields the corresponding Parseval identity. Since F((0)) = E[F(X)], we obtain the expansion for the variance of F(X), where we used (3.11).

Polar decomposition of Gaussian rectangular matrices
Let us assume that X has the N ℓ×n (0, Id ℓ ⊗Σ) distribution with density function φ (ℓ,n) Σ (X) defined in (3.18), and write X = R 1/2 U for its polar decomposition according to (2.13). In the following lemma, we compute the joint probability density function of the pair (R, U ).
The following lemma (see [Chi03, Theorem 2.4.2]) gives the marginal density functions of R and U, respectively. These are obtained by integrating the joint density f_{(R,U)}(R, U) with respect to U and R, respectively.
Lemma 4.2 (Theorem 2.4.2, [Chi03]). Assume that X ∼ N_{ℓ×n}(0, Id_ℓ ⊗ Σ) and write X = R^{1/2}·U. Then, the marginal density functions of R and U are as displayed, where pF_q(a_1, . . . , a_p; b_1, . . . , b_q; ·, ·) denotes the hypergeometric function with two matrix arguments (see e.g. [Chi03, Appendix A.6]). The density function of U in (4.5) is referred to as the matrix angular central distribution with parameter Σ on O(n, ℓ). We also point out that, when Σ = Id_n, the matrix R follows the Wishart distribution, and the matrix angular central distribution of U reduces to the uniform distribution on O(n, ℓ). Moreover, it follows from (4.3) that, in this case, R and U are independent.
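When Σ = Id_n, the Wishart law of R = XX^T has the classical moments E[tr R] = ℓn and E[det R] = n(n − 1) · · · (n − ℓ + 1); these are standard facts, recalled here only for a numerical sanity check (our illustration; ℓ = 2, n = 3, arbitrary seed and sample size):

```python
import random

def wishart_mc(n, n_samples=100_000, seed=6):
    # R = X X^T with X a 2 x n matrix of i.i.d. N(0, 1) entries
    rng = random.Random(seed)
    tr_sum = det_sum = 0.0
    for _ in range(n_samples):
        X = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(2)]
        R = [[sum(X[i][k] * X[j][k] for k in range(n)) for j in range(2)]
             for i in range(2)]
        tr_sum += R[0][0] + R[1][1]
        det_sum += R[0][0] * R[1][1] - R[0][1] * R[1][0]
    return tr_sum / n_samples, det_sum / n_samples

tr_mc, det_mc = wishart_mc(3)
# classical Wishart moments for ell = 2, n = 3: E tr(R) = 6, E det(R) = 3 * 2 = 6
```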
Combining (4.3) with (4.5), we obtain the conditional probability density (4.6) of R given U. In the forthcoming sections, whenever Z is a random variable, we often write E_Z[·] to indicate expectation with respect to the law of Z.

Proof of Theorem 3.5
The proof of Theorem 3.5 is based on the following key identity.
Lemma 4.3. Let A ∈ C^{ℓ×ℓ} be a complex symmetric matrix with positive real part, B ∈ C^{ℓ×ℓ} a complex symmetric matrix and t ∈ C such that ℜ(t) > (ℓ − 1)/2. Then, the identity (4.8) holds. Proof. This identity follows directly from the definition of Laguerre polynomials in (2.10): indeed, by linearity, it suffices to apply relation (2.12) to each zonal polynomial C_σ appearing in the expansion of the Laguerre polynomial. We are now in a position to prove Theorem 3.5.
Proof of Theorem 3.5. The fact that the random variable F(X) = det(XX^T)^{1/2} is an element of L²(µ_X) follows from the following observation: denoting by s_1, . . . , s_ℓ the eigenvalues of XX^T, the determinant det(XX^T) = s_1 · · · s_ℓ is a polynomial in the entries of X and hence has moments of all orders. This justifies the decomposition of F into matrix-variate Hermite polynomials. We now prove formula (3.22). Using the definition of the polynomials H^{(ℓ,n)}_κ(X; Σ) in (3.19) and the relation (3.8), we obtain from (3.21) an explicit integral representation of the projection coefficients. Applying the polar decomposition X = R^{1/2}U and noting that F(R^{1/2}U) = det(R)^{1/2}, we reduce to an expectation over (R, U), where in the last line we used the permutation invariance property (2.9) of zonal polynomials appearing in the definition of matrix-variate Laguerre polynomials (2.10). By conditioning on U, we can rewrite the above expectation as in (4.9). We start by computing Z_κ(U; Σ). Using the conditional probability density of R given U in (4.6), we obtain (4.10), with normalising constant as in (4.11). Exploiting identity (4.8) with γ = (n − ℓ − 1)/2, t = (n + 1)/2 and A = B = 2^{−1}UΣ^{−1}U^T yields (4.12). Substituting this expression into the RHS of (4.10), and then taking expectations with respect to U in (4.9) by means of (4.5), we conclude, using the definition of c(κ; Σ) in (3.20), that the projection coefficient takes the stated form. Combining this expression with the definitions of γ_κ in (3.8) and d_κ in (4.12) yields, after simplifications, formula (3.22). This finishes the proof.
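The ℓ = 1 analogue of the Laplace-transform identity (4.8) exploited above reads ∫₀^∞ e^{−ar} r^γ L_k^γ(br) dr = (Γ(γ+k+1)/k!) (a−b)^k / a^{γ+k+1}, which follows from the Laguerre generating function. A numerical sketch of this scalar identity only (our illustration; the quadrature range and step are arbitrary):

```python
import math

def laguerre_coeffs(k, alpha):
    # L_k^alpha(x) = sum_i (-1)^i binom(k + alpha, k - i) x^i / i!
    return [(-1) ** i * math.gamma(k + alpha + 1)
            / (math.gamma(alpha + i + 1) * math.factorial(k - i) * math.factorial(i))
            for i in range(k + 1)]

def poly_eval(coeffs, x):
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def lhs_quadrature(a, b, gamma, k, r_max=60.0, steps=120_000):
    # trapezoid rule for int_0^infty e^{-a r} r^gamma L_k^gamma(b r) dr;
    # the integrand vanishes at r = 0 and is negligible beyond r_max
    coeffs = laguerre_coeffs(k, gamma)
    h = r_max / steps
    total = 0.0
    for j in range(1, steps):
        r = j * h
        total += math.exp(-a * r) * r ** gamma * poly_eval(coeffs, b * r)
    return total * h

a, b, gamma, k = 2.0, 1.0, 0.5, 2
lhs = lhs_quadrature(a, b, gamma, k)
rhs = math.gamma(gamma + k + 1) / math.factorial(k) * (a - b) ** k / a ** (gamma + k + 1)
```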
Proof of Theorem 3.6
In order to prove (3.23), it is sufficient to prove the relation (4.13), since then (3.23) directly follows after combining (4.13) with (3.22). Let us now prove (4.13). A direct computation gives (4.14). Since the mixed volume on the RHS of (4.13) only involves the convex bodies E_Σ and B^n, we can use (2.18) to represent it as an intrinsic volume, and using the integral representation (2.16) for the ℓ-th intrinsic volume yields (4.15), where ν_{n,ℓ} is the Haar probability measure on the Grassmannian G(n, ℓ). Combining this with (4.14) shows that the identity in (4.13) is equivalent to (4.16). Therefore it remains to prove (4.16). We rewrite the LHS of (4.16) as an integral against Π_{n,ℓ}(dU) = det(Σ)^{−ℓ/2} det(UΣ^{−1}U^T)^{−n/2} µ̃(dU), which is a probability measure on O(n, ℓ) by virtue of (4.5). We now argue that the integral of det(UΣU^T)^{1/2} against Π_{n,ℓ}(dU) is invariant under replacing Σ by OΣO^T for O ∈ O(n).
In order to see this, let us write Σ = OΛO^T for O ∈ O(n), Λ = diag(λ_1, . . . , λ_n). By the claimed invariance, it suffices to consider the case where Σ = Λ is diagonal. Moreover, since W ∈ O(n, ℓ), we have for every Q ∈ O(ℓ) that (QW)(QW)^T = QWW^TQ^T = Id_ℓ, that is, QW ∈ O(n, ℓ). This implies that, up to rotating the matrix W = UO, we can assume that the rows of W coincide with the first ℓ canonical basis vectors e_1, . . . , e_ℓ in R^n. Then, integrating over O(n, ℓ) and noting that Π_{n,ℓ}(d(QU)) = Π_{n,ℓ}(dU) for every Q ∈ O(ℓ) yields the claim. Now, since Π_{n,ℓ} is left-invariant under orthogonal transformations, it can be viewed as a probability measure on O(n, ℓ)/O(ℓ) ≃ G(n, ℓ), where two elements U_1, U_2 in O(n, ℓ) are equivalent if and only if there exists Q ∈ O(ℓ) such that U_1 = QU_2. Thus, since ν_{n,ℓ} is the unique left- and right-invariant Haar probability measure on G(n, ℓ), it must coincide with Π_{n,ℓ}. Writing span(U) for the ℓ-dimensional linear subspace generated by the rows of U, the matrix UΣU^T represents the ellipsoid E_Σ|span(U), of volume vol_ℓ(E_Σ|span(U)) = κ_ℓ det(UΣU^T)^{1/2}, implying in turn (4.16). This proves (4.16) and thus (4.13). Formula (3.24) follows from (3.23) and relation (4.15). Formula (3.25) is obtained by setting κ = (0) in (3.23) and (3.24), respectively, and using the fact that F((0); Σ) = E_X[F(X)].

An attempt at generalizing to distinct covariance matrices
In this section, we attempt to generalize the results of Theorem 3.5 and Theorem 3.6 to the setting where the rows of X are independent Gaussian vectors with distinct covariance matrices.
Let {X^{(1)}, . . . , X^{(ℓ)}} be a collection of ℓ independent Gaussian vectors with respective covariance matrices Σ_1, . . . , Σ_ℓ. We write X for the ℓ × n matrix whose i-th row is X^{(i)}. Then, the vector Vec(X^T) has the multivariate normal distribution N_{ℓn}(0, Ω), where Ω = Σ_{i=1}^ℓ (e_i e_i^T) ⊗ Σ_i, with e_i ∈ R^ℓ denoting the i-th canonical basis vector, and the density function of X is given by φ_Ω. If X is distributed as above, a computation shows that the ℓ × n matrix Y_X whose i-th row is X^{(i)}Σ_i^{−1/2} has the standard matrix normal distribution. Therefore, we consider the matrix-variate polynomials H^{(ℓ,n)}_κ(·; Ω), whose collection H_Ω forms an orthonormal system of L²(µ_X), where, as usual, µ_X indicates the spectral measure associated with XX^T. Expanding the function F(X) = det(XX^T)^{1/2} ∈ L²(µ_X) in the basis H_Ω, using the relation φ_Ω(Vec(X^T)) = det(Ω)^{−1/2} φ^{(ℓ,n)}(Y_X) and the definition of H^{(ℓ,n)}_κ(·; Ω), we can write the associated projection coefficients as Gaussian integrals. The idea is now to perform the polar change of variables X = R^{1/2}U. In order to do so, we compute tr(Y_X Y_X^T). Using the relation Vec(S)^T(BD ⊗ E)Vec(S) = tr(DS^T ESB) (see e.g. [GN00, Theorem 1.2.22]), we obtain an expression for tr(Y_X Y_X^T) in terms of R and U. The difficulty in proceeding is now the following: the above computation suggests that we cannot write tr(Y_X Y_X^T) as tr(AR) for some matrix A, since one cannot exploit the cyclic invariance of the trace in view of the presence of the matrices e_i e_i^T. We remark that, when Σ_i = Σ for every i = 1, . . . , ℓ, the above formula gives tr(Y_X Y_X^T) = tr(UΣ^{−1}U^T R), which coincides with our computations in the proof of Theorem 3.5. This observation makes it difficult to directly apply the integration formula (2.12), and thus hints at the fact that the polynomials H^{(ℓ,n)}_κ(·; Ω) are not easily amenable to matrix calculus.
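The trace computation above rests on the standard vec-Kronecker identity Vec(AXB) = (B^T ⊗ A)Vec(X) for the column-stacking vectorisation defined in the introduction, together with the closely related quadratic-form corollary Vec(S)^T(M ⊗ E)Vec(S) = tr(S^T ES M^T). A self-contained numeric check (the test matrices are arbitrary):

```python
def vec(M):
    # column-stacking vectorisation Vec, as in the introduction
    return [M[i][j] for j in range(len(M[0])) for i in range(len(M))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(r) for r in zip(*M)]

def kron(A, B):
    ra, rb = len(A), len(B)
    ca, cb = len(A[0]), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1.0, 2.0], [3.0, 4.0]]              # 2 x 2
X = [[0.5, -1.0, 2.0], [1.5, 0.0, 1.0]]   # 2 x 3
B = [[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]]  # 3 x 2

lhs = vec(matmul(matmul(A, X), B))          # Vec(A X B)
rhs = matvec(kron(transpose(B), A), vec(X)) # (B^T kron A) Vec(X)

# quadratic-form corollary: Vec(S)^T (M kron E) Vec(S) = tr(S^T E S M^T)
S = [[1.0, -2.0], [0.5, 3.0]]
E = [[2.0, 1.0], [1.0, 4.0]]
M = [[1.0, 2.0], [0.0, 3.0]]
v = vec(S)
qf = sum(a * b for a, b in zip(v, matvec(kron(M, E), v)))
P = matmul(matmul(transpose(S), matmul(E, S)), transpose(M))
tr_val = P[0][0] + P[1][1]
```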

Proofs of Section 3.3
Proofs of Theorem 3.10 and Theorem 3.12
Our proofs of Theorem 3.10 and Theorem 3.12 involve auxiliary polynomials introduced in [Hay69]. For X ∈ R^{ℓ×n} and A ∈ R^{n×n} symmetric, we consider the polynomials P_κ(X, A), κ ⊢ k, defined in [Hay69, Eq. (34)]. These polynomials have the following properties (see e.g. [MPH95, p. 229] and [Hay69, Section 6]).
Lemma 4.4. For every k ≥ 0, κ ⊢ k and symmetric A ∈ R^{n×n}, the relations (4.18)-(4.20) hold. In order to prove Theorem 3.10, we shall first show the following lemma, linking the conditional expectation of H^{(ℓ,n)}_κ with the polynomial P_κ introduced above.
We are now in position to prove Theorem 3.10.
Proof of Theorem 3.10. In order to prove (3.32), we use Fubini, apply (4.21) with ∆ = e^{−tA} and integrate both sides with respect to the Haar measure on O(n), where we then use (4.18) and (4.19). This finishes the proof of the first part of the statement. Let us now prove the second part. Assume first that A = diag(a, . . . , a) and let f ∈ Π(ℓ, n) (see (3.29)). Then, one computes the action of O^{(ℓ,n)}_{t;A} on f, where we used the facts that X_0 has the same distribution as X_0H for H ∈ O(n), that f is an element of Π(ℓ, n), and that µ̃ is a probability measure on O(n). Finally, if the a_i's are not all equal, then arguing as in Remark 3.11 (b), one can derive a relation contradicting the semigroup property of O^{(ℓ,n)}_{t;A}. Proof of Theorem 3.12. We proceed in two steps. In view of Remark 3.13, the matrix R is necessarily symmetric and has non-negative eigenvalues. We start by showing that (3.35) holds for diagonal matrices R = diag(r_1, . . . , r_n). The statement for arbitrary symmetric matrices will then follow from the diagonal case by a reduction argument.
Step 2: R is symmetric. Writing R = O∆_R O^T with O ∈ O(n) and ∆_R = diag(r_1, . . . , r_n), we reduce the expectation to the diagonal case, where the last equality follows from the fact that the pair (X, X_0) has the same distribution as the pair (XO, X_0O). Since ∆_R is diagonal, we can apply the conclusion of Step 1 to infer the desired identity, where in the last line we used the fact that C_κ(∆²_R) = C_κ(O∆²_R O^T) = C_κ(R²). This finishes the proof.

Proofs of Section 3.4
Proof of Proposition 3.16
The variance of the total variation is obtained from (3.40). Using the orthogonality of Wiener chaoses, the variance of V(f_ℓ; U) is computed to be (4.26), with Φ(κ) as in (3.26). Now, in view of (3.41), we can apply Theorem 3.12 with R = R(z, z′) to compute the covariances of the matrix-Hermite polynomials appearing in the chaotic projections. The relation in (3.42) then follows from (4.26).
Second Wiener chaos component. The second Wiener chaos component of V(T^{(ℓ)}_n; T³) is given by (4.28). In the following lemma, we establish the asymptotic variance of the second Wiener chaos component in the high-energy regime. Lemma 4.6. As n → ∞, n ≢ 0, 4, 7 (mod 8), the asymptotics (4.29) holds. Proof. For every z, z′ ∈ T³, the matrices Ṫ^{(ℓ)}_n(z) and Ṫ^{(ℓ)}_n(z′) satisfy the correlation structure (3.41), where X_0 = X_0(z, z′) is an independent copy of Ṫ^{(ℓ)}_n(z) and the matrix R_n(z − z′) is deterministic. Indeed, from (4.30) it follows (see also Remark 3.13 part (a)) that the covariance structure of the normalised Jacobians is encoded by R_n(z − z′), which leads to (4.29). In particular, the variance of the second Wiener chaos component is computed by Proposition 3.16, where we used that C_{(1)}(A) = tr(A) and the stationarity of T^{(ℓ)}_n to reduce integration over T³ × T³ to integration over T³. Now, in view of (3.44) and (3.48), integrating over T³ and using the orthogonality relation for complex exponentials on the torus yields the stated asymptotics. In particular, as n → ∞, n ≢ 0, 4, 7 (mod 8), the total variation equals its second chaotic projection up to an o_P(1) term, where o_P(1) denotes a sequence of random variables converging to zero in probability; that is, in the high-energy regime, the random variable V(T^{(ℓ)}_n; T³) is dominated in the L²(P)-sense by its projection on the second Wiener chaos.
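The orthogonality relation for complex exponentials on the torus used above, ∫_{T³} e_λ(z)\overline{e_µ(z)} dz = δ_{λµ}, can be checked exactly by discrete sums: for integer frequencies whose componentwise differences are smaller than M in absolute value, an M³-grid Riemann sum reproduces the integral exactly, since (1/M)Σ_{j=0}^{M−1} e^{2πikj/M} equals 1 if k ≡ 0 (mod M) and 0 otherwise. A short sketch (grid size M = 8 is an arbitrary choice):

```python
import cmath

def torus_inner(lmb, mu, M=8):
    # Riemann sum of e_lambda * conj(e_mu) over an M^3 grid on the 3-torus;
    # exact for integer frequencies with |lambda_j - mu_j| < M
    total = 0.0 + 0.0j
    for a in range(M):
        for b in range(M):
            for c in range(M):
                z = (a / M, b / M, c / M)
                ph = sum((l - m) * zi for l, m, zi in zip(lmb, mu, z))
                total += cmath.exp(2j * cmath.pi * ph)
    return total / M ** 3

d1 = torus_inner((1, -1, 0), (1, -1, 0))  # same frequency: inner product 1
d2 = torus_inner((1, -1, 0), (0, 1, 1))   # distinct frequencies: 0
```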
The proof of Proposition 4.7 is based on a suitable partition of the torus into singular and non-singular pairs of cubes, as introduced in [ORW08] (see also e.g. [DNPR19,PR18,Not21] for further references using this approach).
We now describe this partition. For every n ∈ S₃, we partition the torus into a disjoint union of cubes of side length 1/M, where M = M_n ≥ 1 is an integer proportional to √E_n, as follows: let Q_0 = [0, 1/M)³; then we consider the partition of T³ obtained by translating Q_0 in the directions k/M, k ∈ Z³. Denote by P(M) the partition of T³ obtained in this way. By construction, we have that card(P(M)) = M³, and we may decompose the total variation as a sum of local contributions, where V(T^{(ℓ)}_n; Q) denotes the total variation of T^{(ℓ)}_n in the cube Q. From now on, we fix a small number 0 < η < 10^{−10}. In the forthcoming definition, we introduce singular pairs of points and cubes, where R_n(6) = ∫_{T³} [r^{(n)}(z)]⁶ dz. In view of (4.35), we can thus split the variance into its singular and non-singular contributions. The contributions to the variance of the terms ∆^{(ℓ)}_{n,j}, j = 1, 2 are given in Lemmas 4.9 and 4.10 below. The combination of both results proves Proposition 4.7.
Since the spectrum of XX^T consists of at most ℓ points, we deduce that σ(µ_X) is generated by random variables of the above form where f = p is a polynomial. Our goal is now to prove that, if F ∈ L²(µ_X) is such that E[F H^{(ℓ,n)}_κ(X)] = 0 for every κ ⊢ k and every k ≥ 0, then F = 0, P-almost surely. In order to obtain the desired conclusion, we will use the following three facts: (i) zonal polynomials can be expanded into a finite linear combination of matrix-Hermite polynomials (see e.g. [Chi92, Eq. (4.12)]), (ii) the product of finitely many zonal polynomials is a finite linear combination of zonal polynomials (see (2.8)) and (iii) every monomial of the form t_k(X) := tr([XX^T]^k) = s_1^k + . . . + s_ℓ^k (where s_1, . . . , s_ℓ denote the eigenvalues of XX^T) can be represented as a linear combination of zonal polynomials (see (2.1)). Using these three facts shows that, whenever F satisfies the assumption above, then E[F t_{j_0}(X)^{a_0} · · · t_{j_M}(X)^{a_M}] = 0 for every finite M ≥ 1 and every collection of integers j_0, . . . , j_M ≥ 0 and a_0, . . . , a_M ≥ 0. In particular, writing p(x) = Σ_{j=0}^M c_j x^j for a polynomial of degree M, one has that E[F exp(i Σ_{j=0}^M c_j t_j(X))] = Σ_{a_0,...,a_M ≥ 0} ((ic_0)^{a_0} · · · (ic_M)^{a_M} / (a_0! · · · a_M!)) E[F t_0(X)^{a_0} · · · t_M(X)^{a_M}] = 0 by assumption. By a standard approximation argument, we therefore deduce that E[F | σ(µ_X)] = 0, yielding the desired conclusion.
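Fact (iii) can be illustrated in degree two: in the standard normalisation (assumed here; it satisfies Σ_{κ⊢k} C_κ(A) = (tr A)^k), one has C_{(2)}(A) = ((tr A)² + 2 tr A²)/3 and C_{(1,1)}(A) = 2((tr A)² − tr A²)/3, so that C_{(2)} + C_{(1,1)} = (tr A)² and tr(A²) = C_{(2)} − C_{(1,1)}/2. A short numeric check on an arbitrary symmetric test matrix:

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2.0, 1.0], [1.0, 3.0]]  # small symmetric test matrix
t1, t2 = trace(A), trace(matmul(A, A))

# degree-2 zonal polynomials in the standard normalisation (assumed):
C2 = (t1 ** 2 + 2.0 * t2) / 3.0    # C_(2)(A)
C11 = 2.0 * (t1 ** 2 - t2) / 3.0   # C_(1,1)(A)
```

Together the two assertions below express (tr A)² and tr(A²) as linear combinations of zonal polynomials, as used in the density argument above.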