A BLOCK MOMENT METHOD TO HANDLE SPECTRAL CONDENSATION PHENOMENON IN PARABOLIC CONTROL PROBLEMS UNE MÉTHODE DES MOMENTS PAR BLOCS POUR GÉRER LA CONDENSATION SPECTRALE DANS LES PROBLÈMES DE CONTRÔLE PARABOLIQUE



Assia Benabdallah, Franck Boyer, Morgan Morancey
Abstract. — This article is devoted to the characterization of the minimal null control time for abstract linear control problems. More precisely, we aim at giving a precise answer to the following question: what is the minimal time needed to drive the solution of the system starting from any initial condition in a given subspace to zero? Our setting will encompass a wide variety of systems of coupled one dimensional linear parabolic equations with a scalar control.
Following classical ideas we reduce this controllability issue to the resolution of a moment problem on the control and provide a new block resolution technique for this moment problem.
The obtained estimates are sharp and hold uniformly for a certain class of operators. This uniformity allows various applications for parameter dependent control problems and permits us to deal naturally with the case of algebraically multiple eigenvalues in the underlying generator.
Our approach sheds light on a new phenomenon: the condensation of eigenvalues (which can cause a non zero minimal null control time in general) can be somehow compensated by the condensation of eigenvectors. We provide various examples (some are abstract systems, others are actual PDE systems) to highlight those new situations we are able to manage by the block resolution of the moment problem we propose.

Keywords: control theory, parabolic partial differential equations, minimal null control time, block moment method.
2020 Mathematics Subject Classification: 93B05, 93C20, 93C25, 30E05, 35K90, 35P10.
DOI: https://doi.org/10.5802/ahl.45
Annales Henri Lebesgue, Tome 3 (2020).

Problem under study and state of the art
This paper is concerned with the following abstract linear control system

(1.1) y'(t) + Ay(t) = Bu(t), y(0) = y_0.
We are more precisely interested in the minimal time issue for null-controllability, which can be roughly expressed as follows: what is the smallest time T_0 ≥ 0 such that, for any T > T_0 and any initial condition y_0, there exists a control u such that the associated solution of (1.1) satisfies y(T) = 0? Under quite general assumptions, we shall give formulas (that are reasonably explicit and usable) for such a minimal control time. The precise notion of solution as well as the well-posedness result for such a system will be detailed below (see Propositions 1.1 and 1.2).
We will consider assumptions on the operator A that relate (1.1) to parabolic evolution equations. Thus, due to regularizing properties, one cannot expect to reach any target in the state space and should rather try to reach any trajectory. By linearity, this is equivalent to the aforementioned null-controllability property (see for instance [Cor07, Sections 2.3 and 2.5]).
Pioneering works on the null-controllability of a scalar one dimensional heat equation are due to H.O. Fattorini and D.L. Russell [FR71, FR74]. For instance, they proved null-controllability of the one dimensional heat equation using a nonhomogeneous boundary condition as a control. For this purpose, they give a direct strategy reducing the null-controllability property to a moment problem that the control should satisfy (see Section 1.5 for a presentation of the moment problem). The moment method they propose consists in solving this problem using a biorthogonal family in L^2(0, T) to the family of exponentials associated to the eigenvalues of A*. Let us mention that this idea of reducing an (optimal) control problem to a moment problem is already present in the works [Ego63] by J.V. Egorov and [Gal69] by L.I. Gal'chuk.
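To make the Fattorini–Russell reduction concrete, here is a small numerical sketch (not taken from the paper): for a few Dirichlet eigenvalues λ_k = k² of the 1D Laplacian, we solve the truncated moment problem ∫_0^T u(t) e^{-λ_k(T-t)} dt = ω_k by seeking the minimum-norm u in the span of the exponentials themselves, which reduces to a linear system with the Gram matrix of that family. The right-hand sides ω_k below are illustrative values only.

```python
import numpy as np

def solve_truncated_moments(lams, omega, T):
    """Minimum-norm u in span{t -> exp(-lam_j*(T - t))} such that
    int_0^T u(t) * exp(-lam_k*(T - t)) dt = omega_k for every k."""
    lams = np.asarray(lams, dtype=float)
    # Gram matrix of the exponentials:
    # G_jk = int_0^T e^{-(lam_j+lam_k)(T-t)} dt
    #      = (1 - e^{-(lam_j+lam_k)T}) / (lam_j + lam_k)
    S = lams[:, None] + lams[None, :]
    G = (1.0 - np.exp(-S * T)) / S
    c = np.linalg.solve(G, np.asarray(omega, dtype=float))

    def u(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        return np.exp(-np.outer(T - t, lams)) @ c

    return u

# first Dirichlet eigenvalues of the 1D Laplacian on (0, pi)
lams = [1.0, 4.0, 9.0]
T = 1.0
omega = [-np.exp(-lam * T) for lam in lams]  # illustrative right-hand sides
u = solve_truncated_moments(lams, omega, T)
```

With only finitely many well-separated eigenvalues this Gram system is well conditioned; the whole difficulty addressed in the paper is what happens for the infinite family, where condensation of eigenvalues degrades these estimates.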
Later on, A.V. Fursikov, O.Yu. Imanuvilov [FI96] and G. Lebeau, L. Robbiano [LR95] used Carleman estimates to solve the boundary and internal null-controllability problem of the heat equation in any space dimension. These two papers have generated plenty of null-controllability results for various parabolic problems. The common qualitative behavior of these results is that for scalar parabolic equations null-controllability holds in arbitrary time (i.e. T 0 = 0) and without any restriction on the control domain.
Among all of these results let us mention the peculiar work [Dol73] by S. Dolecki. He considered a one dimensional heat equation, with a scalar control located at one point inside the space interval, and proved that by choosing suitably the location of this control point one can achieve any value in [0, +∞] for the minimal null-control time T_0. Until the 2000s this work seemed to be considered too peculiar, and the possible presence of a positive minimal null-control time (a very natural property in the hyperbolic case) was expected to be generically impossible in a parabolic setting. However, this point of view has been reconsidered recently in various works, for instance: [AKBDK05] for abstract control problems, [AKBGBdT14] for the abstract problem (1.1) with applications to systems of one dimensional coupled parabolic equations, and [BCG14] for a degenerate parabolic two dimensional equation of Grushin type.
Since then, the minimal null-control time property was investigated on various examples, still in the setting of coupled one dimensional parabolic systems [AKBGBdT16, Dup17,Oua20,Sam19] or in the setting of degenerate parabolic scalar equations [BC17,BDE20,BHHR15,BMM15,DK20]. For coupled parabolic systems a geometric control condition may also be needed for approximate controllability to hold [BO14,Oli14], proving once again that hyperbolic-like behavior can be observed in the parabolic setting.
Concerning the study of the abstract control problem (1.1), some characterizations are provided in the series of works [Gau11, JPP07, JPP10, JPP13, JPP14] using the formalism of Carleson measures. However, the precise question of an abstract characterization of the minimal null-control time has not been much considered. A formula has been given for the minimal null-control time of system (1.1), in [AKBGBdT14], in a setting encompassing coupled one dimensional parabolic equations with a scalar control. Its value depends on the condensation index of the eigenvalues of A* (see Section 7.5 for a precise definition) and the observation of the associated eigenvectors. In this work the authors assume that the eigenvectors form a Riesz basis of the state space. Let us also mention the work [AKBGBM19] where the null-control time is studied through a resolvent-like inequality (introduced in [DM12]) that is a quantification of the well-known Fattorini-Hautus test for approximate controllability (see [Fat66, Oli14]). It is an abstract characterization that might not be easily computable on actual examples but provides a common setting for all the previous examples exhibiting a positive null-control time. The last two mentioned results also rely on the moment method. Note that, even if the Carleman approach is very powerful, it does not seem to be applicable to all the systems of interest: in many situations (including the ones discussed in Section 5.2) the moment method is still the only successful technique up to now.
To highlight the limitations of the existing literature on such problems and the improvements we propose, let us consider the following control problem, where c ∈ L^2(0, 1; R) is a given potential. We insist on the fact that our goal is not to study this particular example but to develop a general characterization. The application to this particular example is detailed in Section 5.2. The study of the minimal null-control time for this system for an arbitrary potential c is not covered by the literature, for several reasons.
• First, depending on c, the underlying operator can have geometrically double eigenvalues. This induces (a finite number of) non-observable modes and thus prevents even approximate controllability. We thus propose to extend the study of the minimal null-control time to a given subspace of initial conditions. This allows us to still analyze the controllability properties in this case.
• Even if the potential c is such that the eigenvalues are geometrically simple, it can happen that some of them are algebraically double. In this case, to the best of our knowledge, the only existing results are [AKBGBdT11, FCGBdT10], which ensure null-controllability in arbitrary time if the eigenvalues are well separated (i.e. satisfy the classical gap condition recalled in (2.2)). However, their arguments to disprove null-controllability at time T < T* cannot be applied in this example when the potential c is such that the family of eigenvectors forms a complete family but is not a Riesz basis. Therefore, the above formula for T* may dramatically overestimate the actual null-control time for the system. We will see in Section 5.2.1 that it may happen that T* = +∞ whereas the system is actually null-controllable at any time T > 0.

We will use quite general assumptions in our analysis answering all these concerns in the case of a scalar control (see [BM20] for an extension to non-scalar controls). Doing so, we will prove that the difference between the Riesz basis assumption and the complete family assumption for the eigenvectors is not only technical and that a new phenomenon can appear: the condensation of eigenvalues can be compensated by the condensation of eigenvectors.
We continue this introduction by stating more precisely the problem under consideration and the obtained results.

Functional setting
Let X be a Hilbert space, whose inner product and norm are denoted by (·, ·) and ‖·‖ respectively. We shall systematically identify X with its dual through the Riesz theorem. Let (A, D(A)) be an unbounded operator in X such that −A generates a C^0-semigroup in X, and let (A*, D(A*)) be its adjoint in X. Up to a suitable translation, we can assume that 0 is in the resolvent set of A. We denote by X_1 (resp. X*_1) the Hilbert space D(A) (resp. D(A*)) equipped with the norm ‖x‖_1 := ‖Ax‖ (resp. ‖x‖_{1*} := ‖A*x‖). We define X_{−1} as the completion of X with respect to the norm

‖y‖_{−1} := sup_{z ∈ X*_1} (y, z) / ‖z‖_{1*}.
Notice that X_{−1} is isometric to the topological dual of X*_1 using X as a pivot space (see for instance [TW09, Proposition 2.10.2]); the corresponding duality bracket will be denoted by ⟨·, ·⟩_{−1,1*}.
Let U be a Hilbert space (that we will identify with its dual) and let B : U → X_{−1} be a linear continuous control operator. We denote by B* : X*_1 → U its adjoint in the duality described above.
Proposition 1.1. -Under the above assumptions, for any T > 0, any y 0 ∈ X −1 , and any u ∈ L 2 (0, T ; U ), there exists a unique y ∈ C 0 ([0, T ]; X −1 ) solution to (1.1) in the sense that it satisfies for any t ∈ [0, T ] and any z t ∈ X * 1 , Moreover there exists C T > 0 such that The proof of this result is recalled in Appendix 7.1. Let us mention that this notion of solution is very weak. In most works concerning controllability properties for abstract systems like (1.1), an extra admissibility assumption is made on the control operator B to ensure more regularity for the solutions. Note however that this is not mandatory to prove wellposedness of the system in the weak sense above. We will discuss below the regularity properties of the control problem.
Let (X_*, ‖·‖_*) be a Hilbert space such that X*_1 ⊂ X_* ⊂ X with dense and continuous embeddings. We assume that X_* is stable by the semigroup generated by −A* (see Remark 1.3). We also define X_− as a subspace of X_{−1}, which is also isometric to the dual of X_* with X as a pivot space. The corresponding duality bracket will be denoted by ⟨·, ·⟩_{−,*}. Thus, we end up with the following five functional spaces: X*_1 ⊂ X_* ⊂ X ⊂ X_− ⊂ X_{−1}. We say that the control operator B is an admissible control operator for (1.1) with respect to the space X_− if for any T > 0 there exists C_T > 0 such that (1.3) holds. If (1.3) holds for some T > 0, then it holds for any T > 0. The admissibility condition (1.3) implies that, by density, we can give a meaning to the map. In this setting, following the lines of [Cor07, Theorem 2.37], we obtain the following regularity result for the solutions.
Proposition 1.2. -Assume that (1.3) holds. Then, for any T > 0, any y 0 ∈ X − , and any u ∈ L 2 (0, T ; U ), there exists a unique y ∈ C 0 ([0, T ]; X − ) solution to (1.1) in the sense that it satisfies for any t ∈ [0, T ] and any z t ∈ X * , Moreover there exists C T > 0 such that Remark 1.3. -Note that a similar regularity result holds if we don't assume that X * is stable by the semigroup generated by −A * except that we need to restrict ourselves to initial data y 0 ∈ X. In that case the solution satisfies for any t ∈ [0, T ] and any z t ∈ X * , Remark 1.4. -The case where X * = X * 1 means that we do not have any additional regularity property for B. Conversely, the case X * = X means that we have the best regularity we can hope for system (1.1) (this is the usual definition of an admissible control operator as in [Cor07,TW09]).
To give more accurate results, we aim at analyzing the minimal null-control time problem for each specified set of initial data. This is the object of the following definition.
Definition 1.5. — Let Y_0 be a closed subspace of X_−. We say that system (1.1) is null-controllable at time T from Y_0 if for any y_0 ∈ Y_0 there exists u ∈ L^2(0, T; U) such that the associated solution of (1.1) satisfies y(T) = 0.
As a specific choice of Y_0 one can think of Y_0 = X_−, in which case we recover the classical notion of null-controllability. On the opposite side, if Y_0 is a one dimensional subspace Y_0 = Span{y_0}, then the notion above amounts to considering only the null-controllability of the system for that particular initial condition y_0.
From now on, we will assume that the space Y_0 is given, and we denote by P_{Y_0} the orthogonal projection onto Y_0 with respect to ‖·‖_− and by P*_{Y_0} ∈ L(X_*) its adjoint in the duality ⟨X_−, X_*⟩. Notice that these definitions yield

Notations
We give here some notations that will be used throughout this article.
• For any integers a, b, c ∈ N, we shall define the following subsets of N:
• For any complex number µ ∈ C we define e_µ : (0, +∞) → C to be the exponential function

(1.5) e_µ : s ↦ e^{−µs}.
• We shall denote by C_{γ_1,...,γ_l} > 0 a constant, possibly varying from one line to another, depending only on the parameters γ_1, . . . , γ_l.
• For any multi-index α ∈ N^n, we denote its length by |α| = Σ_{j=1}^n α_j and its maximum by max_{1 ≤ j ≤ n} α_j. For α, µ ∈ N^n, we say that µ ≤ α if and only if µ_j ≤ α_j for any j ∈ [[1, n]].
• For any finite subset A ⊂ C, we will make use of the polynomial P_A defined by It satisfies in particular, for any λ ∈ A,
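For readers who want to experiment with these notations, the following small Python helpers implement the multi-index length, the partial order µ ≤ α, and the polynomial P_A, under the usual assumption (not stated explicitly above) that P_A(X) = Π_{a∈A}(X − a), in which case P_A'(λ) = Π_{µ∈A, µ≠λ}(λ − µ) for any λ ∈ A.

```python
import numpy as np

def length(alpha):
    """Length |alpha| of a multi-index: the sum of its components."""
    return sum(alpha)

def leq(mu, alpha):
    """Partial order: mu <= alpha iff mu_j <= alpha_j for every j."""
    return all(m <= a for m, a in zip(mu, alpha))

def P(A):
    """Polynomial P_A (assumed normalization): P_A(x) = prod_{a in A} (x - a)."""
    return lambda x: np.prod([x - a for a in A])

def P_prime_at(A, lam):
    """For lam in A, P_A'(lam) = prod_{mu in A, mu != lam} (lam - mu)."""
    return np.prod([lam - mu for mu in A if mu != lam])
```

For instance, with A = {1, 2, 4} one gets P_A(0) = (−1)(−2)(−4) = −8 and P_A'(2) = (2−1)(2−4) = −2.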

Spectral assumptions
In addition to the hypotheses described in the introduction that are necessary for the well-posedness and regularity of our control problem, we shall make now the following structural assumptions.
• First of all, we shall only consider scalar controls in this paper, that is U = R.
• We assume that the spectrum of A* is only made of a countable number of geometrically simple eigenvalues, denoted by Λ. We refer to Section 6.3 for a discussion on this assumption. We shall also assume for simplicity that the eigenvalues are all real (see however the discussion in Section 6.1) and that (1.7) is satisfied. Note that, if (1.7) does not hold, we can replace A by A + γ for γ > 0 large enough and find an associated null-control u. A null-control for the original problem is then given by t ↦ e^{γt} u(t) and we can explicitly bound its cost with respect to the parameters γ and T.
• For any eigenvalue λ ∈ Λ, we denote by α_λ ≥ 1 its algebraic multiplicity and we assume that there exists an integer η ≥ 1 such that α_λ ≤ η for any λ ∈ Λ.
• The main structural assumptions on the eigenvalues Λ we shall make in this paper are the following:
- Asymptotic behavior:

(1.8) Σ_{λ∈Λ} 1/λ < +∞.
- Weak gap condition with parameters p ∈ N and ρ > 0: In the case p = 1, the weak gap condition above simply reduces to the usual gap condition used for instance in [FR71]. If the spectrum Λ is increasingly indexed as Λ = (λ_m)_{m≥1}, the weak gap condition (1.9) reads λ_{m+p} − λ_m ≥ ρ for every m ≥ 1. As we will use a different labelling of the spectrum in this paper we shall not use these notations anymore in what follows.
• We denote by (φ^0_λ)_{λ∈Λ} an associated family of eigenvectors of A*. These eigenvectors are chosen to be normalized in X_*.
As we are interested in null-controllability properties of system (1.1), we will first assume that (1.10) This is a necessary condition for the approximate controllability of system (1.1), and is therefore mandatory if we expect null-controllability to hold. In our setting, the assumption (1.10) is also a sufficient condition for approximate controllability (see [Fat66,Oli14]). When the considered set of initial data Y 0 is not the whole space X − , the approximate controllability condition (1.10) can be too strong and we can relax it. We will discuss this point in Section 6.2.
By (1.10), we may uniquely determine such a Jordan chain if we impose in addition a suitable normalization on the generalized eigenvectors. This particular choice of the Jordan chain is not mandatory but will simplify the forthcoming computations. In the case where the eigenvalues are algebraically simple (η = 1) we drop the superscript 0 for the eigenvectors.
• We introduce the notation Φ for the family of all the (generalized) eigenvectors of A*. We assume that Φ is complete in X_*, i.e. for any y ∈ X_−, we have (1.12). We emphasize the fact that we will not make any additional assumptions on the family Φ. This is a very important difference with related results in the literature in which, most of the time, it is assumed that Φ forms a Riesz basis of X_*. This is discussed in Sections 1.4.4 and 3.
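The weak gap condition can be checked numerically. The sketch below assumes the standard reading of (1.9) for an increasingly indexed spectrum, namely λ_{m+p} − λ_m ≥ ρ for all m; for p = 1 this is the usual gap condition. The example spectrum {k², k² + 1/k} is a typical case of condensation: it fails every uniform gap but satisfies a weak gap with p = 2.

```python
def has_weak_gap(lams, p, rho):
    """Check the (assumed) indexed form of condition (1.9):
    lam_{m+p} - lam_m >= rho along the increasing sequence."""
    lams = sorted(lams)
    return all(lams[m + p] - lams[m] >= rho for m in range(len(lams) - p))

# condensing spectrum: the pairs k^2 and k^2 + 1/k get arbitrarily close
spectrum = sorted([k**2 for k in range(1, 21)]
                  + [k**2 + 1.0 / k for k in range(1, 21)])
```

On this example, `has_weak_gap(spectrum, 1, 0.5)` fails (the pair 9 and 9 + 1/3 is too close), while `has_weak_gap(spectrum, 2, 0.5)` holds, illustrating why the paper works with p > 1.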
In the forthcoming paper [BM20], we will study the extension of our analysis to the case of possibly infinite dimensional controls.

Groupings of eigenvalues
To introduce our formula for the minimal null-control time it is convenient to define adapted groupings for the spectrum Λ. We highlight that this notion does not exactly coincide with the condensation groupings introduced by Bernstein [Ber33], even though it is closely related.
Definition 1.6. — Let p ∈ N* and r, ρ > 0. A sequence of sets (G_k)_{k≥1} ⊂ P(Λ) is said to be a grouping for Λ with parameters p, r, ρ, and we will write (G_k)_k ∈ G(Λ, p, r, ρ), when the following additional properties hold for every k ≥ 1:
We prove in the Appendix (Proposition 7.1) that such a grouping always exists for any Λ satisfying the weak gap condition (1.9).
Once we are given such a grouping, we shall always adopt the following labelling of the elements of Λ: G_k = {λ_{k,1}, . . . , λ_{k,g_k}} with λ_{k,1} < · · · < λ_{k,g_k}, and the (generalized) eigenvectors will be relabelled accordingly, where in the same fashion α_{k,j} := α_{λ_{k,j}}. For any k ≥ 1, we gather the multiplicities associated with the elements of G_k in the multi-index α_k = (α_{λ_{k,1}}, . . . , α_{λ_{k,g_k}}) ∈ N^{g_k}.
Remark 1.7. — The condition λ_{k,1} < · · · < λ_{k,g_k} is convenient to treat the abstract problem (1.1) but might not be convenient in actual examples. As all the estimates in our analysis will depend on the parameters p and ρ, the eigenvalues inside the same group are mostly interchangeable and thus the increasing labelling is not needed.
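A grouping in the spirit of Definition 1.6 can be produced by a simple greedy scan, as in the following illustrative sketch (this is only one natural construction; the proof of Proposition 7.1 may proceed differently): consecutive eigenvalues separated by less than ρ are placed in the same group, and the weak gap condition then bounds the group sizes by p.

```python
def greedy_grouping(lams, rho):
    """Put consecutive eigenvalues at distance < rho in the same group.
    Under the weak gap condition with parameters (p, rho), each group
    then contains at most p elements (illustrative sketch only)."""
    lams = sorted(lams)
    groups, current = [], [lams[0]]
    for lam in lams[1:]:
        if lam - current[-1] < rho:
            current.append(lam)      # lam condenses with the current group
        else:
            groups.append(current)   # gap >= rho: close the group
            current = [lam]
    groups.append(current)
    return groups
```

On the condensing spectrum {k², k² + 1/k} with ρ = 0.5, the close pairs such as {9, 9 + 1/3} end up in the same group, every group has at most two elements, and concatenating the groups recovers the full increasing sequence.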

Minimal control time definition
From now on, we assume given a grouping (G_k)_k in G(Λ, p, r, ρ). Thanks to the assumption (1.10), we can define a family (1.15) of elements ψ^l_{k,j} of X_*, whose coefficients can be explicitly computed on actual control problems (see Section 5) and which only depend on the group G_k and on the multiplicity multi-index µ.
In the simpler case where the eigenvalues are assumed to be algebraically simple (i.e. η = 1), we can immediately give a more explicit formula for T_0(Y_0). Indeed, in this case one recovers the standard divided differences (whose definition and properties are recalled in Section 7.3.1). Then, using Corollary 7.8 and (1.14), it follows that the computation of all those divided differences is not needed and the formula reduces to the expression where the last equality comes from the use of Newton's formula (see Proposition 7.4).
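The divided differences mentioned here are the classical ones; a minimal Python implementation of the recursive definition and of the associated Newton interpolation formula (for distinct real points) may help the reader check the computations of Section 7.3.1.

```python
import math

def divided_difference(f, pts):
    """Recursive divided difference f[pts_0, ..., pts_n] (distinct points)."""
    if len(pts) == 1:
        return f(pts[0])
    return (divided_difference(f, pts[1:]) - divided_difference(f, pts[:-1])) \
        / (pts[-1] - pts[0])

def newton_eval(f, pts, x):
    """Newton interpolation polynomial of f at the nodes pts, evaluated at x."""
    total, prod = 0.0, 1.0
    for k in range(len(pts)):
        total += divided_difference(f, pts[:k + 1]) * prod
        prod *= (x - pts[k])
    return total
```

For example, for f(x) = x² the second-order divided difference at any three distinct points equals 1 (the leading coefficient), and the Newton polynomial interpolates f exactly at its nodes.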
Remark 1.8. -The definition above corresponds to a given grouping of the spectrum, however the minimal null-control result stated in Theorem 1.11 will show that its value does not depend on this particular choice of a grouping. As a consequence, for specific examples, one can compute the minimal null control time T 0 (Y 0 ) using any convenient such grouping in a class G(Λ, p, r, ρ).
For the sake of simplicity, for any y_0 ∈ X_− we denote by T_0(y_0) the quantity T_0(Span(y_0)). Of course, we have the following proposition relating T_0(Y_0) and T_0(y_0) for y_0 ∈ Y_0.
Proposition 1.9. — For any closed subspace Y_0 ⊂ X_−,
This assertion is proved in Section 7.4.
Remark 1.10. — Let us discuss the sign of T_0(Y_0).
• In the case Y 0 = X − (the operator P * Y 0 thus reduces to the identity), the minimal time T 0 (Y 0 ) is always non-negative. Indeed, from the case µ = (1, 0, . . . , 0) in the definition (1.16) of T 0 we have that From the admissibility condition (1.3) applied to z = φ 0 k,1 , we deduce the following upper bound B * φ 0 For instance, if we choose y 0 ∈ X 1 to be an eigenvector of A for an eigenvalue λ ∈ Λ, then we have T 0 (y 0 ) = −∞. Indeed, we first observe that We deduce that the logarithms in the definition of T 0 (y 0 ) are all equal to −∞ for k large enough.

Null-controllability result
The main result of this paper reads as follows (see also the extension discussed in Section 6.1).
Theorem 1.11. — Assume that the operators A and B satisfy the assumptions given in Section 1.4.1. Let T > 0 and let T_0(Y_0) be defined by (1.16).
Let us briefly mention that our strategy of proof relies on an adapted block resolution of the associated moment problem (see Theorems 2.5 and 4.1). In the case of spectral condensation, this new method of resolution ensures sharper results than those given by standard biorthogonal families. However, as a by-product, in the case of algebraically simple eigenvalues we recover the known optimal estimates for such biorthogonal families (see Corollary 2.6). In the case of algebraically multiple eigenvalues we provide new estimates for such biorthogonal families (see Corollary 4.2). Before describing this strategy of proof in more detail, let us make some comments.
• There are settings in which formulas for the minimal null-control time are already known in the literature for instance when the eigenvalues are algebraically simple and: -when the condensation index of Λ (see Appendix 7.5 for a precise definition) is equal to 0 (see [AKBGBM19, Remark 1.15]); -or when the family (φ λ ) λ∈Λ of eigenvectors forms a Riesz basis of X * (see [AKBGBdT14]). Obviously, in those settings we recover the known expressions. This is discussed in Section 3.1 and Section 3.2.
• However, we also prove that the Riesz basis assumption considered in [AKBGBdT14] is not only technical. More precisely, we show in Proposition 3.3 that, if the Riesz basis assumption does not hold, then the actual minimal control time is less than or equal to the value T* given by the formula in this reference (see (3.2)). Moreover, we present in Section 5.1 a few examples that are built such that the value of T* is any chosen element of [0, +∞] whereas the minimal null-control time T_0(X_−) is in fact 0. This highlights a new phenomenon: when (φ_λ)_{λ∈Λ} does not form a Riesz basis, it may happen that the eigenvectors condensate (or, more precisely, the eigenvectors normalized with respect to the observation, i.e. φ_λ/B*φ_λ) and this condensation can compensate for the condensation of eigenvalues.
• The weak gap condition (1.9) is particularly well adapted to the applications we have in mind, namely coupled one dimensional parabolic equations in which case the spectrum is given by a finite union of sequences satisfying a classical gap condition (see for instance Lemma 2.3).
The restriction to the one dimensional case in those applications comes from the assumption (1.8). Although this assumption can be seen as a restriction due to the use of the moment method, as we are considering scalar controls (U = R) it is also a necessary null-controllability condition (see for instance [Mil06, Appendix A]).
• As we have specified the space of initial conditions in this study of the minimal null-control time, it directly follows that finite linear combinations of eigenvectors are null-controllable in arbitrarily small time: the existence of a positive minimal null-control time is definitely a high-frequency phenomenon, as already observed in Remark 1.10.
• In this article we not only prove Theorem 1.11 but we also develop a new strategy to solve moment problems: the block moment method presented in Section 1.5. The resolution of these problems (see Theorems 2.5 and 4.1) is done with precise estimates. This not only leads to the construction of a control but also to uniform estimates on this control with respect to Λ in a certain class (see Corollaries 2.11 and 2.12). Those uniform estimates are important in various contexts when one wants to achieve bounds on the control for parameter-dependent problems (see for instance [AB20, ABM18] for an example in numerical analysis of null-controllability problems, or [LZ02] for an example in oscillating coefficient problems). Thus, the block moment method can be of interest to study parameter-dependent problems even in the presence of a strong gap condition, in particular in the case where the strong gap behaves badly with respect to the parameter. Actually, this strategy has already been applied in [BB19] to study the boundary controllability of a coupled parabolic system with Robin boundary conditions, uniformly with respect to the Robin parameters. Moreover, this uniformity property will be crucial in Section 4 to infer the results on multiple eigenvalues from the ones on simple eigenvalues.

Structure of the article
We end this introduction by describing the global strategy used to prove Theorem 1.11 and giving some further bibliographical comments. Section 2 is dedicated to the proof of Theorem 1.11 in the case of algebraically simple eigenvalues. We provide in Section 3 a comparison of our results with available results from the literature. In Section 4 we prove that the uniform estimates obtained in Section 2 allow us to prove Theorem 1.11 in the general case of algebraically multiple eigenvalues. To highlight the new cases and phenomena covered by our analysis we present different examples in Section 5. Then we propose some extensions in Section 6. To ease the reading we gather various technical results in Section 7.

Strategy of proof
The proof of the positive controllability result (that is point (i) of Theorem 1.11) relies on a block resolution of the moment problem. Let us give more details about this strategy.
Let y_0 ∈ Y_0 and u ∈ L^2(0, T; R) be given. Using Proposition 1.2, it follows that y(T) = 0 if and only if the control u satisfies (1.20). As the family Φ of (generalized) eigenvectors is assumed to form a complete family in X_* (see (1.12)), it is in fact sufficient to test (1.20) against the elements of Φ. Therefore a null control u is characterized by the following countable set of equations. Using the formalism of generalized divided differences, we can give a convenient expression of the action of the semigroup on the generalized eigenvectors, the last equality coming from Definition 7.12. Then, y(T) = 0 if and only if for any λ ∈ Λ and any l ∈ [[0, α_λ − 1]],

By (1.11), and since y_0 ∈ Y_0, this reduces to finding u ∈ L^2(0, T; R) such that for any λ ∈ Λ and any l ∈ [[0, α_λ − 1]], the moment equations (1.23) hold, where we used (1.10) and (1.15). To solve this so-called moment problem, the classical strategy introduced in [FR71] consists in designing a biorthogonal family in L^2(0, T) to the relevant family of exponentials, with associated estimates. Then, thanks to these estimates, a suitable control is defined. Usually in this procedure each biorthogonal element is estimated separately. Thus, this method is somehow inoperative to analyse the possible condensation of eigenvectors (which is related to possible cancellations in linear combinations of right-hand sides of (1.23)). We will thus propose to solve this moment problem using the grouping introduced in Section 1.4.2, in order to cope with such possible compensations. We then look for a solution u in the form (1.24), where each q_k will solve the moment problem corresponding to the group G_k. More precisely, such a control will formally solve (1.23) if (1.25) holds. Then the proof of point (i) of Theorem 1.11 reduces to the resolution of such a block moment problem with suitable estimates (see Theorem 4.1). First, we solve in Theorem 2.5 the block moment problem in the case where the eigenvalues are algebraically simple, i.e. (1.26). This construction uses a Laplace transform isomorphism together with a suitable restriction argument (Proposition 2.10). The obtained estimates on q_k will allow us to prove convergence of the series (1.24) when T > T_0(Y_0). Those estimates are uniform with respect to Λ in a certain class (see Definition 2.1), which will allow us in Section 4 to infer the resolution of (1.25) in the general case.
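The block structure of (1.24)-(1.25) can be illustrated on a toy example (not from the paper): below, each q_k is sought in the span of all the exponentials e^{-λ(T−t)}, with prescribed moments on its own group and zero moments against the exponentials of the other groups. The actual construction in Theorem 2.5 uses a Laplace transform argument instead, so this is only a finite-dimensional caricature; note how the two condensing eigenvalues λ = 1 and λ = 1.01 are handled together in one block.

```python
import numpy as np

def solve_block(lams, moments, T):
    """Coefficients c of q(t) = sum_j c_j exp(-lams_j*(T-t)) whose moments
    int_0^T q(t) exp(-lam_k*(T-t)) dt equal the prescribed values."""
    S = np.add.outer(lams, lams)
    G = (1.0 - np.exp(-S * T)) / S          # Gram matrix of the exponentials
    return np.linalg.solve(G, moments)

# G_1 = {1, 1.01} (condensing pair), G_2 = {9} (well separated)
lams = np.array([1.0, 1.01, 9.0])
T = 2.0
c1 = solve_block(lams, np.array([0.5, -0.5, 0.0]), T)  # q_1: moments on G_1, zero on G_2
c2 = solve_block(lams, np.array([0.0, 0.0, 1.0]), T)   # q_2: moments on G_2, zero on G_1
c = c1 + c2                                            # u = q_1 + q_2 solves all moments
```

By linearity the coefficients of u = q_1 + q_2 match the full moment vector; the near-cancellation between the two exponentials of G_1 is exactly the kind of compensation that the block resolution is designed to quantify.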
Remark 1.12. — Contrary to the classical strategy, notice that the sequence (q_k)_k is not a biorthogonal family to the considered family of exponentials. The function q_k is only orthogonal to those functions corresponding to groups other than G_k. Inside the group G_k its definition is adapted to solve the moment problem (1.23). Through the right-hand side (adapted to each initial condition) we will possibly take into account the insufficient observation of eigenvectors, the condensation of eigenvalues but also the condensation of eigenvectors. This construction can thus be seen as a block moment method. As we consider at the same time the eigenvalues associated to the same group, this will lead to sharper estimates than those coming from the design of a biorthogonal family (i.e. when considering each eigenvalue individually).
However, as already mentioned, our strategy still allows us to prove the existence of, and sharp estimates on, biorthogonal families (see Corollary 2.6 and Corollary 4.2). Let us mention that, to the best of our knowledge, the estimate we obtain in Corollary 4.2 for a biorthogonal family in the presence of algebraic multiplicity of eigenvalues without the standard gap condition was not known. Even though these biorthogonal families are not always suitable to deal with controllability properties in the presence of spectral condensation (this is why we designed this block resolution of the moment problem), they can be useful for other problems.
Let us mention that, in the context of control problems with a spectrum satisfying the weak-gap condition, divided differences were already used for instance in [AI01, BKL02]. Among other things, in these works, the authors give a necessary and sufficient condition for the family of (generalized) divided differences to form a Riesz basis of L^2(0, T; C). The possible condensation of eigenvalues then appears to deduce properties on the original family of exponentials (t ↦ e^{iλt})_{λ∈Λ}. Their results are then applied to hyperbolic control problems.
The results presented in our work can be seen as the "parabolic" equivalent (or the "real-valued" equivalent if one focuses on the exponentials) of these results. Nevertheless, the control problems, as well as the families of exponentials, behave quite differently. Indeed, in our setting the considered families never form a Riesz basis. Thus, neither work can be deduced from the other.
The proof of point (ii) of Theorem 1.11 relies on the optimality of the bounds proved in Theorem 2.5 for the resolution of the block moment problem.
As dealing with null-controllability from a proper subspace of initial conditions is not classical, let us recall the following lemma, which characterizes this controllability through an observability inequality. (1) For any $y_0 \in Y_0$ there exists $u \in L^2(0,T;U)$ such that $y(T) = 0$ and $\|u\|_{L^2(0,T;U)} \leqslant M \|y_0\|_-$.

ANNALES HENRI LEBESGUE
(2) For any $z_T \in X^*$, the following partial observability inequality holds:

In this case, the best constant $M$ satisfying these properties is called the cost of controllability from $Y_0$ at time $T$ and is denoted by $M(Y_0, T)$.

2. The case of simple eigenvalues

2.1. Null-controllability in large time
The goal of this section is to prove point (i) of Theorem 1.11 in the case of algebraically simple eigenvalues. Thus, throughout this section we assume that $\eta = 1$.
As explained in Section 1.5, we now focus on the construction of a solution to (1.26). Of course, as we want to design a control $u \in L^2(0,T;U)$, the estimate of $\|q_k\|_{L^2(0,T;\mathbb{R})}$ will play a crucial role in proving that the series (1.24) makes sense. Actually, we will prove sharp estimates that are uniformly valid for $\Lambda$ in a certain class. These uniform estimates can be used for various applications and will be crucial to deal with the algebraic multiplicity of eigenvalues in Section 4. We start by specifying the class of families $\Lambda$ we will consider.
Definition 2.1. -Let $p \in \mathbb{N}^*$, $\rho > 0$ and $\mathcal{N} : (0,+\infty) \to \mathbb{R}$. We say that a countable family $\Lambda$ belongs to the class $\mathcal{L}_w(p,\rho,\mathcal{N})$ if $\Lambda$ satisfies the weak-gap condition (1.9) with parameters $p$ and $\rho$ and if for any $\varepsilon > 0$ we have (2.1).

This definition is directly inspired by the pioneering work [FR74]. More precisely, the class of sequences used in [FR74] is similar to $\mathcal{L}_w(1,\rho,\mathcal{N})$; it is however slightly different, since in (2.1) the summation condition is given on the value of $\lambda$ itself, whereas in the above reference the condition is on the index of the eigenvalue in $\Lambda$ (which is supposed to be sorted increasingly). Despite this small difference (whose aim is to simplify some computations), the results we shall take from [FR74] that use this definition are also valid with this alternative definition, and thus we set $\mathcal{L}(\rho,\mathcal{N}) := \mathcal{L}_w(1,\rho,\mathcal{N})$.

Remark 2.2 (The usual gap condition). -With our definition, a sequence $\Lambda$ belongs to $\mathcal{L}(\rho,\mathcal{N})$ if it satisfies the classical gap condition (2.2) and the asymptotic behavior estimate (2.1).
As we will see in the examples (Section 5), the typical situation where sequences satisfying the weak-gap condition appear is when one glues together a finite number of sequences, each of them satisfying a standard gap condition as in Remark 2.2. This is formalized in the following lemma.

Lemma 2.3. -Let $p, \tilde{p} \in \mathbb{N}^*$, $\rho, \tilde{\rho} > 0$ and $\mathcal{N}, \tilde{\mathcal{N}} : (0,+\infty) \to \mathbb{R}$ be given. Then, for any $\Lambda \in \mathcal{L}_w(p,\rho,\mathcal{N})$ and $\tilde{\Lambda} \in \mathcal{L}_w(\tilde{p},\tilde{\rho},\tilde{\mathcal{N}})$, we have

Proof. -Let us first prove the weak-gap condition. For any $\mu \geqslant 0$, we have

and, taking cardinalities, we get

For the asymptotic behavior of the sequences, we have

The claim is proved.

The following straightforward facts will also be useful.
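As a purely numerical illustration of this gluing mechanism (not part of the proof), the following Python sketch checks, on two hypothetical sequences each satisfying a classical gap, that every interval of length $\min(\rho,\tilde\rho)$ contains at most $p + \tilde p = 2$ points of the union; the sliding-window count is one natural reading of the weak-gap condition (1.9).

```python
def weak_gap_count(seq, rho):
    """Largest number of points of `seq` contained in a closed interval
    of length `rho` (sliding-window count over the sorted sequence)."""
    s = sorted(seq)
    best = 0
    for i, lo in enumerate(s):
        j = i
        while j < len(s) and s[j] <= lo + rho:
            j += 1
        best = max(best, j - i)
    return best

# Two hypothetical sequences, each with a classical gap (p = 1).
lam = [2.0 * k for k in range(1, 50)]         # gap 2
mu = [0.5 + 3.0 * k for k in range(1, 50)]    # gap 3
rho = min(2.0, 3.0)

assert weak_gap_count(lam, rho - 1e-9) == 1
assert weak_gap_count(mu, rho - 1e-9) == 1
# The glued family only satisfies a weak gap: at most p + p~ = 2 points
# of the union can fall in any interval of length min(rho, rho~).
assert weak_gap_count(sorted(lam + mu), rho - 1e-9) == 2
```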
For any $\varepsilon > 0$, there exists a constant $C_{\varepsilon,T,p,r,\rho,\mathcal{N}} > 0$ such that for any $k \geqslant 1$ and any $\omega_{k,1}, \ldots, \omega_{k,g_k} \in \mathbb{C}$, there exists $q_k \in L^2(0,T;\mathbb{C})$ satisfying

Moreover, up to the factor $e^{\varepsilon\lambda_{k,1}}$, this last estimate is sharp: any solution $q_k \in L^2(0,T;\mathbb{C})$ of (2.3b) satisfies

for some $C_p > 0$.
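To make the statement concrete, here is a minimal numerical sketch (our toy example, not the construction used in the proof) of a block moment problem for a single group of two condensed, algebraically simple eigenvalues: the minimal $L^2$-norm solution supported on the exponentials of the group is computed through their Gram matrix.

```python
import math

def solve_block_moments(lams, omegas, T):
    """Minimal L^2(0,T)-norm solution q(t) = sum_j c_j e^{-lam_j t} of the
    toy block moment problem  int_0^T q(t) e^{-lam_i t} dt = omega_i."""
    n = len(lams)
    # Gram matrix of the exponentials, computed in closed form.
    G = [[(1 - math.exp(-(li + lj) * T)) / (li + lj) for lj in lams] for li in lams]
    # Solve G c = omega by Gaussian elimination with partial pivoting (n tiny).
    A = [row[:] + [w] for row, w in zip(G, omegas)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, n):
            fac = A[r][i] / A[i][i]
            for col in range(i, n + 1):
                A[r][col] -= fac * A[i][col]
    c = [0.0] * n
    for i in reversed(range(n)):
        c[i] = (A[i][n] - sum(A[i][j] * c[j] for j in range(i + 1, n))) / A[i][i]
    return c

# A condensed pair of eigenvalues (lam_2 - lam_1 = 0.05) in one group.
lams, omegas, T = [1.0, 1.05], [1.0, -1.0], 2.0
c = solve_block_moments(lams, omegas, T)
# Check that the prescribed moments are reproduced.
for li, w in zip(lams, omegas):
    m = sum(cj * (1 - math.exp(-(li + lj) * T)) / (li + lj) for cj, lj in zip(c, lams))
    assert abs(m - w) < 1e-8
```

With this spacing the coefficients $c_j$ are of order $10^4$: the norm of $q_k$ blows up as the group condenses, which is precisely the phenomenon that the block estimates of Theorem 2.5 quantify uniformly.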

The proof of Theorem 2.5 is carried out throughout Sections 2.1.1 to 2.1.4. Before going on with the proof, let us notice that the resolution of the block moment problem (2.3) for a specific choice of $\omega_{k,j}$ allows us to prove, as a by-product, the existence of, and uniform estimates on, a biorthogonal family to the exponentials $(e_\lambda)_{\lambda\in\Lambda}$, where $e_\lambda$ is defined by (1.5).
For any $k \geqslant 1$ and any $j \in [[1, g_k]]$, where $\delta$ denotes the Kronecker symbol. Moreover, for any $\varepsilon > 0$, there exists a constant $C_{\varepsilon,T,p,r,\rho,\mathcal{N}} > 0$ such that for any $k \geqslant 1$ and any $j \in [[1, g_k]]$,

Moreover, up to the factor $e^{\varepsilon\lambda_{k,1}}$, this estimate is optimal, since any function $q_{k,j}$ satisfying (2.6) satisfies the lower bound

Proof. -Let $q_{k,j} \in L^2(0,T;\mathbb{C})$ be the solution of the block moment problem (2.3) given by Theorem 2.5 associated with the right-hand side $\omega_{k,j'} = \delta_{j,j'}$ for any $j' \in [[1, g_k]]$. Since those values of $\omega$ are real, we can replace $q_{k,j}$ by its real part without changing its properties. Then, the equalities (2.6) follow directly. Moreover, we have

From the Newton formula (see Proposition 7.4), it follows that for any
To conclude the proof of Corollary 2.6, we prove that there exists $C_{p,\rho} > 0$ such that $|\lambda_{k,j} - \lambda_{k,m}|$

By (1.14), we get

Thus,

As the right-hand side only takes a finite number of values, inequality (2.8) proves (2.7) and ends the proof of Corollary 2.6.
The lower bound directly follows from (2.5) and the inequality

Resolution of block moment problems in infinite time
In this section, we start by proving Theorem 2.5 in the case of simple eigenvalues and with $T = +\infty$. More precisely, we prove the following proposition.
Proposition 2.7. -For any $\varepsilon > 0$, there exists a constant $C_{\varepsilon,p,r,\rho,\mathcal{N}} > 0$ such that for any $k \geqslant 1$ and any $\omega_{k,1}, \ldots, \omega_{k,g_k} \in \mathbb{C}$, there exists $q_k \in L^2(0,+\infty;\mathbb{C})$ satisfying

and

The proof relies on the construction of a holomorphic function satisfying suitable properties and estimates. The resolution of the block moment problem (2.9) then comes from the isomorphism induced by the Laplace transform.
Proof. -Let us start by recalling classical properties of the Laplace transform (see for instance [Sch43] and the references therein). Let

From the properties of $H^2(\mathbb{C}^+)$, it follows that

Then the Laplace transform $\mathcal{L} : L^2(0,+\infty;\mathbb{C}) \to H^2(\mathbb{C}^+)$ is an isomorphism. We shall construct, for each $k$, a function $J_k \in H^2(\mathbb{C}^+)$ satisfying (2.10) and such that for any $\varepsilon > 0$ there exists $C_{\varepsilon,p,r,\rho,\mathcal{N}} > 0$ such that (2.11) holds. Taking advantage of the isomorphism property of the Laplace transform, we will then set $q_k := \mathcal{L}^{-1}(J_k)$ to conclude the proof.

Construction of J k
We define $J_k$ as

where $P_k$ is a polynomial of degree less than $p$, which is specified below, and $W_k$ is the following Blaschke-type product

The sequence $\Lambda^j$ contains the $j$th element of each group $G_l$, except if this group contains fewer than $j$ elements, in which case we replace it by the largest element of $G_l$, that is, $\lambda_{l,g_l}$. In particular, we observe that $\Lambda^j$ is a subsequence of $\Lambda$. From (1.8), we deduce that $\sum_{\lambda\in\Lambda^j} \frac{1}{\lambda} < +\infty$, so that, for any $j$, the associated infinite product converges uniformly on any compact subset of $\mathbb{C}^+$. As a consequence, $W_k$ is well defined and holomorphic in $\mathbb{C}^+$ (see for instance [Rud87, Chapter 15]). It follows that $J_k$ is also holomorphic on $\mathbb{C}^+$.
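As a numerical sanity check of the two properties of such products used below (assuming here the standard half-plane Blaschke factor $(\lambda - z)/(\lambda + z)$; the sequence of poles chosen is ours, for illustration only): a truncated product has modulus one on the imaginary axis, since the $\lambda$'s are real, and modulus strictly less than one inside $\mathbb{C}^+$.

```python
import cmath

def blaschke_halfplane(z, poles):
    """Truncated Blaschke-type product for the right half-plane built on
    real poles lam > 0: prod over lam of (lam - z)/(lam + z)."""
    w = 1.0 + 0.0j
    for lam in poles:
        w *= (lam - z) / (lam + z)
    return w

poles = [float(k * k) for k in range(1, 200)]  # a real increasing sequence
# |W(i tau)| = 1 on the imaginary axis, since each factor has modulus 1 there.
for tau in (0.0, 1.0, 17.3, -5.0):
    assert abs(abs(blaschke_halfplane(1j * tau, poles)) - 1.0) < 1e-10
# Inside the half-plane the product is contracting: |lam - z| < |lam + z|.
assert abs(blaschke_halfplane(2.5 + 1j, poles)) < 1.0
```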
We shall need the following property, whose proof is technical and postponed to Section 2.1.6.
Proposition 2.8. -For any $\varepsilon > 0$, there exists a constant $C_{\varepsilon,p,r,\rho,\mathcal{N}} > 0$ such that for any $k \geqslant 1$, any $l \in [[0,p]]$ and any $\theta \in \operatorname{Conv}(G_k)$,

From the definition of $W_k$, it follows that (2.10) is satisfied. Next, (2.11) is equivalent to

To satisfy (2.11), we define $P_k$ as the Lagrange interpolating polynomial at the points

Thus, to conclude, it remains to estimate

Notice that, since the eigenvalues in $\Lambda$ are real, for any $k \geqslant 1$ and any $\tau \in \mathbb{R}$ we have $|W_k(i\tau)| = 1$. This implies

and thus $J_k \in H^2(\mathbb{C}^+)$. Using the Leibniz formula (see Proposition 7.7),

Using again the Leibniz formula (see Proposition 7.7),

The two factors in each term of this sum are estimated using the Lagrange theorem (see Proposition 7.5):

Using (2.13), it follows that

Recall that (1.14) implies $\lambda_{k,g_k} - \lambda_{k,1} < \rho$. Then, using (2.17) and (2.18) in the identity (2.16) proves that there exists $C_{\varepsilon,p,r,\rho,\mathcal{N}} > 0$ such that for any

Plugging this into (2.15), we obtain

Finally, getting back to estimate (2.14) and using the isomorphism property of $\mathcal{L}$ ends the proof of Proposition 2.7.

From infinite time horizon to finite time horizon
In this section, we first prove that the estimates of Proposition 2.7 on the solution on $(0,+\infty)$ of the block moment problem (2.9) for simple eigenvalues imply the resolution on $(0,T)$ of the similar block moment problem (2.3). More precisely, we prove the following.
is an isomorphism. Moreover, there exists a constant $C_{T,p,\rho,\mathcal{N}} > 0$ such that (2.20) holds.

In the case $p = 1$, this result is due to Fattorini and Russell [FR74, Theorem 1.3]. Our proof closely follows the strategy developed in this reference and takes advantage of the uniform estimates established in the previous sections.
Proof. -The fact that $R_{\Lambda,T}$ is an isomorphism is proved in [Sch43] under the sole assumption (1.8). The only thing left to prove is thus the bound (2.20).
The proof is by contradiction. Assume that the estimate does not hold for given $T$, $p$, $\rho$ and $\mathcal{N}$. Then there exists a sequence $(\Lambda_m)_{m\geqslant1}$ of families, all belonging to the same class $\mathcal{L}_w(p,\rho,\mathcal{N})$, such that

For each $m$, by Proposition 7.1, we consider a grouping $(G_k^m)_k \in \mathcal{G}(\Lambda_m, p, \rho/p, \mathcal{N})$, and from (2.21) we know that there exist coefficients $a_{k,j}^m$ such that the finite linear combination

Let $0 < \varepsilon < \frac{T}{2}$ be fixed and let $\mathbb{C}^+_{2\varepsilon} := \{z \in \mathbb{C}\,;\ \Re(z) > 2\varepsilon\}$. We prove that the sequence of functions $P_m$ is uniformly bounded on any compact subset of $\mathbb{C}^+_{2\varepsilon}$. Let $m \geqslant 1$ and $z \in \mathbb{C}^+_{2\varepsilon}$. Then, for any $k \in \{1, \ldots, K_m\}$, applying Proposition 2.7 to the sequence $\Lambda_m$ yields the existence of $q_k^{m,z} \in L^2(0,+\infty;\mathbb{C})$ satisfying

and

where $e_z$ is defined in (1.5).
The previous right-hand side is estimated using the Lagrange theorem (see Proposition 7.5). As the function $e_z$ is complex-valued, we apply it to both its real and imaginary parts. This yields

Then, using (2.22), it follows that, for $m$ sufficiently large,

From the Cauchy-Schwarz inequality, we deduce that

Summing these inequalities, we obtain that for any $z \in \mathbb{C}^+_{2\varepsilon}$,

From the properties of the groupings (see Definition 1.6), it follows that $\lambda_{k,1}^m \geqslant \frac{\rho}{p}(k-1)$. Thus, for any $z \in \mathbb{C}^+_{2\varepsilon}$,

This shows that $(P_m)_m$ is a sequence of holomorphic functions, uniformly bounded on any compact subset of $\mathbb{C}^+_{2\varepsilon}$. By Montel's theorem, we can extract a subsequence converging uniformly on any compact subset of $\mathbb{C}^+_{2\varepsilon}$ to a holomorphic function $P$. Now recall that $\|P_m\|_{L^2(0,T;\mathbb{C})}$ goes to $0$ as $m$ goes to infinity. This implies that $P(t) = 0$ for any $t \in (2\varepsilon, T)$. The function $P$ being holomorphic, it vanishes on all of $\mathbb{C}^+_{2\varepsilon}$. Using (2.23) and the Lebesgue dominated convergence theorem yields $\|P_m\|_{L^2(0,+\infty;\mathbb{C})} \to 0$ as $m \to \infty$. This contradicts $\|P_m\|_{L^2(0,+\infty;\mathbb{C})} = 1$ and ends the proof of Proposition 2.10.
We now have all the ingredients to prove Proposition 2.9.
Proof of Theorem 2.5. -The resolution of the block moment problem (2.3), as well as the estimate (2.4), follows directly from Propositions 2.7 and 2.9.
The only thing left to prove is the lower bound (2.5). Let $q_k \in L^2(0,T;\mathbb{C})$ be any solution of (2.3b). Using the linearity of divided differences, the equalities (2.3b) imply (2.25), where $e_t$ is defined by (1.5). From the Lagrange theorem (see Proposition 7.5) and the fact that $e_t^{(i-1)}$ is decreasing on $(0,+\infty)$, it follows that, for any $t \in (0,T)$, we have

since we assumed that $\Lambda \subset [1,+\infty)$. Thus, applying the Cauchy-Schwarz inequality to (2.25) gives

which proves (2.5) and ends the proof of Theorem 2.5.

Construction of the control
In this section we gather the previous ingredients to prove the positive controllability result. We also give upper and lower bounds on the cost of controllability from $Y_0$ at time $T$, as defined in Lemma 1.13.
Proof of point (i) of Theorem 1.11 for simple eigenvalues. -Assume that $T_0(Y_0) < +\infty$ and let us consider an initial datum $y_0 \in Y_0$. Without loss of generality, we assume that $\|y_0\|_- = 1$.
For any $k \geqslant 1$ and $j \in [[1,g_k]]$, we set

Let $(q_k)_{k\geqslant1}$ be the solution of the block moment problem (2.3) given by Theorem 2.5. There exists a constant $C_{\varepsilon,T,p,r,\rho,\mathcal{N}} > 0$ such that

From the Leibniz formula (see Proposition 7.7),

In this expression, $\xi[\ldots]$ stands for the divided differences associated with the values $(\lambda_{k,1},\xi_{k,1}), \ldots, (\lambda_{k,g_k},\xi_{k,g_k})$. From the Lagrange theorem (see Proposition 7.5), it follows that

Using the definition (1.18) of $T_0(y_0)$, it follows that

Thus, there exists $C_{T,p} > 0$ such that

Then, as $T > T_0(y_0) + 2\varepsilon$, the series (1.24) converges in $L^2(0,T;\mathbb{R})$ and defines a control $u$ that solves the moment problem (1.26), which implies that the associated solution of (1.1) satisfies $y(T) = 0$.
With the same strategy, we can prove a more accurate result, namely the following uniform bound on the cost of controllability.
We also provide the following lower bound for the cost of controllability.
By definition of $z_T$, we have

and thus

From the Newton formula (see Proposition 7.4) and the Lagrange theorem (see Proposition 7.5), for any $t$,

Using the Newton formula (see Proposition 7.4), we have

Then, for any $\varepsilon > 0$, there exists $C_{\varepsilon,\gamma,J} > 0$ such that

Remark 2.14. -To be completely accurate, let us point out that [FR74, Theorem 1.1] does not exactly state such an estimate, since that theorem only deals with the estimate of a biorthogonal family. However, the estimate given in this theorem, together with [FR74, equality (2.1)] established during its proof, directly yields Lemma 2.13.
The generalisation we propose is the following.
Lemma 2.15. -Let $\gamma > 0$ and $J : \mathbb{R}^+ \to \mathbb{R}$. For any $\varepsilon > 0$, there exists $C_{\varepsilon,\gamma,J} > 0$ such that, for any sequence $\Sigma \in \mathcal{L}(\gamma,J)$ and any $\sigma \in \Sigma$, we have

Proof. -For any $\sigma > 0$, since $(\sigma - \Re(z))^2 \leqslant (\sigma + \Re(z))^2$, it follows that

and thus

We introduce the family $\tilde{\Sigma}$ obtained from $\Sigma$ by replacing $\sigma$ by $\Re(z)$, that is,

Since only one value has been modified, $\tilde{\Sigma}$ also satisfies $\sum_{\sigma'\in\tilde{\Sigma}} \frac{1}{\sigma'} < +\infty$. As

it follows that $\tilde{\Sigma}$ satisfies the gap condition (2.2) with $\rho$ replaced by $\frac{\gamma}{2}$. Notice that $\{\Re(z)\} \in \mathcal{L}(1, \varepsilon \mapsto \frac{1}{\varepsilon})$. Thus, using Remark 2.4 and the arguments of the proof of Lemma 2.3, it follows that $\tilde{\Sigma} \in \mathcal{L}(\frac{\gamma}{2}, \tilde{J})$ with $\tilde{J}$ depending only on $J$. Obviously, as the terms $\sigma' \in \Sigma$ that are different from $\sigma$ have not been modified, we get $W_\Sigma^\sigma = W_{\tilde{\Sigma}}^{\Re(z)}$. Applying Lemma 2.13, it follows that for any $\varepsilon > 0$ there is $C_{\varepsilon,\gamma,J} > 0$ such that

Finally, recalling (2.29) and (2.30), we obtain

which ends the proof of Lemma 2.15.

We now turn to the estimates we need on the derivatives of $\frac{1}{W_\Sigma^\sigma}$.

Proposition 2.16. -Let $\gamma > 0$ and $J : \mathbb{R}^+ \to \mathbb{R}$.
Then, for any $l \geqslant 0$ and any $\varepsilon > 0$, there exists $C_{l,\varepsilon,\gamma,J} > 0$ such that for any $\Sigma \in \mathcal{L}(\gamma,J)$,

Proof. -The case $l = 0$ is nothing but the estimate given in Lemma 2.15. Let

As $W_\Sigma^\sigma$ does not vanish in an open neighbourhood of $D_{\sigma,\gamma}$, the function $\frac{1}{W_\Sigma^\sigma}$ is holomorphic on this domain. Thus, applying the Cauchy formula yields

From Lemma 2.15, it follows that for any $\varepsilon > 0$ there exists $C_{\varepsilon,\gamma,J} > 0$ such that

which ends the proof of Proposition 2.16.
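The mechanism of this proof, namely a sup bound on a holomorphic function turned into derivative bounds through the Cauchy formula, can be checked numerically. The following sketch (with $f = \exp$ as a stand-in for $1/W_\Sigma^\sigma$; all parameter values are ours) approximates the contour integral and verifies the resulting bound $|f^{(l)}(\sigma)| \leqslant l!\, \sup_{|z-\sigma|=r}|f| / r^l$.

```python
import cmath
import math

def cauchy_derivative(f, center, order, radius, n=4000):
    """Approximate f^(order)(center) via the Cauchy integral formula,
    discretizing the circle |z - center| = radius with the trapezoid rule."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = 2 * math.pi * k / n
        z = center + radius * cmath.exp(1j * theta)
        # f(z)/(z - c)^{l+1} dz, with dz = i r e^{i theta} d theta
        total += f(z) / (z - center) ** (order + 1) * 1j * radius * cmath.exp(1j * theta)
    total *= (2 * math.pi / n) / (2j * math.pi)
    return math.factorial(order) * total

# Sanity check on f = exp: every derivative at 0.3 equals e^{0.3}.
val = cauchy_derivative(cmath.exp, 0.3, 3, 0.5)
assert abs(val - math.exp(0.3)) < 1e-9
# The same formula yields |f^(l)(c)| <= l! * sup_{|z-c|=r} |f| / r^l.
sup_f = max(abs(cmath.exp(0.3 + 0.5 * cmath.exp(2j * math.pi * k / 1000)))
            for k in range(1000))
assert abs(val) <= math.factorial(3) * sup_f / 0.5 ** 3 + 1e-9
```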
We shall now move to the proof of Proposition 2.8, which is the main objective of this section.
Proof of Proposition 2.8. -Recall that the function $\mathcal{N} : \mathbb{R}^+ \to \mathbb{R}$ is the one appearing in (2.1) and that the subsequences $\Lambda^j$ are defined in (2.12).
We recall that the index $k$ is fixed, as well as the value $\theta \in \operatorname{Conv}(G_k)$. We introduce the new sequence $\tilde{\Lambda}^j$ obtained from $\Lambda^j$ by replacing the $k$th value $\lambda_{k,\min(j,g_k)}$ by $\theta$, i.e.
Notice that, using Proposition 7.1, the fact that $\Lambda^j$ is a subsequence of $\Lambda$ such that each term belongs to a different group, and the assumption on $\theta$, we obtain that $\tilde{\Lambda}^j$ satisfies the gap condition (2.2) with $\rho$ replaced by $\gamma := \frac{\rho}{p}$. Notice that $\{\theta\} \in \mathcal{L}(1, \varepsilon \mapsto \frac{1}{\varepsilon})$. Thus, using Remark 2.4 and the arguments of the proof of Lemma 2.3, it follows that $\tilde{\Lambda}^j \in \mathcal{L}(\gamma, \tilde{J})$ with $\tilde{J}$ depending only on $\mathcal{N}$.
With these notations and Proposition 7.1, it follows that
Finally, using the Leibniz rule (for derivatives), evaluating the result at $z = \theta$ and using Proposition 2.16 yields the claim.

Lack of null-controllability in small time
The goal of this section is to prove point (ii) of Theorem 1.11 in the case of algebraically simple eigenvalues. Thus, throughout this section we assume that $\eta = 1$. The proof mainly relies on the optimality of the bound (2.5) obtained in the resolution of the block moment problems.
Proof. -Let $T > 0$ and assume that null-controllability of (1.1) from $Y_0$ at time $T$ holds.
Thus, there exists $C_T > 0$ such that for any $y_0 \in X_-$ there exists $u \in L^2(0,T;\mathbb{R})$ such that the associated solution of (1.1) with initial condition $P_{Y_0} y_0$ satisfies $y(T) = 0$ and $\|u\|_{L^2(0,T;\mathbb{R})} \leqslant C_T \|y_0\|_-$. Due to the equivalence between null-controllability and the moment problem (1.23), it follows that

Recall that $\psi_\lambda$ is defined by (1.15). Thus, for any $k \in \mathbb{N}^*$, the control $u(T-\cdot)$ solves (2.3b). Using the lower bound (2.5) with $\omega_{k,j} := \langle y_0, e^{-\lambda_{k,j} T}\psi_{k,j} \rangle_-$, it follows that

Due to the definition of $\omega_{k,j}$, this can be rewritten as

where $e_t$ is defined by (1.5). By the dual characterization of the norms, this implies

Finally, plugging this estimate and (2.31) into (2.32), we obtain

Due to the definition of $T_0(Y_0)$, this implies that $T \geqslant T_0(Y_0)$ and ends the proof of the lack of null-controllability at time $T < T_0(Y_0)$.

Comparison with some already known results
In this section, we prove that we actually recover the known formulas for the minimal null-control time when there is no condensation of eigenvalues or when the eigenvectors are assumed to form a Riesz basis of $X^*$. In doing so, we will highlight in Proposition 3.3 that the actual minimal null-control time never exceeds the value predicted by the formula that is valid under the Riesz basis assumption. As all these results were proved for algebraically simple eigenvalues, we assume throughout this section that $\eta = 1$.
Notice that the proofs in this section rely only on the definition (1.19) of the minimal null-control time and thus do not depend on Theorem 1.11.

When there is no condensation of eigenvalues
In this section we prove that, if the condensation index of the sequence $\Lambda$ vanishes (the definition of $c(\Lambda)$ is recalled in Appendix 7.5), then the expression (1.19) coincides with the known expression relating the minimal time for null-controllability to the observation of the eigenvectors $\phi_\lambda$ through the operator $B^*$.
Proposition 3.1. -Assume that $A$ and $B$ satisfy the assumptions of Theorem 1.11 with $\eta = 1$. If $c(\Lambda) = 0$, then we have

This result was already proved in [AKBGBdT14], with the additional assumption that the family of eigenvectors $\Phi = (\phi_\lambda)_{\lambda\in\Lambda}$ forms a Riesz basis of $X^*$, and in [AKBGBM19, Remark 1.15] in a more general framework encompassing the one studied here.
Proof. -Notice that when $Y_0 = X_-$, the operator $P^*_{Y_0}$ reduces to the identity. Thus, considering $l = 1$ in (1.19) always leads to

We assume that

and we will prove that $c(\Lambda) > 0$. We reason as in the proof of point (ii) of Theorem 1.11 (see Section 2.2), but starting from formula (1.17) instead of (1.18). We can find an integer $l^* \geqslant 1$, an extraction $(\kappa_n)_{n\geqslant1}$ and integers $m_n$ with $1 \leqslant m_n \leqslant m_n + l^* - 1 \leqslant g_{\kappa_n}$ such that

and such that

since, if it is not the case, we can reduce the value of $l^*$ accordingly. Note that, as $\|P^*_{Y_0}\phi_\lambda\|_* \leqslant 1$, by (3.1) we know that $l^* > 1$. From the definition of divided differences (see Definition 7.2), it follows that

For $n$ sufficiently large, we have

Using the definition of $\widetilde{T}(l^*-1)$, it follows that, for $n$ large enough,

Thus, since $l^* \geqslant 2$, we can combine the last two estimates to obtain
$$|\lambda_{\kappa_n,m_n+1} - \lambda_{\kappa_n,m_n}| \leqslant |\lambda_{\kappa_n,m_n+l^*-1} - \lambda_{\kappa_n,m_n}| \leqslant e^{-\lambda_{\kappa_n,1}\left(T_0(X_-)-\varepsilon-\widetilde{T}(l^*-1)\right)} \leqslant e^{\rho\left(T_0(X_-)-\varepsilon-\widetilde{T}(l^*-1)\right)}\, e^{-\lambda_{\kappa_n,m_n}\left(T_0(X_-)-\varepsilon-\widetilde{T}(l^*-1)\right)}.$$

When there is a Riesz basis of eigenvectors
As already mentioned, the null-control problem for (1.1) has been considered in [AKBGBdT14] with the additional assumption that the family $(\phi_\lambda)_{\lambda\in\Lambda}$ forms a Riesz basis of $X^*$. Observe that this is equivalent to asking that $(\phi_\lambda/\|\phi_\lambda\|)_{\lambda\in\Lambda}$ is a Riesz basis of $X$.
With this additional assumption, the minimal null-control time from $Y_0 = X_-$ was proved to be equal to the quantity $T^*$ given in (3.2), where the interpolating function $E_\Lambda$ is defined in (7.11).
Remark 3.2. -Notice that, since $\phi_\lambda$ is normalized in $X^*$, there exists $C > 0$ such that

so that the value of $T^*$ in (3.2) does not change if one normalizes the eigenvectors in $X$ instead of $X^*$.
In our setting, we prove that the above formula for $T^*$ is always an upper bound for the actual minimal null-control time, without assuming the Riesz basis condition.

Proof of Proposition 3.3. -First step: we begin by proving that the grouping designed in Proposition 7.1 yields a simpler expression for $T^*$. Let $(G_k)_{k\geqslant1} \in \mathcal{G}(\Lambda,p,r,\rho)$ be a grouping as introduced in Section 1.4.2. For each $\lambda \in \Lambda$, we denote by $G^{[\lambda]}$ the unique group in $(G_k)_{k\geqslant1}$ that contains $\lambda$. Then, we have

where, for each group $G$, the polynomial $P_G$ is defined by (1.6).
• Let $G$ be a group of eigenvalues and $\lambda \in G$. We prove that, for any finite subset $M$ of $\Lambda \setminus G$, whose cardinality is denoted by $n := \#M$, we have

This proves (3.4).
• From (3.4), we apply [AKBGBdT14, Theorem 3.8] to obtain that, for any subsequence $(\lambda_n)_{n\geqslant1} \subset \Lambda$,

This directly implies (3.3).

With the same notation as in the proof above, we claim that, with the exact same proof, it is sufficient to assume (3.4). Indeed, in the proof of [AKBGBdT14, Theorem 3.8], the only point where assumption (3.6) is used is the second step, in the middle of page 2097. There, the term $n!$ is estimated asymptotically using the Stirling formula, to prove that the term $\Gamma_{k,1}$ goes to $0$ as $k$ goes to $\infty$. As the rest of the proof is long and technical, and remains unchanged when replacing (3.6) by (3.4), we do not reproduce it here for the sake of brevity.
As we have considered normalized eigenvectors, and by (1.4), for any $k \geqslant 1$ and any $l \in [[1,g_k]]$ we have

Then, using (3.3), we obtain

We now prove that we indeed recover exactly the expression (3.2) (or (3.3)) of the minimal time when the eigenvectors are assumed to form a Riesz basis.
Proposition 3.5. -Assume that A and B satisfy the assumptions of Theorem 1.11 with η = 1 and that (φ λ ) λ∈Λ forms a Riesz basis of X * . Then, T 0 (X − ) = T * where T * is defined by (3.3).
Remark 3.6. -It will appear clearly in the proof that the Riesz basis assumption is much stronger than what we really need. The only thing we actually use, at the very beginning of the proof, is that the spectral radius of the inverse of the Gram matrix $M_k := \operatorname{Gram}_{X^*}(\phi_{k,1},\ldots,\phi_{k,g_k})$ satisfies

Note in particular that, in practice, estimating such a spectral radius in each group is much simpler than proving that the whole family is a Riesz basis.
Proof. -As we assumed that $(\phi_\lambda)_{\lambda\in\Lambda}$ is a Riesz basis of $X^*$, there exists $C > 0$ such that for any $k \geqslant 1$ and any $\alpha_{k,1},\ldots,\alpha_{k,g_k} \in \mathbb{R}$,

It follows that for any
Thus, taking logarithms,

Since by definition we have $G_k = G^{[\lambda_{k,j}]}$, this ends the proof of Proposition 3.5.

The case of multiple eigenvalues
In this section we prove Theorem 1.11 in the case where the eigenvalues are allowed to be algebraically multiple, i.e. $\eta \geqslant 2$. As previously, the main issue is the resolution of the block moment problem (1.25). This is detailed in the next subsection.

Resolution of block moment problems
We prove in this subsection the following Theorem 4.1, which is the generalization of Theorem 2.5.
In the case $p = 1$ (usual gap condition), a solution to (4.1) is given by the biorthogonal family built in [AKBGBdT11]. Here, we extend this resolution to the weak-gap condition (1.9) and we prove that the obtained estimates are uniform with respect to $\Lambda$ in a given class $\mathcal{L}_w(\cdot,\cdot,\cdot)$. Assume that $\Lambda \in \mathcal{L}_w(p,\rho,\mathcal{N})$ and let $(G_k)_k \in \mathcal{G}(\Lambda,p,r,\rho)$ be an associated grouping.
We consider an integer $\eta \geqslant 1$ and, for any $k$, we suppose given a multi-index $\alpha_k \in \mathbb{N}^{g_k}$ such that $|\alpha_k|_\infty \leqslant \eta$.
Then, for any $k \geqslant 1$, any $j \in [[1,g_k]]$ and any $l \in [[0,\alpha_{k,j}-1]]$, there exists $q_{k,j,l} \in L^2(0,T;\mathbb{C})$ satisfying

The proof of Corollary 4.2 is left to the reader: it follows closely that of Corollary 2.6 and makes use of the estimate given in Proposition 7.16 instead of the Newton formula for standard divided differences.
Remark 4.3. -Contrary to the estimate in Corollary 2.6, the above estimate is not optimal in general, even if the exponential factor is left aside. Indeed, some cancellations can occur, depending on the relative positions and multiplicities of the eigenvalues, that are not taken into account in the above general bound.
In actual examples, one needs to compute carefully the coefficients of the generalized divided differences introduced in Proposition 7.16 to see whether or not a sharper estimate can be obtained.
Here also, the proof of Theorem 4.1 relies on the resolution of the block moment problem (4.1) with $T = +\infty$, followed by a restriction argument. For pedagogical reasons (the proof being less technical), let us first present this restriction argument (which is the generalization of Proposition 2.9).
Proof. -For any $h > 0$, we define

Using Remark 2.4 and Lemma 2.3, we have that $\Lambda_h \in \mathcal{L}_w(p\eta,\rho,\tilde{\mathcal{N}})$ for some $\tilde{\mathcal{N}}$ which does not depend on $h$. We suppose given a fixed $q$ and, for any $h > 0$, we apply Proposition 2.9 with the sequence $\Lambda_h$ to obtain the existence of a function $q_h \in L^2(0,T;\mathbb{C})$ such that

and satisfying moreover the uniform estimate

We can then find a subsequence $(q_{h_n})_n$ that converges weakly towards some $\tilde{q} \in L^2(0,T;\mathbb{C})$ such that $\|\tilde{q}\|_{L^2(0,T;\mathbb{C})} \leqslant C_{T,p\eta,r,\rho,\tilde{\mathcal{N}}} \|q\|_{L^2(0,+\infty;\mathbb{C})}$. We will show that $\tilde{q}$ solves the required equations. Let $\lambda \in \Lambda$ and $l \in [[0,\eta-1]]$ be fixed. Combining the equations (4.4) to bring out divided differences, we obtain the equality (4.5), where $e_t$ is defined in (1.5). The Lagrange theorem (see Proposition 7.5) implies that, for any $t$ and any $n$, there is $\xi_{t,n} \in [\lambda, \lambda + l h_n]$ such that

which implies that $|e_t[\lambda,\ldots,\lambda+lh_n]| \leqslant \frac{t^l}{l!} e^{-\lambda t}$ and

By the Lebesgue dominated convergence theorem, we deduce the strong convergence in $L^2(0,+\infty;\mathbb{C})$ of $t \mapsto e_t[\lambda,\ldots,\lambda+lh_n]$ towards $t \mapsto \frac{(-t)^l}{l!} e^{-\lambda t}$, and the claim follows by weak-strong convergence in (4.5).
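The convergence used at the end of this proof, namely that divided differences at the coalescing nodes $\lambda, \lambda+h, \ldots, \lambda+lh$ tend to the derivative factor $\frac{(-t)^l}{l!} e^{-\lambda t}$, can be observed numerically with Newton's recursive definition (the parameter values below are ours, for illustration only).

```python
import math

def divided_difference(f, nodes):
    """Newton's recursive divided difference f[x_0, ..., x_n]
    (nodes assumed pairwise distinct)."""
    if len(nodes) == 1:
        return f(nodes[0])
    return (divided_difference(f, nodes[1:]) - divided_difference(f, nodes[:-1])) \
        / (nodes[-1] - nodes[0])

# e_t(lam) = exp(-lam * t) as in (1.5); coalescing nodes lam, lam + h, ..., lam + l h.
t, lam, l = 0.7, 2.0, 3
exact = (-t) ** l / math.factorial(l) * math.exp(-lam * t)  # e_t^{(l)}(lam) / l!
for h in (1e-1, 1e-2, 1e-3):
    dd = divided_difference(lambda x: math.exp(-x * t),
                            [lam + j * h for j in range(l + 1)])
    assert abs(dd - exact) <= 5 * h  # convergence as h -> 0
```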
Let us now turn to the resolution of the block moment problem (4.1) for $T = +\infty$. The following proposition is the generalization of Proposition 2.7.
For any $\varepsilon > 0$, there exists a constant $C_{\varepsilon,p,r,\rho,\eta,\mathcal{N}} > 0$ such that for any $k \geqslant 1$, any multi-index $\alpha_k \in \mathbb{N}^{g_k}$ with $|\alpha_k|_\infty \leqslant \eta$ and any set of values $\omega^{\alpha_k} \in \mathbb{C}^{|\alpha_k|}$, there exists $q_k \in L^2(0,+\infty;\mathbb{C})$ satisfying

and the bound

(4.6) $\|q_k\|_{L^2(0,+\infty;\mathbb{C})} \leqslant C_{\varepsilon,p,r,\rho,\eta,\mathcal{N}}\, e^{\varepsilon\lambda_{k,1}} \max$

Before getting to the proof, let us mention that Propositions 4.4 and 4.5 imply Theorem 4.1. The lower bound (4.3) is proved in exactly the same way as (2.5) and is thus left to the reader.
Proof. -As in the previous proof, for $h > 0$ we define

which belongs to the class $\mathcal{L}_w(p\eta,\rho,\tilde{\mathcal{N}})$. For any $k \geqslant 1$, we set

For any $h < r/(2\eta)$, the family $(G_{k,h})_k$ is a grouping in $\mathcal{G}(\Lambda_h, p\eta, r/2, \rho + r/2)$. Now, we are given a fixed index $k$. We observe that there exists $h_0 \in (0, r/(2\eta))$ (possibly depending on $k$) such that, for any $h < h_0$, the sets $G_k, G_k + h, \ldots, G_k + \eta h$ are pairwise disjoint.
Since we need to take precisely into account the multiplicities we are interested in, encoded in the multi-index $\alpha_k$, we introduce the modified $k$th group

$\{\lambda_{k,j}, \lambda_{k,j}+h, \ldots, \lambda_{k,j}+(\alpha_{k,j}-1)h\} \subset G_{k,h}$,

and the new family

which satisfies $\tilde{\Lambda}_h \subset \Lambda_h$ and therefore also belongs to the class $\mathcal{L}_w(p\eta,\rho,\tilde{\mathcal{N}})$.
By construction, the family of points of the modified group, which we denote by $\mu_{k,h,1} < \cdots < \mu_{k,h,|\alpha_k|}$, is an approximation of the weighted family $((\lambda_{k,1},\ldots,\lambda_{k,g_k}),\alpha_k)$ in the sense of Definition 7.9. Let $F : \mathbb{R} \to \mathbb{C}$ be a smooth function satisfying the conditions

For each $h > 0$, we apply Proposition 2.7 to the family $\tilde{\Lambda}_h$ to find a solution $q_{k,h} \in L^2(0,+\infty;\mathbb{C})$ to the following moment problem (4.8) satisfying the following bound, with a constant uniform with respect to $h$,

By Proposition 7.10, we know that the right-hand side in the above estimate converges, as $h \to 0$, towards a similar quantity with generalized divided differences instead of the usual divided differences. It follows that we can extract a subsequence $(q_{k,h_n})_n$ that converges weakly in $L^2(0,+\infty;\mathbb{C})$ towards a function $q_k$ satisfying the bound (4.6). Finally, by the same argument as in the proof of Proposition 4.4 above, we can combine the equations (4.8) to bring out divided differences on both sides and pass to the weak-strong limit in the integral to finally get

which is exactly our claim since, by the computation rule (7.6) and by (4.7), we have

Proof of the minimal null-control time property
In this section we complete the proof of Theorem 1.11. The extension of Corollaries 2.11 and 2.12, as well as of their proofs, to the case $\eta \geqslant 2$ is straightforward and left to the reader.
Proof of point (i) of Theorem 1.11 (controllability in large time). -Let $T > T_0(Y_0)$ and $y_0 \in Y_0$. For any $k \geqslant 1$, let $q_k \in L^2(0,T;\mathbb{C})$ be given by Theorem 4.1. As in Section 2.1.5, since $T > T_0(Y_0)$, the estimates (4.2) imply that

Moreover, as $q_k$ solves the block moment problem (4.1), it follows that $u$ solves the moment problem (1.23) and thus $y(T) = 0$.
Proof of point (ii) of Theorem 1.11 (lack of null-controllability in small time). -The proof follows exactly the lines of Section 2.2 and relies on the lower bound (4.3) for the solution of the block moment problem (4.1).

Examples
In this section we study various examples. In Section 5.1, we design "abstract examples" to highlight the phenomenon described in Section 1.4.4: the condensation of eigenvectors can compensate the condensation of eigenvalues. More precisely, we design an example which is null-controllable in arbitrarily small time but with an arbitrary condensation of the eigenvalues. We also give examples illustrating the new settings covered by our analysis when the eigenvalues are algebraically multiple in the absence of a gap condition. The interest of these abstract examples is to highlight the different phenomena, as the computations are straightforward.
Finally, we provide in Section 5.2 actual examples of one-dimensional coupled parabolic control systems that motivated the present study. A precise analysis of the null-controllability of those systems was not possible using the existing results in the literature.

Abstract examples: a possible compensation of condensation of eigenvalues
The design of these abstract examples is inspired by the work [AKBDK05]. Our goal is to illustrate, in particular, the fact that, even if the control operator has no influence on the minimal null-control time, the knowledge of the condensation index of the eigenvalues of the operator $A$ is not sufficient to understand the null-controllability properties of system (1.1).
Let $A$ be a positive definite self-adjoint operator with compact resolvent in a Hilbert space $H$, whose eigenvalues $(\mu_k)_{k\geqslant1}$ are assumed to be sorted in increasing order. One can think of $A$, for instance, as the Laplace operator $-\partial_{xx}$, or any Sturm-Liouville operator, with homogeneous Dirichlet boundary conditions. If we denote by $(\varphi_k)_{k\geqslant1}$ a corresponding Hilbert basis of eigenvectors, $A$ may be written as
$$A = \sum_{k\geqslant1} \mu_k\, (\,\cdot\,, \varphi_k)_H\, \varphi_k,$$
where $(\cdot,\cdot)_H$ denotes the scalar product in $H$. We assume that $(\mu_k)_{k\geqslant1}$ satisfies (1.8) and (1.9) with $p = 1$, i.e. satisfies the so-called gap property. Let $\rho > 0$ be such that

and let $f : \sigma(A) \to \mathbb{R}$ be a positive function defined on the spectrum $\sigma(A)$ of $A$ satisfying

Remark 5.1. -This vector $x_0$ will be used to design the control operator $B$. This assumption will ensure that the terms $B^*\phi_\lambda$ appearing in the definition (1.19) have no influence. This will allow us to really emphasize the role of the condensation of eigenvectors.
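A numerical sketch of the resulting spectral picture (the concrete choices $\mu_k = k^2$ and $f(\mu) = e^{-\beta\mu}$ below are ours, for illustration only): gluing $(\mu_k)$ with $(\mu_k + f(\mu_k))$ destroys the classical gap, since the inner gaps $f(\mu_k)$ condense to $0$, while distinct pairs remain uniformly separated, which matches the weak-gap condition with $p = 2$ used below.

```python
import math

beta = 1.0
mu = [float(k * k) for k in range(1, 11)]  # model eigenvalues with a classical gap
f = lambda m: math.exp(-beta * m)          # positive, rapidly decaying perturbation

# Lambda glues the two sequences (mu_k) and (mu_k + f(mu_k)).
# Distinct pairs stay uniformly separated ...
between = [mu[k + 1] - (mu[k] + f(mu[k])) for k in range(len(mu) - 1)]
assert min(between) > 2.0
# ... while the inner gap f(mu_k) of the k-th pair condenses to 0 exponentially
# fast, so no classical gap (p = 1) holds along the whole glued family.
inner = [f(m) for m in mu]
assert all(inner[k + 1] < inner[k] for k in range(len(inner) - 1))
assert inner[-1] < 1e-40
```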
It is easy to see that $(-A, D(A))$ generates a $C^0$-semigroup on $X$ and that $B : \mathbb{R} \to X$ is bounded. Thus, for this example, we consider $X^* = X = X_-$ and $Y_0 = X$. The spectrum of $(A^*, D(A^*))$ is given by

This provides a first example, in this setting, where the minimal time is not related to the condensation index. As will appear from the proof, see (5.6), this is due to a condensation of eigenvectors compensating the condensation of eigenvalues.
Proof. - The proof of point (ii) directly follows from straightforward computations using Proposition 7.18 with the explicit choices of f. We now turn to the computation of the minimal null-control time. Using (5.1) and (5.2), it follows that (1.8) and (1.9) are satisfied with p = 2. We define our grouping by setting λ_{k,1} := µ_k and λ_{k,2} := µ_k + f(µ_k). The associated normalized eigenvectors φ_{k,1} and φ_{k,2} do form a complete family in X. Moreover, for all k ⩾ 1, so that, with (1.15), we have From (1.18), we have T_0(X) = lim sup_{k→∞} ln max ‖ψ_{k,1}‖, One has Using (5.3) and (5.2) we easily deduce that the eigenvectors of A* do not form a Riesz basis of X. If this family were a Riesz basis, then we would deduce from [AKBGBdT14] that the minimal null-control time would be equal to the condensation index c(Λ).
Remark 5.4. - Let us consider, in the same setting, the evolution problem (1.1) given by In this case, the operator A* has spectrum σ(A*) = {µ_k ; k ⩾ 1} with algebraically double eigenvalues satisfying the gap property and an associated Hilbert basis of (generalized) eigenvectors given by Notice that from (5.6) one has Thus, the analysis of (5.4)-(5.5) is unchanged if one sets f = 0.

Algebraically multiple eigenvalues
with ρ satisfying (5.1). Let Again, B is a bounded control operator and we also set for this example X_* = X = X_− and Y_0 = X. The spectrum of (A*, D(A)) is given by Proposition 5.5. - Let us consider the control system (1.1) with A and B given by (5.8)-(5.9). Then,
(5.10) T_0(X) = 2β = 2c(Λ).
Remark 5.6. - In this case, the family of (generalized) eigenvectors does form a Hilbert basis in X. However, due to the presence of algebraically multiple eigenvalues, one cannot compute the value of the minimal null-control time using [AKBGBdT14]. Its value is still related to the condensation index of Λ but also depends on the multiplicity of each eigenvalue in the system.

Competition between different perturbations
with ρ satisfying (5.1). Let Again, B is a bounded control operator and we still set for this example X_* = X = X_− and Y_0 = X.
Notice that the eigenvalues are not necessarily sorted in increasing order inside the k-th group, depending on the relative positions of α and β, but, due to the invariance of divided differences with respect to permutations of the points, this does not change our analysis.
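This permutation invariance is the classical symmetry of divided differences: f[x_0, ..., x_n] is the leading coefficient of the interpolation polynomial, hence independent of the ordering of the points. A minimal numerical sketch (the recursive helper `divided_diff` is ours, not part of the paper):

```python
import itertools
import math

def divided_diff(f, xs):
    """Recursively compute the divided difference f[x_0, ..., x_n]
    for pairwise distinct points xs (standard recurrence)."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_diff(f, xs[1:]) - divided_diff(f, xs[:-1])) / (xs[-1] - xs[0])

f = math.exp
pts = [0.3, 1.1, 0.7]  # deliberately unsorted

# The value is the same for every ordering of the points.
vals = [divided_diff(f, list(p)) for p in itertools.permutations(pts)]
assert max(vals) - min(vals) < 1e-12
```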
These eigenvalues are algebraically and geometrically simple and the associated eigenvectors φ_{k,1} and φ_{k,2} do form a complete family in X. Moreover, for all k ⩾ 1, leading to To compute the minimal time T_0(X), let us determine the different terms appearing in (1.18). We have ψ[λ_{k,1}] = ψ_{k,1}, Since lim_{k→+∞} g(µ_k) = 0, we immediately see that, for k large enough, we have The analysis is now split into two cases:
(1) Assume first that
(5.14) β < α.
We deduce from (5.13) that

ANNALES HENRI LEBESGUE
Remark 5.8. - As previously, the family of eigenvectors does not form a Riesz basis since, for instance, we have φ_{k,1} − φ_{k,2} → 0 as k → ∞. Thus, one cannot apply the results of [AKBGBdT14], which would give that the minimal null-control time is (see Appendix 7.5) Yet, in the case (5.15) we still have T_0(X) = c(Λ). However, in the case (5.14) we have 0 < T_0(X) = 2β < c(Λ). Notice that, in this case, setting f = 0 one recovers the system studied in Subsection 5.1.2 for which the minimal time is exactly 2β.

Condensation in partial differential equations
We provide in this section actual PDE examples covered by our analysis. First of all, let us emphasize that our setting naturally covers a wide range of coupled one-dimensional parabolic equations. Indeed, if there exists p ∈ N* such that the spectrum of A is given by the union of p families Λ_j := {λ^j_k ; k ⩾ 1} such that each family satisfies (1.9) and (1.8), then the structural assumptions on Λ are automatically satisfied (see Lemma 2.3).
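To illustrate, the grouping of such a union can be produced greedily: scan the sorted spectrum and start a new group whenever the next eigenvalue is farther than ρ from the previous one. The function `make_groups` and all constants below are our illustrative choices, not the paper's construction; a complete construction, as in Lemma 2.3, must also control the cardinality and the diameter of each group.

```python
import math

def make_groups(eigenvalues, rho):
    """Greedy grouping: two consecutive eigenvalues at distance < rho
    are put in the same group; otherwise a new group is started."""
    lam = sorted(eigenvalues)
    groups = [[lam[0]]]
    for mu in lam[1:]:
        if mu - groups[-1][-1] < rho:
            groups[-1].append(mu)
        else:
            groups.append([mu])
    return groups

# Two families, each with the gap property; their union only has a weak gap.
family1 = [(k * math.pi) ** 2 for k in range(1, 20)]
family2 = [(k * math.pi) ** 2 + math.exp(-k) for k in range(1, 20)]
groups = make_groups(family1 + family2, rho=1.0)
# Here every group pairs mu_k with mu_k + exp(-k), so cardinalities stay <= 2.
```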

A system with two different potentials
Let us consider the following boundary control system (5.16), where c_1, c_2 ∈ L²(0, 1; R). Without loss of generality, we assume that c_1 and c_2 are nonnegative. The operator A appearing in this system is defined in X = (L²(0, 1; R))² with domain X_1^* = D(A) = (H²(0, 1; R) ∩ H¹_0(0, 1; R))². The control operator B is defined in a weak sense as in [TW09]. The expression of its adjoint is easier to make explicit and is given by
For any nonnegative potential c ∈ L²(0, 1; R), we denote by A_c the positive definite self-adjoint operator in L²(0, 1; R) with domain H²(0, 1; R) ∩ H¹_0(0, 1; R) defined by A_c y = −∂_xx y + c(x)y. Its spectrum is denoted by Λ_c ⊂ (π², +∞) and satisfies We choose associated eigenfunctions, denoted by ϕ^c_λ, that are normalized in L²(0, 1; R) and that satisfy (see for instance [Kir11, Theorem 4.11]) In particular, there exist C̃, C > 0 such that, The analysis will be based on a careful inspection of the spectral properties of the adjoint operator It is easily seen that the spectrum of A* is given by Λ = Λ_{c_1} ∪ Λ_{c_2}. We will often use the following straightforward property, where C depends only on c_1 and c_2. Our controllability result concerning system (5.16) is the following.
Remark 5.10. - The set Y_0 can be equal to the whole space (H^{−1}(0, 1; R))², for instance if c_1 and c_2 are close enough.
Before proving this theorem, we would like to emphasize the fact that, for a system like (5.16), the condensation index of its spectrum can be arbitrary. Therefore, Theorem 5.9 gives another example of a system which is null-controllable at any time T > 0 (for well-prepared initial data) despite the fact that the condensation index of the spectrum is nonzero.
Proof. - This follows from inverse spectral theory. Indeed, it is proven in [PT87, Chapter 3], for instance, that for any α ∈ R and any sequence (ν_k)_{k⩾1} ∈ ℓ², one can find a potential c ∈ L²(0, 1; R) such that the spectrum of A_c is given by (k²π² + α + ν_k)_k. It is then clear that we can choose c_1 and c_2 such that the spectra of A_{c_1} and A_{c_2} are asymptotically as close as we want and thus generate an arbitrary condensation index for the spectrum of A*. Note that such potentials are not necessarily nonnegative, but this is actually not really needed in our analysis (we simply need that the spectrum of A_c is made of positive eigenvalues).
In the context of parabolic control problems, this was already noticed and used in [Oua20].
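These spectral facts can be observed numerically: the Dirichlet eigenvalues of A_c are well approximated by a finite-difference discretization. The sketch below is our own (the discretization parameter `n` is an ad hoc choice); it shows the lowest eigenvalue lying just above π², and a constant potential shifting the whole spectrum by exactly its value, consistent with Λ_c ⊂ (π², +∞).

```python
import numpy as np

def dirichlet_eigenvalues(c, n=500, num=5):
    """Lowest eigenvalues of A_c y = -y'' + c(x) y on (0, 1) with
    homogeneous Dirichlet conditions, via second-order finite differences."""
    h = 1.0 / n
    x = np.linspace(h, 1.0 - h, n - 1)          # interior grid points
    main = 2.0 / h**2 + c(x)                    # diagonal of the discrete operator
    off = -np.ones(n - 2) / h**2                # sub/super-diagonals
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.sort(np.linalg.eigvalsh(A))[:num]

lam0 = dirichlet_eigenvalues(lambda x: 0.0 * x)       # c = 0: eigenvalues (k*pi)^2
lam1 = dirichlet_eigenvalues(lambda x: 1.0 + 0.0 * x) # constant potential c = 1
# lam0[0] is close to pi^2 and the constant potential shifts each
# eigenvalue by exactly 1, so lam1 stays inside (pi^2, +infinity).
```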
We can now move to the proof of the main result of this section.
Proof of Theorem 5.9. -The first part of the proof consists in a precise description of the spectral properties of A * .
• For any λ ∈ Λ_{c_2}, we have a first eigenfunction of A* given by Moreover, by (5.20), we have B*φ⁰_λ = B*ϕ^{c_2}_λ ≠ 0, so that all those eigenfunctions are observable.
- However, if λ ∈ Λ_{c_2} ∩ Λ_{c_1}, this eigenvalue is (algebraically or geometrically) double. As detailed in Section 6.3, we can deal with geometric multiplicity of eigenvalues with an adequate choice of the space of initial conditions Y_0. This choice will be made precise in (5.28). Let us define
(5.23) β_λ := (ϕ^{c_1}_λ, ϕ^{c_2}_λ).
By (5.18), we see that there exists λ_0 such that
(5.24) 1/2 ⩽ β_λ ⩽ 1, ∀ λ ∈ Λ_{c_2} ∩ Λ_{c_1} s.t. λ > λ_0.
* If β_λ = 0, then there exists a solution of (A_{c_2} − λ)ϑ_λ = −ϕ^{c_1}_λ, that we can choose to satisfy B*ϑ_λ = 0, in such a way that is another independent eigenfunction of A* associated with λ that satisfies B*φ⁰_λ = 0. Note that, by (5.24), we know that β_λ can vanish only for a finite number of values of λ.
* Assume now that β_λ ≠ 0. In that case, λ is geometrically simple but there exists a generalized eigenfunction φ¹_λ associated with φ⁰_λ of the following form where χ_λ is the unique solution of We can express χ_λ in the basis (ϕ^{c_2}_•) as follows
• Consider now λ ∈ Λ_{c_1} \ Λ_{c_2}. We obtain another family of eigenfunctions given by This last equation has a unique solution since λ ∉ Λ_{c_2}, and it can be expressed as follows We now state the following Lemma 5.12, whose proof is postponed to the end of this subsection.
Lemma 5.12. - There exist C_1, C_2 > 0, depending only on c_1, c_2, such that This lemma shows in particular that B*ξ_λ can only vanish for a finite number of values of λ.
It is straightforward to prove that the family of (generalized) eigenfunctions we just computed is complete in X. We can now define Y_0 to be the set of initial data y_0 ∈ X_− such that By construction, this set is closed and of finite codimension; moreover, it is clear that initial data not belonging to this set are not approximately controllable. Note that this definition actually excludes the influence of the possible presence of a geometrically double eigenvalue in the system.
We will now endow the space X_* with the following norm on pairs (f, g), which is equivalent to the usual H¹-norm and more comfortable for the following computations. Note that, if f, g ∈ H²(0, 1; R), this quantity is simply equal to (A_{c_1} f, f) + (A_{c_2} g, g). From (5.17), we can find ρ > 0 such that This implies, in particular, that Without loss of generality, we can assume that ρ < C̃/(2C_1), where C̃ and C_1 are respectively defined in (5.20) and (5.27).
It follows that Λ satisfies the summability condition (1.8), as well as the weak gap condition (1.9) with p = 2. We can thus consider a grouping (G_k)_k ∈ G(Λ, 2, r, ρ) for a suitable r > 0. We will now use the formula (1.17) we obtained for T_0(Y_0) to prove that the system is null-controllable from Y_0 at any time T > 0. For that, we will consider one of the groups G (we drop the index k, which is not important here) and give estimates of the corresponding divided differences.
• If λ ∈ Λ_{c_2}, we need to estimate the quantity except if λ ∈ Λ_{c_1} ∩ Λ_{c_2} and β_λ = 0. The computations above, and (5.20), show that
• If λ ∈ Λ_{c_1} \ Λ_{c_2}, recall that φ⁰_λ is given by (5.25) and that we need to estimate the same quantity ‖ψ[λ]‖_*, in the case where B*ξ_λ ≠ 0. Since λ is the only element in the group G, we know that |λ − µ| ⩾ r for any other eigenvalue µ. With this remark, we can deduce that Using Parseval's identity, (5.21) and then (5.27), we finally obtain
• Finally, if λ ∈ Λ_{c_1} ∩ Λ_{c_2}, then we need to estimate the contribution of the generalized eigenvector ‖ψ[λ, λ]‖²_* := P* A computation similar to the one above, for λ > λ_0, leads to Here, we have used (5.24) to bound from below the term β_λ. It remains to bound a_λ. We proceed as follows, by using (5.19), (5.29), and (5.21) This concludes the proof of the uniform bound of ‖ψ[λ, λ]‖_*.
Case 2: G = {λ_1, λ_2} is of cardinality 2. Since the diameter of G is smaller than ρ, we can choose the numbering such that λ_1 ∈ Λ_{c_1} \ Λ_{c_2} and λ_2 ∈ Λ_{c_2} \ Λ_{c_1}. In particular, we have B*φ⁰_{λ_1} ≠ 0, B*φ⁰_{λ_2} ≠ 0 and there is no generalized eigenvector associated with this group G. Therefore, the only new quantity we need to estimate is the contribution of the following divided difference Using formulas (5.22) and (5.25), we find Since λ_1 and λ_2 can be arbitrarily close, it is not clear that this estimate does not blow up. In particular, if we use the triangle inequality, we will not be able to benefit from the compensations that occur in the divided difference.
Using that |λ_1 − µ| ⩾ r for all µ ∈ Λ_{c_2} with µ ≠ λ_2, we can find with (5.31) and (5.21) the following bound Moreover, we have We use Parseval's identity and (5.21) to bound the second factor by C√λ_1. Moreover, by using (5.29), we have, for any µ ∈ Λ_{c_2}, µ ≠ λ_2, so that the value of the series in the last factor is uniformly bounded. Hence, we have proved From this last estimate, (5.27) and (5.20), we deduce that Recall that ρ < C̃/(2C_1). Since λ_1 and λ_2 belong to the same group G, we have |λ_1 − λ_2| ⩽ ρ and thus we obtain the estimate Coming back to the definition of ψ[λ_1, λ_2] and using (5.30) and the triangle inequality, we write

We now analyze each of the three terms.
It remains to prove Lemma 5.12.

A system with different diffusions and a non constant coupling term
Let us briefly describe another example of a coupled parabolic system with boundary control which has motivated our study. This example is analyzed in detail in [Sam19]. We consider the following system (5.36), where q ∈ L^∞(0, 1) and ν > 0.

The spectrum of
• System (5.36) in the case where q(x) = 1 and ν ≠ 1 was studied in [AKBGBdT14], where the influence of the condensation of eigenvalues in the system was first pointed out. It was proved that the minimal null-control time is exactly the condensation index of Λ, provided that √ν ∉ Q.
• System (5.36) with a non-constant q but with the same diffusions, that is ν = 1, was studied in [AKBGBdT16]. The picture is different since, in that case, there is no condensation of eigenvalues but there may nevertheless exist a minimal null-control time (depending on the coupling term q) due to very weak observation properties of the eigenfunctions.
• In the general case, assuming that √ν ∉ Q, the eigenvalues are algebraically and geometrically simple and it is proved in [Sam19] that the associated family of eigenfunctions is complete in X_* = (H^{−1}(0, 1))², and that, moreover, there exist functions q and values of ν, with √ν ∉ Q, such that this family (properly normalized) is not a Riesz basis of X_*. Therefore, the abstract results in [AKBGBdT14, AKBGBdT16] do not apply.
Inspired by the block moment method presented in this paper, a suitable value T^{q,ν}_0 is defined in [Sam19] (taking into account both the condensation of eigenvalues and the weak observation of eigenfunctions) such that T^{q,ν}_0 is the minimal null-control time of (5.36).

Dealing with complex eigenvalues
In the previous sections, we decided to state our results in the framework of real eigenvalues to simplify the presentation. However, most of them still hold for complex eigenvalues satisfying assumptions largely inspired by [AKBGBdT14]. More precisely, for a function N : R_+ → R, we will consider the class L_w(δ, p, ρ, N) of the families Λ ⊂ C satisfying
• Parabolicity condition: Re λ ⩾ δ|λ|, ∀ λ ∈ Λ.
• Weak gap condition with parameters ρ > 0 and p ∈ N*: In that case, a grouping (G_k)_k should satisfy The corresponding formula for the minimal time T_0(Y_0) will now be given by and our results (namely Theorems 1.11, 2.5 and 4.1) still hold in that case.
Most of the proofs are very similar, taking care of the following points:
• The divided differences associated with pairwise distinct points x_0, . . . , x_n in the complex plane do not satisfy the Lagrange theorem but instead the following slightly weaker result, due to Jensen [Jen94]. - For any z ∈ U, we have
• The Blaschke product W_k should be replaced by
• Finally, in the restriction argument of Section 2.1.4, the holomorphy domain C_+ + 2ε should be replaced by a sector {z ∈ C, Re z > 2ε, |Im z| ⩽ (δ/2)|λ|}.

Weakening the assumptions on the control operator
In this article, we not only study the classical null-controllability property (i.e. Y_0 = X_−), we also provide a more accurate description depending on the space of initial conditions Y_0 one wants to drive to 0. In this setting, the assumption (1.10) can be too strong.
It is easily seen that a necessary condition for approximate null-controllability in that case is the following: for any λ ∈ Λ and any l ∈ [[0, α_λ − 1]] we have (6.1), where, in this formula, (φ^j_λ)_j is a Jordan chain associated with the eigenvalue λ. Note that such a Jordan chain is not unique but (6.1) does not depend on the particular chain we choose. Note also that the assumption (1.12) can be verified using any Jordan chain.
From now on, we assume that (6.1) holds. For any λ ∈ Λ, two cases have to be considered: From (6.1), it follows that for any y_0 ∈ Y_0 and any T > 0, all the moment equations (1.21) corresponding to this eigenvalue are automatically satisfied. It follows that we can simply remove this eigenvalue from the family Λ when studying the control problem at time T from Y_0.
A straightforward computation shows that the semigroup generated by −A* satisfies where V_{j_*} := Span(φ⁰_λ, . . . , φ^{j_*−1}_λ). We shall prove that the term in V_{j_*} does not contribute to the moment problem. Indeed, from (6.1) and (6.2), we have V_{j_*} ⊂ Ker B* ∩ Ker P*_{Y_0}. Thus:
• Concerning the control term, we have and by (6.4), there simply remains
• Concerning the contribution of the source term, we have with the convention that the sum is 0 as soon as j < j_*.
We may now adapt the definition of our null-control time by setting so that the moment problem associated with this eigenvalue becomes This is formally exactly the same as (1.23) except that the multiplicity of the eigenvalue has been changed into α_λ − j_* and the associated values of ψ^•_λ have been constructed as explained above by (6.3) and (6.5).
As a conclusion, to obtain the definition of the minimal null-control time from Y_0 assuming that (6.1) holds, we simply need to ignore the eigenvalues corresponding to Case 1, and to modify the multiplicity and the Jordan chain as explained above for the eigenvalues that are in Case 2. Then, we define formally T_0(Y_0) by the same formula as (1.16) and we obtain exactly the same result as Theorem 1.11. Moreover, it clearly appears from the proof that (6.1) is actually a necessary and sufficient condition to solve the moment problem associated with any finite number of eigenvalues. Thus, (6.1) is a necessary and sufficient condition for the approximate null-controllability from Y_0.

Dealing with geometrical multiplicities
In the considered setting of scalar controls, if one wants to be able to control every initial datum, that is, to take Y_0 = X_−, then it is absolutely necessary to assume, as we did in Section 1.4.1, that all the eigenvalues are geometrically simple. Indeed, as soon as dim ker(A* − λ) ⩾ 2, there necessarily exists an eigenfunction that belongs to ker B* and the system is thus not even approximately controllable.
However, if we are willing to restrict the space of initial data we consider, then our results can apply when some eigenvalues are not geometrically simple. More precisely, we will show that our results can be adapted under the condition
(6.6) ker(A* − λ) ∩ ker B* ⊂ ker P*_{Y_0}, ∀ λ ∈ Λ,
replacing the geometric simplicity condition and the observation condition (1.10). We will also assume, for simplicity, that geometrically multiple eigenvalues are algebraically simple. Let us give the main arguments.
• First, it is clear that (6.6) is a necessary condition for null-controllability from initial data in Y_0. Indeed, if this condition does not hold, there exist φ ∈ ker(A* − λ) ∩ ker B* and y_0 ∈ Y_0 such that ⟨y_0, φ⟩_− ≠ 0. For this particular y_0, there cannot exist a control u such that y(T) = 0, because it would contradict the equality (1.20).
• Assume now that (6.6) holds. For each λ ∈ Λ, there are two cases:
- If ker(A* − λ) ⊂ ker B*, then by (6.6), we also have ker(A* − λ) ⊂ ker P*_{Y_0}. In that case, the controllability condition (1.20) automatically holds for any φ ∈ ker(A* − λ) and any control u since both terms in the equality are equal to 0. Hence, everything happens as if λ were not an eigenvalue of A*, and we can essentially not consider it in the moment problem to be solved.

Appendices
We gather in this final section some definitions or intermediate results that we used in this paper.

Wellposedness
This section is dedicated to the proof of Proposition 1.1. First of all, let us notice that the problem (1.2) admits at most one solution y ∈ C⁰([0, T]; X_{−1}) and that the continuous dependence directly follows from (1.2). Thus, it remains to prove the existence of a function y ∈ C⁰([0, T]; X_{−1}) satisfying (1.2).
From [TW09, Proposition 2.10.3], it follows that A can be uniquely extended to an operator Ã ∈ L(X, X_{−1}). Moreover, it follows from [TW09, Proposition 2.10.4] that −Ã generates a C_0-semigroup in X_{−1} satisfying Thus, for any T > 0, any y_0 ∈ X_{−1} and any u ∈ L²(0, T; U), the problem We now prove that this function satisfies (1.2). To do so, we simply prove that the semigroup e^{−tÃ} is the adjoint of e^{−tA*} in the duality between X_1^* and X_{−1}. Let x ∈ X and z ∈ X_1 be such that x = Az. As Ã is an extension of A, it also follows that x = Ãz. Then, as e^{−tA}(X_1) ⊂ X_1, it follows that Then, for any x ∈ X and any z ∈ X_1^*,
⟨e^{−tÃ} x, z⟩_{−1,1*} = ⟨e^{−tA} x, z⟩_{−1,1*} = (e^{−tA} x, z) = (x, e^{−tA*} z) = ⟨x, e^{−tA*} z⟩_{−1,1*}.
Thus, the density of X in X_{−1} implies
(7.2) ⟨e^{−tÃ} y, z⟩_{−1,1*} = ⟨y, e^{−tA*} z⟩_{−1,1*}, ∀ y ∈ X_{−1}, ∀ z ∈ X_1^*.
Finally, the duality pairing of (7.1) with any z_t ∈ X_1^*, together with the computation rule (7.2), directly gives (1.2).
Proposition 7.6. - The polynomial function P : R → V defined by is the unique polynomial of degree at most n − 1 such that We recall a simple way to compute divided differences of a product, which is known as the Leibniz rule. Finally, we deduce from the results above the following useful corollary. Proof. - Let P be the Lagrange interpolation polynomial defined in (7.3) and let i_1, . . . , i_k be fixed.
By the Hahn–Banach theorem, there exists φ ∈ V', such that ‖φ‖_{V'} = 1 and Additionally, by (7.4) and by linearity of φ, we know that Applying Proposition 7.5 to x ↦ ⟨φ, P(x)⟩_{V',V} ∈ R, we find that, for some z ∈ Conv{x_1, . . . , x_n}, we have Combining those identities, we arrive at Let us compute the derivatives of P. Let C be the circle of center z and radius R in the complex plane. The Cauchy formula leads to
(1/(k − 1)!) P^{(k−1)}(z) = (1/(2iπ)) ∫_C P(w)/(w − z)^k dw,
so that (1/(k − 1)!) ‖P^{(k−1)}(z)‖ Then, the triangle inequality implies that, for any w ∈ C, which finally gives the result.
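The Leibniz rule recalled above states that, for pairwise distinct points, (fg)[x_0, ..., x_n] = Σ_{k=0}^{n} f[x_0, ..., x_k] · g[x_k, ..., x_n]. A quick numerical check of this identity (the recursive helper `dd` is ours):

```python
import math

def dd(f, xs):
    """Usual divided difference f[x_0, ..., x_n] for pairwise distinct points."""
    if len(xs) == 1:
        return f(xs[0])
    return (dd(f, xs[1:]) - dd(f, xs[:-1])) / (xs[-1] - xs[0])

f, g = math.sin, math.exp
xs = [0.1, 0.4, 0.9, 1.3]

lhs = dd(lambda t: f(t) * g(t), xs)
rhs = sum(dd(f, xs[: k + 1]) * dd(g, xs[k:]) for k in range(len(xs)))
# lhs and rhs agree up to rounding error.
```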

Generalization of divided differences
Assume that V is a normed vector space. Let x = (x_1, . . . , x_n) ∈ R^n be pairwise distinct real numbers and let α ∈ N^n be a multi-index such that |α| > 0. To such a multi-index, we associate elements of V that we gather in a family f^α ∈ V^{|α|} and that are indexed as follows x_j, ∀ p ∈ P_j.
Proposition 7.10. - With the notation above, let F : R → V be any smooth function satisfying . For any approximation of the weighted family (x, α), the (usual) divided difference F[y^h_1, . . . , y^h_N] weakly converges, as h → 0, towards an element in V that depends only on x, α and f^α. In particular, it depends neither on the particular choice of F nor on the approximation families (y^h_p)_p. This limit is called the generalized divided difference associated with the points x, the multi-index α and the values f^α, and is denoted by Moreover, we extend this definition when some of the α_j are 0, simply by not considering the corresponding points.
Remark 7.11. - If the function F is chosen to take its values in a finite dimensional space, then the above convergence is actually strong. It is always possible to make this assumption, for instance by choosing F taking its values in the subspace of V spanned by the elements of f^α.
Proof of Proposition 7.10. - The proof is done by induction on N.
• If N = 1, then we necessarily have n = 1 and α_1 = 1. The result is just a consequence of the continuity of F and we simply have f[x_1] = f^0_1.
• Assume that the result holds for a given value of N and let us prove it for the value N + 1.

First case:
If there is only one point x_1, it means that n = 1 and α_1 = N + 1. In this case, for any h > 0 and any ψ ∈ V', we use the Lagrange theorem to get the existence of a z_{ψ,h} ∈ Conv(
Second case: We assume that n > 1. By assumption, there exist two distinct indices j_1, j_2 ∈ [[1, n]] and two distinct indices p_1, p_2 ∈ [[1, N + 1]] such that y^h_{p_1} → x_{j_1} and y^h_{p_2} → x_{j_2}. By symmetry of the usual divided differences, we can always assume that p_1 = N and p_2 = N + 1. It follows that we can write The induction assumption shows that the two terms in the numerator have weak limits that depend only on the points x, the multiplicities α and the values f^α, whereas the denominator y^h_{N+1} − y^h_N converges to x_{j_2} − x_{j_1}, which is not zero. The result follows. The above construction also shows, as a by-product, the following rules to compute generalized divided differences: for any µ ∈ N^n such that µ ⩽ α, if µ_i = 0 for all i ≠ j, Proposition 7.14 (Lagrange theorem).
- Let x, α be as before. We set N = |α|. With any f : R → R of class C^{N−1}, we associate the set of values f^α ∈ R^N by Then, there exists z ∈ Conv({x_1, . . . , x_n}) such that the generalized divided difference built on these data satisfies Proof. - Let y^h_1, . . . , y^h_N be an approximation of the weighted family of points (x, α) as in Definition 7.9. By definition, the generalized divided difference f[x^{(α)}] is the limit, as h goes to 0, of the usual divided difference f It is clear that (z_h)_h is contained in a compact set so that, up to a subsequence, we may find a limit z of (z_h)_h that belongs to Conv({x_1, . . . , x_n}) and satisfies the required property. Proof. - We proceed as in the proof of Proposition 7.13 by passing to the limit in the similar result for standard divided differences (Corollary 7.8).
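The limit procedure behind this statement can be observed numerically: for a single point x with multiplicity N, the divided difference over N distinct points collapsing to x tends to f^{(N−1)}(x)/(N−1)!, as the Lagrange theorem predicts (the point z stays in the shrinking convex hull near x). A small scalar-valued sketch (the helper `dd` and the chosen tolerances are ours):

```python
import math

def dd(f, xs):
    """Usual divided difference for pairwise distinct points."""
    if len(xs) == 1:
        return f(xs[0])
    return (dd(f, xs[1:]) - dd(f, xs[:-1])) / (xs[-1] - xs[0])

f, x, N = math.exp, 0.5, 3

def approx(h):
    """Approximation of the weighted family (x, N): N points collapsing to x."""
    return dd(f, [x + j * h for j in range(N)])

# Since every derivative of exp is exp, the generalized divided difference
# f[x^(N)] should be exp(x) / (N-1)!.
target = math.exp(x) / math.factorial(N - 1)
gap = abs(approx(1e-4) - target)
```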
For generalized divided differences, there is no simple equivalent of the Newton formula (Proposition 7.4). However, we can state the following result, with coefficients which satisfy the following estimates. Proof. - Since the divided differences are clearly linear with respect to the data f^α, the existence of the coefficients θ^µ_{j,l} is straightforward. Let us prove the claimed estimates. From now on, we assume that l is fixed. Moreover, for any j ∈ [[1, n]], we introduce the notation d_j := min_{i ∈ [[1,n]], i ≠ j} |x_i − x_j| and we define δ^j ∈ N^n to be the Kronecker multi-index, that is, δ^j_i = 0 for i ≠ j and δ^j_j = 1.
Proof. - Since, by definition, T_0(y_0) only depends on Span(y_0), it is actually equivalent to prove sup To ease the reading, let us do the computations in the simpler case η = 1, the extension to the case η ⩾ 2 being straightforward. Let us introduce x_{k,l} * λ_{k,1}.
• Since y_0 is normalized, we have ⟨y_0, x_{k,l}⟩_− ⩽ ‖x_{k,l}‖_* for any k and l; it immediately follows that T_0(y_0) ⩽ T_0(Y_0) and thus sup_{y_0 ∈ Y_0} T_0(y_0) ⩽ T_0(Y_0).
In the case where the considered sequence satisfies the weak gap condition (1.9), the computation of the condensation index can be simplified: the grouping introduced in Proposition 7.1 is an optimal condensation grouping in the following sense.
Proposition 7.18. - Assume that Σ satisfies the assumptions of Definition 7.17 as well as the weak gap condition (1.9). Denote by (G_k)_{k⩾1} a grouping satisfying the conditions of Definition 1.6. Then, Recall that G[σ] is the element of (G_k)_{k⩾1} containing σ.
Using this result, we easily compute the condensation index of the particular sequence used in Section 5.1.