A block moment method to handle spectral condensation phenomenon in parabolic control problems

This article is devoted to the characterization of the minimal null-control time for abstract linear control problems. More precisely, we aim at giving a precise answer to the following question: what is the minimal time needed to drive the solution of the system, starting from any initial condition in a given subspace, to zero? Our setting encompasses a wide variety of systems of coupled one-dimensional linear parabolic equations with a scalar control. Following classical ideas, we reduce this controllability issue to the resolution of a moment problem on the control and provide a new block resolution technique for this moment problem. The obtained estimates are sharp and hold uniformly for a certain class of operators. This uniformity allows various applications to parameter-dependent control problems and permits us to deal naturally with the case of algebraically multiple eigenvalues of the underlying generator. Our approach sheds light on a new phenomenon: the condensation of eigenvalues (which can cause a nonzero minimal null-control time in general) can be somehow compensated by the condensation of eigenvectors. We provide various examples (some are abstract systems, others are actual PDE systems) to highlight the new situations we are able to handle thanks to the block resolution of the moment problem we propose.

1. Introduction.

1.1. Problem under study and state of the art. This paper is concerned with the following abstract linear control system

(1) y′(t) + Ay(t) = Bu(t),  y(0) = y_0.

We are more precisely interested in the minimal time issue for null-controllability, which can be roughly expressed as follows: what is the smallest time T_0 ≥ 0 such that, for any T > T_0, for any initial condition y_0, there exists a control u such that the associated solution of (1) satisfies y(T) = 0? Under quite general assumptions, we shall give formulas (that are reasonably explicit and usable) for such a minimal control time. The precise notion of solution, as well as the well-posedness result for such a system, will be detailed below (see Propositions 1.1 and 1.2). We will consider assumptions on the operator A that will relate (1) [...]

To highlight the limitations of the existing literature on such problems and the improvements we propose, let us consider the following control problem [...], y(t, 1) = (0, 0)ᵀ, t ∈ (0, T), x ∈ (0, 1). [...] Moreover, there exists C_T > 0 such that

sup_{t∈[0,T]} ‖y(t)‖_{−1} ≤ C_T ( ‖y_0‖_{−1} + ‖u‖_{L²(0,T;U)} ).

The proof of this result is recalled in Appendix 7.1. Let us mention that this notion of solution is very weak. In most works concerning controllability properties for abstract systems like (1), an extra admissibility assumption is made on the control operator B to ensure more regularity for the solutions. Note however that this is not mandatory to prove well-posedness of the system in the weak sense above. We will discuss below the regularity properties of the control problem.

Let (X_*, ‖·‖_*) be a Hilbert space such that X_*^1 ⊂ X_* ⊂ X with dense and continuous embeddings. We assume that X_* is stable by the semigroup generated by [...]

‖y(t)‖_− ≤ C_T ( ‖y_0‖_− + ‖u‖_{L²(0,T;U)} ).

Remark 1.2. The case X_* = X_*^1 means that we do not have any additional regularity property for B. Conversely, the case X_* = X means that we have the best regularity we can hope for system (1) (this is the usual definition of an admissible control operator as in [19, 49]).

To give more accurate results, we aim at analyzing the minimal null-control time problem for each specified set of initial data. This is the object of the following definition.

Definition 1.1. Let Y_0 be a closed subspace of X_−. We say that system (1) is null-controllable at time T from Y_0 if for any y_0 ∈ Y_0 there exists u ∈ L²(0, T; U) such that the associated solution of (1) satisfies y(T) = 0. [...] If we choose the one-dimensional subspace Y_0 = Span{y_0}, then the notion above amounts to considering only the null-controllability of the system for that particular initial condition y_0.

From now on, we will assume that the space Y_0 is given, and we denote by P_{Y_0} the orthogonal projection onto Y_0 with respect to ‖·‖_− and by P*_{Y_0} ∈ L(X_*) its adjoint in the duality ⟨X_−, X_*⟩. Notice that these definitions yield

(4) ‖P*_{Y_0} z‖_* ≤ ‖z‖_*, ∀z ∈ X_*.

This manuscript is for review purposes only.
Notations. We give here some notations that will be used throughout this article.

• For any integers a, b, c ∈ N, we shall define the following subsets of N: [...]
• For any complex number µ ∈ C, we define e_µ : (0, +∞) → C to be the function e_µ : t ↦ e^{−µt}.

For α, µ ∈ N^n, we say that µ ≤ α if and only if µ_j ≤ α_j for any j ∈ ⟦1, n⟧.

• For any finite subset A ⊂ C, we will make use of the polynomial P_A defined [...]

To state our problem, we shall now make the following structural assumptions.

• First of all, we shall only consider scalar controls in this paper, that is U = R.
• We assume that the spectrum of A* is only made of a countable number of geometrically simple eigenvalues, denoted by Λ. We refer to Section 6.3 for a discussion of this assumption. We shall also assume for simplicity that the eigenvalues are all real (see however the discussion in Section 6.1) and that

(7) Λ ⊂ [1, +∞).

Note that, if (7) does not hold, we can replace A by A + γ for γ > 0 large enough and find an associated null-control u. A null-control for the original problem is then given by t ↦ e^{γt} u(t), and we can explicitly bound its cost with respect to the parameters γ and T.
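The shift argument can be made completely explicit. The following short computation (a sketch, using only the notation of system (1)) shows why a null-control for the shifted operator A + γ yields one for A:

```latex
% Assume \tilde y' + (A+\gamma)\tilde y = B\tilde u,\quad \tilde y(0) = y_0,\quad \tilde y(T) = 0.
% Set y(t) := e^{\gamma t}\tilde y(t) and u(t) := e^{\gamma t}\tilde u(t). Then
y'(t) = \gamma e^{\gamma t}\tilde y(t) + e^{\gamma t}\tilde y'(t)
      = \gamma y(t) + e^{\gamma t}\bigl(B\tilde u(t) - (A+\gamma)\tilde y(t)\bigr)
      = -Ay(t) + Bu(t),
% so y solves (1) with y(0) = y_0 and y(T) = e^{\gamma T}\tilde y(T) = 0, and
\|u\|_{L^2(0,T;U)} \le e^{\gamma T}\,\|\tilde u\|_{L^2(0,T;U)}.
```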

As we will use a different labelling of the spectrum in this paper, we shall not use these notations anymore in what follows.

• We denote by (φ⁰_λ)_{λ∈Λ} an associated family of eigenvectors of A*. These eigenvectors are chosen to be normalized in X_*.

This is a necessary condition for the approximate controllability of system (1), and is therefore mandatory if we expect null-controllability to hold. In our setting, the assumption (10) is also a sufficient condition for approximate controllability (see [25, 41]).

When the considered set of initial data Y_0 is not the whole space X_−, the approximate controllability condition (10) can be too strong and we can relax it. We will discuss this point in Section 6.2.

This particular choice of the Jordan chain is not mandatory but will simplify the forthcoming computations. In the case where the eigenvalues are algebraically simple (η = 1) we drop the superscript 0 for the eigenvectors.

• We introduce the notation Φ := { φ^l_λ ; λ ∈ Λ, l ∈ ⟦0, α_λ − 1⟧ } for the family of all the (generalized) eigenvectors of A*. We assume that Φ is complete in X_*, i.e. for any y ∈ X_−, we have

(12) ⟨y, φ⟩_{−,*} = 0 ∀φ ∈ Φ =⇒ y = 0.

We emphasize the fact that we will not make any additional assumption on the family Φ. This is a very important difference with related results in the literature, in which, most of the time, it is assumed that Φ forms a Riesz basis of X_*. This is discussed in Sections 1.3.4 and 3. In the forthcoming paper [17], we will study the extension of our analysis to the case of possibly infinite dimensional controls. [...]

(13) dist(G_k, G_{k+1}) ≥ r,

and

(14) diam G_k < ρ.

We prove in the Appendix (Proposition 7.1) that such a grouping always exists for any Λ satisfying the weak gap condition (9).

Remark 1.3. The condition λ_{k,1} < · · · < λ_{k,g_k} is convenient to treat the abstract problem (1) but might not be convenient in actual examples. As all the estimates in our analysis will depend on the parameters p and ρ, the eigenvalues inside a same group are mostly interchangeable, and thus the increasing labelling is not needed. [...] grouping (G_k)_k in G(Λ, p, r, ρ). Thanks to the assumption (10), we can define the following family of elements in X_*

In the simpler case where the eigenvalues are assumed to be algebraically simple (i.e. η = 1), we can immediately give a more explicit formula for T_0(Y_0). Indeed, in this case one recovers the standard divided differences (whose definition and properties are recalled in Section 7.3.1) and thus [...] Then, using Corollary 7.1 and (14), it follows that the computation of all those divided differences is not needed and the formula reduces to [...] where the last equality comes from the Newton formula (see Proposition 7.3).
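For the reader's convenience, here are the classical facts about divided differences that the formula above relies on (they are recalled in Section 7.3 of the paper; we state them in standard form):

```latex
% Recursive definition at pairwise distinct points x_1, \dots, x_n:
f[x_1] := f(x_1), \qquad
f[x_1,\dots,x_n] := \frac{f[x_2,\dots,x_n] - f[x_1,\dots,x_{n-1}]}{x_n - x_1}.
% Lagrange-type mean value formula, for f \in C^{n-1} and points in an interval I:
f[x_1,\dots,x_n] = \frac{f^{(n-1)}(\xi)}{(n-1)!} \quad \text{for some } \xi \in I.
% Newton interpolation formula:
f(x) = \sum_{i=1}^{n} f[x_1,\dots,x_i]\,\prod_{j=1}^{i-1}(x - x_j)
       \;+\; f[x_1,\dots,x_n,x]\,\prod_{j=1}^{n}(x - x_j).
```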

Remark 1.4. The definition above corresponds to a given grouping of the spectrum; however, the minimal null-control result stated in Theorem 1.1 will show that its value does not depend on this particular choice of a grouping. As a consequence, for specific examples, one can compute the minimal null-control time T_0(Y_0) using any convenient such grouping in a class G(Λ, p, r, ρ).

For the sake of simplicity, for any y_0 ∈ X_− we denote by T_0(y_0) the quantity T_0(Span(y_0)). Of course, we have the following proposition relating T_0(Y_0) and T_0(y_0).

This assertion is proved in Section 7.4.

Remark 1.5. Let us discuss the sign of T_0(Y_0).

• In the case Y_0 = X_− (the operator P*_{Y_0} thus reduces to the identity), the minimal time T_0(Y_0) is always non-negative. Indeed, from the case µ = [...]

• In the general case where Y_0 is a strict closed subspace of X_−, it may happen [...] For instance, if we choose y_0 ∈ X_1 to be an eigenvector of A for an eigenvalue λ ∈ Λ, then we have T_0(y_0) = −∞. Indeed, we first observe that [...]

i. If T_0(Y_0) < +∞ and T > T_0(Y_0), the system (1) is null-controllable from Y_0 at time T.

ii. If T_0(Y_0) > 0 and T < T_0(Y_0), the system (1) is not null-controllable from Y_0 at time T. [...] As the family Φ of (generalized) eigenvectors is assumed to form a complete family in X_* (see (12)), it is in fact sufficient to test (20) against the elements of Φ. Therefore, a null-control u is characterized by the following countable set of equations [...] for any λ ∈ Λ and any l ∈ ⟦0, α_λ − 1⟧.
To solve this so-called moment problem, the classical strategy introduced in [26] consists in designing a biorthogonal family in L²(0, T) to

{ t ↦ t^l e^{−λt} ; λ ∈ Λ, l ∈ ⟦0, α_λ − 1⟧ },

with associated estimates. Then, thanks to these estimates, a suitable control is defined. Usually, in this procedure, each biorthogonal element is estimated separately.
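Schematically, in the algebraically simple case (α_λ = 1 for all λ), the classical resolution reads as follows. This is a sketch: the right-hand sides m_λ, which depend on the initial condition y_0, stand for the ones appearing in (23):

```latex
% A biorthogonal family (q_\lambda)_{\lambda\in\Lambda} in L^2(0,T) satisfies
\int_0^T q_\lambda(t)\,e^{-\mu t}\,dt = \delta_{\lambda,\mu}, \qquad \forall\,\lambda,\mu \in \Lambda,
% and the control is taken as the formal series
u(T-t) = \sum_{\lambda\in\Lambda} m_\lambda\,q_\lambda(t),
% which solves each moment equation separately; the convergence of the series
% then requires an estimate of each \|q_\lambda\|_{L^2(0,T)} individually.
```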

Thus, this method is somehow inoperative to analyse the possible condensation of eigenvectors (which is related to possible cancellations in linear combinations of the right-hand sides of (23)). We will thus propose to solve this moment problem using the grouping introduced in Section 1.3.2, in order to cope with such possible compensations. We then look for a solution u in the form [...] where each q_k will solve the moment problem corresponding to the group G_k. More precisely, such a control will formally solve (23) if [...] This construction uses a Laplace transform isomorphism together with a suitable restriction argument (Proposition 2.4). The obtained estimates on q_k will allow us to prove convergence of the series (24) when T > T_0(Y_0). Those estimates are uniform with respect to Λ in a certain class (see Definition 2.1), which will allow us, in Section 4, to infer the resolution of (25) in the general case.
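In the simple case η = 1, the block strategy can thus be summarized as follows (a sketch; here ω_{k,j} denotes the right-hand sides of the equations of the group G_k coming from (23)):

```latex
% Control as a superposition of group contributions:
u(T-t) = \sum_{k\ge 1} q_k(t),
% where q_k solves the moment equations of its own group G_k,
\int_0^T q_k(t)\,e^{-\lambda_{k,j} t}\,dt = \omega_{k,j}, \qquad j \in \{1,\dots,g_k\},
% and is orthogonal to the exponentials of all the other groups:
\int_0^T q_k(t)\,e^{-\lambda t}\,dt = 0, \qquad \forall\,\lambda \in \Lambda \setminus G_k.
```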

Remark 1.6. Contrary to the classical strategy, notice that the sequence (q_k)_k [...]

The function q_k is only orthogonal to those functions corresponding to groups other than G_k. Inside the group G_k, its definition is adapted to solve the moment problem [...] can thus be seen as a block moment method. As we consider at the same time the eigenvalues associated to a same group, this will lead to sharper estimates than the ones coming from the design of a biorthogonal family (i.e. when considering each eigenvalue individually).

However, as already mentioned, our strategy still allows us to prove the existence and [...] Let us mention that, in the context of control problems with a spectrum satisfying the weak-gap condition, divided differences were already used for instance in [8, 9].

Among other things, in these works, the authors give a necessary and sufficient condition for the family of (generalized) divided differences to form a Riesz basis of L²(0, T; C). The possible condensation of eigenvalues then comes into play when deducing properties of the original family of exponentials (t ↦ e^{iλt})_{λ∈Λ}.

Their results are then applied to hyperbolic control problems.

The results presented in our work can be seen as the 'parabolic' equivalent (or [...] 2. For any z_T ∈ X_*, the following partial observability holds: [...] In this case, the best constant M satisfying those properties is called the cost of controllability from Y_0 at time T and is denoted by M(Y_0, T).

The goal of this section is to prove point i. of Theorem 1.1 in the case of algebraically simple eigenvalues. Thus, throughout this section, we assume that η = 1.

As explained in Section 1.4, we will now focus on the construction of a solution to (26). Of course, as we want to design a control u ∈ L²(0, T; U), the estimate of ‖q_k‖_{L²(0,T;R)} will play a crucial role to prove that the series (24) makes sense. Actually, we will prove sharp estimates that are uniformly valid for Λ in a certain class. These uniform estimates can be used for various applications and will be crucial to deal with the algebraic multiplicity of eigenvalues in Section 4. We start by specifying the class of Λ we will consider.

Proof. Let us first prove the weak gap condition. For any µ ≥ 0, we have [...] For the asymptotic behavior of the sequences, we have [...] The claim is proved.

The following straightforward facts will also be useful.
Using this class we prove the following theorem.

The proof of Theorem 2.1 is carried out in Sections 2.1.1 and 2.1.2.

Before going on with the proof, let us notice that the resolution of the block moment problem (30) for a specific choice of ω_{k,j} allows us to prove, as a by-product, the existence of, and uniform estimates for, a biorthogonal family to the exponentials (e_λ)_{λ∈Λ}, where e_λ is defined by (5).

Moreover, up to the factor e^{ελ_{k,1}}, this estimate is optimal since any function q_{k,j} satisfying (33) satisfies the lower bound [...]

Proof. Let k ≥ 1 and j ∈ ⟦1, g_k⟧. Let q_{k,j} ∈ L²(0, T; C) be the solution of the block moment problem (30) given by Theorem 2.1 associated with the right-hand side ω_{k,j′} = δ_{j,j′} for any j′ ∈ ⟦1, g_k⟧. Since those values of ω are real, we can replace q_{k,j} by its real part without changing its properties. Then, the equalities (33) follow directly.

Moreover we have

From the Newton formula (see Proposition 7.3), it follows that, for any [...]
To conclude the proof of Corollary 2.1, we prove that there exists C_{p,ρ} > 0 such that for any k ≥ 1, j ∈ ⟦1, g_k⟧ and any i ∈ ⟦j, g_k⟧,

(34) [...] By (14), we get [...] Thus, [...] As the right-hand side only takes a finite number of values, inequality (35) proves (34) and ends the proof of Corollary 2.1.

The lower bound directly follows from (32) and the inequality [...]

From the properties of H²(C_+), it follows that [...] Then the Laplace transform is an isomorphism. We shall construct, for each k, a function J_k ∈ H²(C_+) satisfying [...] and such that, for any ε > 0, there exists C_{ε,p,r,ρ,N} > 0 such that [...] Taking advantage of the isomorphism property of the Laplace transform, we will then set q_k := L^{−1}(J_k) to conclude the proof.
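The isomorphism invoked here is the classical Paley–Wiener theorem for the half-plane, which we recall (the normalization constant plays no role in the estimates):

```latex
% Laplace transform of q \in L^2(0,+\infty;\mathbb{C}):
(\mathcal{L}q)(z) := \int_0^{+\infty} q(t)\,e^{-tz}\,dt, \qquad \Re(z) > 0.
% Paley--Wiener: \mathcal{L} is an isomorphism from L^2(0,+\infty;\mathbb{C})
% onto the Hardy space H^2(\mathbb{C}_+), with
\|\mathcal{L}q\|_{H^2(\mathbb{C}_+)}^2
  = \sup_{x>0}\int_{\mathbb{R}} |(\mathcal{L}q)(x+i\tau)|^2\,d\tau
  = 2\pi\,\|q\|_{L^2(0,+\infty;\mathbb{C})}^2.
```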
Construction of J_k. We define J_k as [...] where P_k is a polynomial of degree less than p, which is specified below, and W_k is the following Blaschke-type product [...] The sequence Λ_j contains the j-th element of each group G_l, except if this group contains fewer than j elements, in which case we replace it by the largest element of G_l, that is λ_{l,g_l}. In particular, we observe that Λ_j is a subsequence of Λ.

From (8), we deduce that Σ_{λ∈Λ_j} 1/λ < +∞, so that, for any j, the associated infinite product converges uniformly on any compact subset of C_+. As a consequence, W_k is well-defined and holomorphic in C_+ (see for instance [45, Chapter 15]). It follows that J_k is also holomorphic on C_+. We shall need the following property, whose proof is technical and postponed to Section 2.1.4.
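The convergence mechanism is the standard one for half-plane Blaschke products. As a hedged recall, with elementary factors of the generic form below (the precise factors entering W_k are the ones displayed above):

```latex
% For a real sequence 0 < \lambda_1 \le \lambda_2 \le \dots, consider
W(z) := \prod_{n\ge 1} \frac{\lambda_n - z}{\lambda_n + z}, \qquad z \in \mathbb{C}_+.
% Each factor has modulus at most 1 on \mathbb{C}_+, and
\Bigl|1 - \frac{\lambda_n - z}{\lambda_n + z}\Bigr|
  = \frac{2|z|}{|\lambda_n + z|} \le \frac{2|z|}{\lambda_n},
% so \sum_n 1/\lambda_n < +\infty implies normal (locally uniform) convergence
% on \mathbb{C}_+; the product W is then holomorphic and bounded by 1 there.
```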

From the definition of W_k, it follows that (37) is satisfied. Next, (38) is equivalent to [...] To satisfy (38), we define P_k as the Lagrange interpolating polynomial at the points λ_{k,j} [...] Thus, to conclude, it remains to estimate [...]

Estimate of J_k.

Notice that, since the eigenvalues in Λ are real, for any k ≥ 1 and any τ ∈ R we have [...] and thus J_k ∈ H²(C_+).

Using the Leibniz formula (see Proposition 7.6), [...] Using again the Leibniz formula (see Proposition 7.6), [...] The two factors in each term of this sum are estimated using the Lagrange theorem (see Proposition 7.4):

• Second, we have [...] By using (40), it follows that [...] Recall that (14) implies λ_{k,g_k} − λ_{k,1} < ρ. Then, using (44) and (45) in the identity (43) proves that there exists C_{ε,p,r,ρ,N} > 0 such that for any i, j ∈ ⟦1, g_k⟧ with i ≤ j, we [...] Plugging this into (42), we obtain [...] Finally, getting back to estimate (41), and using the isomorphism property of L [...]

The proof is done by contradiction. Assume that the estimate does not hold for given T, p, ρ, and N. Then there exists a sequence (Λ^m)_{m≥1} belonging to the same [...] For each m, by Proposition 7.1, we consider a grouping (G^m_k)_k ∈ G(Λ^m, p, ρ/p, N), and from (48) we know that there exist coefficients a^m_{k,j} such that the finite linear [...]
Let 0 < ε < T/2 be fixed and let C_{+,2ε} := { z ∈ C ; Re(z) > 2ε }. We prove that the sequence z ↦ P_m(z) is uniformly bounded on any compact subset of C_{+,2ε}.

Let m ≥ 1 and z ∈ C_{+,2ε}. Then, for any k ∈ {1, . . . , K_m}, the application of Proposition 2.1 to the sequence Λ^m yields the existence of q^{m,z}_k ∈ L²(0, +∞; C) satisfying [...] where e_z is defined in (5).

The previous right-hand side is estimated using the Lagrange theorem (see Proposition 7.4). As the function e_z is complex-valued, we apply it to both its real and imaginary parts. This yields [...] Then, using (49), it follows that, for m sufficiently large, [...] a^m_{k,j} e^{−λ^m_{k,j} z}.

From the Cauchy–Schwarz inequality, we deduce that [...] Summing these inequalities, we obtain that, for any z ∈ C_{+,2ε}, [...] From the properties of the groupings (see Definition 1.2), it follows that λ^m_{k,1} ≥ (ρ/p)(k − 1).

Thus, for any z ∈ C_{+,2ε}, [...]
Now recall that ‖P_m‖_{L²(0,T;C)} goes to 0 as m goes to infinity. This implies that P(t) = 0 for any t ∈ (2ε, T). The function P being holomorphic, it follows that it vanishes on C_{+,2ε}. Using (50) and the Lebesgue dominated convergence theorem yields [...] This is in contradiction with ‖P_m‖_{L²(0,+∞;C)} = 1 and ends the proof of Proposition 2.4. We now have all the ingredients to prove Proposition 2.3. [...] is a proper subspace of L²(0, +∞; C). Let Π_Λ be the associated orthogonal projection.

We can now conclude the proof of Theorem 2.1 for simple eigenvalues. The only thing left to prove is the lower bound (32). Let q_k ∈ L²(0, T; C) be any solution of (30b). Using the linearity of divided differences, the equalities (30b) imply that for any i ∈ ⟦1, g_k⟧ [...] where e_t is defined by (5). From the Lagrange theorem (see Proposition 7.4) and the fact that e_t^{(i−1)} is decreasing on (0, +∞), it follows that for any t ∈ (0, T) we have [...] since we assumed that Λ ⊂ [1, +∞). Thus, applying the Cauchy–Schwarz inequality to (52) gives [...] which proves (32) and ends the proof of Theorem 2.1.
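The Lagrange bound used in this lower-bound argument is easy to check numerically. The sketch below (with hypothetical eigenvalue values chosen for illustration, not taken from the paper) computes the divided differences of e_t : λ ↦ e^{−λt} by the standard recursion and compares them with the bound |e_t[λ_1, …, λ_i]| ≤ t^{i−1}/(i−1)! · e^{−λ_1 t} coming from the Lagrange theorem, since |e_t^{(i−1)}(λ)| = t^{i−1} e^{−λt} is decreasing in λ:

```python
import math

def divided_differences(f, xs):
    """Return [f[x1], f[x1,x2], ..., f[x1,...,xn]] via the standard recursion."""
    table = [f(x) for x in xs]  # zeroth column of the Newton tableau
    coeffs = [table[0]]
    for order in range(1, len(xs)):
        # next column: first-order, then second-order differences, etc.
        table = [(table[i + 1] - table[i]) / (xs[i + order] - xs[i])
                 for i in range(len(table) - 1)]
        coeffs.append(table[0])
    return coeffs

# A hypothetical "group" of close eigenvalues and an observation time t
lambdas = [10.0, 10.3, 10.7, 11.0]
t = 0.5
e_t = lambda lam: math.exp(-lam * t)

for i, dd in enumerate(divided_differences(e_t, lambdas), start=1):
    # Lagrange-type bound: |e_t[λ1..λi]| ≤ t^{i-1}/(i-1)! · e^{-λ1 t}
    bound = t ** (i - 1) / math.factorial(i - 1) * math.exp(-lambdas[0] * t)
    assert abs(dd) <= bound + 1e-15
```

The recursion is exactly Newton's tableau, so the same routine also reproduces the polynomial case used as a sanity check (for f(x) = x², the second divided difference is constant and equal to 1).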

Assume that T_0(Y_0) < +∞ and let us consider an initial datum y_0 ∈ Y_0. Without loss of generality, we assume that ‖y_0‖_− = 1.
For any k ≥ 1 and j ∈ ⟦1, g_k⟧ we set [...] Let (q_k)_{k≥1} be the solution of the block moment problem (30) given in Theorem 2.1.

There exists a constant C_{ε,T,p,r,ρ,N} > 0 such that [...] From the Leibniz formula (see Proposition 7.6), [...] Using the definition (18) of T_0(y_0), it follows that [...] Thus, there exists C_{T,p} > 0 such that [...] Then, as T > T_0(y_0) + 2ε, the series (24) converges in L²(0, T; R) and defines a control u that solves the moment problem (26), which implies that the associated solution of (1) satisfies y(T) = 0.

With the same strategy, we can prove a more accurate result. Namely, we get the following uniform bound for the cost of controllability.
Then, there exists a constant C_{T_0(Y_0),T,p,r,ρ,N} > 0 such that for any y_0 ∈ Y_0, there exists a control u ∈ L²(0, T; R) such that the associated solution y of (1) satisfies y(T) = 0 and [...]

Proof. We follow the same strategy as in the proof of point i. of Theorem 1.1, but we do not use (54). Instead, notice that using (55) we have [...] Thus, writing that ‖u‖ ≤ Σ_{k≥1} ‖q_k‖, we get [...] From Definition 1.2, it follows that λ_{k,1} ≥ r(k − 1), which ends the proof of Corollary 2.2. We also provide the following lower bound for the cost of controllability.
where e_T is defined by (5), ψ is defined by (15), and M(Y_0, T), the cost of controllability from Y_0 at time T, is defined in Lemma 1.1.

Proof. Let k ≥ 1 and l ∈ ⟦1, g_k⟧. If system (1) is null-controllable at time T from Y_0, we apply Lemma 1.1 with [...] By definition of z_T, we have [...] Then, for any ε > 0, there exists C_{ε,γ,J} > 0 such that [...] The generalisation we propose is the following.

As W^Σ_σ does not vanish in an open neighbourhood of D_{σ,γ}, it follows that 1/W^Σ_σ is holomorphic on this domain. Thus, applying the Cauchy formula yields [...] From Lemma 2.3, it follows that for any ε > 0 there exists C_{ε,γ,J} > 0 such that [...]
We shall now move to the proof of Proposition 2.2, which is the main objective of this section.

Proof (of Proposition 2.2). Recall that the function N : R_+ → R is the one appearing in (28) and that the subsequences Λ_j are defined in (39). We recall that the index k is fixed, as well as the value θ ∈ Conv(G_k). We introduce the new sequence Λ̃_j obtained from Λ_j by replacing the k-th value λ_{k,min(j,g_k)} by θ, i.e.

Notice that, using Proposition 7.1, the fact that Λ_j is a subsequence of Λ such that each term belongs to a different group, and the assumption on θ, we obtain that [...] it follows that Λ̃_j ∈ L_{γ,J̃} with J̃ depending only on N.

With these notations and Proposition 7.1, it follows that [...]

Finally, using the Leibniz rule (for derivatives), evaluating the result at z = θ and using Proposition 2.5 yields the claim. [...]

Proof. Let T > 0 and assume that null-controllability of (1) from Y_0 at time T holds.

Due to the definition of T_0(Y_0), this implies that T ≥ T_0(Y_0) and ends the proof of the lack of null-controllability at time T < T_0(Y_0).

3. Comparison with some already known results.

In this section, we prove that we actually recover the known formulas for the [...]

Proof. Notice that, when Y_0 = X_−, the operator P*_{Y_0} reduces to the identity.
We assume that [...] and we will prove that c(Λ) > 0. We shall reason as in the proof of point ii. of Theorem 1.1 (see Section 2.2), but starting with the formula (17) instead of (18). We can find an integer l* ≥ 1, an extraction (κ_n)_{n≥1} and integers m_n such that 1 ≤ m_n ≤ m_n + l* − 1 ≤ g_{κ_n} and such [...] since, if it is not the case, we can reduce the value of l* accordingly. Note that, as ‖P*_{Y_0} φ_{λ*}‖ ≤ 1, by (60), we know that l* > 1.

In particular, we have

As already mentioned, the null-control problem for (1) has been considered in [5] with the additional assumption that the family (φ_λ)_{λ∈Λ} forms a Riesz basis of X_*.

Observe that this is equivalent to requiring that (φ_λ/‖φ_λ‖)_{λ∈Λ} is a Riesz basis of X.

With this additional assumption, the minimal null-control time from Y_0 = X_− was proved to be equal to

(61) T* := lim sup_{λ→∞, λ∈Λ} [...]

where the interpolating function E_Λ is defined in (126).

Remark 3.1. Notice that, since φ_λ is normalized in X_*, there exists C > 0 such [...] where, for each group G, the polynomial P_G is defined by (6).

• Let G be a group of eigenvalues and λ ∈ G. We prove that, for any finite subset M [...] Then, [...] This proves (63).

As we have considered normalized eigenvectors, and by (4), for any k ≥ 1 and any l ∈ ⟦1, g_k⟧, we have

Using (34), this leads to [...] Thus,

ln max [...]

Then, using (62), we obtain T_0(X_−) ≤ T*. We now prove that we indeed recover exactly the expression of the minimal time (61) (or (62)) when we assume that the eigenvectors form a Riesz basis.

Remark 3.3. It will appear clearly in the proof that the Riesz basis assumption is much stronger than what we really need. The only thing that we actually use, at the very beginning of the proof, is that the spectral radius of the inverse of the Gram matrix M_k := Gram_{X_*}(φ_{k,1}, . . . , φ_{k,g_k}) satisfies [...] Note in particular that, in practice, estimating such a spectral radius in each group is much simpler than proving that the whole family is a Riesz basis.
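The link between the two assumptions is elementary: a lower Riesz bound immediately controls all the group Gram matrices. A one-line justification (a sketch, with m denoting the lower Riesz constant):

```latex
% If m\,\|\alpha\|_{\ell^2}^2 \le \bigl\|\sum_j \alpha_j\,\varphi_{k,j}\bigr\|_{X_*}^2
% for all coefficient vectors \alpha, then, since
\Bigl\|\sum_j \alpha_j\,\varphi_{k,j}\Bigr\|_{X_*}^2 = \langle M_k\,\alpha, \alpha\rangle,
% the Gram matrix satisfies M_k \ge m\,I, hence
\rho\bigl(M_k^{-1}\bigr) = \frac{1}{\min \operatorname{sp}(M_k)} \le \frac{1}{m}
% uniformly in k; conversely, this group-wise bound does not require any
% global Riesz basis property.
```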

Proof. As we assumed that (φ_λ)_{λ∈Λ} is a Riesz basis of X_*, there exists C > 0 such that for any k ≥ 1 and any α_{k,1}, . . . , α_{k,g_k} ∈ R, [...] It follows that for any j ∈ ⟦1, g_k⟧, [...] Thus, taking the logarithm, [...]

Since by definition we have G_k = G^{[λ_{k,j}]}, this ends the proof of Proposition 3.3.

In this section, we prove Theorem 1.1 in the case where we allow algebraic multiplicity for the eigenvalues, i.e. η ≥ 2. As previously, the main issue is the resolution of the block moment problem given in (25). This is detailed in the next subsection.

Assume that Λ ∈ L w (p, ρ, N ) and let (G k ) k ∈ G(Λ, p, r, ρ) be an associated grouping. 1139 We consider an integer η ≥ 1 and for any k we suppose given a multi-index 1140 α k ∈ N g k such that |α k | ∞ ≤ η.
We can then find a subsequence (q_{h_n})_n that weakly converges towards some q ∈ L²(0, T; C) such that ‖q‖_{L²(0,T;C)} ≤ C_{T,pη,r,ρ,Ñ} [...] We will show that q solves the required equations. Let us now turn to the resolution of the block moment problem (66) for T = +∞.

The next proposition is the generalization of Proposition 2.1.

Now, we are given a fixed index k. We observe that there exists h_0 ∈ (0, r/(2η)) (possibly depending on k) such that, for any h < h_0, the sets G_k, G_k + h, . . . , G_k + ηh are pairwise disjoint.

Since we need to take precisely into account the multiplicities we are interested in, encoded in the multi-index α_k, we introduce the modified k-th group

and the new family Λ̃_h [...] which satisfies Λ̃_h ⊂ Λ_h and therefore also belongs to the class L_w(pη, ρ, Ñ).


For each h > 0, we apply Proposition 2.1 to the family Λ̃_h to find a solution q_{k,h} ∈ L²(0, +∞; C) to the following moment problem

(73) [...]

satisfying the following bound, with a constant uniform with respect to h:

‖q_{k,h}‖_{L²(0,+∞;C)} ≤ C_{ε,ηp,r,ρ,Ñ} e^{ελ_{k,1}} max [...]

By Proposition 7.7, we know that the right-hand side in the above estimate converges, when h → 0, towards a similar quantity with generalized divided differences instead of the usual divided differences. It follows that we can extract a subsequence (q_{k,h_n})_n that weakly converges in L²(0, +∞; C) towards a function q_k that satisfies the bound (71).

Finally, by the same argument as in the proof of Proposition 4.1 above, we can combine the equations (73) to make divided differences appear on both sides and pass to the weak-strong limit in the integral to finally get [...]

[...] Hilbert space H whose eigenvalues (µ_k)_{k≥1} are assumed to be sorted in increasing order. One can think of A, for instance, as the Laplace operator −∂_xx or any Sturm-Liouville operator with homogeneous Dirichlet boundary conditions.

If we denote by (ϕ_k)_{k≥1} a corresponding Hilbert basis of eigenvectors, A may be [...] where (·, ·)_H denotes the scalar product in H. We assume that (µ_k)_{k≥1} satisfies (8). [...] It is easy to see that (−A, D(A)) generates a C_0-semigroup on X and that B : R → X is bounded. Thus, we consider for this example that X_* = X = X_− and Y_0 = X.

The spectrum of (A*, D(A)) is given by [...] From Definition (18), we have

T_0(X) = lim sup_{k→∞} [...] ln max( ψ_{k,1}, [...] )

One has [...] Using (76) and (75), we easily deduce that [...] In this case, the operator A* has spectrum σ(A*) = {µ_k ; k ≥ 1} with algebraically double eigenvalues satisfying the gap property, and an associated Hilbert basis of (generalized) eigenvectors given by [...] Notice that from (79) one has [...] Thus, the analysis of (77)-(78) is unchanged if one sets f = 0.
Again, B is a bounded control operator and we also set for this example X_* = X = X_− and Y_0 = X.

Notice that the eigenvalues are not necessarily increasingly sorted inside the k-th group [...] which do form a complete family in X. Moreover, for all k ≥ 1, [...] To compute the minimal time T_0(X), let us determine the different terms appearing [...] Since lim_{k→+∞} g(µ_k) = 0, we immediately see that, for k large enough, we have [...] so that, using (76) and (18), we get [...] The analysis is now split into two cases:

1. Assume first that

(87) β < α.

We deduce from (86) that [...], y(t, 1) = (0, 0)ᵀ, t ∈ (0, T), where c_1, c_2 ∈ L²(0, 1; R). Without loss of generality, we assume that c_1 and c_2 are non-negative [...] and is given by [...]

In particular, there exist C̃, Ĉ > 0 such that [...] The analysis will be based on a careful inspection of the spectral properties of the adjoint operator [...] It is easily seen that the spectrum of A* is given by Λ = Λ_{c_1} ∪ Λ_{c_2}. We will often use the following straightforward property [...] where C depends only on c_1 and c_2.

Remark 5.6. The set Y_0 can be equal to the whole space (H^{−1}(0, 1; R))², for instance if c_1 and c_2 are close enough.

Before proving this theorem, we would like to emphasize the fact that, for a system like (89), the condensation index of its spectrum can be arbitrary. Therefore, Theorem [...]

Proof. This follows from inverse spectral theory. Indeed, it is proven in [44, Chapter 3], for instance, that for any α ∈ R and any sequence (ν_k)_{k≥1} ∈ ℓ², one can find a potential c ∈ L²(0, 1; R) such that the spectrum of A_c is given by (k²π² + α + ν_k)_k.

It is then clear that we can choose c_1 and c_2 such that the spectra of A_{c_1} and A_{c_2} are asymptotically as close as we want, and thus generate an arbitrary condensation index for the spectrum of A*. Note that such potentials are not necessarily non-negative, but this is actually not really needed in our analysis (we simply need that the spectrum of A_c is made of positive eigenvalues).

In the context of parabolic control problems, this was already noticed and used in [42]. We can now move to the proof of the main result of this section. [...] Moreover, by (93), we have [...] so that all those eigenfunctions are observable.
that we can choose to satisfy B*ϑ_λ = 0, in such a way that [...] is another independent eigenfunction of A* associated with λ that satisfies B*φ⁰_λ = 0. Note that, by (97), we know that β_λ can vanish only for a finite number of values of λ.
* Assume now that β_λ = 0. In that case, λ is geometrically simple, but there exists a generalized eigenfunction φ¹_λ associated with φ⁰_λ of the following form [...] where χ_λ is the unique solution of [...] that satisfies B*χ_λ = 0. We can express χ_λ in the basis (ϕ^{c_2}_•) as follows

• Consider now λ ∈ Λ_{c_1} \ Λ_{c_2}. We obtain another family of eigenfunctions, given below. This last equation has a unique solution since λ ∉ Λ_{c_2}, and it can be expressed as follows. We now state the following lemma, whose proof is postponed to the end of this subsection.
This lemma shows in particular that B^*ξ_λ can vanish only for a finite number of values of λ.

1526
It is straightforward to prove that the family of (generalized) eigenfunctions we just computed is complete in X. We can now define Y_0 to be the set of initial data

We will now endow the space X^* with a norm on pairs (f, g) which is equivalent to the usual H^1-norm and more convenient for the following computations. Note that, if f, g ∈ H^2(0, 1; R), this quantity is simply equal to (A_{c_1} f, f) + (A_{c_2} g, g).

From (90), we can find a ρ > 0 such that the corresponding separation property holds. This implies, in particular, the estimate below. Without loss of generality, we can assume that ρ < C/(2C_1), where C and C_1 are respectively defined in (93) and (100).

It follows that Λ satisfies the summability condition (8), as well as the weak gap condition (9) with p = 2. We can thus consider a grouping (G_k)_k ∈ G(Λ, 2, r, ρ) for a suitable r > 0. We will now use the formula (17) we obtained for T_0(Y_0) to prove that the system is null-controllable from Y_0 at any time T > 0. For that, we will consider one of the groups G (we drop the index k, which is not important here) and give estimates of the corresponding divided differences.

This concludes the proof of the uniform bound on ψ[λ, λ].
* In particular, we have B^*φ^0_{λ_1} ≠ 0, B^*φ^0_{λ_2} ≠ 0, and there is no generalized eigenvector associated with this group G. Therefore, the only new quantity we need to estimate is the contribution of the following divided difference. Using formulas (95) and (98), we find the expression below. Since λ_1 and λ_2 can be arbitrarily close, it is not clear that this estimate does not blow up. In particular, if we use the triangle inequality, we will not be able to benefit from the compensations that occur in the divided difference. We will thus extract from (99) the principal part of ξ_{λ_1}, as follows, with β_{λ_1,λ_2} := (ϕ^{c_1}_{λ_1}, ϕ^{c_2}_{λ_2}). Since we are only interested in the asymptotic behavior when λ_1 and λ_2 are large, we see from (91) that we can assume that the following properties hold:
(104)  |β_{λ_1,λ_2}| ≥ 1/2,  λ_2 ≥ λ_1/2 ≥ 1.
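The compensation phenomenon just described can be seen on a toy computation. The sketch below is our own hedged illustration, not the paper's estimate: for f(λ) = e^{-λt}, the divided difference f[λ_1, λ_2] remains bounded as λ_2 → λ_1 (it converges to f'(λ_1) = -t e^{-λ_1 t}), while the bound obtained by the triangle inequality blows up like 1/(λ_2 − λ_1).

```python
import math

# Hedged toy illustration (not the paper's actual estimate): compensation in a
# first-order divided difference of exponentials.
def divided_difference(f, lam1, lam2):
    return (f(lam2) - f(lam1)) / (lam2 - lam1)

t, lam1 = 1.0, 10.0
f = lambda lam: math.exp(-lam * t)
for eps in (1e-1, 1e-4, 1e-7):
    dd = divided_difference(f, lam1, lam1 + eps)
    # What the triangle inequality would give: no cancellation at all.
    naive_bound = (abs(f(lam1)) + abs(f(lam1 + eps))) / eps
    print(eps, dd, naive_bound)
# dd stays near f'(lam1) = -t*exp(-lam1*t); naive_bound grows like 1/eps.
```

This is exactly why the proof isolates the principal part of ξ_{λ_1} instead of bounding each term of the divided difference separately.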

We now analyze each of the three terms.

It remains to prove the lemma.

We take y = 0 in this inequality and integrate with respect to x. It remains to bound from below the L^2 norm of ξ_λ. To this end, we multiply (108) by ϕ^{c_1}_λ and integrate over (0, 1). After integration by parts, and using the equation satisfied by ϕ^{c_1}_λ, the Cauchy-Schwarz inequality yields the required estimate; since, by (91), we have a uniform L^∞ bound on ϕ^{c_1}_λ, the proof is complete.

We now consider the following system, studied in [46]:
(109)  …,  y(t, 1) = (0, 0)^T,  t ∈ (0, T),
where q ∈ L^∞(0, 1) and ν > 0.

• System (109) in the case where q(x) = 1 and ν ≠ 1 was studied in [5], where the influence of the condensation of eigenvalues of the system was first pointed out. It was proved that the minimal null-control time is exactly the condensation index of Λ, provided that √ν ∉ Q.
In that case, a grouping (G_k)_k should satisfy the conditions below, and the corresponding formula for the minimal time T_0(Y_0) is then given accordingly.

- For any z ∈ U, we have
  | f[x_0, . . . , x_n] − f^{(n)}(z)/n! | ≤ C_{f,n} diam(U).
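The mean-value property in the last item can be checked numerically. The sketch below is our own illustration under the assumption f = exp (so that f^{(n)} = f for every n): the error between the divided difference on nodes spread over a set of diameter diam(U) and f^{(n)}(z)/n! shrinks proportionally to diam(U).

```python
import math

# Hedged illustration of | f[x_0,...,x_n] - f^{(n)}(z)/n! | <= C diam(U),
# using f = exp so that every derivative is known exactly.
def divided_diff(f, xs):
    # Standard in-place Newton divided-difference table; returns f[x_0,...,x_n].
    table = [f(x) for x in xs]
    for order in range(1, len(xs)):
        for i in range(len(xs) - order):
            table[i] = (table[i + 1] - table[i]) / (xs[i + order] - xs[i])
    return table[0]

f, z, n = math.exp, 1.0, 3
for diam in (1e-1, 1e-2, 1e-3):
    xs = [z + k * diam / n for k in range(n + 1)]   # nodes spread over diam(U)
    err = abs(divided_diff(f, xs) - math.exp(z) / math.factorial(n))
    print(diam, err)   # the error decreases proportionally to diam(U)
```

The same table is what one computes, group by group, when estimating the block contributions in the moment problem.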

In this article, we not only study the classical null-controllability property (i.e. Y_0 = X_−); we also provide a more accurate description depending on the space of initial conditions Y_0 one wants to drive to 0. In this setting, the assumption (10) can be too strong.

It is easily seen that a necessary approximate null-controllability condition in that case is the following: for any λ ∈ Λ and any l ∈ ⟦0, α_λ − 1⟧, we have
(110)  B^*φ^j_λ = 0, ∀j ∈ ⟦0, l⟧  ⟹  P^*_{Y_0} φ^j_λ = 0, ∀j ∈ ⟦0, l⟧,
where, in this formula, (φ^j_λ)_j is a Jordan chain associated with the eigenvalue λ. Note that such a Jordan chain is not unique, but (110) does not depend on the particular chain we choose. Note also that the assumption (12) can be verified using any Jordan chain.

In that case, for j > j^*, we set the first modified vector and then, by induction, define the subsequent ones. This construction ensures that (φ̃^j_λ)_j and (φ^j_λ)_j span the same space, and that the new vectors satisfy the equations
A^* φ̃^j_λ = λ φ̃^j_λ + φ̃^{j−1}_λ + γ_j φ^{j^*−1}_λ,
for some γ_j ∈ R whose precise value is unimportant in the sequel.

A straightforward computation shows that the semigroup generated by −A^* satisfies
e^{−tA^*} φ̃^j_λ ∈ (e_t φ)[λ^{(j+1)}] + V_{j^*},
where V_{j^*} := Span(φ^0_λ, . . . , φ^{j^*−1}_λ). We shall prove that the term in V_{j^*} does not contribute to the moment problem. Indeed, this follows from (110) and (111), with the convention that the sum is 0 as soon as j < j^*. We may now adapt the definition of our null-control time by using the modified chains that have been constructed as explained above by (112) and (114).
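The computation above can be checked on the smallest possible model. The following sketch is our own hedged 2×2 illustration, not the operator of the paper: with J = [[λ, 1], [0, λ]], the chain φ^0 = (1, 0), φ^1 = (0, 1) satisfies Jφ^1 = λφ^1 + φ^0, and the semigroup generated by −J gives e^{−tJ}φ^1 = e^{−λt}(φ^1 − tφ^0), i.e. a polynomial-in-t factor multiplies the exponential, which is exactly what produces the generalized divided-difference terms λ^{(j+1)}.

```python
import math

# Hedged 2x2 Jordan-block model (not the operator of the paper).
def expm(M, n_terms=40):
    # Truncated power series exp(M) = sum_k M^k / k!, adequate for small ||M||.
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, n_terms):
        term = [[sum(term[i][l] * M[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

lam, t = 2.0, 0.5
# exp(-t J) with J = [[lam, 1], [0, lam]].
E = expm([[-t * lam, -t], [0.0, -t * lam]])
# Action on phi1 = (0, 1) is the column (E[0][1], E[1][1]).
print(E[0][1], E[1][1])   # expect (-t e^{-lam t}, e^{-lam t})
```

The polynomial factor t in front of e^{−λt} is the finite-dimensional shadow of the term that the V_{j^*} argument above discards from the moment problem.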

As a conclusion, to obtain the definition of the minimal null-control time from Y_0, assuming that (110) holds, we simply need to ignore the eigenvalues corresponding to case 1 and to modify the multiplicity and the Jordan chain as explained above for the eigenvalues that are in case 2. Then, we define formally T_0(Y_0) by the same formula as (16) and we obtain exactly the same result as Theorem 1.1.

Moreover, it clearly appears from the proof that (110) is actually a necessary and sufficient condition to solve the moment problem associated with any finite number of eigenvalues.

Combining those identities, we arrive at the desired expression. Let us compute the derivatives of P. Let C be the circle of center z and radius R in the complex plane. The Cauchy formula leads to
P^{(k)}(z) = k!/(2iπ) ∫_C P(w)/(w − z)^{k+1} dw.
Then, the triangle inequality implies a bound valid for any w ∈ C, which finally gives the result.
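The Cauchy formula used here can be evaluated numerically. The sketch below is our own hedged illustration: parametrizing w = z + R e^{iθ}, the contour integral reduces to a plain average over equally spaced points on the circle C, and the trapezoidal rule is exact for polynomials once the number of points exceeds deg(P) + k.

```python
import cmath
import math

# Hedged numerical sketch of P^{(k)}(z) = k!/(2 pi i) * \oint_C P(w)/(w-z)^{k+1} dw.
# With w = z + R e^{i theta}, dw = i (w - z) d theta, so the integral collapses
# to the simple average of P(w) / (w - z)^k over the circle.
def cauchy_derivative(P, z, k, R=1.0, n_pts=64):
    s = 0j
    for j in range(n_pts):
        w = z + R * cmath.exp(2j * math.pi * j / n_pts)
        s += P(w) / (w - z) ** k
    return math.factorial(k) * s / n_pts

P = lambda w: w**3 - 2 * w + 1
print(cauchy_derivative(P, 0.5, 2))   # P''(w) = 6w, so ≈ 3 at z = 0.5
```

Bounding |P(w)| on C and applying the triangle inequality to this average gives precisely the estimate |P^{(k)}(z)| ≤ k! R^{-k} max_{w ∈ C} |P(w)| invoked in the proof.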

Assume that V is a normed vector space.
Let x = (x_1, . . . , x_n) ∈ R^n be pairwise distinct real numbers and let α ∈ N^n be a multi-index such that α > 0. To such a multi-index we associate elements of V that we gather in a family f^α ∈ V^{|α|}, indexed as follows: f^l_j, for j ∈ ⟦1, n⟧ and l ∈ ⟦0, α_j − 1⟧.
Proposition 7.7. With the notation above, let F : R → V be any smooth function satisfying
(120)  (1/l!) F^{(l)}(x_j) = f^l_j,  ∀j ∈ ⟦1, n⟧, ∀l ∈ ⟦0, α_j − 1⟧.
The recurrence assumption shows that the two terms in the numerator have weak limits that depend only on the points x, the multiplicities α and on the