A counterexample to a strengthening of a question of V. D. Milman

Abstract. — Let | · | be the standard Euclidean norm on R^n and let X = (R^n, ∥ · ∥) be a normed space. A subspace Y ⊂ X is strongly α-Euclidean if there is a constant t such that t|y| ⩽ ∥y∥ ⩽ αt|y| for every y ∈ Y.


Introduction
A famous theorem of Dvoretzky [Dvo61], which, in particular, solved an important conjecture posed by Grothendieck [Gro56], asserts that for every positive integer k and every ϵ > 0 there exists a positive integer n = n(ϵ, k) such that every normed space of dimension at least n has a subspace Y of dimension k such that d(Y, ℓ_2^k) ⩽ 1 + ϵ, where d is the Banach-Mazur distance given by

d(X, W) = inf{∥T∥ · ∥T^{-1}∥ : T : X → W is a linear isomorphism}.
This number has an intuitive geometric meaning: if X = (R^n, ∥ · ∥_X) and W = (R^n, ∥ · ∥_W) are normed spaces with d(X, W) ⩽ d, and K_X and K_W are the unit balls of X and W, respectively, then there exists a linear map A : R^n → R^n such that K_X ⊂ AK_W ⊂ dK_X. A highly influential second proof of Dvoretzky's theorem was given by Vitali Milman [Mil71], which exploited measure concentration and led to many other arguments based on the same fundamental idea. In his paper, Milman established a sharp dependence of n on k. More precisely, he showed that one can take k ⩾ cϵ^2 log n, and demonstrated that in fact the conclusion of Dvoretzky's theorem holds for a random subspace (of appropriate dimension) with probability close to 1. For a comprehensive exposition we refer the reader to [AAGM15, Chapter 5].
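To make the geometric meaning concrete, here is a small numerical sketch (our own illustration, not from the paper). Taking X = ℓ_1^2, W = ℓ_2^2 and A the identity map, the containments K_X ⊂ AK_W ⊂ √2 K_X hold, which witnesses d(ℓ_1^2, ℓ_2^2) ⩽ √2.

```python
import numpy as np

# Illustration (not from the paper): the containment formulation of the
# Banach-Mazur distance for X = l_1^2 and W = l_2^2, with A = identity.
# Since ||x||_2 <= ||x||_1 <= sqrt(2)*||x||_2 for all x in R^2, we have
# K_X subset A*K_W subset sqrt(2)*K_X, so d(l_1^2, l_2^2) <= sqrt(2).

rng = np.random.default_rng(0)
xs = rng.normal(size=(1000, 2))
l1 = np.abs(xs).sum(axis=1)
l2 = np.linalg.norm(xs, axis=1)

assert np.all(l2 <= l1 + 1e-12)               # K_{l_1} inside K_{l_2}
assert np.all(l1 <= np.sqrt(2) * l2 + 1e-12)  # K_{l_2} inside sqrt(2) K_{l_1}
```

(In fact d(ℓ_1^2, ℓ_2^2) = √2, since ℓ_1^2 is isometric to ℓ_∞^2, but the sketch only verifies the upper bound.)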
Let us call an n-dimensional normed space X C-Euclidean if d(X, ℓ_2^n) ⩽ C. A fairly straightforward use of Milman's method yields the following statement.
Theorem 1.1 (V. D. Milman [Mil71]). — For every C > 1 and every ϵ > 0 there exists c > 0 such that for every n ∈ N, every n-dimensional C-Euclidean normed space X has a subspace Y of dimension at least cn that is (1 + ϵ)-Euclidean.
In other words, under the additional hypothesis that X is C-Euclidean, one can obtain a linear dependence between the dimension of Y and the dimension of X.
There is a vast literature on finding "nice" subspaces of normed spaces under various conditions (see for example [AAGM15, BL89, FLM77, GM01, Lin92, LS08, Mil92, Sch13, STJ80]), but most of this literature pays little attention to how those subspaces sit in the main space. In particular, a desirable property for a subspace Y ⊂ X is that it should be complemented. In an infinite-dimensional context, one says that Y is complemented if Y = P X for a continuous projection P on X. In a finite-dimensional context, we need a more quantitative definition: Y is said to be α-complemented if Y = P X for a projection P of operator norm at most α. In geometric terms this means that there exists a projection P : X → Y such that P K X ⊂ αK X ∩ Y .
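To illustrate the quantitative definition (our own hypothetical example, not from the paper): in ℓ_2^2 the skew projection P(x_1, x_2) = (x_1 + 2x_2, 0) maps onto span(e_1) and has operator norm √5, so span(e_1) is √5-complemented via this projection, while the orthogonal projection witnesses 1-complementation.

```python
import numpy as np

# Hypothetical example: operator norms of two projections of R^2 (with the
# Euclidean norm) onto the subspace spanned by e_1.
# P_orth is the orthogonal projection; P_skew projects along span((2, -1)).

P_orth = np.array([[1.0, 0.0], [0.0, 0.0]])
P_skew = np.array([[1.0, 2.0], [0.0, 0.0]])

assert np.allclose(P_skew @ P_skew, P_skew)  # P_skew is indeed a projection

thetas = np.linspace(0, 2 * np.pi, 100000)
sphere = np.stack([np.cos(thetas), np.sin(thetas)])  # unit vectors in l_2^2

norm_orth = np.linalg.norm(P_orth @ sphere, axis=0).max()
norm_skew = np.linalg.norm(P_skew @ sphere, axis=0).max()

assert abs(norm_orth - 1.0) < 1e-3          # orthogonal projection: norm 1
assert abs(norm_skew - np.sqrt(5)) < 1e-3   # skew projection: norm sqrt(5)
```

In geometric terms, P_skew K_X is contained in √5 K_X ∩ Y but in no smaller multiple, so the choice of projection matters for the complementation constant.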

ANNALES HENRI LEBESGUE
There are several open problems about the existence of complemented subspaces. For example, it is not known whether there is a constant C such that for every k there exists n_0 such that for every n ⩾ n_0, every n-dimensional normed space has a C-complemented subspace of dimension at least k and codimension at least k. (For a partial result in this direction, see [STJ09].) In this paper we consider the following question of Milman. To the best of our knowledge it has not appeared in print, except in a related paper of ours [GW22]. However, it has been promoted by Milman for many years in private correspondence with several mathematicians.
Question 1.2. — Let k ∈ N, let C ∈ R, and let ϵ > 0. Does there exist n ∈ N such that every C-Euclidean normed space X of dimension at least n has a k-dimensional subspace Y that is (1 + ϵ)-Euclidean and (1 + ϵ)-complemented?
We do not answer the question, but we give a negative answer to a question that is sufficiently close to Milman's to suggest very strongly that Milman's question has a negative answer.
To describe our result, we introduce two further definitions. We shall write | · | for the standard Euclidean norm on R^n. A subspace Y of a normed space X = (R^n, ∥ · ∥) is strongly α-Euclidean if there is a constant t such that t|y| ⩽ ∥y∥ ⩽ αt|y| for every y ∈ Y, and Y is strongly α-complemented if the orthogonal projection from R^n onto Y has norm at most α as an operator from X to X.
Note that the first definition is stronger than being α-Euclidean, because instead of asking for any linear map T such that |T x| ⩽ ∥x∥ ⩽ α|T x|, we ask for T to be a multiple of the identity. In other words, we require the unit ball K X to satisfy tB ⊂ K X ⊂ αtB, where B is the Euclidean unit ball and not merely an ellipsoid. Also, the second definition is stronger than simply being α-complemented because we require the projection to be orthogonal with respect to the standard inner product on R n .
These are natural strengthenings to consider, in the light of the fact that Milman's proof of Theorem 1.1 begins by observing that without loss of generality X is strongly C-Euclidean and then proceeds to find a strongly (1 + ϵ)-Euclidean subspace. Thus, one would expect Question 1.2 to have a positive answer if and only if the following question also has a positive answer.
Question 1.5. — Let k ∈ N, let C ∈ R, and let ϵ > 0. Does there exist n ∈ N such that every strongly C-Euclidean normed space X of dimension at least n has a k-dimensional subspace Y that is strongly (1 + ϵ)-Euclidean and strongly (1 + ϵ)-complemented?
Our main theorem is an example that shows that the answer to Question 1.5 is negative.
Theorem 1.6. — There exist constants ϵ > 0 and C ∈ R such that for all sufficiently large n ∈ N there is an n-dimensional normed space that is C-Euclidean but contains no 2-dimensional subspace that is both strongly (1 + ϵ)-Euclidean and strongly (1 + ϵ)-complemented.
The rest of the paper is devoted to a proof of the above theorem. In the next section we will give some preliminary definitions and lemmas. We define the normed space in Section 3. In the final two sections we first outline the proof and then give it in detail.

Good vectors
We start with a definition that allows us to reformulate in a convenient way the condition that Y is strongly (1 + ϵ)-Euclidean and strongly (1 + ϵ)-complemented.
We write ⟨·, ·⟩ for the standard inner product.
Definition 2.1. — A non-zero vector x ∈ R^n is ϵ-good if

⟨x, z⟩∥x∥ ⩽ (1 + ϵ)|x|^2 ∥z∥

for every vector z ∈ R^n.
Note that x is ϵ-good if and only if λx is ϵ-good for every λ ≠ 0. To see what this definition means geometrically, consider the orthogonal projection P_x on to the 1-dimensional subspace of R^n generated by x. Writing x′ for the normalized vector x/|x|, this has the formula P_x z = ⟨x′, z⟩x′. Hence, the operator norm of P_x (as a map from X to X) is the maximum of the quantity

⟨x′, z⟩∥x′∥ / ∥z∥ = ⟨x, z⟩∥x∥ / (|x|^2 ∥z∥)

over all non-zero z ∈ R^n. It follows that x is ϵ-good if and only if P_x has operator norm at most 1 + ϵ, which is also equivalent to saying that the subspace Span(x) is strongly (1 + ϵ)-complemented. We now show that a subspace Y of a space X is strongly (1 + ϵ)-Euclidean and strongly (1 + ϵ)-complemented for some small ϵ if and only if every y ∈ Y is δ-good for some small δ.
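The operator-norm description of P_x can be checked numerically. In the sketch below (our own illustration; the weighted Euclidean norm is a hypothetical example), we approximate the operator norm of P_x, namely the maximum of ⟨x, z⟩∥x∥/(|x|^2∥z∥), by sampling z over the unit circle.

```python
import numpy as np

# Hypothetical norm on R^2: ||z|| = sqrt(4 z_1^2 + z_2^2).
def norm(z):
    return np.sqrt(4 * z[0] ** 2 + z[1] ** 2)

def goodness(x, samples=200000):
    """Approximate max over z of <x,z>*||x|| / (|x|^2*||z||), i.e. the
    operator norm of the orthogonal projection onto span(x), from X to X."""
    thetas = np.linspace(0, 2 * np.pi, samples)
    zs = np.stack([np.cos(thetas), np.sin(thetas)])
    ratios = (x @ zs) * norm(x) / (np.dot(x, x) * norm(zs))
    return ratios.max()

# e_1 is 0-good for this norm: the projection onto span(e_1) has norm 1.
assert abs(goodness(np.array([1.0, 0.0])) - 1.0) < 1e-3
# (1, 1) is only eps-good for eps >= 1/4: the projection norm is 5/4.
assert abs(goodness(np.array([1.0, 1.0])) - 1.25) < 1e-3
```

So for this norm the coordinate direction spans a strongly 1-complemented line, while the diagonal direction does not span a strongly (1 + ϵ)-complemented line for any ϵ < 1/4.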
Lemma 2.2. — Let X = (R^n, ∥ · ∥) be a normed space and let Y ⊂ X be a subspace. If Y is strongly (1 + ϵ)-Euclidean and strongly (1 + ϵ)-complemented, then every y ∈ Y is (2ϵ + ϵ^2)-good. Conversely, if every y ∈ Y is ϵ-good, then Y is strongly (1 + ϵ)-complemented and strongly (1 + 3π√ϵ)-Euclidean.
Before we prove the statement, note that this characterization reduces Question 1.5 to the following question.

Question 2.3. — Let ϵ > 0, C ⩾ 1 and k ∈ N. Does there exist n such that if ∥ · ∥ is a norm on R^n such that |x| ⩽ ∥x∥ ⩽ C|x| for every x ∈ R^n, then the space (R^n, ∥ · ∥) has a subspace Y of dimension k such that every y ∈ Y is ϵ-good?
Proof of Lemma 2.2. — Let P_Y be the orthogonal projection onto Y. If Y is strongly (1 + ϵ)-Euclidean and strongly (1 + ϵ)-complemented, then ∥P_Y x∥ ⩽ (1 + ϵ)∥x∥ for every x ∈ X and there exists λ ∈ R such that λ|y| ⩽ ∥y∥ ⩽ (1 + ϵ)λ|y| for every y ∈ Y. From this it follows that for every y ∈ Y and every x ∈ X we have

⟨y, x⟩∥y∥ = ⟨y, P_Y x⟩∥y∥ ⩽ |y| |P_Y x| ∥y∥ ⩽ |y| · λ^{-1}∥P_Y x∥ · (1 + ϵ)λ|y| ⩽ (1 + ϵ)^2 |y|^2 ∥x∥,

which implies that every point y in Y is (2ϵ + ϵ^2)-good, as claimed. Conversely, assume that every point in Y is ϵ-good, so that for every y ∈ Y and every x ∈ X we have the inequality

⟨y, x⟩∥y∥ ⩽ (1 + ϵ)|y|^2 ∥x∥.

Applying this with y = P_Y x, and noting that ⟨P_Y x, x⟩ = |P_Y x|^2, we obtain ∥P_Y x∥ ⩽ (1 + ϵ)∥x∥. It follows that Y is strongly (1 + ϵ)-complemented. Now assume for a contradiction that the subspace Y is not strongly (1 + a)-Euclidean for some a > 0. In particular, this means that we can find two unit vectors y, w ∈ Y such that ∥y∥ = (1 + a)∥w∥. Without loss of generality we may assume that a ⩽ 1/2. Let us consider a sequence of unit vectors w = x_0, x_1, . . . , x_{m-1}, x_m = y that are equally spaced along the shortest arc that joins w to y (which is unique, since w cannot equal −y). By the pigeonhole principle there exists i such that

∥x_{i+1}∥ ⩾ (1 + a)^{1/m} ∥x_i∥.

We shall choose m to ensure that x_i is a witness for x_{i+1} not being ϵ-good. Indeed, if we assume that m is at least 3π^2/a, then, since the angle between x_i and x_{i+1} is at most π/m, we get that

⟨x_{i+1}, x_i⟩∥x_{i+1}∥ ⩾ cos(π/m)(1 + a)^{1/m} ∥x_i∥ ⩾ (1 + a/2m)|x_{i+1}|^2 ∥x_i∥.

Here we used the fact that for 0 < γ < 1 and a > 0 we have that (1 + a)^γ ⩾ 1 + γa − γ(1 − γ)a^2/2 ⩾ 1 + γa − γa^2/2, and the assumptions that 0 < a ⩽ 1/2 and m ⩾ 3π^2/a. It follows that the point x_{i+1} is not a/2m-good. Therefore, if every point is ϵ-good, we must have that a/⌈6π^2/a⌉ ⩽ ϵ, which implies that a ⩽ 3π√ϵ. Thus, we find that Y is strongly (1 + 3π√ϵ)-Euclidean, which completes the proof of Lemma 2.2. □

Next, we give an equivalent condition for a point to be ϵ-good for some small ϵ. Before we state the result, let us recall that a support functional of a norm ∥ · ∥ at x is any non-zero linear functional f such that for every y with ∥y∥ ⩽ ∥x∥ we have f(y) ⩽ f(x).
Geometrically, if τ is the value of a support functional f at x such that ∥x∥ = 1, then H = f^{-1}(τ) is a hyperplane that is tangent to the unit ball of the norm ∥ · ∥ at x. Note that if the norm is differentiable, then, writing g(x) for ∥x∥, any positive multiple of g′(x) is a support functional at x.
In the next proposition, we shall use the standard identification of R n with its dual. That is, we identify a vector z with the linear functional y → ⟨y, z⟩.
Proposition 2.4. — Let (X, ∥ · ∥) be a normed space and suppose that |x| ⩽ ∥x∥ ⩽ C|x| for every x ∈ X. For every δ > 0 there exists ϵ > 0 such that if x ∈ X is any ϵ-good point, then there exist y, z such that |x| = |y|, |x − y| < δ|x|, z is a support functional at y and |y − z| < δ|x|. Conversely, for every ϵ > 0 there exists δ > 0 such that if there exist y, z such that |y − z| < δ|x|, z is a support functional at y and |x − y| < δ|x|, then x is an ϵ-good point.
Proof. — We shall do the second part first. Let 0 < ϵ ⩽ 1 and suppose that there exist y, z such that z is a support functional for y, and |y − x| and |z − y| are both at most δ|x|. Now let w ∈ X. Then

⟨x, w⟩∥x∥ ⩽ ⟨w, z⟩∥x∥ + |x − z||w|∥x∥ ⩽ ⟨w, z⟩∥x∥ + 2δC|x|^2 ∥w∥.

But z is a support functional for y, so

⟨w, z⟩ ⩽ ∥w∥ ∥z∥_* = ∥w∥ ⟨y, z⟩/∥y∥.

We also have that ∥x∥ ⩽ ∥y∥ + C|x − y| ⩽ (1 + 2Cδ)∥y∥ and that ⟨y, z⟩ ⩽ |y||z|, with |y| ⩽ (1 + δ)|x|. Finally, since |x − z| ⩽ 2δ|x| we have that |z| ⩽ (1 + 2δ)|x|, so

⟨w, z⟩∥x∥ ⩽ (1 + δ)(1 + 2δ)(1 + 2Cδ)|x|^2 ∥w∥.

Putting all this together, we find that

⟨x, w⟩∥x∥ ⩽ ((1 + δ)(1 + 2δ)(1 + 2Cδ) + 2Cδ)|x|^2 ∥w∥.

It can be checked that if we set δ = ϵ/5C, then the factor in brackets is at most 1 + ϵ.
For the other direction, we argue by contraposition: assume that for all y such that |y| = |x| and |x − y| < δ|x| we have that |y − z| > δ|x|, where z is the support functional at y, chosen such that |z| = |y|.
We may assume that |x| = 1, so that for every unit vector y with |y − x| < δ we have that |y − z| ⩾ δ. It follows that

⟨y, z⟩ = 1 − |y − z|^2/2 ⩽ 1 − δ^2/2,

and therefore that |z − ⟨y, z⟩y|^2 = 1 − ⟨y, z⟩^2 ⩾ δ^2 − δ^4/4, which implies that the component of z orthogonal to y has size at least √3 δ/2 ⩾ δ/2. It follows that for any γ < δ we can find a path on the unit sphere that starts at x and ends at a point at distance at least γ from x such that the norm ∥ · ∥ decreases at a rate of at least δ/2 along the path. This gives us a unit vector ȳ such that |ȳ − x| ⩽ γ and ∥ȳ∥ ⩽ ∥x∥ − γδ/2, and for a suitable choice of γ such a vector ȳ witnesses that x is not ϵ-good. □

Definition of the norm and an important observation
The norm has a fairly simple definition. Let P be a random orthogonal projection of rank n/2 and let A = I + P. Then we define

(3.1) ∥x∥ = ⟨x, Ax⟩^{1/2} + ηn^{-1/2}∥x∥_1,

where η > 0 is an absolute constant to be chosen later. (Note that |x| ⩽ ∥x∥ ⩽ (√2 + η)|x|, so as long as η ⩽ 2 − √2, this norm is strongly 2-Euclidean.) The first part of this norm is a weighted ℓ_2 norm with respect to a random orthonormal basis, where half the weights are 2 and half are 1, and the second is a multiple of the standard ℓ_1 norm. Our aim now is to prove that with probability greater than zero (and in fact close to 1) there is no 2-dimensional subspace that consists entirely of ϵ-good points, for some absolute constant ϵ > 0. That is, we will prove Theorem 1.6 and therefore give a negative answer to Question 1.5.
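As a quick numerical sanity check (our own sketch, with hypothetical values n = 100 and η = 0.5; the paper fixes η only later), we can build a random rank-n/2 projection, implement ∥x∥ = ⟨x, Ax⟩^{1/2} + ηn^{-1/2}∥x∥_1, and verify both the bound |x| ⩽ ∥x∥ ⩽ (√2 + η)|x| and the support-functional property of the vector Ax/∥x∥_A + ηn^{-1/2} sign(x) discussed below.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eta = 100, 0.5  # hypothetical values; the paper chooses eta later

# Random orthogonal projection P of rank n/2, built from a random
# n x (n/2) Gaussian matrix via QR factorization.
Q, _ = np.linalg.qr(rng.normal(size=(n, n // 2)))
P = Q @ Q.T
A = np.eye(n) + P

def norm(x):
    """The norm (3.1): <x, Ax>^(1/2) + eta * n^(-1/2) * ||x||_1."""
    return np.sqrt(x @ A @ x) + eta * n ** -0.5 * np.abs(x).sum()

# Check |x| <= ||x|| <= (sqrt(2) + eta)|x| on random test vectors.
for _ in range(100):
    x = rng.normal(size=n)
    ex = np.linalg.norm(x)
    assert ex - 1e-9 <= norm(x) <= (np.sqrt(2) + eta) * ex + 1e-9

# Check that f = Ax/||x||_A + eta * n^(-1/2) * sign(x) supports the norm
# at x: f(y) <= f(x) whenever ||y|| <= ||x||.
x = rng.normal(size=n)
f = A @ x / np.sqrt(x @ A @ x) + eta * n ** -0.5 * np.sign(x)
for _ in range(100):
    y = rng.normal(size=n)
    y *= norm(x) / norm(y)          # rescale so that ||y|| = ||x||
    assert f @ y <= f @ x + 1e-9
```

Note that f ⟨·⟩ attains the value ∥x∥ at x itself, which is exactly the equality case in the lemma below.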
The next lemma tells us what the support functionals are at a vector x. Let us use the notation sign(t) for the multivalued function from R to R that takes t to 1 if t > 0, to −1 if t < 0, and to any element of [−1, 1] if t = 0. Then if x ∈ R^n we write sign(x) for the result of applying the multivalued function sign pointwise. Let us also write ∥x∥_A for ⟨x, Ax⟩^{1/2}.
Lemma 3.1. — Let x ∈ R^n be a non-zero vector. Then, for any admissible value of sign(x), the vector Ax/∥x∥_A + ηn^{-1/2} sign(x) is a support functional at x.
Proof. — Given y such that ∥y∥ ⩽ ∥x∥ we must show that ⟨Ax/∥x∥_A + ηn^{-1/2} sign(x), y⟩ can be bounded above by ⟨Ax/∥x∥_A + ηn^{-1/2} sign(x), x⟩. Indeed, we have that

⟨Ax/∥x∥_A + ηn^{-1/2} sign(x), y⟩ ⩽ ∥y∥_A + ηn^{-1/2}∥y∥_1 = ∥y∥ ⩽ ∥x∥,

with equality if y = x, which finishes the proof. □
Corollary 3.2. — Let Q be such that P + Q = I, let ϵ be sufficiently small and let Y be a subspace that consists entirely of ϵ-good points. Then for every unit vector x ∈ Y there exists a unit vector y with |x − y| ⩽ δ and a value of sign(y) such that

d(ηn^{-1/2} sign(y), P Y + QY) ⩽ 7δ,

where δ tends to zero with ϵ.
Proof. — Let x ∈ Y be a unit vector. Since x is ϵ-good, Proposition 2.4 gives us a unit vector y with |x − y| ⩽ δ and a support functional z at y with |y − z| ⩽ δ, where δ tends to zero with ϵ. By Lemma 3.1 (adjusting δ by a constant factor to account for the normalization of z), we may assume that z = Ay/∥y∥_A + ηn^{-1/2} sign(y) for a suitable value of sign(y), so that y ≈_δ Ay/μ + ηn^{-1/2} sign(y), where μ = ∥y∥_A. Recalling that A = I + P and rearranging, we obtain the inequality

ηn^{-1/2} sign(y) ≈_δ (1 − μ^{-1})y − μ^{-1}P y,

where we write u ≈_δ v to mean that |u − v| ⩽ δ. (We shall use this convenient notation throughout the rest of this paper.) Finally, noting that y = P y + Qy, that μ ∈ [1, √2], and that |P y − P x| ⩽ |y − x| ⩽ δ and the same for |Qy − Qx|, we get that

ηn^{-1/2} sign(y) ≈_{7δ} (1 − μ^{-1})Qx + (1 − 2μ^{-1})P x.

Therefore, we get that for every x ∈ Y there is a unit vector y with |x − y| ⩽ δ and such that d(ηn^{-1/2} sign(y), P Y + QY) ⩽ 7δ, which finishes the proof. □ Corollary 3.2 tells us, in particular, that if we have a 2-dimensional subspace Y that consists entirely of ϵ-good points, then every unit vector x ∈ Y is close to a vector y such that n^{-1/2} sign(y) is close to the subspace P Y + QY, which has dimension at most 4. This is the main observation we shall use to obtain a contradiction. (In fact, what we shall use is the slightly weaker approximation (3.2).)

Outline of the proof and some technical lemmas
Let us call a non-zero vector a sign vector if all its coordinates have the same absolute value. As before, we write X for (R n , ∥ · ∥), though sometimes we abuse notation and use X to refer simply to the vector space R n .
In order to show that the norm defined in (3.1) indeed constitutes a counterexample to Question 1.5, that is, that there is no two-dimensional subspace of (R n , ∥ · ∥) that consists entirely of ϵ-good points, we shall obtain a contradiction using more precise versions of the following statements.
(1) Every ϵ-good point is close to P X or QX.
(2) With high probability, no point that is close to P X or QX can be approximated by a point with only a few distinct coordinates.
(3) If Y is a 2-dimensional subspace that consists entirely of ϵ-good points, then for every x ∈ Y there exists x′ close to x such that sign(x′) is close to the subspace P Y + QY.
(4) Using the first two statements, we deduce that the vectors sign(x′) are not approximately contained in a 4-dimensional subspace.
Corollary 3.2 is our precise version of Statement 3. Let us now prove Statement 1, which is also fairly simple.
Lemma 4.1. — Let x be an ϵ-good unit vector in X. Then either d(x, P X) ⩽ 3δ + 2η or d(x, QX) ⩽ 3δ + 2η, where δ tends to zero with ϵ.
Proof. — From the beginning of the proof of Corollary 3.2 we obtain a unit vector y such that |x − y| ⩽ δ and such that

ηn^{-1/2} sign(y) ≈_δ (1 − λ)y − λP y,

where we have written λ for 1/μ and used the fact that μ ⩾ 1/3. We have that d(x, QX) ⩽ |x − y| + |y − Qy| = |x − y| + |P y| ⩽ δ + |P y|, and similarly for d(x, P X). Hence, our goal is to bound min{|P y|, |Qy|}, which can be done using the approximation above, since |ηn^{-1/2} sign(y)| = η. □
Lemma 4.2. — Let x be a unit vector in R^n and let Y be a random subspace of ℓ_2^n of dimension m. Then the probability that d(x, Y) ⩽ γ is at most 24^n γ^{n-m}.
Proof. — A spherical cap in S^{n-1} of Euclidean radius 2γ has volume at most (4γ)^{n-1} ⩽ 8^n γ^n = (8γ)^n. By standard estimates, we can also find a γ-net of the unit sphere of Y of cardinality at most (3/γ)^m. But every point of S^{n-1} that is within γ of Y is within 2γ of a point in the γ-net, and from this the result follows. □
Lemma 4.3. — Let k be a positive integer, let γ > 0, and let Y be a random subspace of ℓ_2^n of dimension m. Then the probability that Y_γ = {x ∈ ℓ_2^n : d(x, Y) ⩽ γ} contains a unit vector x with at most k distinct coordinates is at most (3/γ)^k (48k)^n γ^{n-m}.
Proof. — The number of partitions of {1, 2, . . . , n} into k sets is at most k^n, and for each partition E_1, . . . , E_k the set of vectors that are constant on each E_i is a k-dimensional subspace, so there is a γ-net of the unit sphere of this subspace of size at most (3/γ)^k. The probability that Y_γ contains a unit vector with at most k distinct coordinates is at most the probability that Y_{2γ} contains a point in one of these γ-nets, which is at most

(3/γ)^k (24k)^n (2γ)^{n-m} ⩽ (3/γ)^k (48k)^n γ^{n-m}

by Lemma 4.2 and a union bound.
□ We present one more technical lemma that is similar to Lemma 4.3, and which will be an important part of the argument. Again we make no attempt to optimize bounds.
Lemma 4.4. — Let γ > 0 and let Z be a random subspace of ℓ_2^n of dimension m. Then the probability that Z_γ contains a unit vector with support size at most r is at most 288^n γ^{n-m-r}.
Proof. — The number of subsets of {1, 2, . . . , n} of size at most r is at most 2^n. For each such set E, the size of a γ-net of the unit sphere of the space of vectors supported on E is at most (3/γ)^r, and for each point in such a net the probability that it is in Z_{2γ} is at most 24^n (2γ)^{n-m}. Therefore, the probability we wish to bound is at most

2^n (3/γ)^r 24^n (2γ)^{n-m} ⩽ 288^n γ^{n-m-r},

which proves the lemma. □
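As a numerical aside (our own, not from the paper), one can see the scale of these distances directly: for a uniformly random m-dimensional subspace Y of ℓ_2^n and a fixed unit vector x, the squared projection |P_Y x|^2 has expectation m/n, so d(x, Y)^2 = 1 − |P_Y x|^2 concentrates around 1 − m/n.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, trials = 200, 100, 50

dists_sq = []
for _ in range(trials):
    # Random m-dimensional subspace: orthonormal basis from a Gaussian matrix.
    B, _ = np.linalg.qr(rng.normal(size=(n, m)))
    x = rng.normal(size=n)
    x /= np.linalg.norm(x)
    proj = B @ (B.T @ x)               # orthogonal projection onto Y
    dists_sq.append(1.0 - proj @ proj)  # d(x, Y)^2 = 1 - |P_Y x|^2

mean = np.mean(dists_sq)
assert abs(mean - (1 - m / n)) < 0.05  # concentrates near 1 - m/n = 0.5
```

So a fixed direction is typically at distance about (1 − m/n)^{1/2} from a random m-dimensional subspace, which is what makes the union bounds above effective.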

The key point we shall need from the above lemma is that for any c > 0 there exists γ > 0 such that if n − m − r ⩾ cn, then the probability that Z γ contains a point with small support is small. In particular, we have the following statement.
Corollary 4.5. — Let γ = 2^{-37}. Let X = ℓ_2^n with n ⩾ 2, let P : X → X be a random orthogonal projection of rank n/2 and let Q = I − P. Then the probability that either P X_{2γ} or QX_{2γ} contains a unit vector of support size at most n/4 is at most (2/3)^n.
Proof. — Applying Lemma 4.4, we find that the probability that P X_{2γ} contains a vector of support size at most n/4 is at most 288^n (2γ)^{n/4} = (288/512)^n. The same is true of QX_{2γ}, and the result follows with room to spare.
□ For the remainder of the paper, we shall assume that P has been chosen in such a way that neither P X 2γ nor QX 2γ contains a vector of support size at most n/4, and neither P X γ nor QX γ contains a vector with at most five distinct coordinates. By Lemma 4.3 and Corollary 4.5 such a P exists.
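The arithmetic behind Corollary 4.5 can be checked in one line: with γ = 2^{-37}, m = n/2 and r = n/4, the per-coordinate factor of 288^n (2γ)^{n/4} is 288 · (2γ)^{1/4} = 288/512 = 9/16, which is less than 2/3.

```python
# Verify the constant-checking step in Corollary 4.5.
gamma = 2.0 ** -37
factor = 288 * (2 * gamma) ** 0.25  # per-n factor of 288^n (2*gamma)^(n/4)
assert abs(factor - 288 / 512) < 1e-12
assert factor < 2 / 3
```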

The set of signs cannot be squeezed into a 4-dimensional subspace
Before we move to the heart of the argument, which will be a precise version of Statement 4, let us remark that as we move forward we shall be dealing with many parameters. Since we do not wish to choose them straight away we make sure that it is easy to keep track of all the dependencies by stating them clearly and giving each one a label.
Recall also from the beginning of the proof of Lemma 4.1 that we have the equivalent formula λ = 1/μ, from which it follows that λ ∈ [√2 − 1, 2]. In particular, it follows that |α_y| and |β_y| are at most 2η^{-1}. This bound will be important later. Suppose now that 3δ + 2η ⩽ γ. If x ∈ Y is a unit vector, then by Lemma 4.1, either d(x, P X) or d(x, QX) is at most 3δ + 2η. Therefore, for every y ∈ Y_γ, either d(y, P X) or d(y, QX) is at most 3δ + 2η + γ ⩽ 2γ, so by the assumption made at the end of the previous section, y has support size at least n/4. That is, every vector in Y_γ has support size at least n/4. We shall use this property frequently in the rest of the section.
We will now use the fact that the subspace Y is 2-dimensional to get a convenient re-parametrization of the unit sphere of Y. Indeed, there exist non-negative real numbers r_1, . . . , r_n and phases ϕ_1, . . . , ϕ_n ∈ [0, 2π) such that each unit vector in Y is equal to

x = x(θ) = (r_1 sin(θ + ϕ_1), . . . , r_n sin(θ + ϕ_n))

for some θ ∈ [0, 2π). Note that by looking at E_θ |x(θ)|^2 we find that Σ_i r_i^2 = 2.
Lemma 5.1. — Let δ, ξ > 0 and let x and y be two vectors in R^n such that |x − y| ⩽ δ. Then the number of i such that |x_i| ⩾ ξn^{-1/2} and sign(x_i) ≠ sign(y_i) is at most ξ^{-2}δ^2 n.
Proof. — For each such i we have that |x_i − y_i|^2 ⩾ ξ^2 n^{-1}, while our hypothesis gives Σ_i |x_i − y_i|^2 ⩽ δ^2, so the claimed bound follows. □ Let α > 0 be a constant to be chosen later, let E = {i : r_i ⩾ αn^{-1/2}}, and write P_E for the coordinate projection onto E. It will also be convenient to write E_0 for {1, 2, . . . , n} \ E, which we think of as the set of coordinates where Y almost vanishes.
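The parametrization and Lemma 5.1 are easy to verify numerically. The sketch below (our own illustration) builds an orthonormal pair u, v spanning a random 2-dimensional subspace, checks the identity cos(θ)u + sin(θ)v = (r_i sin(θ + ϕ_i))_i with r_i = (u_i^2 + v_i^2)^{1/2} and ϕ_i = atan2(u_i, v_i), checks that the r_i^2 sum to 2, and checks the sign-disagreement bound.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Orthonormal basis (u, v) of a random 2-dimensional subspace Y.
B, _ = np.linalg.qr(rng.normal(size=(n, 2)))
u, v = B[:, 0], B[:, 1]

# x(theta) = cos(theta) u + sin(theta) v = (r_i sin(theta + phi_i))_i.
r = np.hypot(u, v)
phi = np.arctan2(u, v)

assert abs((r ** 2).sum() - 2.0) < 1e-9         # sum of r_i^2 equals 2

theta = 0.7
x = np.cos(theta) * u + np.sin(theta) * v
assert np.allclose(x, r * np.sin(theta + phi))  # the two formulas agree

# Lemma 5.1: if |x - y| <= delta, the number of i with |x_i| >= xi*n^(-1/2)
# and sign(x_i) != sign(y_i) is at most xi^(-2) * delta^2 * n.
xi = 0.5
y = x + rng.normal(size=n) * 0.01
delta = np.linalg.norm(x - y)
flips = np.sum((np.abs(x) >= xi * n ** -0.5) & (np.sign(x) != np.sign(y)))
assert flips <= xi ** -2 * delta ** 2 * n
```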
It follows that the set of θ for which this fails has small measure, which, by taking the complement, completes the proof. □ We will now introduce a notion of a "typical" θ (and x(θ)).
Definition 5.3. — For choices of ξ and c that we shall make later, we call θ and x(θ) typical if
(i) the number of i ∈ E such that |x(θ)_i| < ξn^{-1/2} is less than c|E| (that is, the conclusion of Lemma 5.2 holds), and
(ii) x(θ)_i ≠ 0 for every i ∈ E.
Note that the second condition, which is there for convenience, holds with probability 1 and ensures that sign(x(θ))_i ∈ {−1, 1} for every i ∈ E. Later we shall want to be sure that typical vectors exist, for which Lemma 5.2 tells us that a sufficient condition is a suitable inequality between ξ, α and c. Let us now define Σ to be the set of all vectors of the form n^{-1/2}P_E sign(x(θ)) such that x(θ) is a typical element of the unit sphere of Y. Let β > 0 be another parameter to be chosen later, and let V ⊂ Σ be a maximal centrally symmetric β-separated subset of Σ. Assume that V consists of 2k vectors (that is, k antipodal pairs) and note that V is a β-net of Σ.
We will now show that, assuming that α ⩽ γ, the set V is non-empty.
Lemma 5.4. — Let Y be a 2-dimensional subspace that consists entirely of ϵ-good points and let β ⩽ 1. Then every maximal centrally symmetric β-separated subset V of Σ has cardinality at least 2.
Proof. — Since Y is a subspace that consists entirely of ϵ-good points, we have that Y ⊂ P X_γ or Y ⊂ QX_γ, and we chose P in such a way that neither P X_{2γ} nor QX_{2γ} contains a vector of support size at most n/4 (recall the last paragraph of Section 4). It follows that every vector in Y, and even in Y_γ, has support size at least n/4. Recall that we denote the set of "large" coordinates by E = {i : r_i ⩾ αn^{-1/2}}, and note that for every unit vector y ∈ Y we have |y − P_E y| < α and, clearly, P_E y has support size at most |E|. Assuming that α ⩽ γ, it follows that the set of large coordinates E has cardinality at least n/4. Thus, Σ consists of vectors whose norm is at least n^{-1/2}|E|^{1/2} ⩾ 1/2 (and at most 1), so, as long as β ⩽ 1, which we assumed, we can choose any typical vector x(θ) and let V = {±n^{-1/2}P_E sign(x(θ))}, thereby obtaining a β-separated subset of Σ. This proves the result. □ Since V is centrally symmetric it has even cardinality. Let this cardinality be 2k. We now consider three cases.
Case 1: k = 1.
In this case V = {v, −v} for some v ∈ Σ. Let A be the set of typical x(θ) such that n^{-1/2}P_E sign(x(θ)) ≈_β v, and note that −A is the set of typical x(θ) such that n^{-1/2}P_E sign(x(θ)) ≈_β −v. Note also that every typical x(θ) belongs to A or −A, and that therefore neither A nor −A is empty. Writing B for the set of points at distance at most πζ from A, we also have that B and −B are closed and that B ∪ −B is the entire unit circle of Y. To see the last assertion, let x(θ_0) be a point in the unit circle of Y. Then the closed interval of length 2πζ centred at θ_0 contains a typical point θ, so x(θ) belongs to A ∪ (−A) and therefore x(θ_0) ∈ B ∪ −B, as the distance from x(θ_0) to x(θ) is at most πζ. Since the unit circle is connected, B ∩ −B is non-empty, from which it follows that there exist typical unit vectors x, y ∈ Y such that |x − y| ⩽ 2πζ and such that n^{-1/2}P_E sign(x) ≈_β v and n^{-1/2}P_E sign(y) ≈_β −v.
It follows that n^{-1/2}P_E sign(x) differs from v in at most β^2 n/4 coordinates, and n^{-1/2}P_E sign(y) differs from −v in at most β^2 n/4 coordinates. Therefore, P_E sign(x) and P_E sign(y) are equal in at most β^2 n/2 coordinates. Moreover, by Lemma 5.1, the number of i for which sign(x_i) ≠ sign(y_i) and |x_i| ⩾ ρn^{-1/2} is at most ρ^{-2}(2πζ)^2 n = (2πξ/αcρ)^2 n. From these two facts it follows that the number of coordinates i ∈ E for which |x_i| ⩾ ρn^{-1/2} is at most ((2πξ/αcρ)^2 + β^2/2)n. Therefore, we find that x has distance at most ρ from a vector of support size at most ((2πξ/αcρ)^2 + β^2/2)n. If we choose parameters in such a way that α ⩽ ρ ⩽ γ and (2πξ/αcρ)^2 + β^2/2 < 1/4, we obtain a contradiction with the fact that Y_γ does not contain a vector of support size less than n/4.
Case 2: 2 ⩽ k ⩽ 4.
We now show that either this case can be reduced to the case k = 1 with β replaced by 48β, or there is a subset V′ of V consisting of at least two antipodal pairs such that V′ is a 3κ-separated κ-net of Σ with β ⩽ κ ⩽ 16β.
Since V is a β-net of Σ, if it is 3β-separated then we are done. If not, we can find i ≠ j such that |v_i − v_j| ⩽ 3β. Then we can remove ±v_j from V and we will still have a 4β-net. Similarly, if V′ = V \ {±v_j} is 12β-separated we are done, but if it contains two distinct elements v_i, v_j such that |v_i − v_j| ⩽ 12β, then again we can remove ±v_j and we will still have a 16β-net. Finally, if there are two distinct elements v_i, v_j with |v_i − v_j| ⩽ 48β, then we may remove ±v_j and end up with V′ of the form {v, −v}, and we are back in the case k = 1. However, now β is replaced by 48β, which we must allow for when choosing our parameters, so we need to strengthen condition (5.5) to the condition

(5.9) β ⩽ 1/48.
The rough idea of what we shall now do is as follows. Because sign(y(θ)) and sign(y(ϕ)) are not roughly proportional to each other, the matrix ( α(θ) β(θ) ; α(ϕ) β(ϕ) ) is well invertible, which allows us to deduce from the approximate matrix equation that P y(θ) and Qy(θ) can both be approximated by linear combinations of n^{-1/2} sign(y(θ)) and n^{-1/2} sign(y(ϕ)), with coefficients that are not too large. Therefore, y(θ) can be too, which implies that x(θ) can as well. But sign(y(θ)) and sign(y(ϕ)) take at most two values each on almost all of E, and x(θ) is small outside E, so x(θ) can be approximated by a vector whose coordinates take at most five distinct values. Then we can obtain a contradiction from Lemma 4.3.
To carry out this argument we begin by making precise the statement that sign(y(θ)) and sign(y(ϕ)) are not roughly proportional.
Writing r for the number of coordinates on which the two sign vectors u and v agree and s = m − r for the number on which they disagree, we have

|u − λv|^2 = r(1 − λ)^2 + s(1 + λ)^2 = mλ^2 − 2(r − s)λ + m.

This is minimized when λ = (r − s)/m, and the minimum works out to be 4rs/m. The lemma follows on dividing both sides by n and taking the square root. □
Lemma 5.6. — Let A = ( a b ; c d ) be a 2 × 2 real matrix, let x and y be vectors in a Euclidean space such that ⟨x, y⟩ = 0, and let u = ax + by and v = cx + dy, with v ≠ 0. Then there exists λ such that

|u − λv| ⩽ |x| |y| |det(A)| / |v|.
Proof. — Consider first the case where x and y are unit vectors. Then

|u − λv|^2 = (a − λc)^2 + (b − λd)^2.

This is minimized when λ = (ac + bd)/(c^2 + d^2), and the minimum is

(ad − bc)^2 / (c^2 + d^2),

which proves the result in this case. The general case follows on applying this to the unit vectors x/|x| and y/|y|, replacing (a, b, c, d) by (a|x|, b|y|, c|x|, d|y|). □
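The bound here is in fact an identity for the least-squares choice of λ, which makes it easy to sanity-check numerically (our own verification):

```python
import numpy as np

rng = np.random.default_rng(4)

for _ in range(1000):
    # Random orthogonal pair x, y in R^5 and a random 2x2 matrix (a b; c d).
    x = rng.normal(size=5)
    y = rng.normal(size=5)
    y -= (x @ y) / (x @ x) * x          # make y orthogonal to x
    a, b, c, d = rng.normal(size=4)
    u = a * x + b * y
    v = c * x + d * y

    lam = (u @ v) / (v @ v)             # least-squares minimizer of |u - lam*v|
    dist = np.linalg.norm(u - lam * v)
    bound = (np.linalg.norm(x) * np.linalg.norm(y)
             * abs(a * d - b * c) / np.linalg.norm(v))
    assert dist <= bound * (1 + 1e-9) + 1e-9
```

Geometrically, |x||y||det A| is the area of the parallelogram spanned by u and v, and dividing by |v| gives the distance from u to the line through v.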
Putting together the bounds obtained in the last two paragraphs, we get that P_E y(θ) can be approximated by a linear combination of n^{-1/2}P_E sign(x(θ)) and n^{-1/2}P_E sign(x(ϕ)) to within 512η^{-1}β^{-2}σ + 1024η^{-1}β^{-2}(c + ξ^{-2}δ^2)^{1/2}. Thus, P_E y(θ) can be approximated by a vector with at most four distinct coordinates. This in turn implies that P_E x(θ) can be approximated to within δ + 512η^{-1}β^{-2}σ + 1024η^{-1}β^{-2}(c + ξ^{-2}δ^2)^{1/2} by such a vector. But |x(θ) − P_E x(θ)| ⩽ α, so we end up with the conclusion that x(θ) can be approximated to within α + δ + 512η^{-1}β^{-2}σ + 1024η^{-1}β^{-2}(c + ξ^{-2}δ^2)^{1/2} by a vector that takes at most five distinct values (the fifth value being zero). But x(θ) ∈ Y, so it is an ϵ-good point, which implies by Lemma 4.1 that d(x(θ), P X) ⩽ 3δ + 2η or d(x(θ), QX) ⩽ 3δ + 2η. Provided we have chosen our parameters in such a way that this approximation error, together with the bound 3δ + 2η, is at most γ, this contradicts the fact that P was chosen to ensure that neither P X_γ nor QX_γ contains a vector with at most five distinct coordinates (see the end of Section 4, where the choice of P was described).

Case 3: k ⩾ 5.
We begin with a simple lemma to estimate how well we can simultaneously approximate k orthonormal vectors by a (k − 1)-dimensional subspace.
Lemma 5.7. — Let W be a (k − 1)-dimensional subspace of R^n and let u_1, . . . , u_k be an orthonormal sequence. Then there exists i such that d(u_i, W) ⩾ k^{-1/2}.
Proof. — Without loss of generality n = k. Now let v be a unit vector orthogonal to W. Then the orthogonal projection P_W onto W is given by the formula P_W(x) = x − ⟨x, v⟩v, from which it follows that d(x, W) = |⟨x, v⟩|. But Σ_{i=1}^{k} ⟨u_i, v⟩^2 = 1, so there must exist i such that |⟨u_i, v⟩| ⩾ k^{-1/2}, which proves the lemma. □ Now, with the help of the assumption that k ⩾ 5, we prove that we cannot find a 4-dimensional subspace that approximately contains all the vectors in Σ. For convenience, let us reorder the coordinates in such a way that E = {1, 2, . . . , m} and 0 ⩽ ϕ_1 ⩽ ϕ_2 ⩽ · · · ⩽ ϕ_m < 2π. Then for every typical vector x(θ), the set of i such that x(θ)_i > 0 is an interval mod m.
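Lemma 5.7 is also easy to test numerically (our own verification, with hypothetical dimensions n = 20 and k = 5):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 20, 5

for _ in range(200):
    # Orthonormal u_1, ..., u_k and a random (k-1)-dimensional subspace W.
    U, _ = np.linalg.qr(rng.normal(size=(n, k)))
    B, _ = np.linalg.qr(rng.normal(size=(n, k - 1)))
    # d(u_i, W) = |u_i - P_W u_i| with P_W = B B^T.
    dists = np.linalg.norm(U - B @ (B.T @ U), axis=0)
    assert dists.max() >= k ** -0.5 - 1e-9
```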

Choosing parameters
We conclude by showing that there exists a choice of parameters which fulfils all the conditions. We are not optimizing this choice.