Computing Puiseux series: a fast divide and conquer algorithm

Let $F\in \mathbb{K}[X,Y]$ be a polynomial of total degree $D$ defined over a perfect field $\mathbb{K}$ of characteristic zero or greater than $D$. Assuming $F$ separable with respect to $Y$, we provide an algorithm that computes the singular parts of all Puiseux series of $F$ above $X = 0$ in less than $\tilde{\mathcal{O}}(D\delta)$ operations in $\mathbb{K}$, where $\delta$ is the valuation of the resultant of $F$ and its partial derivative with respect to $Y$. To this aim, we use a divide and conquer strategy and replace univariate factorization by dynamic evaluation. As a first main corollary, we compute the irreducible factors of $F$ in $\mathbb{K}[[X]][Y]$ up to an arbitrary precision $X^N$ with $\tilde{\mathcal{O}}(D(\delta + N))$ arithmetic operations. As a second main corollary, we compute the genus of the plane curve defined by $F$ with $\tilde{\mathcal{O}}(D^3)$ arithmetic operations and, if $\mathbb{K} = \mathbb{Q}$, with $\tilde{\mathcal{O}}((h+1)D^3)$ bit operations using a probabilistic algorithm, where $h$ is the logarithmic height of $F$.


Introduction
Let F ∈ K[X, Y] be a bivariate polynomial defined over a field K. Assuming that the characteristic p of K is zero or large enough, it is well known that for any x_0 ∈ K, the roots of F (considered as a univariate polynomial in Y) may be expressed as fractional Laurent power series in (X − x_0) with coefficients in K. These are the (classical) Puiseux series of F above x_0. Puiseux series are fundamental objects of the theory of algebraic curves [30, 5] and provide important information. The reader will find many applications of Puiseux series in previous papers of the first author (see for instance [26, 25]).
In [11, 12], D. Duval introduced rational Puiseux expansions. They are more convenient to compute and provide more geometric information (see Section 2). The first contribution of this paper concerns the computation of the singular parts of rational Puiseux expansions.
In the sequel, we denote by d_X and d_Y the partial degrees of F in X and Y respectively, and by D its total degree. We also let υ = υ_X(R_F) stand for the X-valuation of the resultant R_F := Res_Y(F, F_Y) of F and its partial derivative F_Y with respect to Y.
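As a toy illustration (our example, not from the paper), take F = Y^2 − X^3, which is monic and separable in Y:

\[
R_F \;=\; \mathrm{Res}_Y\!\big(Y^2 - X^3,\; 2Y\big) \;=\; -4X^3,
\qquad
\upsilon \;=\; \upsilon_X(R_F) \;=\; 3 .
\]

Here d_Y = 2, and the singular parts of the two Puiseux series ±X^{3/2} are reached at exponent 3/2 = υ/d_Y, in line with the truncation bounds in O(υ/d_Y) used later.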
Theorem 1. Let F ∈ K[X][Y] be a primitive polynomial, separable in Y, and assume that p = 0 or p > d_Y. There exists an algorithm that computes the singular parts of the Puiseux series of F above x_0 = 0 in less than Õ(d_Y υ) arithmetic operations.
This improves the results of [26] (the complexity bound therein is in Õ(ρ υ d_Y), where ρ is the number of rational Puiseux expansions, bounded by d_Y). From this we will deduce:

Theorem 2. Let F be as in Theorem 1. There exists an algorithm that computes the singular parts of the Puiseux series of F above all critical points in less than Õ(d_Y^2 d_X) ⊂ Õ(D^3) arithmetic operations.
This result improves the bound in Õ(d_Y^2 d_X^3) ⊂ Õ(D^5) of [23, 24]; note also that Proposition 12 of [26] suggested a way to get a bound in Õ(d_Y^3 d_X) ⊂ Õ(D^4).
Using the Riemann-Hurwitz formula, we easily deduce:

Corollary 1. Assuming p = 0 or p > D, there exists an algorithm that computes the genus of a given geometrically irreducible algebraic plane curve over K of degree D in less than Õ(D^3) arithmetic operations.
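The Riemann-Hurwitz formula underlying this corollary relates the genus g of the curve to the ramification of the degree d_Y projection (x, y) ↦ x; in the tame case (p = 0 or p > D) it reads:

\[
2g - 2 \;=\; -2\,d_Y \;+\; \sum_{P} \,(e_P - 1),
\]

where P ranges over the places of the curve lying above the critical points and above X = ∞, and e_P is the corresponding ramification index; these indices are read off the singular parts of the rational Puiseux expansions.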
Moreover, using the reduction criterion of [23, 25], we can bound the bit complexity of the genus computation (here ht(P) stands for the maximum of the logarithm of the denominator of P and the logarithm of the infinity norm of its numerator):

Corollary 2. Let K = Q(γ) be a number field, 0 < ǫ ≤ 1 a real number and F ∈ K[X, Y]. Denote by M_γ the minimal polynomial of γ and by w its degree. Then there exists a Monte Carlo algorithm that computes the genus of the curve F(X, Y) = 0 with probability of error less than ǫ and an expected number of word operations in:

With the same notation as in Corollary 2, we have: assuming that the degree of the square-free part of the resultant Res_Y(F, F_Y) is known, there exists a Las Vegas algorithm that computes the genus of the curve F(X, Y) = 0 with an expected number of word operations in:

Finally, as described below, our algorithm induces a fast analytic factorisation of the polynomial F; more precisely, one gets the following result (Theorem 3). Theorem 3 is of particular interest with regard to factorization in K[[X]][Y] or K[X, Y]: when working along a critical fiber, one can take advantage of the combinatorial constraints imposed by ramification when recombining analytic factors into rational factors (see [31]).

Main ideas. We now describe the main ingredients of our algorithm improving the results of [26].

Idea 1. We concentrate on the monic case. The roots above (0, ∞) require special care, explaining why the results depend on υ = υ_X(Res_Y(F, F_Y)) and not on υ_X(Disc_Y(F)) ≤ υ. However, equality holds when F is monic.
Idea 2. The key idea is to use tight truncation bounds. Even if the bound n = υ can be reached for some Puiseux series, it is actually always possible to compute around half of them using a bound n ∈ O(υ/d_Y) (see Section 3). We will use this idea to show that we can either prove that F is irreducible (and compute its Puiseux series), or compute a factorisation F = G H mod X^n with n ∈ O(υ/d_Y) and d_Y(H) ≤ d_Y/2, and that this can be done within the bound aimed at in Theorem 2.
Idea 3. The fiber X = 0 being critical, the polynomials G(0, Y) and H(0, Y) may not be coprime. Therefore, we cannot use the classical Hensel lemma to lift the factorisation from the previous point; we will show (see Lemma 10) that it is possible to adapt the Hensel lemma to our case.
Idea 4. Nevertheless, such a result might require starting with a factorisation F = G · H mod X^k with k ∈ Θ(υ). We will prove in Section 4 that in our context we have k ∈ O(υ/d_Y).
Idea 5. We then apply this divide and conquer strategy until we get an analytic factorisation of F mod X^υ, together with the singular parts of its Puiseux series.
Idea 6. Finally, as suggested in the conclusion of [26], computing the univariate factorisations needed in the Newton-Puiseux algorithm can be costly, in particular when considering the computation of the singular parts of Puiseux series above all critical points. As suggested in [26], we use dynamic evaluation [10, 9] to avoid this bottleneck, which leads us to work over products of fields. As in [12], this also enables the results to hold over fields of characteristic 0.
Organisation of the paper. We recall the classical definitions related to Puiseux series and the description of the rational Newton-Puiseux algorithm of [12] in Section 2. The remainder of the paper is divided into two parts:

1. First, we assume that the input polynomial is monic and defined over a field; we use univariate factorisation, as in [24, 26]. In this context, we first present in Section 3 a variant of the rational Newton-Puiseux algorithm that enables us to use tight truncation bounds; we use this algorithm to compute half of the Puiseux series above x_0 = 0 in Õ(d_Y υ) arithmetic operations plus the cost of univariate factorisations, thanks to Idea 2. Then, we show in Section 4 how, starting from these Puiseux series, to compute two factors G (corresponding to the Puiseux series computed so far) and H of F with d_Y(H) ≤ d_Y/2, and then recursively compute the Puiseux series of H, getting a divide and conquer algorithm that computes all singular parts of Puiseux series above x_0 = 0 in less than Õ(d_Y υ) operations (again plus univariate factorisations). This is possible thanks to Ideas 3, 4 and 5. Finally, we explain in Section 4.5 how to deal with the non monic case as sketched above, detailing how to deal with the roots above (0, ∞).
2. The next step is to get rid of univariate factorisations. They are costly when the characteristic is 0, and even over finite fields it has been mentioned in [26] that their cost prevents getting a bound depending on υ, which is a problem when considering the computation of Puiseux series above all critical points. This is Idea 6, i.e. dynamic evaluation. In this context, we are led to work over non integral rings and have to pay attention to several points; for instance, the computation of the Newton polygon is not straightforward: we first need to check whether the coefficients of the input polynomials are regular, and if not, perform suitable splittings. We detail this in Section 5, where we prove Theorem 1 using results of [9].
Finally, we deduce in Section 6 how all these results together provide low complexity bounds for computing the desingularisation of the curve above all its critical points, in the same complexity (up to logarithmic factors) as the best known algorithm for computing bivariate resultants (namely, Theorem 2 and the associated corollaries). We prove Theorem 3 in Section 7.
A brief state of the art. In [12], D. Duval defines the rational Newton-Puiseux algorithm over a field K of characteristic 0. The complexity analysis therein provides a number of arithmetic operations in less than O(d_Y^6 d_X^2) when F is monic (without using any fast algorithm). Note that this algorithm does not compute any univariate factorisation (it uses the D5 principle instead), and can trivially be generalised to the case p > d_Y.
In [23, 24], an algorithm with complexity Õ(d_Y υ^2 + d_Y υ log(p^c)) is provided over K = F_{p^c}, with p > d_Y. Moreover, from this bound, one gets an algorithm that computes the singular parts of the Puiseux series of a polynomial F ∈ K[X, Y] above all critical points in Õ(d_Y^3 d_X^2 log(p^c)). In [26], with the same assumption on the base field, an algorithm is given to compute the singular parts of the Puiseux series above x_0 = 0 in less than Õ(ρ d_Y υ + ρ d_Y log(p^c)) arithmetic operations. These two algorithms rely on univariate factorisation over finite fields. Therefore, they cannot be directly extended to the characteristic 0 case. This also explains why the second result cannot provide an improved bound for the computation of Puiseux series above all critical points.
Finally, note that there are other methods to compute Puiseux series or analytic factorisations, such as generalised Hensel constructions [16] or the Montes algorithm [21, 2] (which works over general local fields). Several of these methods and a few others have been commented on in previous papers of the first author [25, 26]. To our knowledge, none of these methods has been proved to provide a complexity that fits in the bounds obtained in this paper.
led to [26] as a first step towards the divide and conquer algorithm presented here. We also thank Francois Lemaire for many useful discussions on dynamic evaluation.
2 Main definitions and classical algorithms

2.1 Puiseux series

Denote by K a field and consider F ∈ K[X, Y] as in Section 1. Up to a change of variable X ← X + x_0, it is sufficient to give definitions and properties for the case x_0 = 0. Under the assumption that p = 0 or p > d_Y, the well known Puiseux theorem asserts that the d_Y roots of F (viewed as a univariate polynomial in Y) lie in the field of Puiseux series ∪_{e∈N} K((X^{1/e})). See [5, 14, 30] or most textbooks about algebraic functions for the characteristic 0 case. When p > d_Y, see [8, Chap. IV, Sec. 6]. These Puiseux series can be grouped according to the field extension they define. Following Duval [12, Theorem 2], we consider decompositions into irreducible elements: where ζ_{e_i} is a primitive e_i-th root of unity in K. Primitive roots are chosen so that ζ_{ab}^b = ζ_a.
When υ_X(S_ij) ≥ 0, we say that S_ij is defined at X = 0.

Proposition 1. The {F_ij}_{1≤j≤f_i} have coefficients in a degree f_i extension K_i of K and are conjugated by the action of the Galois group of K_i/K. We call K_i the residue field of R_i and f_i its residual degree. We have:

Proof. The first claim is [12, Section 1]. For the second one, see e.g. [8, Chapter 4, Section 1].
This leads to the definition of rational Puiseux expansions (note that it is possible to construct the classical Puiseux series from a system of rational Puiseux expansions; see e.g. [26, Section 2]):

Definition 2. A system of rational Puiseux expansions over K (K-RPE) of F above 0 is a set {R_i}_{1≤i≤ρ} such that:

• the parametrisation is irreducible, i.e. e_i is minimal.
which happens only for non monic polynomials.
Throughout this paper, we will truncate the powers of X of polynomials or series. To that purpose, we introduce the following notation: given τ ∈ Q and a Puiseux series S = Σ_k α_k X^{k/e} with ramification index e, we denote ⌈S⌉_τ = Σ_{k≤N} α_k X^{k/e} where N = max{k ∈ N | k/e ≤ τ}. We generalise this notation to elements of K((X^{1/e}))[Y] by applying it coefficient-wise.

Roughly speaking, the regularity index is the number of terms necessary to "separate" a Puiseux series from all the others (with special care when υ_X(S) < 0). Since the regularity indices of all Puiseux series corresponding to the same rational Puiseux expansion are equal, we define:

Definition 4. The singular part of a rational Puiseux expansion R_i of F is the pair: where r_i is the regularity index of R_i, i.e. the one of any Puiseux series associated to R_i.
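As a minimal illustration of the truncation operator ⌈S⌉_τ, here is a sketch of ours (not the paper's code), with a Puiseux series stored as a map from the integer k to the coefficient α_k of X^{k/e}:

```python
from fractions import Fraction

def truncate(series, e, tau):
    """⌈S⌉_τ: keep the terms α_k X^(k/e) of a Puiseux series with k/e ≤ τ.

    `series` maps each integer exponent k to its coefficient α_k,
    `e` is the ramification index, `tau` the rational truncation bound."""
    return {k: a for k, a in series.items() if Fraction(k, e) <= tau}

# S = X^(1/2) + 2*X + 3*X^(3/2), ramification index e = 2
S = {1: 1, 2: 2, 3: 3}
print(truncate(S, 2, 1))  # keeps X^(1/2) and X only
```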
Once such a singular part has been computed, the Implicit Function Theorem ensures that one can compute the series up to an arbitrary precision. This can be done in quasi-linear time by using a Newton operator [17, Corollaries 5.1 and 5.2, page 251].
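The lifting step can be sketched as follows (a minimal linear-lifting implementation of ours, not the quasi-linear Newton operator of [17]): once the singular part isolates a simple root c_0 of F(0, Y), each further coefficient is obtained from a linear equation.

```python
from fractions import Fraction

def coeff_of_eval(F, y, k):
    """[X^k] of F(X, y(X)), with F a dict (i, j) -> coeff of X^i Y^j
    and y a list of power-series coefficients (truncated)."""
    maxj = max(j for (_, j) in F)
    pw = [[Fraction(1)] + [Fraction(0)] * k]  # truncated powers y^0, y^1, ...
    for _ in range(maxj):
        last, nxt = pw[-1], [Fraction(0)] * (k + 1)
        for a in range(min(len(y), k + 1)):
            for b in range(k + 1 - a):
                nxt[a + b] += y[a] * last[b]
        pw.append(nxt)
    return sum(c * pw[j][k - i] for (i, j), c in F.items() if i <= k)

def series_root(F, c0, n):
    """Lift a simple root c0 of F(0, Y) to a root of F(X, Y) mod X^n
    (Implicit Function Theorem: requires F_Y(0, c0) != 0)."""
    dFy = sum(j * c * c0 ** (j - 1) for (i, j), c in F.items() if i == 0 and j >= 1)
    assert dFy != 0, "c0 must be a simple root of F(0, Y)"
    y = [Fraction(c0)] + [Fraction(0)] * (n - 1)
    for k in range(1, n):
        # F(X, y) ≡ e_k X^k mod X^(k+1); correct it with y[k] = -e_k / F_Y(0, c0)
        y[k] = -coeff_of_eval(F, y, k) / dFy
    return y

# Y^2 - 1 - X around Y(0) = 1: coefficients of sqrt(1 + X), i.e. 1, 1/2, -1/8, 1/16
F = {(0, 2): 1, (0, 0): -1, (1, 0): -1}
print(series_root(F, 1, 4))
```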
Finally, we formalise the precision of the Puiseux series and rational Puiseux expansions that we will use in our algorithms. We also define the notion of pseudo-parametrisation:

Definition 5. We say that S′ is a Puiseux series of F known with precision n if there exists a Puiseux series S of F such that ⌈S′⌉_n = ⌈S⌉_n.

The rational Newton-Puiseux algorithm
Our algorithm in Section 3 is a variant of the well known Newton-Puiseux algorithm [30, 5]. We now explain (roughly speaking) the idea of this algorithm on an example, and finally describe the variant of D. Duval [12, Section 4], since we will use the improvements therein.
Let us consider the computation of the Puiseux series of a polynomial F_0(X, Y). From the well known Puiseux theorem (see e.g. [5, Section 8.3]), we know that the first term of any such series S(X) is of the form α X^{m/q}, with α ∈ K and (m, q) ∈ N^2.
We have F_0(X, α X^{m/q} + · · · ) = α^6 X^{6m/q} + · · · . To get F_0(X, S(X)) = 0, at least two terms of this sum must cancel, i.e. (m, q) must be chosen so that two of the exponents coincide. To that purpose, we use the following definition: the condition on (m, q) can be translated as: two points of Supp(F_0) belong to the same line m a + q b = l. In order to increase the X-order of the evaluation, there must be no point under this line. On this example, we have two such lines: a + 2b = 6 and a + b = 4. They define the Newton polygon of F_0, which is the lower part of the convex hull of its support.
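The Newton polygon can be computed as the lower convex hull of the support; below is a small sketch of ours (not the paper's code), with the convention that a point (a, b) records a monomial Y^a X^b, so that an edge on the line m a + q b = l has slope −m/q and yields the candidate first exponent m/q:

```python
def newton_polygon(support):
    """Lower part of the convex hull of the support, where a point (a, b)
    stands for a monomial Y^a X^b with non-zero coefficient.
    Edges of slope -m/q give the candidate first exponents m/q."""
    def cross(o, p, q):
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])
    hull = []
    for p in sorted(set(support)):  # Andrew's monotone chain, lower hull only
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

# support of Y^4 + X*Y^2 + X^3, plus two monomials lying above the polygon
print(newton_polygon([(4, 0), (2, 1), (0, 3), (3, 2), (2, 2)]))
# → [(0, 3), (2, 1), (4, 0)]: edges of slopes -1 and -1/2, i.e. candidate
#   first exponents 1 and 1/2 for the Puiseux series
```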
Remark 1. In [26, Definition 6], a modified Newton polygon N⋆(H) is defined and used in the main algorithm. This is not necessary for our strategy, since we do not focus on getting precisely the singular part. But we will need it for one proof (see Remark 5), so we use it in the description of RNPuiseux.
In this paper, we will use low truncation bounds; in particular, we may truncate away some points of the Newton polygon. In order to certify the correctness of the computed slopes, we will use the following definition.
Definition 8. Given F ∈ K[X, Y] and n ∈ N, we define the n-truncated Newton polygon of F to be the set N_n(F) of edges ∆ of N(⌈F⌉_n) that satisfy l/q ≤ n if ∆ belongs to the line m a + q b = l. In particular, any edge of N_n(F) is an edge of N(F).
Finally, to get more terms, we can apply this strategy recursively to the polynomial F_0(X^q, X^m(Y + α)). Obviously, it is more interesting to consider a root ξ = α^q of the polynomial φ(Z) = Z^2 − 2Z + 4 (we have P(Z) = Z^2 φ(Z^2) and are obviously not interested in a root α = 0), that is, of the characteristic polynomial [12]: where a_0 is the smallest value such that (a_0, b_0) belongs to ∆ for some b_0.
Up to now, we have only described the ideas of the algorithm and provided the necessary tools. We conclude this section with a more formal description of the RNPuiseux algorithm for monic polynomials (see Section 4.4 for the non monic case); it uses two sub-algorithms, for each of which we only provide specifications:

• Factor(K, φ) computes the factorisation of φ over K, given as a list of factors and multiplicities.
Algorithm: RNPuiseux(F, K, π)
Input: F ∈ K[X, Y] monic, K a field and π the result of previous computations (π = (X, Y) for the initial call)
Output: A set of singular parts of rational Puiseux expansions above (0, 0) of F, with its base field.
1 R ← {}; // results of the algorithm will be grouped in R
2 foreach ∆ ∈ N⋆(F) do // we consider only negative slopes
3   Compute m, q, l, φ_∆ associated to ∆;
    Take ξ a new symbol satisfying φ(ξ) = 0;

The key improvement of this rational version is the distribution of ξ to both the X and Y variables (line 10). This avoids working with α = ξ^{1/q} and introducing any useless field extension due to ramification (see [12, Section 4]).

Complexity notions
We finally recall some classical complexity results, starting with the multiplication of univariate polynomials:

Definition 10. A (univariate) multiplication time is a map M : N → R such that:

• for any ring A, polynomials of degree less than d in A[X] can be multiplied in at most M(d) operations (multiplication or addition) in A.

Lemma 1. Let M be a multiplication time. Then we have:

Proof. The first point is [15, Exercise 8.33]. The second one is an obvious consequence.
The best known multiplication time gives M(d) ∈ O(d log(d) log(log(d))) ⊂ Õ(d) [28, 6].

Multiplication of multivariate polynomials. By Kronecker substitution, one can multiply multivariate polynomials by packing them into univariate ones and using a fast univariate multiplication.

Bivariate polynomials defined over an extension of K. In particular, in Sections 3 and 4, we perform multiplications of bivariate polynomials in X and Y (with respective degrees bounded by d_X and d_Y) defined over K[Z]/(P(Z)), with P an irreducible polynomial over K of degree d_P, as follows: first perform the polynomial multiplication over K[Z] by Kronecker substitution (as in the previous paragraph), then apply the reduction modulo P on each coefficient (for a total of less than O(d_X d_Y M(d_P)) arithmetic operations, using [15, Theorem 9.6, page 261]).
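The packing step of Kronecker substitution can be sketched as follows (our own minimal sketch, with schoolbook univariate multiplication standing in for a fast algorithm): with deg_Y of both factors below d_Y, the substitution X ↦ Z^s, Y ↦ Z with s = 2d_Y − 1 is injective on the monomials of the product.

```python
def kronecker_mul(f, g, dy):
    """Multiply two bivariate polynomials via Kronecker substitution.
    f, g: dicts (i, j) -> coeff of X^i Y^j, with deg_Y(f), deg_Y(g) < dy.
    Pack X^i Y^j as Z^(i*s + j) with s = 2*dy - 1: since the product has
    Y-degree < s, the packing is injective and can be inverted digit-wise."""
    s = 2 * dy - 1
    def pack(p):
        u = {}
        for (i, j), c in p.items():
            u[i * s + j] = u.get(i * s + j, 0) + c
        return u
    uf, ug = pack(f), pack(g)
    prod = {}  # univariate product (schoolbook; use fast multiplication in practice)
    for a, ca in uf.items():
        for b, cb in ug.items():
            prod[a + b] = prod.get(a + b, 0) + ca * cb
    # unpack: the exponent n encodes (i, j) = (n // s, n % s)
    return {(n // s, n % s): c for n, c in prod.items() if c}

# (1 + X*Y) * (Y + X) = Y + X + X*Y^2 + X^2*Y
print(kronecker_mul({(0, 0): 1, (1, 1): 1}, {(0, 1): 1, (1, 0): 1}, 2))
```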
Finally, note that we postpone the discussion concerning the complexity of operations modulo triangular sets (needed for dynamic evaluation) in Section 5.2.

Refined truncation bounds to compute half of the Puiseux series
Throughout this section, F ∈ K[X, Y] is monic and separable with respect to Y, and the characteristic p of K satisfies p = 0 or p > d_Y. Under these hypotheses, we will prove that we can compute at least half of the Puiseux series of F in less than Õ(d_Y υ) arithmetic operations, not counting the factorisation of univariate polynomials:

Theorem 4. There exists an algorithm that computes some RPEs R_1, · · · , R_λ of F known with precision at least

Not taking into account univariate factorisations, this can be done in less than

Algorithm Half-RNP in Section 3.2 will be such an algorithm. It uses previous improvements of the first author and M. Rybowicz [23, 24, 26], and one additional idea, namely Idea 2 of Section 1, which we detail below.

Previous complexity improvements and Idea 2.
Given P ∈ K[Z], we denote by K_P the quotient K[Z]/(P(Z)) and by d_P = d_Z(P). We need the following results of [24, 26].
Denote by ∆ an edge of N(F) belonging to the line m a + q b = l, and let (u, v) = Bézout(m, q). One can reduce the computation of F(ξ^v X^q, X^m(ξ^u + Y))/X^l modulo X^n to n univariate polynomial shifts over K_P. This takes less than O(n M(d_Y d_P)) arithmetic operations over K.
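The pair (u, v) = Bézout(m, q) comes from the extended Euclidean algorithm; here is a minimal sketch of ours with the generic normalisation u·m + v·q = gcd(m, q) (the exact sign convention used in the change of variables is the one of [12]):

```python
def bezout(m, q):
    """Extended Euclid: returns (g, u, v) with u*m + v*q = g = gcd(m, q)."""
    if q == 0:
        return (m, 1, 0)
    g, u, v = bezout(q, m % q)
    return (g, v, u - (m // q) * v)

# edge data (m, q) = (3, 5): coprime, as required for an irreducible slope
g, u, v = bezout(3, 5)
print(g, u, v)  # g = 1 and u*3 + v*5 = 1
```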
In [26], the number of recursive calls of the rational Newton-Puiseux algorithm is reduced from υ to O(ρ log(d_Y)). This is due to what we call Abhyankar's trick [1, Chapter 12]: after the shift below, if N(F) has a unique edge ∆, belonging to a line m a + q b = l with q = 1, then φ_∆ has several roots in K.
In other words, after performing the shift Y ← Y − A_{d_Y−1}/d_Y, we can ensure that we detect at least either a branch separation, a non integer slope, or a non trivial factor of the characteristic polynomial, and this can happen no more than O(ρ log(d_Y)) times.
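Writing F = Y^{d_Y} + A_{d_Y−1}(X) Y^{d_Y−1} + · · ·, this is the classical Tschirnhaus-type shift, valid since p = 0 or p > d_Y makes d_Y invertible in K:

\[
F\left(X,\; Y - \frac{A_{d_Y-1}(X)}{d_Y}\right)
\;=\; Y^{d_Y} \;+\; 0\cdot Y^{d_Y-1} \;+\; \widetilde{A}_{d_Y-2}(X)\,Y^{d_Y-2} \;+\;\cdots
\]

The sum of the roots of the shifted polynomial is zero, so its Puiseux series cannot all share the same non-zero first term: the next step must reveal a branch separation, a non integer slope, or a non trivial factor of φ_∆.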
Proof. From our assumption on the characteristic of K, this computation can be reduced to bivariate polynomial multiplication via [3, Problem 2.6, page 15]. The result follows from Kronecker substitution.
We only need to apply this trick when the first term of all Puiseux series is the same. As in [26], we use in this section the following result, so that we perform such a bivariate shift only if necessary. In order to provide the monicity assumption of Lemma 3, the well-known Weierstrass preparation theorem [1, Chapter 16] is used. Moreover, the RPEs of G and of its Weierstrass polynomial centered at (0, 0) are the same.
We get a complexity bound for this thanks to the following result: let G be a polynomial satisfying the hypotheses of Proposition 3, let n be in N and let G̃ denote the Weierstrass polynomial of G. There exists an algorithm WPT such that WPT(K_P, G, n) computes ⌈G̃⌉_n.

Finally, in both [24] and [26], we show that the powers of X can be truncated modulo X^{υ+1} during the algorithm while preserving the singular parts of the Puiseux series. This explains the complexity bound Õ(ρ υ d_Y) of [26], obtained by taking n = υ in the previous results.
Representation of residue fields. As explained in [24, Section 5.1], representing residue fields as multiple extensions induces costly arithmetic. Therefore, we need to compute primitive representations each time we get a characteristic polynomial φ of degree 2 or more. Note that the algorithms we use here are Las Vegas (this is the only probabilistic part of our results on Puiseux series computation).
Proposition 5. Let P, φ be two irreducible polynomials of respective degrees d_P and d_φ, and denote d = d_P d_φ. Assuming that there are at least d^2 elements in K, there exists a Las Vegas algorithm Primitive that computes an irreducible polynomial P_1 ∈ K[Z] of degree d together with an isomorphism Ψ : K_{P,φ} → K_{P_1}, in less than O(d^2) arithmetic operations plus a constant number of irreducibility tests in K[Z] in degree at most d. Moreover, given α ∈ K_{P,φ}, one can compute Ψ(α) with O(d_P M(d)) operations over K.
Proof. Consider for instance [27, Section 2.2]; some details are provided in the proof of Proposition 15.
Remark 2. We do not pay close attention to the assumption on the number of elements of K in this paper; note that we will always have d ≤ d_Y in our context, so that when working over a finite field without enough elements, our assumption p > d_Y means that, up to taking a degree 2 field extension, we are fine.
Idea 2: using lower truncation bounds. In this paper, we use tighter truncation bounds: we show that one can use a truncation bound n ∈ O(υ/d_Y) and still get some information, namely at least half of the singular parts of the Puiseux series. But such a truncation bound needs to be updated in a different way than in [24, 26]; in particular, in case of a blow-up X ← X^q, one has to multiply the truncation bound by q. In the same way, one cannot divide the truncation bound by the degree t of the found extension anymore. But both these points are actually compensated by the use of the algorithm WPT, which divides the degree in Y by the same amount, since it eliminates all the conjugates. Therefore, the size of the input polynomial is always bounded by O(υ) elements of K. Tight truncation bounds (depending on the Puiseux series) are provided in Section 3.3; we also provide a bound depending on υ.

The Half-RNP algorithm.
The following algorithm Half-RNP is a variant of the algorithm ARNP of [26]: the main change is that we allow computing only some of the RPEs (at least half of them for our applications); to this aim, we update the truncation bound in a slightly different way. Also, the computation of the output is presented differently (we eliminate the relations of [12, Section 4.1] in another way; this avoids dealing with field extensions twice), and we include how to deal with primitive elements for the sake of completeness (see Proposition 5). Correctness of the output is proved in Section 3.3. We denote v_i := υ_X(H_Y(S_{i,j,k}(X))) for any Puiseux series S_{i,j,k} associated to R_i.
if d = 1 then return π_1(T, 0) else return Half-RNP(⌈H(X, Y − B)⌉_n, P, n, π_1);
else // computing a primitive representation

Remark 3. We have d_X(π) ≤ n e_i for any RPE deduced from π. This is obvious when π is defined from Line 3; replacing X by X^q on Line 12 is also straightforward. Also, we have m ≤ n e_i since m/q ≤ l/q ≤ n from Definition 8.

Remark 4. Except possibly at the first call, the polynomial H is always a Weierstrass polynomial.
Theorem 4 is an immediate consequence of the following result, which will be proved in Section 3.4.
Theorem 5. Let F ∈ K[X, Y] be as in Theorem 4. Then Half-RNP(F, Z, 6υ/d_Y, (X, Y)) outputs a set of RPEs, among them a set R_1, · · · , R_λ known with precision at least 4υ/d_Y and such that

Not taking into account the cost of univariate factorisations, it takes less than O(M(υ d_Y) log(d_Y)) ⊂ Õ(υ d_Y) arithmetic operations over K.

Using tight truncation bounds.
We study here more carefully the RNPuiseux algorithm in order to get optimal truncation bounds to compute a RPE of a Weierstrass polynomial F with this algorithm or with Half-RNP. From this study, we also deduce an exact relation between υ and these optimal bounds. We will need additional notations:

• F_Y is the derivative in Y of F and R_1, · · · , R_ρ are its rational Puiseux expansions,
• let m_{k,h} a + q_{k,h} b = l_{k,h}, 1 ≤ h ≤ g_k, be the successive edges encountered during the computation of the expansion, where S is any Puiseux series associated to R_i (this rational number does not depend on the choice of S).
Lemma 5. With the above notations, for any

By the definition of the Puiseux transformations, the polynomial

Remark 5. From this result, we see that the value N_i actually does not depend on the algorithm. Nevertheless, the proof above relies on the algorithm RNPuiseux because it computes precisely the singular parts of all Puiseux series thanks to the modified Newton polygon of [26, Definition 6]. The algorithm Half-RNP introduces two differences:

• Abhyankar's trick does not change the value of the N_i: after applying it, the next value l/q is just the sum of the values l_i/q_i we would have found with RNPuiseux (the concerned slopes being the sequence of integer slopes that compute common terms for all Puiseux series, plus the next one). See Example 2 below.
• not using the modified Newton polygon N⋆ can only change the last value l/q, and this happens if and only if the coefficient of X^{r/e} is 0. This has no impact on the proof of Lemma 6 below.
In the remainder of this paper, we define N_i as r_i/e_i + v_i.
Lemma 6. Let n_0 ∈ N. To compute the RPE R_i with certified precision n_0 ≥ r_i/e_i, it is necessary and sufficient to run Half-RNP with truncation bound n = n_0 + v_i. In particular, to ensure the computation of the singular part of R_i, it is necessary and sufficient to use a truncation bound n ≥ N_i.
Proof. First note that if we start from a polynomial H known up to X^n, then the greatest n_1 such that we can certify H_{∆,ξ} up to X^{n_1} is precisely n_1 = q n − l (see Figure 1).

Figure 1: Change of variables for a Puiseux transform

Let us first assume that the coefficient of X^{r_i/e_i} in any Puiseux series associated to R_i is non zero. Then, starting from a truncation bound n = n′ + N_i, we get n_1 = q n′ + q N_i − l; and by definition of N_i, q N_i − l is precisely the "N_i" of the associated RPE of H_{∆,ξ} := H(ξ^v X^q, X^m(Y + ξ^u))/X^l. By induction, we finish at the last call of the algorithm associated to the RPE R_i with a truncation bound n = e_i n′. Moreover, at this point of the algorithm, we have d_Y(H) = 1 and π = (γ_i X^{e_i}, Γ_i(X) + α_i X^{r_i} Y). Therefore, the output of the algorithm will be R_i known with precision n′ + r_i/e_i. Now take n′ = n_0 − r_i/e_i, and we are done by Lemma 5.
Finally, if the coefficient of X^{r_i/e_i} in any Puiseux series associated to R_i is zero, we will have π = (γ_i X^{e_i}, Γ_i(X) + α_i X^{η_i} Y) with η_i > r_i. In that case, at the previous step we have already computed some zero coefficients, thus losing the same precision η_i − r_i. This does not change the result.
The quantity N_i is therefore an optimal bound to compute the singular part of the RPE R_i. We now bound it.
Proof. This is written in the proof of [24, Proposition 5, page 204].
Proof. Straightforward consequence of Lemmas 5 and 7.
We finally deduce global bounds:

Proof. Let us assume that the R_i are ordered by increasing values of v_i, and define λ such that

The first result is a property of the resultant (see e.g. [15, Exercise 6.12]). The second one is then Corollary 4.

Proof. First note that the recursive call of line 4 is never done twice in a row thanks to Lemma 3. Representing the set of recursive calls as a tree T, we can group Abhyankar's recursive call with the next Puiseux transform as a single node.

Complexity results and proof of Theorem 4.
Let us consider a function call Half-RNP(H, P, n_H, π) and denote d_P = deg_Z(P). We distinguish two kinds of lines:

From Lemma 3, when q = d_φ = 1, we must have a branch separation at the corresponding node of T. Therefore, this happens at most ρ − 1 times (more precisely, the number of nodes of T with couples (∆, φ) such that q = d_φ = 1 is bounded by ρ − 1). This means that the sum of the costs for these cases is bounded by O(ρ M(n d_Y)).
To conclude the proof, we still have to deal with all the cases where q > 1 or d_φ > 1. In such a case, Type 2 lines are the costly ones. Moreover, we can bound q by e_i and d_P d_φ by f_i for any RPE R_i issued from (∆, φ). But for each RPE R_i, such a situation cannot happen more than log(e_i f_i) ≤ log(d_Y) times (before and/or after separation of this branch from the other ones). From Definition 10, this means that we can bound the total cost for all these cases by

Proof of Theorem 5. As far as correctness is concerned, we only have to take care of truncations and the precision of the output; other points have been considered in previous papers of the first author [22, 24, 26] (see also [12, Section 4.1] concerning the construction of the output). From Proposition 6 and Lemma 6, we know that we will get at least half of the Puiseux series with precision 4υ/d_Y or greater and v_i < 2υ/d_Y by a function call Half-RNP(F, Z, 6υ/d_Y, (X, Y)). Finally, the complexity follows from Proposition 7.

A dichotomic Newton-Puiseux algorithm.
We now provide a dichotomic strategy to compute all Puiseux series. Assuming that F is monic, our strategy can be summarised as follows:

1. We run Half-RNP(F, Z, 6υ/d_Y, (X, Y)). If this provides us all the RPEs of F, we are done. If not, from Section 3, we get at least half of the Puiseux series of F, with precision 4υ/d_Y or more and with v_i < 2υ/d_Y.
2. From these Puiseux series, we construct the associated irreducible factors and their product G with precision 4υ/d_Y; this is detailed in Section 4.1. Note that d_Y(G) ≥ d_Y/2 thanks to Theorem 5.

3. We compute its cofactor H by Euclidean division modulo X^{4υ/d_Y+1}.
4. We compute a Bezout relation U G + V H = X^k mod X^{k+1} using [18, Algorithm 1], and prove that such a k exists with k ≤ 2υ/d_Y. This is Idea 4, detailed in Section 4.2.
5. From this relation and the factorisation F = G H mod X^{4υ/d_Y+1}, we adapt the Hensel lemma to get a factorisation F = G H mod X^{υ+1} in quasi-linear time. This is Idea 3, detailed in Section 4.3.
6. Finally, we apply our main algorithm recursively to H; as the degree in Y is at least divided by two at each step, this happens at most log(d_Y) times, for a total cost only multiplied by 2. This is Idea 5, detailed in Section 4.4.
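The accounting behind step 6 can be made concrete with a toy cost count (a sketch of ours, not the paper's algorithm: we only assume that the cost of one level of the recursion is proportional to the current degree, in the spirit of the O˜(d_Y υ) bound per level):

```python
def total_cost(d, c):
    """Total cost of the recursion T(d) = c*d + T(d // 2), stopped at degree 1.

    Since the degree is at least halved at each level, the per-level costs
    form a geometric series and the total is below twice the top-level cost.
    """
    return sum(c * (d >> k) for k in range(d.bit_length()))

# halving the degree at each of the log(d) levels only doubles the total cost:
assert total_cost(1024, 1) == 2047 < 2 * 1024
```

The same geometric-series argument is what lets the paper multiply the top-level cost by only 2 despite the log(d_Y) recursive calls.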
Note that F is not assumed to be monic in Theorem 6. We first need to use Hensel lifting to compute the factor F_∞ corresponding to the RPEs centered at (0, ∞) up to precision X^υ. Then we compute the RPEs of F_∞ as "inverses" of the RPEs of its reciprocal polynomial, which is monic by construction. Details are provided in Section 4.5.
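The reciprocal-polynomial trick can be illustrated by a small sketch (the sparse-dictionary representation and the helper names are ours, and the polynomial is a toy example, not one from the paper):

```python
from fractions import Fraction

def reciprocal_in_y(coeffs):
    """Y^d * F(X, 1/Y): reverse the list of Y-coefficients of F."""
    return list(reversed(coeffs))

def evaluate(coeffs, x, y):
    """Evaluate F at (x, y); each Y-coefficient is a sparse {x_exponent: value} dict."""
    return sum(sum(c * x**i for i, c in mono.items()) * y**j
               for j, mono in enumerate(coeffs))

# Toy polynomial: F = 1 + Y + X*Y^2, not monic in Y.
F = [{0: Fraction(1)}, {0: Fraction(1)}, {1: Fraction(1)}]
Frec = reciprocal_in_y(F)            # X + Y + Y^2, monic in Y
d = len(F) - 1

# A root y of F corresponds to the root 1/y of Frec: F(x, y) = y^d * Frec(x, 1/y).
x, y = Fraction(1, 3), Fraction(2, 5)
assert evaluate(F, x, y) == y**d * evaluate(Frec, x, 1 / y)
assert Frec[-1] == {0: Fraction(1)}  # monic: the trailing Y-coefficient of F is a unit
```

This is why the series solutions of F_∞ can be recovered as inverses of the series solutions of its (monic) reciprocal polynomial.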

Computing the norm of a RPE.
Lemma 8. Let R_1, ..., R_λ be a set of K-RPEs not centered at (0, ∞). For 1 ≤ i ≤ λ, let S_{ijk} denote the Puiseux series associated to R_i, and let ν denote the sum of their X-valuations. Assuming that we know the R_i with precision n ≥ ν, there exists an algorithm NormRPE that computes a Weierstrass polynomial G ∈ K[X, Y] with d_Y(G) = Σ_{i=1}^{λ} e_i f_i and d_X(G) = n + ν, such that the RPEs of G with precision n are precisely the R_i. It takes less than O(M(n d_Y(G)^2) log(n d_Y(G))) ⊂ O˜(n d_Y(G)^2) arithmetic operations over K.
Proof (sketch). As n ≥ ν, each product over the conjugate series of R_i can be computed in O(M(e_i^2 n f_i) log(e_i)) operations in K using a subproduct tree. Then, we compute the required norms by adapting [15, Corollary 11.21, page 332] to a polynomial with three variables. Summing these two operations over i, this fits into our bound. Finally, we compute the product of the G_i modulo X^{n+ν+1} in less than O(M(n d_Y(G)) log(d_Y(G))) operations, again using a subproduct tree.
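The subproduct-tree products used in this proof can be sketched as follows (a toy implementation of ours, with a fixed hypothetical X-precision bound N and dense coefficient lists; it pairs the factors in a balanced tree instead of multiplying them in one by one):

```python
from fractions import Fraction

N = 6  # truncate all series in X beyond degree N (hypothetical precision bound)

def mul_trunc(a, b):
    """Multiply two polynomials in Y whose coefficients are X-truncated series
    (lists of Fractions of length N+1), truncating in X on the fly."""
    out = [[Fraction(0)] * (N + 1) for _ in range(len(a) + len(b) - 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            for p, u in enumerate(ca):
                if u:
                    for q, v in enumerate(cb):
                        if v and p + q <= N:
                            out[i + j][p + q] += u * v
    return out

def tree_product(polys):
    """Balanced pairwise products, as in a subproduct tree."""
    while len(polys) > 1:
        polys = [mul_trunc(polys[i], polys[i + 1]) if i + 1 < len(polys)
                 else polys[i] for i in range(0, len(polys), 2)]
    return polys[0]

def linear(c):
    """Toy factor Y - c*X."""
    s = [Fraction(0)] * (N + 1); s[1] = Fraction(-c)
    one = [Fraction(0)] * (N + 1); one[0] = Fraction(1)
    return [s, one]

G = tree_product([linear(c) for c in (1, 2, 3, 4)])
# G = (Y - X)(Y - 2X)(Y - 3X)(Y - 4X): constant Y-coefficient is 24*X^4, G monic in Y.
assert G[0][4] == 24 and G[4][0] == 1
```

With fast (quasi-linear) multiplication at each node, the balanced pairing is what yields the log factors in the bound above; the naive convolution used here is only for readability.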

Lifting order.
Our algorithm requires lifting some analytic factors G, H of F which are not coprime modulo (X). To this aim, we generalise the classical Hensel lifting. The first step is to compute a generalized Bezout relation U G + V H = X^η for some η as small as possible. The following proposition gives an upper bound on the lifting order which is sufficient for our purpose.
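A minimal example of such a generalized Bezout relation, for two toy factors of our choosing that are not coprime modulo (X); here Res_Y(G, H) = X, so the smallest possible η is 1:

```python
# Bivariate polynomials over Q as sparse dictionaries {(i, j): coeff} for X^i * Y^j.
def add(f, g):
    h = dict(f)
    for m, c in g.items():
        h[m] = h.get(m, 0) + c
        if h[m] == 0:
            del h[m]
    return h

def mul(f, g):
    h = {}
    for (i, j), a in f.items():
        for (k, l), b in g.items():
            m = (i + k, j + l)
            h[m] = h.get(m, 0) + a * b
            if h[m] == 0:
                del h[m]
    return h

# G = Y and H = Y + X are NOT coprime modulo (X): both reduce to Y.
G = {(0, 1): 1}
H = {(0, 1): 1, (1, 0): 1}
# Still, U*G + V*H = X^eta holds with U = -1, V = 1 and eta = 1:
U = {(0, 0): -1}
V = {(0, 0): 1}
assert add(mul(U, G), mul(V, H)) == {(1, 0): 1}   # = X^1
```

No relation U G + V H = 1 can exist here, since the left-hand side vanishes at (X, Y) = (0, 0); the power X^η is exactly the obstruction that the generalized Hensel lemma below has to absorb.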

Corollary 5. Assume that F ∈ K[[X]][Y] is a non irreducible monic polynomial. Then there exists a factorisation F = G H with lifting order at most 2υ/d_Y,
where G is a product of analytic factors F_i associated to the R_i (see Section 2.1); we are then done thanks to Proposition 8.
Finally, note that the relation U G + V H = X^η mod X^{η+1} can be computed in O(M(d_Y η) log(η) + M(d_Y) η log(d_Y)) operations from [18, Corollary 1], which is less than O(M(υ) log(υ)) if we consider the factorisation of Corollary 5.

Adaptation of Hensel's lemma to our context.
We now generalise the classical Hensel lemma, starting from a relation F = G H known modulo X^{2η+1} together with a Bezout relation U G + V H = X^η mod X^{η+1}. We refer for instance to [15, Section 15.4, page 444] for the classical version, where G and H are coprime modulo (X).
Lemma 10. Given F, G, H as in the input of algorithm HenselStep with n_0 = 2η + 1, there exists an algorithm that computes polynomials as in the output of HenselStep for any precision n ∈ N, additionally satisfying a uniqueness property. We call Hensel such an algorithm; it runs in quasi-linear time.

Proof. Start from the relation provided by [18, Algorithm 1]. Then, double the value n_0 − 2η at each call of HenselStep, until we get n_0 − 2η ≥ n + η. Correctness and complexity are then a straightforward consequence of Lemma 9 (using [18, Corollary 1] for the computation of U and V). Finally, uniqueness of the result is an adaptation of [15, Theorem 15.14, page 448] (this works because we take a precision such that n_0 − 2η ≥ n + η).
Remark 6. Note that if G(0, Y ) and H(0, Y ) are coprime, then η = 0 and this result is the classical Hensel lemma.
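In the classical coprime case η = 0, the lifting loop can be sketched in full on a toy example. The code below follows the Hensel step of [15, Section 15.4], over polynomials in Y whose coefficients are power series in X truncated at a fixed precision; the representation and helper names are ours:

```python
from fractions import Fraction

PREC = 8  # work in K[[X]]/(X^PREC)

def s(*coeffs):
    out = [Fraction(c) for c in coeffs] + [Fraction(0)] * PREC
    return out[:PREC]

def s_add(a, b): return [x + y for x, y in zip(a, b)]
def s_mul(a, b):
    c = [Fraction(0)] * PREC
    for i, x in enumerate(a):
        if x:
            for j, y in enumerate(b):
                if y and i + j < PREC:
                    c[i + j] += x * y
    return c

def trim(f):
    while len(f) > 1 and not any(f[-1]):
        f = f[:-1]
    return f

def p_add(f, g):
    n = max(len(f), len(g))
    f = f + [s(0)] * (n - len(f)); g = g + [s(0)] * (n - len(g))
    return trim([s_add(a, b) for a, b in zip(f, g)])

def p_scale(f, k): return [[k * c for c in a] for a in f]
def p_mul(f, g):
    out = [s(0) for _ in range(len(f) + len(g) - 1)]
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = s_add(out[i + j], s_mul(a, b))
    return trim(out)

def p_divmod(f, h):
    """Divide f by h in Y (h monic in Y); returns (quotient, remainder)."""
    q, r = [], [a[:] for a in f]
    while len(r) >= len(h):
        c, d = r[-1], len(r) - len(h)
        for i in range(len(h) - 1):
            r[d + i] = s_add(r[d + i], s_mul([-x for x in c], h[i]))
        r.pop()  # the top coefficient cancels exactly since h is monic
        q.insert(0, c)
    return (q or [s(0)]), trim(r or [s(0)])

def hensel_step(f, g, h, u, v):
    """One quadratic step: doubles the X-precision of f = g*h and of the
    Bezout relation u*g + v*h = 1 (classical coprime case, eta = 0)."""
    e = p_add(f, p_scale(p_mul(g, h), -1))
    q, r = p_divmod(p_mul(u, e), h)
    g1 = p_add(g, p_add(p_mul(v, e), p_mul(q, g)))
    h1 = p_add(h, r)
    b = p_add(p_add(p_mul(u, g1), p_mul(v, h1)), [s(-1)])
    c, d = p_divmod(p_mul(u, b), h1)
    u1 = p_add(u, p_scale(d, -1))
    v1 = p_add(v, p_scale(p_add(p_mul(v, b), p_mul(c, g1)), -1))
    return g1, h1, u1, v1

# F = Y^2 - (1 + X) = (Y - 1)(Y + 1) mod X, with -1/2*(Y-1) + 1/2*(Y+1) = 1.
F = [s(-1, -1), s(0), s(1)]
g, h = [s(-1), s(1)], [s(1), s(1)]
u, v = [s(Fraction(-1, 2))], [s(Fraction(1, 2))]
for _ in range(3):  # precision doubles: X^2 -> X^4 -> X^8
    g, h, u, v = hensel_step(F, g, h, u, v)

assert p_add(F, p_scale(p_mul(g, h), -1)) == [s(0)]     # F = g*h mod X^8
# g = Y - sqrt(1+X): sqrt(1+X) = 1 + X/2 - X^2/8 + ...
assert g[0][:3] == [Fraction(-1), Fraction(-1, 2), Fraction(1, 8)]
```

The generalized case of Lemma 10 differs in that the error only has to be divisible by X^η at each step, which is exactly what the relation U G + V H = X^η provides.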

The divide and conquer algorithm for monic polynomials.
We now provide our divide and conquer algorithm. Algorithm Quo outputs the quotient of the euclidean division in K[[X]][Y ] modulo a power of X.
We can now give our main algorithm RNP which will compute the singular part of all RPEs of F above X = 0, including those centered at (0, ∞).

Algorithm: RNP(F, n)
Input: F ∈ K[X, Y], separable in Y, and n ∈ N.
Output: If n is big enough, the singular part (at least) of all the RPEs of F above X = 0.

The proof of Theorem 6 follows immediately from the following proposition:

Proposition 11. Assume that p = 0 or p > d_Y. Not taking into account the cost of univariate factorisations, a function call RNP(F, υ) returns the expected output with at most O(M(d_Y υ) log(d_Y υ)) arithmetic operations.
There is one delicate point in the proof of Proposition 11: we need to invert the RPEs of F′_∞, and it is not clear that the truncation bound n = υ is sufficient for recovering in this way the singular part of the RPEs of F_∞ (see also Remark 7 below). We will need the following two results.

Lemma 11. Given two distinct Puiseux series S and S′, we have υ_X(1/S − 1/S′) = υ_X(S − S′) − υ_X(S) − υ_X(S′).

Proof. If υ_X(S) ≠ υ_X(S′), one can assume, up to renaming the series, that υ_X(S) < υ_X(S′); then υ_X(S − S′) = υ_X(S). In all cases, the formula follows by writing 1/S − 1/S′ = (S′ − S)/(S S′).

Let d = d_Y(F_∞) and let v stand for the X-valuation of the leading coefficient of F_∞ with respect to Y. Let S_1, ..., S_d be the Puiseux series of F_∞, and let S_i be one of the Puiseux series associated to the RPE R_i of F_∞ we are considering. Note that s_i/e_i = υ_X(S_i). By definition of the regularity index, we have υ_X(S_i − S_{i_0}) = r_i/e_i for some i_0 (several values of i_0 are possible). We distinguish three cases:
1. υ_X(S_i) < υ_X(S_{i_0}): see [24, Case 3 in the proof of Proposition 5, pages 204 and 205] for details.
2. υ_X(S_i) > υ_X(S_{i_0}): then r_i = s_i, and υ_X(S_i) > υ_X(S_j) for any j ≠ i by definition of i_0; otherwise, one could have defined i_0 = j and dealt with it as in Case 1.
3. υ_X(S_i) = υ_X(S_{i_0}).
We can now prove Proposition 12. First, when considering Case 1, knowing each 1/(S_i − S_{i_0}) with precision at least (r_i − 2 s_i)/e_i, we are done. Then, concerning Case 2, from Proposition 9 and Lemma 6, we know the RPE R_i with precision at least v_i + v/d_Y, that is at least v/d_Y, thus at least −s_i/e_i from the discussion above. As r_i = s_i, this is at least (r_i − 2 s_i)/e_i. Finally, Case 3 requires more attention. Let us first assume that υ_X(S_i) > υ_X(S_j) for some j ≠ i; then υ_X(S_i − S_j) − υ_X(S_i) − υ_X(S_j) = −υ_X(S_i) = −s_i/e_i, and we are done since r_i = s_i. If not, then we have υ_X(S_i) < υ_X(S_j) for all j. This means that e_i = f_i = 1, and that N(H), for H a factor of F′_∞ in K[[X]][Y], has an edge [(0, α − s_i), (1, α)]. We just need to prove that the truncation bound used when dealing with this Puiseux series is at least α − s_i. As long as this is not the case, this edge is not considered in the definition of N_n(H); also, at each recursive call of MonicRNP (Line 8), the value of α decreases and the truncation bound increases (since d_Y is at least divided by 2). If the truncation bound is not high enough before, we end up with a polynomial of degree 1, corresponding to α = 0, and a truncation bound equal to n = v, that is more than −s_i. This concludes the proof.
Proof of Proposition 11. Let us show that the truncation bound for F′_∞ is sufficient for recovering the singular part of the Puiseux series of F_∞. First note that the inversion of the second element is done as follows: denoting s_i = −υ_T(Γ′_i(T)) < 0, we compute the inverse of T^{s_i} Γ′_i(T) (which has a non zero constant coefficient) via quadratic Newton iteration [15, Algorithm 9.3, page 259] in less than O(M(τ_i + s_i)) arithmetic operations [15, Theorem 9.4, page 260]. In order to get the singular part of the corresponding RPE R_i of F_∞, we need to know R_i with precision r_i/e_i, i.e. to know at least r_i − s_i + 1 terms. That means that we need to know R′_i with precision r_i − 2 s_i or more. This holds thanks to Proposition 12. Correctness and complexity of Algorithm RNP then follow straightforwardly from Proposition 9 and Proposition 10.
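The quadratic Newton iteration invoked here can be sketched as follows (a standalone toy implementation in the style of [15, Algorithm 9.3], with exact rational coefficients; the representation is ours):

```python
from fractions import Fraction

def series_inverse(a, n):
    """Inverse of a power series a (coefficient list, a[0] != 0) modulo T^n,
    by quadratic Newton iteration b <- b*(2 - a*b): the precision doubles
    at every step, for a quasi-linear total cost with fast multiplication."""
    b = [Fraction(1, 1) / a[0]]
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        c = [sum(a[i] * b[k - i] for i in range(min(k + 1, len(a)))
                 if k - i < len(b)) for k in range(prec)]          # a*b mod T^prec
        tmc = [2 - c[0]] + [-x for x in c[1:]]                     # 2 - a*b
        b = [sum(b[i] * tmc[k - i] for i in range(min(k + 1, len(b))))
             for k in range(prec)]                                 # b*(2 - a*b)
    return b

# Example: 1/(1 - T) = 1 + T + T^2 + ... mod T^8
assert series_inverse([Fraction(1), Fraction(-1)], 8) == [Fraction(1)] * 8
```

In the proof above, this routine would be applied to the unit T^{s_i} Γ′_i(T), whose constant coefficient is non zero.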
Remark 7. Note that precision υ_X(Disc_Y F) is not always enough to get the singular part of the Puiseux series centered at (0, ∞), as the following example shows. Consider F(X, Y) = 1 + X Y^{d−1} + X^{d+1} Y^d, whose reciprocal polynomial is F′ = Y^d + X Y + X^{d+1}. Here we have υ_X(Disc_Y F) = d, and ⌈F′⌉_d = Y^d + X Y. This is enough to compute the singular part of the Puiseux series of F′, but not to get a single term of the Puiseux series −X^d. Nevertheless, the precision υ = v + υ_X(Disc_Y F) = 2d + 1 is sufficient.
Proof of Theorem 6. The algorithm mentioned in this theorem is RNP described above, run with parameters (F, υ). We can compute the parameter υ with at most O(M(d_Y υ) log(d_Y υ)) operations from [18, Corollary 1].
Remark 8. Another way to approach the non monic case is the one used in [24]. The idea is to use algorithms MonicRNP and Half-RNP3 even when F is not monic. This would change nothing as far as these algorithms are concerned, but the proof concerning truncation bounds must be adapted: defining s_i = min(0, υ_X(S_i)), one uses [24, Figure 3, page 211] to deal with the possible positive slopes of the initial call. Then the remaining results of Section 3.3 are preserved, replacing v_i by v′_i and N_i by N′_i, using some intermediate results of [24] (in particular, to prove r_i/e_i ≤ v′_i, we need some formulas from the proof of [24, Proposition 5, page 204]). We chose to consider the monic case separately, since this makes one of the main technical results of this paper (namely, tight truncation bounds) less difficult to apprehend, and thus the paper more progressive to read.

Avoiding univariate factorisation
As pointed out in the conclusion of [26], even over finite fields (when considering all critical points), the cost of univariate factorisation cannot be bounded so that it fits into our objectives. Up to now, we have proved Theorem 1 up to the cost of univariate factorisations. To conclude the proof, one would additionally need to prove that the cost of all univariate factorisations computed when calling Algorithm Half-RNP is in O˜(υ d_Y). As υ can be small, this means that we would need a univariate factorisation algorithm for a polynomial in K[Y] of degree at most d with complexity O˜(d). Unfortunately, no such algorithm exists. We solve this point via Idea 6: we follow the approach of Della Dora, Dicrescenzo and Duval, who introduced the "dynamic evaluation" technique [10,9] (also named the "D5 principle") as a means to compute with algebraic numbers while avoiding factorisation (replacing it by squarefree factorisation). This technique leads us to consider polynomials with coefficients belonging to a direct product of field extensions of K; more precisely, to a zero-dimensional non integral K-algebra K_I = K[Z_1, ..., Z_s]/I, where I is defined by a triangular set in K[Z] := K[Z_1, ..., Z_s]. In this context, zero-divisors might appear, causing triangular decompositions and splittings (see Section 5.1 for details). In our context, three main subroutines of the Half-RNP algorithm can lead to a decomposition of the coefficient ring. There are two other points that we need to take care of for our main algorithm: (iv) subroutine Hensel, via the initial use of [18, Algorithm 1], and (v) the initial factorisation of algorithm RNP.
To simplify the comprehension of this section, we will not mention logarithmic factors in our complexity results, using only the O˜ notation. This section is divided as follows:
1. We start by recalling a few definitions on triangular sets, and in particular our notion of D5 rational Puiseux expansions, in Section 5.1.

2. The key point of this section is to deal with these splittings with almost linear algorithms; to do so, we mainly rely on [9]. We briefly review their results in Section 5.2; additionally, we introduce a few algorithms needed in our context. In particular, this section details points (iii) and (iv) above.
3. Points (i) and (ii) above are grouped in a unique procedure Polygon-Data, detailed in Section 5.3.

4. We provide the D5 versions of algorithms Half-RNP, MonicRNP and RNP in Section 5.4.
We abusively call an ideal I ⊂ K[Z] a triangular set if it can be generated by a triangular set (P 1 , . . . , P s ). We denote by K I the quotient ring K[Z]/(I).
Note that this defines a zero-dimensional lexicographic Gröbner basis for the order Z_1 < ... < Z_s with a triangular structure. Such a product of fields may contain zero-divisors:

Definition 13. We say that a non-zero element α ∈ K_I is regular if it is not a zero-divisor. We say that a polynomial or a parametrisation defined over K_I is regular if all its non-zero coefficients are regular.
Given a zero-divisor α of K_I, one can split I as I = I_0 ∩ I_1 with I_0 + I_1 = (1), α mod I_0 = 0 and α mod I_1 invertible. Moreover, both ideals I_0 and I_1 can be represented by triangular sets of K[Z].
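This gcd-based splitting is easy to sketch in the univariate case s = 1, over Q (the representation and helper names are ours; over a general triangular set one would recurse through the levels):

```python
from fractions import Fraction

def strip(p):
    p = list(p)
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def p_divmod(a, b):
    """Euclidean division over Q; polynomials as coefficient lists, low degree first."""
    a = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        if a[-1] == 0:
            a.pop(); continue
        d = len(a) - len(b)
        c = a[-1] / Fraction(b[-1])
        q[d] = c
        for i, bc in enumerate(b):
            a[d + i] -= c * bc
        a.pop()
    return q, (strip(a) or [Fraction(0)])

def p_gcd(a, b):
    a, b = strip(a), strip(b)
    while any(b):
        a, b = b, strip(p_divmod(a, b)[1])
    return [Fraction(c) / Fraction(a[-1]) for c in a]  # monic normalization

def try_invert(alpha, P):
    """Dynamic evaluation step in K[Z]/(P): either certify that alpha is
    invertible, or split (P) = (P0) ∩ (P1) with alpha = 0 mod P0 and
    alpha invertible mod P1."""
    g = p_gcd(alpha, P)
    if len(g) == 1:
        return "invertible", None
    return "split", (g, p_divmod(P, g)[0])

# K_I = Q[Z]/(Z^2 - 1) is not a field: alpha = Z - 1 is a zero divisor.
status, (P0, P1) = try_invert([-1, 1], [-1, 0, 1])
assert status == "split"
assert P0 == [Fraction(-1), Fraction(1)] and P1 == [Fraction(1), Fraction(1)]
```

Every inversion attempted by the algorithms of this section reduces to such a gcd computation, which is why splittings can be handled without ever factoring the moduli.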

Definition 14. A triangular decomposition of an ideal I is a decomposition I = I_1 ∩ ... ∩ I_k such that I_i + I_j = (1) for any 1 ≤ i ≠ j ≤ k, and such that every I_i can be represented by a triangular set.
Thanks to the Chinese remainder theorem, the K-algebra K_I is isomorphic to K_{I_1} ⊕ ... ⊕ K_{I_k} for any triangular decomposition of I. We extend this isomorphism coefficient-wise to any polynomial or series defined over K_I.
Definition 15. Considering any polynomial or series defined over K_I, we define its splitting according to a triangular decomposition I = I_1 ∩ ... ∩ I_k as the coefficient-wise application of the above isomorphism.
A key point as far as complexity is concerned is the concept of non critical triangular decompositions, which we recall from [9].

Definition 17. Let (P_1, ..., P_s) and (P′_1, ..., P′_s) be two distinct triangular sets. We define the level l of these two triangular sets as the least integer such that P_l ≠ P′_l. We say that these triangular sets are critical if P_l and P′_l are not coprime in K[Z_1, ..., Z_{l−1}]/(P_1, ..., P_{l−1}). A triangular decomposition I = I_1 ∩ ... ∩ I_k is said non critical if it has no critical pairs; otherwise, it is said critical.
D5 rational Puiseux expansions. We conclude this section by defining systems of D5-RPEs over fields and products of fields. Roughly speaking, a system of D5-RPEs over a field K is a system of RPEs over K grouped together with respect to some square-free factorisation of the characteristic polynomials, hence without being necessarily conjugated over K. We have to take care of two main points. Since we want to get information before field splittings, the parametrisations computed by our algorithm will be regular (i.e. not containing any zero divisors). And of course, we want to recover a usual system of RPEs after field splittings. In particular, the computed parametrisations will fit the following definition:

Definition 18. Let F ∈ K[X, Y] be a separable polynomial over a field K. A system of D5 rational Puiseux expansions of F over K above 0 is a set {R_i}_{1≤i≤ρ} such that:
• R_i ∈ K_{P_i}((T))^2 for some square-free polynomial P_i,
• denoting P_i = Π_j P_{ij} the univariate factorisation of P_i over K and {R_{ij}}_j the splitting of R_i according to the decomposition K_{P_i} = ⊕_j K_{P_{ij}}, the set {R_{ij}}_{i,j} is a system of K-RPEs of F above 0 (as in Definition 2).
In order to deal with all critical points in Section 6, we will compute the RPEs of F not only above 0, but above a root of a squarefree factor Q of the resultant R_F. This leads us to the following definition.

Definition 19. Let Q ∈ K[X] be squarefree. We say that F admits a system of D5-RPEs over K_Q above 0 if there exist parametrisations as in Definition 18 that are regular over K_Q. Now, a system of D5 rational Puiseux expansions over K of F above the roots of Q is a set {Q_i, R_i}_i such that:
• R_i is a system of D5-RPEs over K_{Q_i} of F(X + Z, Y) mod Q_i(Z) above 0 (in the sense of the definition above).

Complexity of dynamic evaluation.
Results of [9]. We start by recalling the main results of [9], stating them only with the O˜ notation (i.e. forgetting logarithmic factors). In particular, we will take M(d) ∈ O˜(d) in what follows. In this paper, we also assume the number of variables defining triangular sets to be constant (we usually have s = 2 in our context).
Definition 20. An arithmetic time is a function I → A_s(I), with positive real values, defined over all triangular sets in K[Z_1, ..., Z_s] and satisfying conditions that we do not reproduce here (see [9]).

Proposition 14. Let K be a field of characteristic p satisfying p = 0 or p > d. There exists an algorithm SQR-Free that, given a bivariate triangular set I = (Q, P) and φ ∈ K_I[Y] monic of degree d, computes a set {(I_k, (φ_{k,l}, M_{k,l})_l)}_k such that I = ∩_k I_k is a non critical triangular decomposition of I and, denoting φ_k := φ mod I_k, φ_k = Π_l φ_{k,l}^{M_{k,l}} is the square-free factorisation of φ_k ∈ K_{I_k}[Y]. It takes less than O˜(d d_I) operations over K.
Proof. We just need to compute successive gcds and euclidean divisions, using Yun's algorithm [15, Algorithm 14.21, page 395] (this result is stated in characteristic 0, but works in positive characteristic when p > d). Each gcd computation is covered by Proposition 13, and we just need to add splitting steps (if needed) of the ideal I between two calls. The complexity follows by using Proposition 13 in the proof of [15, Theorem 14.23, page 396], since there are less than d calls to the algorithm removeCriticalPairs.
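Yun's algorithm itself can be sketched over Q as follows (a standalone toy implementation of ours; over K_I one would additionally split the ideal whenever a gcd step meets a zero divisor, as discussed above):

```python
from fractions import Fraction

def strip(p):
    p = list(p)
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def div(a, b):
    """Euclidean division over Q (coefficient lists, low degree first)."""
    a = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        if a[-1] == 0:
            a.pop(); continue
        d = len(a) - len(b)
        c = a[-1] / Fraction(b[-1])
        q[d] = c
        for i, bc in enumerate(b):
            a[d + i] -= c * bc
        a.pop()
    return q, (strip(a) or [Fraction(0)])

def gcd(a, b):
    a, b = strip(a), strip(b)
    while any(b):
        a, b = b, strip(div(a, b)[1])
    return [Fraction(c) / Fraction(a[-1]) for c in a]

def deriv(a):
    return strip([i * Fraction(c) for i, c in enumerate(a)][1:] or [Fraction(0)])

def sub(a, b):
    n = max(len(a), len(b))
    a = list(a) + [Fraction(0)] * (n - len(a))
    b = list(b) + [Fraction(0)] * (n - len(b))
    return strip([x - y for x, y in zip(a, b)])

def yun(f):
    """Yun's squarefree factorisation: returns [(a_i, i)] with f = lc * prod a_i^i,
    the a_i monic, squarefree and pairwise coprime (char 0, or p > deg f)."""
    g = gcd(f, deriv(f))
    w, y = div(f, g)[0], div(deriv(f), g)[0]
    out, i = [], 1
    while len(w) > 1:
        z = sub(y, deriv(w))
        a = gcd(w, z)
        out.append((a, i))
        w, y = div(w, a)[0], div(z, a)[0]
        i += 1
    return [(a, m) for a, m in out if len(a) > 1]

# f = (Y - 1)^2 (Y + 2) = Y^3 - 3Y + 2:
assert yun([2, -3, 0, 1]) == [([Fraction(2), Fraction(1)], 1),
                              ([Fraction(-1), Fraction(1)], 2)]
```

Only gcds and exact divisions appear, which is precisely why the algorithm transfers to the D5 setting: each of these operations either succeeds or exhibits a splitting of the ideal.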
Computation of a primitive element. We conclude this section with a brief description of the algorithm PrimElt. This is the same algorithm as the one described in Proposition 5, with additional attention paid to splittings. Indeed, in our context, we will need to compute primitive elements above K_Q for some squarefree polynomial Q.

Proposition 15. Assuming there are at least d^2 elements in K, there exists a Las Vegas algorithm that, given φ squarefree in K_I[Z_3] for some triangular set I = (Q, P), computes a set (Q_k, P′_k, ψ_k) of squarefree polynomials over K_{Q_k} with d = deg(P′_k) = d_P deg_{Z_3}(φ), Q = Π_k Q_k, and such that ψ_k is an isomorphism between K_{I_k} and K_{I′_k} (denoting I_k = (Q_k, P mod Q_k, φ mod Q_k) and I′_k = (Q_k, P′_k)), in less than O˜(d^{(ω+1)/2} d_Q) operations over K. We call PrimElt such an algorithm.
Proof. In order to compute a primitive element and the associated isomorphism, we follow the Las Vegas algorithm given in [27, Section 2.2] (this is where the assumption on the number of elements of K is used). This involves trace computations on the monomial basis, which reduce to polynomial multiplication thanks to [20, Proposition 8]; it takes O(M(d d_Q)) operations in K. Then, picking a random element A, power projection is used to compute 2d traces of powers of A; using methods based on [29], this involves only polynomial, transposed polynomial and matrix multiplications, for a total of O(d^{(ω+1)/2} M(d_Q)) operations in K. Finally, the primitive element can be deduced via Newton's method in O(M(d d_Q)) operations; testing its squarefreeness involves gcds over K_Q, thus possible decompositions of Q. It takes less than O˜(d d_Q) operations over K from Proposition 13. Finally, one just needs to use algorithm removeCriticalPairs when needed to deal with the triangular decompositions that appear. This fits in our bound from Theorem 7.
For each ψ_k computation, we need to compute d additional traces; this is once again power projection. Finally, one has to solve a linear system defined by a Hankel matrix (see [29, proof of Theorem 5]). This can be done using the algorithm described in [4], which reduces the problem to extended gcd computation. This step thus involves potential decompositions of Q, and can once again be done in less than O˜(d d_Q) operations over K.
Finally, rewriting the coefficients of a polynomial over the new primitive element can cost D^{ω+1} when the factor Q of the resultant has high degree (see Section 6). As ω > 2, we could not achieve our aimed bounds this way. Moreover, Q and P do not provide the same geometrical information; it is thus interesting to distinguish them.
Extending [18, Algorithm 1] to the D5 context.

Proposition 16. Given G, H ∈ K_I[X, Y] with I = (P, Q) and whose degrees in Y are bounded by d, one can compute a set (I_k, G_k, H_k, U_k, V_k, η_k)_k such that:
• I = ∩_k I_k is a non critical decomposition of I,
• G_k = G mod I_k and H_k = H mod I_k,
• U_k G_k + V_k H_k = X^{η_k} mod X^{η_k+1}.
This can be done in less than O˜(d η) arithmetic operations, with η = max_k η_k.
Proof (sketch). The algorithm given in [18] is similar to the fast gcd algorithm: the two key ingredients are polynomial multiplication and euclidean division. As a consequence, one can adapt this algorithm as done in [9] for the fast gcd algorithm. Writing this properly would require a lot of space; as this is not a key point of this paper, we skip it.

Computing polygon data in the D5 context.
We now present the main algorithm where we have to deal with dynamic evaluation: the computation of the Newton polygon and the associated data, namely the squarefree decomposition of the characteristic polynomials. These computations are grouped in algorithm Polygon-Data below. Given H ∈ K_I[X, Y] known with precision n, it returns a list {(I_i, H_i, ∆_ij, φ_ijk, M_ijk)} where:
• I = ∩_i I_i is a non critical triangular decomposition and the non-zero coefficients of H_i := H mod I_i are invertible,
• {∆_ij}_j is the set of edges of H_i which can be computed from ⌈H_i⌉_n (see Definition 8),
• Π_k φ_ijk^{M_ijk} is the squarefree factorisation of φ_{∆_ij}, the characteristic polynomial of H_i associated to ∆_ij.
Algorithm: Polygon-Data(H, I, n)
Input: I a triangular set and H ∈ K_I[X, Y] a bivariate polynomial known modulo X^{n+1}. We assume n > 0 and d_Y(H) > 0.

Computing half Puiseux series using dynamic evaluation.
We finally provide the D5 variant Half-RNP3 of algorithm Half-RNP: here we replace univariate factorisation by square-free factorisation and use dynamic evaluation to deal with potential zero divisors.
In order to also compute the RPEs of F above the roots of any factor Q of the resultant, we are led to consider I = (Q, P) instead of P as an input of Half-RNP3. More precisely, the input is a tuple (F, I, n, π) such that:
• I = (Q, P) is a bivariate triangular set over K (P = Z_2 for the initial call; Q = Z_1 is admitted),
• n ∈ N is the truncation order we will use for the powers of X during the algorithm,
• F ∈ K_I[X, Y] is separable and monic in Y, with d_Y > 0, and known with precision n.

Proposition 20. Assuming that Q is a square-free factor of R_F with multiplicity υ_Q ≤ n, a function call RNP3(F, Q, n) returns the correct answer in less than O˜(d_Q d_Y n) operations over K.
Proof. Correctness follows from Proposition 11 and Proposition 19 (the trailing coefficient of the resultant of F_{i,0} and F_{i,∞} is not a zero divisor by construction). The complexity follows from Proposition 3, Theorem 7 and Proposition 19, combined with the relations d_Y(F_{i,0}) + d_Y(F_{i,∞}) = d_Y and Σ_i deg(Q_i) = d_Q.
Proof of Theorem 1. The algorithm mentioned in Theorem 1 is Algorithm RNP3, run with parameters Q = Z_1 and n = υ. The computation of υ can be done via [18, Algorithm 1] within the aimed bound. Note that, as we consider the special case Q = Z_1, F has coefficients over a field and this operation does not involve any dynamic evaluation. The function call RNP3(F, Z_1, υ) fits in the aimed complexity thanks to Proposition 20.
6 Desingularisation and genus of plane curves: proof of Theorem 2 and corollaries.
It is now straightforward to compute a system of singular parts of D5 rational Puiseux expansions above all critical points. We include the RPEs of F above X = ∞, defined as the RPEs above X = 0 of the reciprocal polynomial F′ := X^{d_X} F(X^{−1}, Y). We denote υ_∞ := υ_X(R_{F′}), which equals d_X(2 d_Y − 1) − deg(R_F).
Definition 21. Let F ∈ K[X, Y] be a separable polynomial over a field K. A D5-desingularisation of F over K is a collection {(R_1, Q_1), ..., (R_s, Q_s), R_∞} such that:
• the Q_k ∈ K[X] are pairwise coprime square-free polynomials such that R_F = Π_{k=1}^{s} Q_k^{υ_k}, υ_k ∈ N*;
• R_k is a system of singular parts (at least) of D5-RPEs of F above the roots of Q_k;
• R_∞ is a system of singular parts (at least) of D5-RPEs of F above X = ∞.
Note that we can deduce from a D5-desingularisation of F the singular part of the RPEs of F above any root of the resultant R_F. Also, note that we allow the equality υ_k = υ_l for k ≠ l, so that the factorisation of the resultant R_F does not necessarily coincide with its square-free factorisation. We obtain the following algorithm.

Proof. Correctness is straightforward from Proposition 20. The computation of the resultant R_F fits in the aimed bound [15, Corollary 11.21]. The complexity is then straightforward from Proposition 20 combined with the classical formula Σ_k deg(Q_k) υ_k + υ_∞ = deg(R_F) + υ_∞ = d_X(2 d_Y − 1).
Proof of Theorem 2. Follows immediately from Proposition 21.
4. Let F_i := NormRPE(R_i, 4υ/d_Y). We have by construction F = F_1 ... F_λ H mod X^{4υ/d_Y+1} for some polynomial H. The polynomials F_i can be computed within the aimed bound, and so can H by euclidean division.
5. By Proposition 8, we have κ(F_1, ..., F_λ, H) < max v_i ≤ 2υ/d_Y, and we deduce from Proposition 22 that we can lift the factorisation of step 4 and compute, with at most O˜(d_Y υ) operations, a new factorisation in which all factors coincide with the corresponding analytic factors F*_1, ..., F*_λ, H* of F up to precision > υ − 2υ/d_Y. Moreover, the F*_i are irreducible.
6. The previous equality forces H̃ to be separable. Moreover, we have d_Y(H̃) < d_Y(F)/2 by Corollary 5. By applying recursively steps 2 to 5 to H̃ (keeping υ = υ_F but replacing d_Y by d_Y(H̃)), we obtain within O˜(d_Y υ) operations a factorisation F = F̃_1 ... F̃_ρ mod X^{υ+1}, where the polynomials F̃_i are in bijection with the irreducible analytic factors F*_i of F.
7. Since κ(F̃_I, F̃_J) ≤ υ_X(Res_Y(F̃_I, F̃_J)) ≤ υ/2, Proposition 22 implies that F*_i = F̃_i mod X^{υ/2} for all i.
8. If N ≤ υ/2, we are done. Otherwise, we apply Proposition 22 again to lift the previous factorisation up to precision X^{N+υ/2}, and compute in this way the analytic factors of F truncated at precision N within O˜(d_Y N) operations.
The non monic case. If F is not monic (but separable), we now adapt algorithm RNP3. We compute the triple (u, F_0, F_∞) := Monic(F, υ) within the aimed bound. By construction, F_0 and the reciprocal polynomial F′_∞ of F_∞ are both monic. Moreover, they are separable (because known with precision ≥ υ), so we can compute their irreducible factors with precision υ within the aimed complexity bound by applying the monic case strategy. By considering the reciprocal factors of F′_∞, we obtain in this way a factorisation F = u F̃_1 ... F̃_ρ mod X^{υ+1}, where the F̃_i are in one-to-one correspondence with the irreducible factors of F. If necessary, we further lift this factorisation to the desired precision as in step 8 above, thanks to Proposition 22. Theorem 3 is proved.