The Lie symmetry group of the general Liénard-type equation

We consider the general Liénard-type equation $\ddot{u} = \sum_{k=0}^n f_k \dot{u}^k$ for $n\geq 4$. This equation naturally admits the Lie symmetry $\frac{\partial}{\partial t}$. We completely characterize when this equation admits another Lie symmetry, and give an easily verifiable condition on the functions $f_0, \dots , f_n$ for this. Moreover, we give an equivalent characterization of this condition. Similar results have been obtained previously for the cases $n=1$ and $n=2$; thus this paper handles all remaining cases except $n=3$.


Introduction
In this paper we consider the Liénard-type second-order ordinary differential equation
$$\ddot{u} = \sum_{k=0}^{n} f_k(u)\,\dot{u}^k, \qquad (1.1)$$
where the dot denotes differentiation with respect to the independent variable $t$ representing the time. Eq. (1.1) is a special case of the Levinson–Smith-type equation $\ddot{u} = g_1(u,\dot{u})\dot{u} + g_0(u)$ [1, G.3, p. 198-199], for which existence and uniqueness of a limit cycle have been established under certain conditions [2,3]. Eq. (1.1) is a common generalization of the Rayleigh-type equation $\ddot{u} + F(\dot{u}) + u = 0$ when $F$ is a polynomial, the classical Liénard-type equation $\ddot{u} = f_1(u)\dot{u} + f_0(u)$, and the quadratic Liénard-type equation $\ddot{u} = f_2(u)\dot{u}^2 + f_1(u)\dot{u} + f_0(u)$. These equations come up quite often in physics and biology. Rayleigh-type systems play an important role in the theory of sound [4] and in the theory of non-linear oscillations [5, Chapter 2.2.4]. Classical Liénard-type equations arise in the model of the van der Pol oscillator applied in the physical and biological sciences [6], and the electrical activity of the heart [7] and nerve impulses [8,9], [10, Chapter 7] are modelled by Liénard-type equations as well. In [11] the population of Easter Island is modelled, and the governing system of differential equations is reduced to a second-order quadratic Liénard-type equation. One can even find applications in economics [12–15].
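As a concrete instance of the classical Liénard form, the van der Pol oscillator $\ddot{u} - \mu(1-u^2)\dot{u} + u = 0$ mentioned above fits the template with $f_1(u) = \mu(1-u^2)$ and $f_0(u) = -u$. The following is a minimal symbolic sketch of this identification (using sympy; the variable names are ours):

```python
import sympy as sp

t = sp.symbols('t')
mu = sp.symbols('mu', positive=True)
u = sp.Function('u')(t)

# van der Pol oscillator:  u'' - mu*(1 - u^2)*u' + u = 0
vdp = sp.Eq(u.diff(t, 2) - mu*(1 - u**2)*u.diff(t) + u, 0)

# classical Lienard template:  u'' = f1(u)*u' + f0(u)
f1 = mu*(1 - u**2)   # damping coefficient
f0 = -u              # restoring force
lienard = sp.Eq(u.diff(t, 2), f1*u.diff(t) + f0)

# the two equations agree identically
residual = sp.simplify((vdp.lhs - vdp.rhs) - (lienard.lhs - lienard.rhs))
print(residual)  # 0
```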
Symmetry analysis is a very useful tool for understanding and solving differential equations. Numerous examples come from physics (see e.g. [16,17] for comprehensive studies on the topic), and an increasing number from biology (see e.g. [11,18–20]). Symmetries of a differential equation can be used to derive an appropriate change of coordinates, which then helps to eliminate independent variables or to decrease the order of the system. In many of the cases mentioned above (e.g. the FitzHugh–Nagumo model [8,9] or the model for the population of Easter Island [11]) the model is based on a first-order system of two equations equivalent to a second-order Liénard equation. In such a case, it can be beneficial to consider the equivalent second-order equation, which admits only a finite-dimensional Lie symmetry group instead of an infinite-dimensional one. If this Lie group is at least two-dimensional, then pulling the symmetries back to the original system yields two independent symmetries of the original system, and solutions can be determined by quadratures. This method has been applied successfully in several situations in the past (see e.g. [11,18,20] for some recent examples in biology). This motivates the study of the Lie symmetries of (1.1).
Pandey, Bindu, Senthilvelan and Lakshmanan [21,22] considered the classical Liénard equation
$$\ddot{u} = f_1(u)\dot{u} + f_0(u), \qquad (1.2)$$
where $f_1$ and $f_0$ are arbitrary, infinitely many times differentiable functions. They classified when (1.2) has a 1-, 2-, 3-, or 8-dimensional Lie symmetry group depending on $f_0$ and $f_1$. Then Tiwari, Pandey, Senthilvelan and Lakshmanan [23,24] classified the dimension of the Lie symmetry group of quadratic Liénard-type equations without $\dot{u}$ term, and then more generally [25] of the mixed quadratic Liénard-type equation
$$\ddot{u} = f_2(u)\dot{u}^2 + f_1(u)\dot{u} + f_0(u), \qquad (1.3)$$
where $f_0$, $f_1$ and $f_2$ are arbitrary, infinitely many times differentiable functions. Further, Paliathanasis and Leach [26] showed how one can simplify (1.3) by removing $f_2$ from (1.3) in the case $f_1 = 0$. The question naturally arises: what are the Lie symmetries if the right-hand side of (1.3) is a higher-order polynomial in $\dot{u}$? In this paper we consider (1.1) for $n \geq 4$ and for differentiable functions $f_k$ depending only on $u$, and not on $t$. Note that (1.1) is autonomous, therefore the tangential Lie algebra $L$ of the Lie group of all its symmetries always contains the 1-dimensional subalgebra generated by the vector field $\frac{\partial}{\partial t}$. Determining another generator of $L$ would then lead to a solution by quadratures of (1.1), and of any first-order system equivalent to it. In Theorem 3.1 (see Section 3 for details) we completely characterize the case when (1.1) admits a more than 1-dimensional (in fact, 2-dimensional) symmetry group. In particular, we give conditions (3.1)–(3.4) such that the symmetry group is 2-dimensional if and only if these conditions hold.
Here, conditions (3.1)–(3.3) are natural, but the meaning of the system (3.4) seems less intuitive, even though the system (3.4) is easily verifiable for a particular choice of $F$. In Theorem 4.1 (see Section 4 for details) we provide a necessary and sufficient condition for $f_0, \dots, f_n$ to satisfy (3.4). It turns out that $f_0, \dots, f_n$ satisfy (3.4) if and only if they are expressible in terms of $F$ and some constants.

The symmetry condition
We formulate the symmetry condition for (1.1) in this section. Consider (1.1) on the plane $(t,u)$, where $t$ is the independent variable and $u$ is the dependent variable. Further, the computations will be slightly easier if we consider the right-hand side as an infinite sum $\sum_k f_k \dot{u}^k$ (with $f_k = 0$ for $k > n$). The general form of an infinitesimal generator of a symmetry of (1.1) is
$$X = \xi(t,u)\frac{\partial}{\partial t} + \eta(t,u)\frac{\partial}{\partial u}. \qquad (2.1)$$
Let $D$ denote the total derivation by $t$, that is, $D\xi = \xi_t + \dot{u}\xi_u$, $D\eta = \eta_t + \dot{u}\eta_u$. We use the convention of writing partial derivatives as lower right indices. Then the first prolongation of $X$ is
$$X^1 = X + \big(D\eta - \dot{u}\,D\xi\big)\frac{\partial}{\partial \dot{u}}.$$
Further, let
$$S = \frac{\partial}{\partial t} + \dot{u}\frac{\partial}{\partial u} + \Big(\sum_k f_k \dot{u}^k\Big)\frac{\partial}{\partial \dot{u}}$$
be the spray corresponding to the differential equation (1.1). The vector field (2.1) is an infinitesimal symmetry of (1.1) if and only if its first prolongation $X^1$ satisfies the Lie bracket condition
$$[X^1, S] = -(D\xi)\,S \qquad (2.2)$$
on the space $(t,u,\dot{u})$ (cf. [16, Chapter 4, §3]). Substituting $X^1$ and $S$ into (2.2), the symmetry condition becomes
$$\eta_{tt} + (2\eta_{tu} - \xi_{tt})\dot{u} + (\eta_{uu} - 2\xi_{tu})\dot{u}^2 - \xi_{uu}\dot{u}^3 + \big(\eta_u - 2\xi_t - 3\xi_u\dot{u}\big)\sum_k f_k\dot{u}^k - \eta\sum_k f_k'\dot{u}^k - \big(\eta_t + (\eta_u - \xi_t)\dot{u} - \xi_u\dot{u}^2\big)\sum_k k f_k\dot{u}^{k-1} = 0. \qquad (2.3)$$
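The symmetry condition can also be generated symbolically. The following sketch (using sympy; the helper name determining_eq and the symbol $v$ standing for $\dot{u}$ are our choices) builds the condition for a general autonomous right-hand side and confirms that $\xi = 1$, $\eta = 0$, i.e. the generator $\frac{\partial}{\partial t}$, always satisfies it:

```python
import sympy as sp

t, u, v = sp.symbols('t u v')  # v plays the role of u-dot

def determining_eq(xi, eta, omega):
    """Left-hand side of the symmetry condition for u'' = omega(t, u, u')."""
    # total derivative along solutions: D = d/dt + v d/du + omega d/dv
    D = lambda e: sp.diff(e, t) + v*sp.diff(e, u) + omega*sp.diff(e, v)
    eta1 = D(eta) - v*D(xi)  # first prolongation coefficient
    # eta^(2) - X(omega), which must vanish identically for a symmetry
    return sp.expand(D(eta1) - omega*D(xi)
                     - xi*sp.diff(omega, t) - eta*sp.diff(omega, u)
                     - eta1*sp.diff(omega, v))

# autonomous right-hand side: a cubic polynomial in v with u-dependent coefficients
f0, f1, f2, f3 = [sp.Function('f%d' % k)(u) for k in range(4)]
omega = f0 + f1*v + f2*v**2 + f3*v**3

# xi = 1, eta = 0 (the generator d/dt) is a symmetry of any autonomous equation
print(determining_eq(sp.Integer(1), sp.Integer(0), omega))  # 0
```

As a sanity check, $X = t\frac{\partial}{\partial t} + u\frac{\partial}{\partial u}$ also passes for the free particle $\ddot{u} = 0$.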

Lie symmetry algebra
We consider (1.1) for $n \geq 4$ and for differentiable functions $f_k$ depending only on $u$, and not on $t$.
In Theorem 3.1 we completely characterize the case when (1.1) admits a more than 1-dimensional (in fact, 2-dimensional) symmetry group. We prove the following.

Theorem 3.1. Consider (1.1) for $n \geq 4$. With the notation $g(u) = \frac{(n-2)F(u)}{(n-1)F'(u)}$, the following hold:
1. If conditions (3.1)–(3.4) do not hold, then the symmetry algebra $L$ is generated by $\frac{\partial}{\partial t}$;
2. If conditions (3.1)–(3.4) hold, then the 2-dimensional Lie symmetry algebra $L$ is generated by $\frac{\partial}{\partial t}$ and a second generator $X$ with $\xi_t = e^{at}$ and $\eta = e^{at} g(u)$.

Proof. The left-hand side of (2.3) has to be zero for all $(t, u, \dot{u})$.
In particular, the coefficient of $\dot{u}^{n+1}$ in (2.3) is $(n-3) f_n \xi_u$, which for $n \geq 4$ (and $f_n \neq 0$) yields $\xi_u = 0$; that is, $\xi$ only depends on $t$ and not on $u$. Substituting $\xi_u = 0$ into (2.3) and considering the coefficients, we obtain the system (3.5). In the following we analyze the system (3.5). Note that $\xi = c$, $\eta = 0$ (for any $c \in \mathbb{R}$) satisfies (3.5). Further, if $\eta = 0$, then from (3.5e) we have $\xi_t = 0$ and $\xi = c$ for some $c \in \mathbb{R}$. Thus, in the following we assume that $\eta$ is not constant $0$.
Let $g$ be defined as in Theorem 3.1, that is, $g(u) = \frac{(n-2)F(u)}{(n-1)F'(u)}$. Thus we obtain that
$$\eta(t,u) = \xi_t(t)\, g(u). \qquad (3.6)$$
Note that $g(u)$ cannot be constant $0$ on $U'$ (otherwise both $F$ and $F' = |f_n|^{\frac{1}{n-1}}$ would be constant $0$), thus there exists an open interval $U \subseteq U'$ such that $F(u) \neq 0$ for all $u \in U$, and hence $g(u) \neq 0$ ($u \in U$). By substituting (3.6) into (3.5d), we obtain in particular, for $k = n-1$, that $f_n(u)g(u) \neq 0$ for all $u \in U$. Now, $f_n$, $f_{n-1}$, and $g$ only depend on $u$, and $\xi_{tt}$ and $\xi_t$ only depend on $t$. Therefore there exists $a \in \mathbb{R}$, defined by (3.7), such that
$$\xi_{tt} = a\,\xi_t, \qquad (3.8)$$
or else $\xi_t = 0$, implying $\eta = 0$ by (3.6), a contradiction. Thus,
$$\xi_t = c e^{at}, \qquad (3.9)$$
$$\eta = c e^{at} g(u) \qquad (3.10)$$
for some $c \in \mathbb{R}$. Substituting (3.9) and (3.10) into (3.5), one obtains that either $f_0, f_1, f_2, \dots, f_n, g, F$ satisfy the conditions (3.1)–(3.4), or else $c = 0$, implying $\xi_t = 0$ and $\eta = 0$, a contradiction. Finally, assume that $F_1$ and $F_2$ both satisfy conditions (3.2)–(3.3) on $U$. Then let $g_1$ and $g_2$ be defined from $F_1$ and $F_2$, and let $a_1$ and $a_2$ be defined using (3.7). Let $\xi_1, \xi_2$ be such that $(\xi_i)_t = e^{a_i t}$, let $\eta_i = e^{a_i t} g_i(u)$, and let $X_i$ be the corresponding infinitesimal generators. Then $X_1 - X_2$ is an infinitesimal symmetry as well. However, we have proved that if either $\xi_t \neq 0$ or $\eta \neq 0$, then they are of the form (3.9) and (3.10). Thus, $e^{a_1 t} - e^{a_2 t}$ is of the form $c e^{at}$ for some $a, c \in \mathbb{R}$, that is, $a_1 = a_2$ and $c = 0$. Then $\xi_t = 0$, implying $\eta = 0$ by (3.6), which yields $g_1 = g_2$. Finally, by $F_1' = F_2'$, the definition of $g$ immediately implies $\frac{F_1}{F_1'} = \frac{F_2}{F_2'}$, and hence $F_1 = F_2$. This finishes the proof of Theorem 3.1.

Equivalent description of the conditions
In this section we provide a necessary and sufficient condition for $f_0, \dots, f_n$ to satisfy (3.4). We prove the following Theorem 4.1, which exhibits constants $b_k$ and functions $B_k$ such that $f_0, \dots, f_n$ are of the form (4.10). In particular, if $a = 0$, then $B_k(u) = 0$ for all $u \in U$ ($0 \leq k \leq n$), and thus $f_0, \dots, f_n, F, g$ satisfy (3.4).

Further, if $F$ is positive on $U$, then
First, in Section 4.1 we show that the $A_k$ ($0 \leq k \leq n$) defined by (4.1) satisfy a recursive system of differential equations; the details are contained in Lemma 4.1. Then in Section 4.2 we consider the case $a = 0$, when (3.4) results in homogeneous equations for the $f_k$. Finally, in Section 4.3 we prove Theorem 4.1 by considering the general case $a \neq 0$ and applying the method of variation of parameters.

Auxiliary functions
Let us use the notations of Theorem 4.1.
Lemma 4.1. Let $A_n(u) = 0$. For all $0 \leq k \leq n-1$, $A_k$ is a particular solution of the ordinary differential equation (4.6).

Proof. For $k = n-1$, a particular solution of (4.6) is $A_{n-1}$, which proves (4.1) for $k = n-1$. Assume now that (4.1) holds for an integer $m = n-k$, $1 \leq m \leq n$. Putting this into (4.6) for $k = n-(m+1)$, and integrating, one can obtain a particular solution $A_{n-(m+1)}$. Hence, (4.1) holds for $k = n-(m+1)$, and by induction it holds for all integers $0 \leq k \leq n-1$.

The general (inhomogeneous) case
Proof of Theorem 4.1. If $a \neq 0$, then (3.4d) is an inhomogeneous linear differential equation for $f_k$, and by (4.8) its general solution (by variation of parameters) is (4.9), for some function $B_k = B_k(u)$ and constant $b_k \in \mathbb{R}$. Write $f_k$ in the form (4.10). Putting (4.10) into (3.4d), we obtain (4.11). Now, $h_k$ is a particular solution of the homogeneous differential equation (4.7d). Since $h_k(u) \neq 0$ for all $u \in U$, $B_k$ is a particular solution of the resulting differential equation. We prove by induction on $m = n-k$ that $B_k = A_k$ ($3 \leq k \leq n-1$) by showing that $B_k$ satisfies the recursive system of differential equations (4.6) of Lemma 4.1. Let $m = 1$, that is, $k = n-1$. From (3.3) we have $f_n = \varepsilon (F')^{n-1}$. Applying (4.11) we obtain (4.12). Comparing (4.12) and (4.6) for $m = 1$, we find $B_{n-1}' = A_{n-1}'$ by choosing $b_n = \varepsilon \left(\frac{n-1}{n-2}\right)^{1-n}$. Hence, a particular solution $B_{n-1}$ of the differential equation $B_{n-1}' = A_{n-1}'$ is $B_{n-1} = A_{n-1}$. Assume now that $B_k = A_k$ holds for an integer $4 \leq k \leq n-1$; thus from (4.10) for $k = n-m+1$ we have the form of $f_{n-m+1}$. Putting this into (4.11) for $k = n-m$, and comparing with (4.6), we obtain $B_{n-m} = A_{n-m}$. We continue by proving (4.3b) and (4.2b), that is, we show the condition on $f_k$ for $k = 2$. Now, $f_2$ is the solution of the inhomogeneous linear differential equation (3.4c). The general solution of (3.4c) by (4.9) and by variation of parameters has the form (4.14), for some function $B_2 = B_2(u)$ and constant $b_2 \in \mathbb{R}$. Putting (4.14) into (3.4c), then using $\left(\frac{1}{g}\right)' g = -\frac{g'}{g}$, and the fact that $g' + b_2 g$ is a solution to the homogeneous differential equation (4.7c), together with the form of $f_3$ given by (4.10) and the definition of $g$, we obtain (4.15). Now, $A_3 = B_3$ by (4.2a), thus comparing (4.15) and (4.6) for $k = 2$ yields $B_2' = \nu A_2'$. Hence, a particular solution $B_2$ of the differential equation $B_2' = \nu A_2'$ has the form $B_2 = \nu A_2$. Thus, (4.3b) and (4.2b) hold. Now, we obtain the condition on $f_k$ for $k = 1$. The function $f_1$ is the solution of the inhomogeneous linear differential equation (3.4b).
The general solution of (3.4b) by (4.8) and by variation of parameters is (4.16), for some function $B_1 = B_1(u)$ and constant $b_1 \in \mathbb{R}$. Putting (4.16) into (3.4b), and using the fact that $|F|^{\frac{1-n}{n-2}}$ is a solution to the homogeneous differential equation (4.7b), one obtains an equation for $B_1$. As $f_2$ has the form (4.14) and $B_2 = \nu A_2$ by (4.2b), we obtain (4.17). Comparing (4.17) and (4.6) for $k = 1$, we obtain (4.18). Hence a particular solution $B_1$ of the differential equation (4.18) can be given explicitly. Therefore, (4.3a) holds for $k = 1$, and (4.2c) is proved. Finally, we prove that (4.3a) holds for $k = 0$. For $k = 0$, the function $f_0$ is the solution of the inhomogeneous linear differential equation (3.4a). The general solution of (3.4a) by (4.8) and by variation of parameters is (4.20), for some function $B_0 = B_0(u)$ and constant $b_0 \in \mathbb{R}$. Putting (4.20) into (3.4a), and using the fact that $\frac{n-2}{n-1}\,|F|^{-\frac{n}{n-2}}(F')^{-1}$ is a solution to the homogeneous differential equation (4.7a), we obtain (4.21). Comparing (4.21) and (4.6) for $k = 0$, we have (4.22). Therefore, a particular solution $B_0$ of (4.22) can be given explicitly. Hence, (4.3a) holds for $k = 0$, and (4.2d) is proved. This finishes the proof of Theorem 4.1.
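Each step of the proof above solves a first-order linear inhomogeneous ODE by variation of parameters: a particular solution is the homogeneous solution times an integral of the inhomogeneity divided by it. As a generic illustration of this technique only (the ODE $y' + y/u = u$ below is a hypothetical example of ours, not one of the equations (3.4)), in sympy:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
y = sp.Function('y')

# a hypothetical first-order linear inhomogeneous ODE:  y' + y/u = u
ode = sp.Eq(y(u).diff(u) + y(u)/u, u)

# variation of parameters by hand:
h = 1/u                          # solution of the homogeneous equation y' + y/u = 0
B = sp.integrate(u / h, u)       # B' = q/h with inhomogeneity q = u, so B = u**3/3
particular = sp.simplify(B * h)  # a particular solution: u**2/3

# sympy's dsolve agrees: general solution  y = u**2/3 + C1/u
general = sp.dsolve(ode, y(u))
print(particular, general)
```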

Open problems
Several questions arise after determining the symmetries of (1.1). Indeed, if the Lie group of symmetries is at least two-dimensional, then one can apply the two-dimensional solvable Lie group to obtain the solutions of (1.1) by quadratures.
Problem 5.1. Determine the solutions of (1.1) provided $f_k$ ($0 \leq k \leq n$, $n \geq 4$) satisfy the conditions of Theorem 3.1.
The only remaining case for (1.1) not covered by Theorem 3.1 or by [21–26] is $n = 3$. Then one cannot immediately conclude $\xi_u = 0$ from (2.3), because the $\dot{u}^4$ term of (2.3) is identically $0$. In fact, for $n = 3$ the symmetry condition translates to the system (5.1). A potential simplification of the system (5.1) might be to eliminate $f_2$ from (5.1) by a coordinate change $v = G(u)$, for some bijective, twice differentiable $G$ satisfying $G''(u) = G'(u) f_2(u)$ (see e.g. [26]). This, however, still does not give an immediate answer as to what the solutions of (5.1) are.
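The vanishing of the $\dot{u}^4$ term for $n = 3$, versus its survival for $n \geq 4$, can be checked symbolically. The sketch below (using sympy; the helper name top_coefficient and the symbol $v$ for $\dot{u}$ are our choices) expands the determining equation for a general point symmetry of $\ddot{u} = \sum_{k=0}^n f_k(u)\dot{u}^k$ and reads off the coefficient of $\dot{u}^{n+1}$, which comes out as $(n-3) f_n \xi_u$:

```python
import sympy as sp

t, u, v = sp.symbols('t u v')  # v stands for u-dot
xi = sp.Function('xi')(t, u)
eta = sp.Function('eta')(t, u)

def top_coefficient(n):
    """Coefficient of v**(n+1) in the determining equation for u'' = sum_k f_k(u)*v**k."""
    fs = [sp.Function('f%d' % k)(u) for k in range(n + 1)]
    omega = sum(fs[k]*v**k for k in range(n + 1))
    # total derivative along solutions: D = d/dt + v d/du + omega d/dv
    D = lambda e: sp.diff(e, t) + v*sp.diff(e, u) + omega*sp.diff(e, v)
    eta1 = D(eta) - v*D(xi)  # first prolongation coefficient
    det = sp.expand(D(eta1) - omega*D(xi)
                    - xi*sp.diff(omega, t) - eta*sp.diff(omega, u)
                    - eta1*sp.diff(omega, v))
    return sp.simplify(det.coeff(v, n + 1))

# n = 3: the coefficient vanishes identically, so xi_u = 0 cannot be concluded
print(top_coefficient(3))  # 0
# n = 4: the coefficient is (4 - 3)*f4*xi_u, forcing xi_u = 0 wherever f4 != 0
print(top_coefficient(4))
```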