Deflation for semismooth equations

Variational inequalities can in general support distinct solutions. In this paper we study an algorithm for computing distinct solutions of a variational inequality, without varying the initial guess supplied to the solver. The central idea is the combination of a semismooth Newton method with a deflation operator that eliminates known solutions from consideration. Given one root of a semismooth residual, deflation constructs a new problem for which a semismooth Newton method will not converge to the known root, even from the same initial guess. This enables the discovery of other roots. We prove the effectiveness of the deflation technique under the same assumptions that guarantee locally superlinear convergence of a semismooth Newton method. We demonstrate its utility on various finite- and infinite-dimensional examples drawn from constrained optimization, game theory, economics and solid mechanics.


Introduction
Variational inequalities are a fundamental class of problems that arise in many branches of applied mathematics. The problem to be solved is: given a real reflexive Banach space U, a closed convex subset K ⊂ U, and an operator Q : K → U* mapping to the dual space U* of U, find u ∈ K such that

⟨Q(u), v − u⟩ ≥ 0 for all v ∈ K. (1)

This is denoted by VI(Q, K). As an elementary example, consider the problem of minimizing a differentiable function f : R → R over the closed interval I ⊂ R. The necessary condition for z ∈ R to be a (local) minimum is that z satisfies VI(f′, I).
More generally, the minimizers of a general smooth nonlinear program satisfy a variational inequality, which is related to the familiar Karush–Kuhn–Tucker conditions under a suitable constraint qualification, see e.g. [53]. Variational inequalities also arise naturally in problems of solid mechanics involving contact [21,38], in game theory for the calculation of Nash equilibria [28], in phase separation with nonsmooth free energy [7], and other fields. For more details on variational inequalities, see [16,20,25,26,29,39,48] and the references therein. A very popular and successful strategy for computing a solution of a variational inequality is to reformulate it as a semismooth rootfinding problem [11,13,31,42,44,55,61], where it is possible to do so. That is, (1) is equivalently reformulated as the task of finding z ∈ Z such that

F(z) = 0, (2)

for a residual function F : Z → V, where Z and V are real Banach spaces and Z is reflexive. The space Z is typically constructed via Z = U × Λ, where Λ is a suitable space of Lagrange multipliers. The residual F may not be differentiable in the classical Fréchet sense but enjoys a weaker property called semismoothness (defined later in Definition 2.3). The problem (2) is constructed in such a way that there is a bijection between solutions of (1) and roots of (2). While the standard Newton–Kantorovich iteration requires the existence of the Fréchet derivative F′ of F, it is possible to define a semismooth Newton iteration for semismooth residuals (defined later in section 2.3). This method exhibits locally superlinear convergence under certain regularity conditions on the solution z.
Variational inequalities often admit multiple solutions, and these are typically significant for the application at hand. For example, a nonconvex optimization problem may permit several local minima, while a game may permit multiple Nash equilibria. Identifying these distinct solutions is important for understanding the system as a whole. The question of calculating distinct roots of semismooth residuals such as (2) naturally follows. In this paper, we analyze a numerical technique called deflation that can successfully identify multiple solutions of variational inequalities, provided they exist and are isolated from each other.
The central idea of deflation is to compute distinct roots of the semismooth residual F in the following manner. Let us suppose we are given a semismooth residual F : Z → V and a single known root r ∈ Z that satisfies some regularity conditions to be made precise later. Deflation constructs a modified residual G : Z → V with the following properties: (1) G preserves roots: G(z) = 0 ⇐⇒ F (z) = 0, for z ∈ Z \ {r}, (2) A semismooth Newton method applied to G from any initial guess in Z \ {r} will not converge to r.
This latter property holds even if semismooth Newton on G is initialized from the same initial guess that led to the convergence to r in the first instance. That is, if semismooth Newton is applied to G, and it converges, it will converge to a distinct solution. By enforcing nonconvergence of the semismooth Newton method to known solutions, deflation enables the discovery of unknown ones. The deflated problem is constructed via the application of a deflation operator to the underlying problem F . The idea of deflation was first investigated in the context of differentiable maps F : R n → R n by Brown and Gearhart [9], and was subsequently reinvented in the context of optimization as the tunneling method of Levy and Gómez [47]. Birkisson [6] and Farrell et al. [18] analyzed it in the context of Fréchet differentiable maps between Banach spaces, allowing for its application to smooth partial differential equations. The main contribution of this work is to extend the theory to the case where F is semismooth, but not Fréchet differentiable.
The importance of multiple solutions of variational inequalities has motivated other authors to develop various approaches for computing them. A simple strategy is to vary the initial guess given to the solver [64], but this is heuristic and labour-intensive [60]. By contrast, the deflation technique does not involve modifying the initial guess; the problem itself is modified. Júdice and Mitra [36] develop an algorithm for enumerating the solutions of linear complementarity problems, variational inequalities of the form: find x ∈ R^n_+ such that

⟨Mx + q, y − x⟩ ≥ 0 for all y ∈ R^n_+,

where R^n_+ is the non-negative orthant of Euclidean space, M ∈ R^{n×n}, and q ∈ R^n. Their algorithm requires exhaustive exploration of a binary tree whose size is exponential in terms of the size of the problem, and is thus impractical for large problems. Tin-Loi and Tseng [60] develop an algorithm for finding multiple solutions of linear complementarity problems by augmenting the problem with constraints that eliminate known solutions; while very successful on the problems considered, the size of each problem increases with each solution eliminated. By contrast, the deflation technique does not increase the size of the problems to be solved after each solution found. Another strategy is to extend classical path-following algorithms to parameter-dependent variational inequalities, and trace out the bifurcation diagram as the parameter is varied [12,50]. By continuing around turning points, distinct solutions for the same parameter values may be identified; however, this strategy will only identify distinct solutions that happen to lie on a connected branch. The deflation technique enables the discovery of solutions on disconnected branches.
Deflation was first applied in the context of finite-dimensional mixed complementarity problems (a particular kind of variational inequality) by Kanzow [37]. Kanzow reports some success on rather difficult test problems, but remarks that deflation is "not . . . very reliable for larger problems". This impression is more widely shared: Allgower & Georg [3] remark (in the context of nonlinear equations, not variational inequalities) that "it is often a matter of seeming chance whether one obtains an additional solution". We hypothesize that these negative experiences are a consequence of using a poor deflation operator, the norm deflation operator proposed by Brown & Gearhart. We will demonstrate that deflation is much more robust and effective for semismooth problems with an elementary modification to the deflation operator that recovers the correct behaviour of the deflated problem at infinity.
This paper is laid out as follows. In section 2 we define a deflation operator, and prove its effectiveness in the semismooth case. The regularity conditions required on the known solution z are exactly those used to prove locally superlinear convergence of the semismooth Newton method itself in [11,31]; no additional assumptions are required. In section 3, we apply the technique to calculating distinct solutions of several illustrative finite-dimensional variational inequalities, while in section 4 we apply the technique to infinite-dimensional problems with a mesh-independent function-space-based algorithm. These examples demonstrate that the shifted deflation operator applied in this work is effective in numerical practice. We conclude with some remarks and open questions in section 5.

Deflation operators
Given a known solution r, the deflated problem G(z) = 0 is constructed by the application of a deflation operator to the original problem F(z) = 0:

G(z) = M(z; r) F(z). (5)

The requirements for M to enable the discovery of new solutions are captured in the definition of a deflation operator. The definition has been slightly modified from [18] to allow it to apply to semismooth problems in the sequel.

Definition 2.1 (deflation operator). Let F : D ⊆ Z → V have a root r ∈ D. A map M(·; r), assigning to each z ∈ D \ {r} a linear operator M(z; r) on V, is a deflation operator for F at r if:
(1) M(z; r) is invertible for each z ∈ D \ {r};
(2) The deflated residual does not converge to zero as z → r:

lim inf_{z → r} ‖M(z; r) F(z)‖_V > 0. (6)

Property (6) is referred to as the deflation property. The fundamental example of a deflation operator, proposed by Brown & Gearhart in 1971 [9], is

M(z; r) = I_V / ‖z − r‖_Z^p, (7)

where I_V is the identity map on V, and p ≥ 1. The power p controls the rate of blowup as z → r. This is known to be a deflation operator in the case where F is continuously Fréchet differentiable [18]. This operator was one of two considered by Kanzow, and the operator considered by Allgower & Georg. However, this operator has a major drawback: as ‖z − r‖ → ∞ in any direction, M(z; r) → 0. This often leads to G(z) → 0 as well, depending on the behaviour of F(z) at infinity. Farrell et al. [18] suggested a simple modification to the deflation operator to recover the behaviour M(z; r) → I_V and hence G(z) → F(z) at infinity: the addition of a shift. The shifted deflation operator is

M(z; r) = (1 / ‖z − r‖_Z^p + 1) I_V. (8)

This is much more effective in numerical practice than (7); the incorrect behaviour at infinity likely accounts for the unsatisfactory performance reported by Kanzow and Allgower & Georg. This will be investigated further in the examples in section 3.
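The difference between the unshifted and shifted operators can be seen already in a scalar sketch; the residual F(z) = z − 1 and all parameter choices below are our own illustration, not the paper's code.

```python
# Sketch (our illustration): compare the unshifted factor 1/|z - r|^p with the
# shifted factor 1/|z - r|^p + 1 for the residual F(z) = z - 1, whose only
# root r = 1 is deflated.

def F(z):
    return z - 1.0

def deflated(z, r, p=2, sigma=0.0):
    # sigma = 0: unshifted operator; sigma = 1: shifted operator
    return (1.0 / abs(z - r) ** p + sigma) * F(z)

r = 1.0

# Near the deflated root, both deflated residuals are bounded away from zero:
for z in (1.1, 1.01, 1.001):
    assert abs(deflated(z, r, sigma=0.0)) >= 1.0
    assert abs(deflated(z, r, sigma=1.0)) >= 1.0

# Far from r, the unshifted deflated residual decays to zero, so Newton can
# drift towards a spurious "root at infinity"; the shifted one recovers F:
z = 1.0e6
print(abs(deflated(z, r, sigma=0.0)))           # ~1e-6: spurious decay
print(abs(deflated(z, r, sigma=1.0) - F(z)))    # ~1e-6, negligible next to F(z) ~ 1e6
```

This is precisely the behaviour at infinity discussed above: the unshifted deflated residual manufactures a false "root" far from the solutions, while the shifted residual asymptotically agrees with the original problem.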

Remark 1.
After the deflated problem (5) is constructed, the Newton-Kantorovich or semismooth Newton algorithms will be applied to it, and therefore the differentiability of the deflation operator should be established. These deflation operators (7) and (8) are differentiable away from z = r if the norm used on the Banach space Z is differentiable away from zero, i.e. if the Banach space Z is Fréchet smooth. Note that a reflexive Banach space always admits an equivalent Fréchet smooth norm [23], and hence this requirement is satisfied after possibly renorming. We therefore assume this property henceforth.

Remark 2.
In practice only an approximation r̃ ≈ r is available for use in the deflation operator. The question then arises of how this approximation affects the computation of the roots of G (e.g. Wilkinson [65, pp. 55] considered this issue in the context of unshifted deflation for polynomial rootfinding). Since the shifted deflation operator (8) satisfies M(z; r̃) ≈ I_V away from r̃ (and r), no difficulties are encountered for solutions that are sufficiently far apart. If two solutions are very close together, a simple remedy discussed by Wilkinson is to calculate a root of G(z), then use that as initial guess for further Newton iterations on F(z) = 0.

The Fréchet-differentiable case
For completeness, we state the result arguing that (7) and (8) are deflation operators in the Fréchet-differentiable case. Incidentally, this result can be proven analogously to the semismooth case as in Theorem 2.4 below, which provides an alternative proof to the one found in [18].
Theorem 2.2 (Deflation for Fréchet differentiable problems [18]). Let F : D → V be a continuously Fréchet differentiable operator with derivative F′ : D → L(Z, V), and let M be given by (7) or (8). Let r ∈ D be an isolated solution of F, i.e. satisfy F(r) = 0 with F′(r) invertible. Then M is a deflation operator for F at r.

Remark 3.
It may be more convenient to use another norm ‖·‖_X in the deflation operator, provided Z ↪ X. It is also possible to use a seminorm |·|_X, provided the seminorm distinguishes the solutions of interest from the known root, i.e. |z − r|_X ≠ 0 for any other solution z ≠ r.

The semismooth case
We now consider the semismooth case. The relevant notion of differentiability is the following.

Definition 2.3 (Newton differentiability). Let Z and V be Banach spaces and let N ⊆ Z be open. A map F : N → V is called Newton differentiable (semismooth) on N if there exists a map G : N → L(Z, V), called a Newton derivative, such that

‖F(z + h) − F(z) − G(z + h)h‖_V = o(‖h‖_Z) as h → 0,

for all z in N.
With this Newton derivative, the semismooth Newton iteration is given by

z_{k+1} = z_k − G(z_k)^{-1} F(z_k).

We now state the main result of this work. The following theorem is novel.
Theorem 2.4 (Deflation for semismooth problems). Let Z and V be Banach spaces, let N ⊆ Z be an open neighbourhood of r ∈ Z, and let F : N → V be semismooth with Newton derivative G : N → L(Z, V). Suppose that F(r) = 0 and that there exists Γ > 0 such that G(z) is invertible with ‖G(z)^{-1}‖_{L(V,Z)} ≤ Γ for all z ∈ N. Then the operators (7) and (8) are deflation operators for p ≥ 1.
Remark 4. These are the same assumptions used to prove the locally superlinear convergence of the semismooth Newton method in [11,31].
Proof. For brevity, define

m(z) := 1/‖z − r‖_Z^p + σ,

with σ = 0 corresponding to (7) and σ = 1 corresponding to (8), so that M(z; r) = m(z) I_V. Invertibility of M(z; r) for all z ≠ r is obvious. Consider z ∈ N \ {r}. Let γ = Γ^{-1}, and define

T(z) := F(z) − F(r) − G(z)(z − r).

T(z) = o(‖z − r‖_Z) from the definition of semismoothness. Since ‖G(z)^{-1}‖_{L(V,Z)} ≤ Γ, we have ‖G(z)(z − r)‖_V ≥ γ ‖z − r‖_Z, and hence, using F(r) = 0,

‖F(z)‖_V ≥ ‖G(z)(z − r)‖_V − ‖T(z)‖_V ≥ γ ‖z − r‖_Z − ‖T(z)‖_V.

We then have

‖M(z; r) F(z)‖_V = m(z) ‖F(z)‖_V ≥ γ ‖z − r‖_Z^{1−p} − ‖T(z)‖_V/‖z − r‖_Z^p + σ (γ ‖z − r‖_Z − ‖T(z)‖_V),

so that lim inf_{z→r} ‖M(z; r) F(z)‖_V ≥ γ > 0 for p = 1, with the lim inf equal to +∞ for p > 1, as required.
It remains to show that the deflated problem (5) is in fact semismooth. This property is verified in the following result.
Proposition 2.5 (semismoothness of the deflated residual). Let F : D → V be semismooth on D with Newton derivative G_F, and let M be given by

M(z; r) = (1/‖z − r‖_Z^p + σ) I_V

for some σ ≥ 0. Then the product G(z) = M(z; r)F(z) is also semismooth on D with Newton derivative action

h ↦ M(z; r) G_F(z)h − p ‖z − r‖_Z^{−(p+1)} ⟨z*, h⟩ F(z),

where z* ∈ Z* is the derivative of the norm ‖·‖_Z at z − r, i.e. satisfies

⟨z*, h⟩ = lim_{t→0} (‖z − r + th‖_Z − ‖z − r‖_Z)/t for all h ∈ Z.

Remark 5. If Z is a Hilbert space, then the Riesz representation of z* is (z − r)/‖z − r‖_Z.

Proof. This follows from the well-known calculus rules for semismooth and continuously Fréchet differentiable mappings, see e.g. [31,61].

Remark 6. Since the deflated problem is semismooth, the usual sufficient conditions guaranteeing local superlinear convergence may be applied [31] to the deflated residual.
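The deflation property established in Theorem 2.4 can also be observed numerically. The sketch below uses the illustrative residual F(x) = |x| with root r = 0 (our choice, not an example from the paper); a Newton derivative is G(x) = sign(x), so Γ = 1 and γ = 1 in the notation of the proof.

```python
# Numerical check of the deflation property for F(x) = |x|, r = 0 (illustrative
# choice): the shifted deflated residual stays bounded away from zero as x -> r
# for p = 1, and blows up like |x|^(1-p) for p = 2.

def deflated_norm(x, p, sigma=1.0):
    return (1.0 / abs(x) ** p + sigma) * abs(x)

for t in (1e-2, 1e-4, 1e-8):
    assert deflated_norm(t, p=1) >= 1.0       # lim inf >= gamma = 1 for p = 1
    assert deflated_norm(t, p=2) >= 1.0 / t   # blow-up for p = 2
print("deflation property verified on a sequence x -> r")
```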

Remark 7.
Since the deflated problem is also semismooth, any devices developed for globalizing convergence may be applied, such as line search techniques and continuation e.g. [37,57].
Remark 8. It would be of significant interest to derive sufficient conditions guaranteeing the convergence of the same initial guess to two distinct solutions via deflation. Some initial results in this vein in the smooth case are discussed in [17], where it is shown that repeated applications of the well-known Rall–Rheinboldt global convergence theorem [56,59] can assure convergence to two distinct solutions starting from the same initial guess. A Rall–Rheinboldt-type result, as opposed to Newton–Kantorovich, would be essential to prove convergence to multiple solutions in the context of deflation. This is because the Rall–Rheinboldt theorem places conditions on the radii of convergence of the balls centered at the solutions, rather than guaranteeing the existence of a unique solution in a ball around the initial guess. We briefly investigate the limitations of the classical theory in the framework of semismooth equations below, and in doing so we explain the need for a result of Rall–Rheinboldt type that is native to the semismooth case.

Many infinite-dimensional semismooth equations of interest share a common structure. In particular, due to low multiplier regularity for bound-constrained variational problems, one often resorts to a Moreau–Yosida-type approximation and considers a sequence of (semismooth) equations with residuals taking the form

F_γ(z) = A(z) + γ Φ(z) − f,

where A is a continuously Fréchet differentiable operator, Φ is a semismooth superposition operator, γ > 0 is a penalty parameter, and f is constant, cf. [32]. For the sake of argument, assume that Ω ⊂ R^n is a nonempty, open, and bounded set; Z = H^1(Ω), the usual Sobolev space of L²-functions with weak derivatives in L²; and Φ is generated by the function φ(x) := max{0, x}, i.e., Φ(z)(x) := max{0, z(x)}. Since φ and thus Φ are nonsmooth, we cannot directly employ the arguments in [17].
However, by smoothing the max-function, we can obtain a further approximation of the original problem that is regular enough to exploit the Rall–Rheinboldt theory. The remaining question is whether the convergence guarantees for the smooth problem are stable as ε ↓ 0. Suppose we replace max{0, x} by the C¹ smoothing

max_ε(x) := 0 for x ≤ 0, x²/(2ε) for x ∈ (0, ε), x − ε/2 for x ≥ ε.

One of the essential ingredients of the sufficient conditions in the Rall–Rheinboldt theorem are the (local) Lipschitz properties of the derivative F′_{γ,ε} at the distinct solutions. In particular, we require an open neighborhood E_{i,ε} of each solution z_{i,ε} along with a constant ω_{i,ε} > 0 such that

‖F′_{γ,ε}(z) − F′_{γ,ε}(z̃)‖ ≤ ω_{i,ε} ‖z − z̃‖_Z for all z, z̃ ∈ E_{i,ε}.

Whereas the operator A can be assumed to be unproblematic, one readily derives the estimates

|max′_ε(u(x)) − max′_ε(v(x))| ≤ ε^{-1} |u(x) − v(x)| a.e. in Ω

for any two functions u, v ∈ H^1(Ω). Although the difference of the smoothed operators is uniformly bounded in u, v, and ε ≥ 0, the pointwise relation, which holds as an equality on the set {x ∈ Ω | u(x), v(x) ∈ (0, ε)}, would indicate that any form of affine-covariant Lipschitz constant ω_{i,ε} would unfavorably depend on ε. As a result, the ω_{i,ε}-dependent radii associated with the balls of convergence for the distinct roots would converge to zero. Additional assumptions on the structure of E_{i,ε} that would avoid these issues are unrealistic, e.g., suppose Ω ⊂ R¹ so that H^1(Ω) ↪ C(Ω) and assume that there exists η > 0 (independent of ε) such that z_{i,ε} ≤ −η < 0.

Finite-dimensional examples
We investigate the effectiveness of the deflation approach by applying it to various semismooth problems in the literature that exhibit distinct solutions.

Complementarity problems
We consider the nonlinear complementarity problem NCP(F): given F : R^n → R^n, find z ∈ R^n such that

z ≥ 0, F(z) ≥ 0, zᵀF(z) = 0. (21)

This is equivalent to the variational inequality VI(F, R^n_+), where

R^n_+ := {z ∈ R^n : z ≥ 0},

with all inequalities understood componentwise. We apply a standard semismooth reformulation of the problem using the Fischer–Burmeister NCP function

φ_FB(a, b) := √(a² + b²) − a − b,

which has the property that φ_FB(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0 [22]. The nonlinear complementarity problem (21) is equivalent to finding roots of the semismooth residual Φ : R^n → R^n defined by

Φ_i(z) := φ_FB(z_i, F_i(z)), i = 1, …, n.

This is then solved with a semismooth Newton method [37,45,55]. As roots {r_1, …, r_m} are discovered, semismooth Newton is applied to

z ↦ M(z; r_1) M(z; r_2) ⋯ M(z; r_m) Φ(z),

where M is given by (8). That is, the deflation operators for each solution are concatenated to deflate all known solutions. Unless noted otherwise, the parameter choice p = 2 was used.
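As a concrete illustration of this reformulation-plus-deflation procedure, the following sketch applies the Fischer–Burmeister residual with shifted deflation to a one-dimensional NCP. The test problem F(z) = (z − 1)(z − 3), which has NCP solutions z = 0, z = 1 and z = 3, is our own illustrative choice, not one of the paper's examples.

```python
import math

# Fischer-Burmeister reformulation with shifted deflation, 1D sketch.

def fb(a, b):
    # Fischer-Burmeister function: zero iff a >= 0, b >= 0, ab = 0
    return math.hypot(a, b) - a - b

def fb_grad(a, b):
    # a (sub)gradient selection, valid also at (a, b) = (0, 0)
    n = max(math.hypot(a, b), 1e-300)
    return a / n - 1.0, b / n - 1.0

def F(z):
    return (z - 1.0) * (z - 3.0)

def dF(z):
    return 2.0 * z - 4.0

def deflated_newton(roots, z, p=2, tol=1e-10, maxit=100):
    """Semismooth Newton on G(z) = m(z)*Phi(z), m(z) = prod_i (1/|z-r_i|^p + 1)."""
    for _ in range(maxit):
        phi = fb(z, F(z))
        if abs(phi) < tol:
            return z
        da, db = fb_grad(z, F(z))
        dphi = da + db * dF(z)
        m, dm = 1.0, 0.0
        for r in roots:
            mi = 1.0 / abs(z - r) ** p + 1.0
            dmi = -p * (z - r) / abs(z - r) ** (p + 2)
            dm = dm * mi + m * dmi
            m = m * mi
        z = z - (m * phi) / (dm * phi + m * dphi)
    raise RuntimeError("semismooth Newton did not converge")

r1 = deflated_newton([], 0.5)      # first solve converges to the solution z = 0
r2 = deflated_newton([r1], 0.5)    # deflated solve, same initial guess: finds z = 1
print(r1, r2)
```

Note that the deflated solve reuses the initial guess 0.5 that previously led to z = 0; deflating the known root alone redirects the iteration to a different solution.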

Kojima and Shindoh (1986)
This problem was first proposed by Kojima and Shindoh [43] and is an NCP with F : R⁴ → R⁴ given by

F(z) = ( 3z_1² + 2z_1 z_2 + 2z_2² + z_3 + 3z_4 − 6,
         2z_1² + z_1 + z_2² + 10z_3 + 2z_4 − 2,
         3z_1² + z_1 z_2 + 2z_2² + 2z_3 + 9z_4 − 9,
         z_1² + 3z_2² + 2z_3 + 3z_4 − 3 )ᵀ.

It admits two solutions,

z^(1) = (1, 0, 3, 0)ᵀ, z^(2) = (√6/2, 0, 0, 1/2)ᵀ.

This problem was used again by Dirkse and Ferris [14] as an example of a problem in which classical Newton solvers struggle to find a solution. This is because one of the two solutions, z^(2), has a degenerate third component, i.e. z^(2)_3 = F_3(z^(2)) = 0, and hence does not satisfy strict complementarity. Another feature of this problem is that the linear complementarity problem formed through linearization of the residual F around zero has no solution, causing difficulties for the Josephy–Newton method there [35]. This is a relatively easy problem to solve, and deflation with shifting (σ = 1) successfully finds both solutions from many initial guesses. We chose initial guess [7/10, …, 7/10]ᵀ. With no line search, semismooth Newton converged to z^(1) in 7 iterations; after deflation, semismooth Newton converged to z^(2) in 12 iterations. By contrast, without shifting (σ = 0) deflation did not identify any additional solutions.

Gould (2001)
This is a nonconvex quadratic programming problem with linear constraints suggested by N. I. M. Gould in an invited lecture to the 19th biennial conference on numerical analysis [27]. It is a quadratic minimization problem with an indefinite Hessian. The first-order Karush–Kuhn–Tucker optimality conditions yield an NCP with residual F(z), where z = [x, λ], with λ = [λ_1, λ_2] the vector of the Lagrange multipliers associated with the constraints F_3(z) ≥ 0 and F_4(z) ≥ 0 respectively. Note that in this case it is not necessary to use Lagrange multipliers to enforce x ≥ 0, as this is implicit in the NCP formulation. Deflation with shifting identified three stationary points from the same initial guess: the saddle point, the global minimum and the local minimum respectively. The KKT conditions make no distinction between minima and saddle points, and hence the solver finds both kinds of stationary points. The number of iterations required was 5, 7 and 10 respectively. As before, without shifting deflation did not successfully identify any additional solutions.

Aggarwal (1973)
This is a Nash bimatrix equilibrium problem arising in game theory. This kind of problem was first introduced by von Neumann and Morgenstern [63] and the existence of its solutions was further studied by Nash [52] and Lemke and Howson [46]. In the same paper, Lemke and Howson also presented a numerical algorithm for computing solutions to these kinds of problems. This example was introduced by Aggarwal [2] to prove that it is impossible to find all solutions of such problems using a modification of the Lemke–Howson method that had been conjectured to compute all solutions. The problem consists of finding the equilibrium points of a bimatrix (non-zero-sum, two-person) game. Let A and B be the n × n payoff matrices of players 1 and 2 respectively. Let us assume that player 1 plays the i-th pure strategy and player 2 selects the j-th pure strategy amongst the n strategies available to each. The entries of A and B, a_{i,j} and b_{i,j} respectively, correspond to the payoff received by each player. It is then possible to define a mixed strategy for a player, which consists of an n × 1 vector x such that x_i ≥ 0 and x_1 + ⋯ + x_n = 1. Denote by x and y the mixed strategies of players 1 and 2 respectively. The entries of these vectors stand for the probability of the player adopting the corresponding pure strategy. The expected payoffs of the two players are then xᵀAy and xᵀBy respectively. An equilibrium point (x*, y*) is reached when, for all mixed strategies x and y,

(x*)ᵀ A y* ≥ xᵀ A y* and (x*)ᵀ B y* ≥ (x*)ᵀ B y,

i.e. neither player can unilaterally improve their payoff. Aggarwal's counterexample admits three Nash equilibria. These equilibria are related to the solutions of an associated NCP [46]. This problem is quite difficult, and we therefore turned to continuation to aid convergence, as described below. The problem was modified to introduce an artificial parameter µ, with the original problem given by µ = 1. With deflation with shifting, three solutions z^(1), z^(2), z^(3) were found for µ = 1/1000 from the initial guess [0, …, 0]ᵀ, in 5, 24 and 26 iterations of semismooth Newton respectively. (As in the previous examples, deflation without shifting did not identify any additional solutions.) All three branches were then successfully continued to µ = 1 using 50 equispaced continuation steps and simple zero-order continuation, i.e. the solution for the previous value µ⁻ is used as initial guess for the solution at the next value µ⁺. Aggarwal observed that the conjectured scheme mentioned above could compute z^(1) and z^(3), but could not compute z^(2).

Gérard, Leclère and Philpott (2017)
Gérard et al. describe a stochastic market where the agents are risk-averse, i.e. estimate their welfare using a coherent risk measure [24]. They give an example of an incomplete market with three different equilibria, two stable and one unstable. The authors examine the convergence of the well-known PATH solver [15,19], and discover that PATH always yields the unstable equilibrium, even when initialized from many distinct initial guesses. (An alternative tâtonnement algorithm does discover all three equilibria when initialized from different initial guesses.) We therefore investigate whether deflation can assist a semismooth Newton method in discovering all three solutions from a single initial guess. Mathematically, the problem is a mixed complementarity problem, a generalization of nonlinear complementarity problems. Let R_∞ := R ∪ {−∞, +∞}. Given F : R^N → R^N, a lower bound l ∈ R^N_∞ and an upper bound u ∈ R^N_∞, the task is to find z ∈ R^N such that exactly one of the following holds for each i = 1, …, N:

z_i = l_i and F_i(z) ≥ 0, or
l_i < z_i < u_i and F_i(z) = 0, or
z_i = u_i and F_i(z) ≤ 0.

This is referred to as MCP(F, l, u). NCP(F) is a special case with the particular choice l = [0, …, 0]ᵀ and u = [∞, …, ∞]ᵀ.
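One standard way to cast MCP(F, l, u) as a semismooth rootfinding problem is the natural residual z − mid(l, u, z − F(z)), where mid denotes the componentwise median. This is a common choice from the literature, sketched here for illustration; it is not the φ_MP-based reformulation used below.

```python
# Natural residual of MCP(F, l, u): zero exactly at MCP solutions.

def mid(lo, hi, v):
    # componentwise median of lo <= hi and v
    return min(max(lo, v), hi)

def mcp_residual(z, Fz, l, u):
    return [zi - mid(li, ui, zi - fi) for zi, fi, li, ui in zip(z, Fz, l, u)]

# For the NCP special case l = 0, u = +inf, the residual reduces to min(z, F(z)):
inf = float("inf")
print(mcp_residual([0.0, 2.0, 1.0], [3.0, 0.0, -1.0], [0.0] * 3, [inf] * 3))
# -> [0.0, 0.0, -1.0], i.e. min(z_i, F_i(z)) componentwise
```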
The problem at hand is a mixed complementarity problem in the ten unknowns z = (x_0, x_11, x_12, y_1, y_2, π_1, π_2, u_4, u_5, θ_P), with F : R¹⁰ → R¹⁰ assembled from the equilibrium conditions of [24]. To demonstrate that the deflation concept is not confined to a particular semismooth reformulation, in this example we use an alternative NCP function φ_MP, which again has the property that φ_MP(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0; the semismooth reformulation of the MCP is built from φ_MP componentwise, treating the finite lower and upper bounds analogously. Deflation with shifting was applied from the initial guess z_0 = [0, …, 0]ᵀ with p = 1, and a line search algorithm was used to aid convergence (Alg. 2 of Brune et al. [10]). With these parameters, the procedure identified all three solutions; with p = 2 or without the line search, only two solutions were found. The three solutions were found with 15, 9 and 17 semismooth Newton iterations respectively; the solution found by PATH is the third. As with the previous examples, deflation without shifting did not identify any additional solutions.

Solving infinite-dimensional variational inequalities
When solving inequality-constrained infinite-dimensional problems, additional care must be taken. The main issue here is a general lack of regularity of the Lagrange multipliers. As a result, it is often impossible to derive a complementarity system (analogous to the KKT conditions for nonlinear programs) that can be reformulated as a single semismooth equation. Even in situations where the associated multiplier is regular enough to allow such a reformulation, we encounter insurmountable issues in the derivation of a function-space-based generalized Newton method. Ignoring these issues and taking a first-discretize-then-optimize approach will generally lead to mesh-dependent convergence, i.e. the number of iterations required to converge increases significantly as the mesh is refined. This is illustrated in the following example. Suppose Ω ⊂ R^n is a nonempty, open, and bounded subset and let J : H^1_0(Ω) → R be Gâteaux differentiable. Consider the model problem

minimize J(u) over u ∈ H^1_0(Ω) subject to u ≥ 0 a.e. in Ω. (38)

We denote the feasible set by K. If (38) admits a solution u, then we have

⟨J′(u), v − u⟩ ≥ 0 for all v ∈ K,

with J′(u) ∈ H^{−1}(Ω). Since K is a cone, an equivalent formulation holds:

J′(u) − λ = 0, λ ∈ K°, ⟨λ, u⟩ = 0, (39)

where K° is the polar cone to K given by

K° := {μ ∈ H^{−1}(Ω) : ⟨μ, v⟩ ≥ 0 for all v ∈ K}.

According to the Radon–Riesz theorem, λ ∈ K° is in fact a locally finite Radon measure on Ω [8, pg. 564]. Therefore, λ cannot in general be evaluated pointwise, in which case (39) cannot be reformulated as a semismooth system of equations. Suppose further that Ω is a convex polyhedron and that J′(u) = Au − f, with A a second-order linear elliptic operator with smooth coefficients and f ∈ L²(Ω). Then u ∈ H²(Ω) ∩ H^1_0(Ω) [39, chap. IV] and thus λ ∈ L²(Ω). We may then rewrite (39) as

Au − f − λ = 0, λ − (λ − u)_+ = 0, (40)

where (x)_+ := max{0, x}. Even in this ideal case, the nonsmooth superposition operator Φ(λ, u) := λ − (λ − u)_+ must be defined from L²(Ω) × H^1_0(Ω) into L²(Ω).
The natural choice for the generalized derivative of Φ is given by

G(λ, u)(δλ, δu) = δλ − χ_{{λ−u>0}} (δλ − δu),

where χ_{{λ−u>0}} denotes the characteristic function of the set {x ∈ Ω : λ(x) − u(x) > 0}. In order for this to be a Newton derivative, the following approximation property must hold:

‖Φ(λ + δλ, u + δu) − Φ(λ, u) − G(λ + δλ, u + δu)(δλ, δu)‖_{L²(Ω)} = o(‖(δλ, δu)‖).
However, this only holds true if Φ is defined from L 2+ε (Ω) × H 1 0 (Ω) → L 2 (Ω) for ε > 0 [31,61], which is not available even in this ideal case. This so-called "missing norm gap" persists for all other known NCP-functions. As a result, the infinite-dimensional problem (40) is not semismooth, which manifests itself as mesh-dependence on the discrete level [34].
An alternative mesh-independent scheme can be constructed from the Moreau–Yosida regularization of the indicator functional for the constraints with respect to the L²(Ω) topology. The key property of this scheme is that the plus function (·)_+ is only applied to u, and since u ∈ H^1_0(Ω) ↪ L^{2+ε}(Ω), a norm gap holds and this operator is semismooth.
We sketch the approach taken in our implementation; for more details, see [32,33]. Let

I(u) := ∫_Ω i_{R_+}(u(x)) dx,

where i_{R_+}(x) is the indicator function of R_+ (i.e. 0 if x ≥ 0 and +∞ otherwise). The minimization problem (38) is equivalent to the unconstrained problem

minimize J(u) + I(u) over u ∈ H^1_0(Ω).
Approximating I(u) by its L²(Ω) Moreau–Yosida regularization, we obtain a sequence of γ-dependent problems of the form

minimize J(u) + (γ/2) ‖(−u)_+‖²_{L²(Ω)} over u ∈ H^1_0(Ω),

with penalty parameter γ → ∞. The associated first-order necessary condition is

J′(u_γ) − γ (−u_γ)_+ = 0 in H^{−1}(Ω), (45)

which is semismooth for the reasons outlined above. An initial γ is chosen and u_γ computed. Once this is found, the solver continues in γ, using an analytical path-following scheme to drive the penalty parameter γ → ∞ efficiently [1,32,33]. The mesh and γ are linked; as γ → ∞, the mesh is uniformly refined to ensure balanced error estimates (cf. [30]). At every update of γ, the mesh is refined zero or more times until a prescribed relation between γ and the characteristic mesh size h, balancing the regularization and discretization errors, is satisfied. The process terminates once γ reaches a target value γ_max, which in this work is taken to be γ_max = 10⁶.
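The mechanics of a Moreau–Yosida-regularized semismooth Newton solve can be sketched on a model 1D obstacle problem. Everything below (the data f, the fixed penalty γ, the finite-difference discretization) is our own illustrative choice, not the paper's implementation; the regularized residual A u + γ min(u, 0) − f is semismooth precisely because the nonsmooth function acts on u itself.

```python
import numpy as np

# Model problem: -u'' = f on (0, 1), u(0) = u(1) = 0, subject to u >= 0,
# solved via the Moreau-Yosida residual R(u) = A u + gamma*min(u, 0) - f
# with a semismooth Newton method (illustrative sketch).

n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = 100.0 * np.sin(2.0 * np.pi * x)     # pushes u downward on half the domain
gamma = 1.0e4                           # fixed penalty for this sketch

u = np.zeros(n)
for k in range(50):
    R = A @ u + gamma * np.minimum(u, 0.0) - f
    if np.linalg.norm(R) < 1e-6:
        break
    J = A + gamma * np.diag((u < 0.0).astype(float))   # a Newton derivative
    u -= np.linalg.solve(J, R)

print(k, float(u.min()), float(u.max()))   # few iterations; violation is O(1/gamma)
```

The computed solution is inactive where f is positive and violates the bound only by O(1/γ) where the constraint is active, illustrating why γ must be driven to infinity by continuation.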

Computing multiple solutions of infinite-dimensional variational inequalities with deflation
Deflation can be combined with the Moreau-Yosida solver as follows. For the initial value of γ, a set of initial guesses is supplied. Each guess is used as a starting point for semismooth Newton applied to (45); if the guess is successful, the solution found is deflated, and the guess is attempted again. When all guesses have been exhausted, the analytical path-following strategy is applied to all solutions found, and the next value of γ is taken to be the minimum of these (the most conservative update of all solutions found). If necessary, the mesh is refined and all solutions prolonged. The solutions found for the previous step are then used as initial guesses for the next, until the process terminates with γ ≥ γ max .
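The loop just described can be sketched structurally. In the toy below, the scalar residual f_γ(u) = u² − 1 − 1/γ stands in for the regularized system (45), and the solutions found at one value of γ seed the guesses at the next; all names and parameter values are illustrative, not the paper's solver.

```python
# Structural sketch of deflated continuation: at each gamma, every guess is
# retried after deflating each solution it finds; solutions then seed the
# next gamma step. The scalar f_gamma is an illustrative stand-in.

def deflated_newton(f, df, roots, u, p=2, tol=1e-10, maxit=200):
    for _ in range(maxit):
        if any(u == r for r in roots):
            return None                    # landed exactly on a deflated root
        m, dm = 1.0, 0.0                   # m(u) = prod_i (1/|u - r_i|^p + 1)
        for r in roots:
            mi = 1.0 / abs(u - r) ** p + 1.0
            dmi = -p * (u - r) / abs(u - r) ** (p + 2)
            dm = dm * mi + m * dmi
            m = m * mi
        if abs(f(u)) < tol:
            return u
        dg = dm * f(u) + m * df(u)
        if dg == 0.0:
            return None
        u = u - (m * f(u)) / dg
    return None                            # no convergence: guess exhausted

solutions = []                             # solutions at the previous gamma
for gamma in (1.0, 10.0, 100.0):
    f = lambda u, g=gamma: u * u - 1.0 - 1.0 / g
    df = lambda u: 2.0 * u
    found = []
    for guess in (solutions or [1.0]):     # previous solutions seed this step
        while True:                        # deflate and retry the same guess
            u = deflated_newton(f, df, found, guess)
            if u is None or any(abs(u - r) < 1e-6 for r in found):
                break
            found.append(u)
    solutions = found

print(sorted(solutions))                   # two branches, near -sqrt(1.01) and sqrt(1.01)
```

Starting from the single guess 1.0 at the first γ, both branches are discovered and then tracked through the remaining continuation steps, mirroring the structure of the algorithm above.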

Zeidler (1988)
Zeidler [66, pg. 320] and Phú [54] study a long thin elastic rod under the action of a compressive load, constrained to lie in a channel of fixed width. Let s ∈ [0, L] denote the arclength of the rod, y ∈ H^1_0(0, L) denote its vertical displacement from the centerline of the channel, and θ ∈ H^1(0, L) denote the angle between the rod and the centerline of the channel. The potential energy of the system is given by

E(y, θ) = ∫_0^L [ (B/2)(θ′)² − P(1 − cos θ) + ρ g y ] ds,

where B ∈ R is the bending stiffness of the rod, P ∈ R is the compressive load applied to the right end-point, ρ ∈ R is the mass per unit length of the rod, and g ∈ R is the acceleration due to gravity. The rod is placed in a channel of width 2α, such that

|y| ≤ α

is satisfied almost everywhere. Equilibria of the system are therefore given by the local minimizers of the small-angle approximation

∫_0^L [ (B/2)(θ′)² − (P/2)θ² + ρ g y ] ds, (51)

where irrelevant constant terms in the functional have been neglected. Substituting the constraint θ = y′, we arrive at the final system

minimize_{y ∈ H²(0,L) ∩ H^1_0(0,L)} ∫_0^L [ (B/2)(y″)² − (P/2)(y′)² + ρ g y ] ds subject to |y| ≤ α a.e. in (0, L). (52)

In the absence of the inequality constraint on y, the optimality conditions for this linearized problem comprise a linear beam equation, with either a unique solution or a one-dimensional nullspace. However, in the presence of the inequality constraint |y| ≤ α, the problem remains nonlinear and can support distinct isolated solutions.
In the absence of gravity (g = 0), Phú proved that the first bifurcation of the system (51) occurs at P = Bπ²/L². In the presence of small gravity, the reflective symmetry of the system is broken and the zero solution is no longer a trivial solution, but the bifurcation point will be nearby.

[Figure 1: Solutions of (52). The dashed red lines denote the inequality constraints on the vertical displacement of the beam.]

We therefore consider the system for parameter values B = 1, g = 1, ρ = 1, L = 1, α = 0.4 and P = 10.4. This choice of P is sufficiently greater than Bπ²/L² ≈ 9.87 that it is reasonable to expect the system to support distinct solutions. The mesh-independent Moreau–Yosida solver with deflation was applied with initial guess y = 0 for γ = 10. This converged in one iteration to the first solution, Figure 1a. This is expected, as the inequality constraints are inactive at this solution and the problem is therefore equivalent to the linear beam equation. Deflation with shifting was then applied to the known solution r_y. The solver was reinitialized from the zero initial guess and converged after 6 semismooth Newton iterations to a second solution that violates the lower bound. (For this low value of γ, the bound constraints are only weakly enforced.) This solution was then deflated in the same manner and the solver was re-initialized from the zero initial guess. The procedure then converged after 14 semismooth Newton iterations to a third solution that violates the upper bound. These three solutions were then continued to γ = γ_max in 9 continuation steps, with no further solutions found. The three solutions found for γ = γ_max are shown in Figures 1a–1c. This experiment demonstrates an important property of the deflation strategy: deflation is capable of computing distinct solutions of infinite-dimensional variational inequalities whose solutions exhibit both nontrivial active sets and no activity whatsoever, from the same initial guess.

A two-dimensional beam under axial compression with obstacle constraints
In this example we consider a two-dimensional analogue of the previous problem, and compute several equilibrium configurations of a hyperelastic beam under axial compression with obstacle constraints.
In addition, box constraints are imposed on the vertical component u₂ of the displacement vector field u : Ω → R² on ∂Ω_top and ∂Ω_bottom. Let

τ_top(u) := tr_{∂Ω_top}(u₂),

where tr_Γ : H¹(Ω) → H^{1/2}(Γ) is the standard trace operator, and let τ_bottom be defined analogously. Since H^{1/2}(Γ) ↪ L²(Γ), we may impose pointwise bound constraints of the type τ_top(u) ≤ α and τ_bottom(u) ≥ −α. For a given axial compression ε, we seek a displacement that solves

min over u of ∫_Ω ψ(u) − B · u dx subject to these bound constraints,    (56)

where ψ(u) is the isotropic compressible neo-Hookean strain energy density with Young's modulus E = 10⁶ and Poisson ratio ν = 0.3, B = (0, −1000) is the body force density due to gravity, and α is the magnitude of the bound constraint enforced. The problem is discretized using piecewise linear finite elements; the coarsest grid employed has 3200 triangular elements.
The goal is to solve this problem for ε = 0.15 with α = 8 × 10⁻². To do this, continuation is employed. The Moreau-Yosida regularization of (56) is solved with fixed γ = 100 from ε = 10⁻³ to ε = 0.15 in steps of ∆ε = 10⁻³, with deflation employed at each continuation step. This initial continuation process yields seven solutions at ε = 0.15: three inactive solutions and four active solutions. The continuation procedure with mesh refinement described in section 4.2 is then applied to continue these solutions from γ = 100 to γ = 10⁶. On the coarsest grid, the LU factorization of MUMPS [4] is used to solve the linear systems arising in semismooth Newton; on finer meshes, GMRES-accelerated geometric multigrid is employed, using all levels in the hierarchy, with three iterations of Chebyshev-accelerated point-block SOR as the smoother (see Ulbrich et al. [62] for a rigorous analysis of multigrid in the Moreau-Yosida regularization context). For all solves, the full undamped semismooth Newton step is used; no line search is found to be necessary for this problem.
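The role of the regularization parameter γ in these continuation runs can be illustrated with a scalar Moreau-Yosida model (a toy sketch under illustrative assumptions, not the discretized elasticity problem): the bound y ≤ α is replaced by a penalty term γ max(0, y − α) in the residual, which is semismooth but not differentiable, and the semismooth Newton method uses an element of the generalized derivative of the max function.

```python
def semismooth_newton_my(f, df, y0, alpha, gamma, tol=1e-9, maxit=100):
    """Semismooth Newton for the Moreau-Yosida regularized residual
    f(y) + gamma * max(0, y - alpha) = 0, which approximates the
    constrained problem f(y) = 0 s.t. y <= alpha as gamma -> infinity."""
    y = y0
    for _ in range(maxit):
        G = f(y) + gamma * max(0.0, y - alpha)
        if abs(G) < tol:
            break
        # element of the generalized (Bouligand) derivative of max(0, .)
        dG = df(y) + (gamma if y > alpha else 0.0)
        y -= G / dG
    return y

# The unconstrained root y = 1 violates the bound y <= 0.5; as gamma
# grows, the regularized solutions approach the bound from outside,
# mirroring the continuation in gamma used for the beam problems.
sols = [semismooth_newton_my(lambda y: y - 1.0, lambda y: 1.0,
                             y0=0.0, alpha=0.5, gamma=g)
        for g in (1e1, 1e3, 1e6)]
```

For this piecewise linear model the iteration converges in two steps once the correct active set is identified, which is the behavior that underlies the mesh- and γ-independent iteration counts reported below.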
This procedure yields seven solutions for ε = 0.15 and γ = 10⁶. To give a sense of the work involved, we report the number of semismooth Newton iterations required for the initial discovery of each solution as ε is continued in Table 1, and the average number of semismooth Newton iterations per solve and GMRES iterations per semismooth Newton step as γ is continued in Table 2. The absolute and relative tolerances of the nonlinear solver were both set to 10⁻⁸, while the absolute and relative tolerances of the linear solver were set to 0 and 10⁻⁸ respectively. In all cases the iteration counts are modest. In particular, the results of Table 2 show that the number of semismooth Newton iterations required does not increase as γ and h are refined, while the number of GMRES-accelerated multigrid V-cycles grows very slowly.

(Table 2: Number of mesh refinements, degrees of freedom, and nonlinear and linear iteration counts required for the continuation in γ along ε = 0.15. The semismooth Newton solver exhibits γ- and mesh-independence, while the number of Krylov iterations per semismooth Newton step grows very slowly as γ and h are refined.)

The active solutions are shown in Figure 2 and the inactive solutions in Figure 3. For each active solution, we also solve (56) without the obstacle constraints; the corresponding solutions are plotted alongside to indicate the extent to which the obstacle constraints influence the solutions. As can be seen, the bound constraints significantly change the solutions, and are active on a set of positive measure on the boundary.
These results are encouraging. The function-space-based semismooth Newton method combined with analytical path-following, parameter continuation, multigrid and deflation appears very promising for constrained non-convex variational problems with multiple solutions.

Conclusion
Deflation is a useful technique for computing distinct solutions of variational inequalities with semismooth Newton methods. In particular, employing shifted deflation operators significantly improves the robustness of the approach. The main strengths of the deflation method are that it is effective, is straightforward to implement, and does not significantly increase the cost per Newton iteration.
While the method is found to be effective in numerical experiments, at present no sufficient conditions are known that guarantee convergence of the method to additional solutions. While such conditions are unlikely to be necessary, and may be difficult to verify a priori in computational practice, their availability would establish the foundations of the method and give insight into the design of appropriate deflation operators. The identification of such sufficient conditions forms an important open question and a direction for future research.