Quadratic Neural Networks for Solving Inverse Problems

Abstract In this paper we investigate the solution of inverse problems with neural network ansatz functions with generalized decision functions. The relevant observation for this work is that such functions can approximate typical test cases, such as the Shepp-Logan phantom, better than standard neural networks. Moreover, we show that the convergence analysis of numerical methods for solving inverse problems with shallow generalized neural network functions leads to more intuitive convergence conditions than the analysis for deep affine linear neural networks.


Introduction
We consider an inverse problem, which consists in solving an operator equation

F(p) = y^0, (1.1)

where F : X → Y denotes an operator mapping between function spaces (X, ∥·∥) and (Y, ∥·∥).
For the numerical solution of Equation 1.1 an appropriate set of ansatz functions P has to be selected, on which an approximate solution of Equation 1.1 is calculated, for instance a function p ∈ P ⊆ X which solves

F(p) = y, (1.2)

where y is an approximation of y^0. The history of approximating the solution of Equation 1.1 by discretization and regularization is long: for linear inverse problems this has been investigated for instance in [23]. Much later, neural network ansatz functions have been used for solving Equation 1.1; see for instance [24]. In this paper we investigate solving Equation 1.2 when P is a set of neural network functions with generalized decision functions. In order to spot the difference to classical neural network functions, we first recall their definition:

Definition 1.1 (Affine linear neural network functions) Let m ≥ n ∈ N. Column vectors in R^m and R^n are denoted by ⃗w = (w^(1), w^(2), ..., w^(m))^T and ⃗x = (x_1, x_2, ..., x_n)^T, respectively.

• Shallow affine linear neural network functions (ALNNs) are of the form

Ψ[⃗p](⃗x) = Σ_{j=1}^N α_j σ(⃗w_j^T ⃗x + θ_j) with α_j, θ_j ∈ R and ⃗w_j ∈ R^n, (1.3)

where σ denotes an activation function. The parametrization vector, containing all coefficients, is

⃗p = (α_1, ..., α_N, ⃗w_1^T, ..., ⃗w_N^T, θ_1, ..., θ_N)^T. (1.4)
• More recently, deep affine linear neural network functions (DNNs) have become popular. An (L + 2)-layer network looks as follows:

Ψ[⃗p](⃗x) = Σ_{j=1}^N α_{j,L} σ(p_{j,L}(σ(p_{j,L−1}(··· σ(p_{j,1}(⃗x)) ···)))), (1.5)

where p_{j,l}(⃗x) = ⃗w_{j,l}^T ⃗x + θ_{j,l} with α_{j,l}, θ_{j,l} ∈ R and ⃗w_{j,l} ∈ R^n for all l = 1, ..., L, and where σ and the affine maps act componentwise on the intermediate layers. The parametrization vector again collects all coefficients,

⃗p = (α_{j,l}, ⃗w_{j,l}, θ_{j,l})_{j=1,...,N; l=1,...,L}. (1.6)
We demonstrate by example below (see Section 3.2) that the convergence analysis of an iterative Gauss-Newton method for solving a linear inverse problem restricted to a set of deep neural network ansatz functions is involved (see Example 3.4), while the analysis with shallow neural network ansatz functions leads to intuitive conditions (see also [28]). Moreover, it is well-known in regularization theory that the quality of approximation of the solution of Equation 1.2 depends on the approximation qualities of the set P: approximation of functions with shallow affine linear neural networks is a classical topic in machine learning and in approximation theory, see for instance [26,1,5,14,18,21,32,31]. The central result concerns the universal approximation property. Later on, the universal approximation property has been established for different classes of neural networks: examples are dropout neural networks (see [34,20]), convolutional neural networks (CNNs) (see for example [36,37]), recurrent neural networks (RNNs) (see [27,13]), networks with random nodes (see [35]), with random weights and biases (see [25,15]), and with fixed neural network topology (see [11]), to name but a few.
In this paper we follow up on this topic and analyze neural network functions with decision functions which are higher order polynomials, as specified now:

Definition 1.2 (Shallow generalized neural network functions) Let n, m, N ∈ N. Moreover, let f_j : R^n → R^m, j = 1, ..., N be vector-valued functions. Neural network functions associated to {f_j : j = 1, ..., N} are defined by

Ψ[⃗p](⃗x) = Σ_{j=1}^N α_j σ(⃗w_j^T f_j(⃗x) + θ_j) with α_j, θ_j ∈ R and ⃗w_j ∈ R^m. (1.7)

Again we denote the parametrization vector, containing all coefficients, with ⃗p, and the set of all such functions with P. We call {⃗x ↦ ⃗w_j^T f_j(⃗x) + θ_j : j = 1, ..., N} the set of decision functions associated to Equation 1.7. The composition of σ with a decision function is called a neuron.
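To make Definition 1.2 concrete, here is a minimal numerical sketch of such a network. For brevity all f_j are taken equal to one feature map f; the map f, the tanh activation and all parameter values are illustrative choices, not taken from the paper:

```python
import numpy as np

# Minimal sketch of Equation 1.7: a shallow network whose neurons act on a
# generalized (here quadratic) feature map f instead of the raw input.
def generalized_network(x, f, alphas, ws, thetas, act=np.tanh):
    """Evaluate  sum_j alphas[j] * act(ws[j] . f(x) + thetas[j])."""
    return sum(a * act(w @ f(x) + t) for a, w, t in zip(alphas, ws, thetas))

n, m, N = 2, 4, 3
rng = np.random.default_rng(0)
f = lambda x: np.concatenate([x, x**2])   # decision features; injective on [0,1]^n
alphas = rng.normal(size=N)
ws = rng.normal(size=(N, m))              # note: w_j lives in R^m, not R^n
thetas = rng.normal(size=N)
print(generalized_network(np.array([0.3, 0.7]), f, alphas, ws, thetas))
```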
We discuss approximation properties of generalized neural networks (see Theorem 2.5), in particular of neural network functions whose decision functions are at most quadratic polynomials in ⃗x. This idea is not completely new: in [33] neural networks with parabolic decision functions have been applied in a number of applications. In [10], the authors proposed neural networks with radial decision functions and proved a universal approximation result for ReLU-activated deep quadratic networks.
Outline of this paper. The motivation for this paper is to solve Equation 1.2 on a set of neural network functions P with iterative algorithms, such as a Gauss-Newton method. Good approximation properties for functions of interest (such as the Shepp-Logan phantom) are provided by quadratic decision functions, which have been suggested already in [21]. We concentrate on shallow neural networks because the resulting analysis of iterative methods for solving inverse problems is intuitive (see Section 3.2). We apply different approximation theorems (such as the universal approximation theorem of [5]) to cover all situations from Definition 2.1 below. Moreover, by constructing wavelet frames based on quadratic decision functions, we give an explicit convergence rate for shallow radial network approximations; see Section 4, with main result Theorem 4.6. In Appendix A the important conditions of the approximation to the identity (AtI) from [7], used for proving the convergence rates in Section 4, are verified.

Examples of networks with generalized decision functions
The versatility of the network defined in Equation 1.7 lies in the freedom to choose the functions f_j, j = 1, ..., N. We give a few examples.
Definition 2.1 (Neural networks with generalized decision functions) We split them into three categories:

General quadratic neural networks (GQNN): Let A_j ∈ R^{n×n}, ⃗w_j ∈ R^n, θ_j ∈ R and f_j(⃗x) = (⃗x^T, ⃗x^T A_j ⃗x)^T; that is, f_j is a graph of a quadratic function. Then Equation 1.7 reads as follows:

Ψ[⃗p](⃗x) = Σ_{j=1}^N α_j σ(⃗x^T A_j ⃗x + ⃗w_j^T ⃗x + θ_j). (2.1)

⃗p denotes the parametrization of p:

⃗p = (α_j, A_j, ⃗w_j, θ_j)_{j=1,...,N}. (2.2)

Note that in Equation 2.1 all entries of the matrices A_j are free parameters.
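As a quick sanity check of the relation between Equation 2.1 and Equation 1.3, the following sketch (with arbitrary made-up parameters) evaluates a GQNN and verifies that it collapses to a shallow ALNN when all matrices A_j vanish, which is case (i) below:

```python
import numpy as np

sigma = np.tanh
rng = np.random.default_rng(1)
n, N = 2, 4

def gqnn(x, alphas, As, ws, thetas):
    # General quadratic network: sum_j alpha_j * sigma(x^T A_j x + w_j^T x + theta_j)
    return sum(a * sigma(x @ A @ x + w @ x + t)
               for a, A, w, t in zip(alphas, As, ws, thetas))

alphas, ws, thetas = rng.normal(size=N), rng.normal(size=(N, n)), rng.normal(size=N)
x = rng.normal(size=n)

# With A_j = 0 the quadratic term vanishes and we recover the ALNN of Equation 1.3.
As_zero = [np.zeros((n, n))] * N
alnn = sum(a * sigma(w @ x + t) for a, w, t in zip(alphas, ws, thetas))
assert np.isclose(gqnn(x, alphas, As_zero, ws, thetas), alnn)
```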
Constrained quadratic neural networks (CQNN): here the matrices A_j are constrained.

(i) Let A_j = 0 for all j = 1, ..., N. This set of CQNNs corresponds with the ALNNs defined in Equation 1.3.
(ii) Let A_j ∈ R^{n×n}, j = 1, ..., N be chosen and fixed. We denote by MCNN (matrix constrained neural networks) the family of functions

Ψ[⃗p](⃗x) = Σ_{j=1}^N α_j σ(ξ_j ⃗x^T A_j ⃗x + ⃗w_j^T ⃗x + θ_j), (2.4)

which is parametrized by the vector ⃗p = (α_j, ξ_j, ⃗w_j, θ_j)_{j=1,...,N}. In particular, choosing A_j = I we get:

Ψ[⃗p](⃗x) = Σ_{j=1}^N α_j σ(ξ_j ∥⃗x∥² + ⃗w_j^T ⃗x + θ_j). (2.5)

Radial quadratic neural networks (RQNNs): For ξ_j ≠ 0 the argument in Equation 2.5 rewrites to

ν_j(⃗x) := ξ_j ∥⃗x + ⃗w_j/(2ξ_j)∥² + κ_j with κ_j := θ_j − ∥⃗w_j∥²/(4ξ_j). (2.6)

We call the set of functions from Equation 2.5 which satisfy

ξ_j ≥ 0 and κ_j ≤ 0 for all j = 1, ..., N (2.7)

RQNNs, radial neural network functions, because the level sets of ν_j are circles. These are radial basis functions (see for instance [3] for a general overview).
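The rewriting in Equation 2.6 is just completing the square; the following snippet (with arbitrary parameter values) verifies it numerically and computes the center and offset of the resulting radial neuron:

```python
import numpy as np

# Radial quadratic neuron argument: nu(x) = xi * ||x||^2 + w^T x + theta.
# Completing the square gives nu(x) = xi * ||x - y||^2 + kappa with
# y = -w / (2 xi) and kappa = theta - ||w||^2 / (4 xi): spherical level sets.
rng = np.random.default_rng(2)
n = 3
xi, w, theta = 0.8, rng.normal(size=n), -1.2
y = -w / (2 * xi)
kappa = theta - w @ w / (4 * xi)

x = rng.normal(size=n)
nu_direct = xi * x @ x + w @ x + theta
nu_radial = xi * np.sum((x - y) ** 2) + kappa
assert np.isclose(nu_direct, nu_radial)
print("center:", y, "offset kappa:", kappa)
```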
Sign based quadratic neural networks (SBQNN): Here the components of ⃗x enter via signed squares, f_j(⃗x) = (sign(x_1) x_1², ..., sign(x_n) x_n²)^T, and we obtain the family of functions

Ψ[⃗p](⃗x) = Σ_{j=1}^N α_j σ(Σ_{i=1}^n w_j^(i) sign(x_i) x_i² + θ_j). (2.10)

We call these functions signed squared neural networks. Note that here s ↦ sign(s) s² = s|s| is strictly monotone and hence injective.

Cubic neural networks (CUNN): Choosing f_j(⃗x) = (x_1³, ..., x_n³)^T we obtain the family of functions

Ψ[⃗p](⃗x) = Σ_{j=1}^N α_j σ(Σ_{i=1}^n w_j^(i) x_i³ + θ_j). (2.12)

We call these functions cubic neural networks.
Remark 1 (i) It is obvious that general quadratic neural network functions and matrix constrained neural networks are more versatile than affine linear networks, and thus they satisfy the universal approximation property (see [5]).
(ii) The constraint Equation 2.7 does not allow a straightforward use of the universal approximation theorem. In fact we prove an approximation result indirectly via a convergence rates result (see Section 4).
(iii) Sign based quadratic and cubic neural networks satisfy the universal approximation property, which is proven below in Corollary 2.6 by reducing it to the classical result from [5].
In order to prove the universal approximation property of SBQNNs and CUNNs we first review the universal approximation theorem as formulated by [5]. It is based on discriminatory properties of the function σ.

Definition 2.2 (Discriminatory function)
A function σ : R → R is called discriminatory (see [5]) if every signed regular Borel measure µ on [0,1]^n which satisfies

∫_{[0,1]^n} σ(⃗w^T ⃗x + θ) dµ(⃗x) = 0 for all ⃗w ∈ R^n and θ ∈ R

is the zero measure.

Example 2.3 Note that every non-polynomial function is discriminatory (this follows from the results in [18]). Therefore the choices of activation functions in Definition 1.1 are discriminatory for the Lebesgue measure.
With these basic concepts we are able to recall Cybenko's universal approximation result.

Theorem 2.4 ([5]) Let σ : R → R be a continuous discriminatory function. Then, for every function g ∈ C([0,1]^n) and every ϵ > 0, there exists a function

G(⃗x) = Σ_{j=1}^N α_j σ(⃗w_j^T ⃗x + θ_j) with α_j, θ_j ∈ R and ⃗w_j ∈ R^n

such that |G(⃗x) − g(⃗x)| < ϵ for all ⃗x ∈ [0,1]^n.

In the following we formulate and prove a modification of Cybenko's universal approximation result [5] for shallow generalized neural networks as introduced in Definition 1.2.
Theorem 2.5 Let σ : R → R be a continuous discriminatory function and let f := f_1 = ··· = f_N : [0,1]^n → R^m be continuous and injective. Then for every g ∈ C([0,1]^n) and every ϵ > 0 there exists some function p of the form Equation 1.7 such that

|p(⃗x) − g(⃗x)| < ϵ for all ⃗x ∈ [0,1]^n.

Proof We begin the proof by noting that since ⃗x ↦ f(⃗x) is injective, the inverse function on the range of f is well-defined, and we write f^{−1} : f([0,1]^n) → [0,1]^n. That f^{−1} is continuous relies on the fact that the domain [0,1]^n of f is compact, see for instance [9, Chapter XI, Theorem 2.1]. Then, applying the Tietze-Urysohn-Brouwer extension theorem (see [16]) to the continuous function g ∘ f^{−1} : f([0,1]^n) → R, we see that this function can be extended continuously to R^m. This extension will be denoted by g* : R^m → R.
We apply Theorem 2.4 (on a cube containing f([0,1]^n)) to conclude that there exist α_j, θ_j ∈ R and ⃗w_j ∈ R^m, j = 1, ..., N, such that

|Σ_{j=1}^N α_j σ(⃗w_j^T ⃗z + θ_j) − g*(⃗z)| < ϵ for all ⃗z ∈ f([0,1]^n).

Then, because f maps into R^m, we conclude in particular that

|Σ_{j=1}^N α_j σ(⃗w_j^T f(⃗x) + θ_j) − g(⃗x)| < ϵ for all ⃗x ∈ [0,1]^n. □

It is obvious that the full variability of ⃗w is the key to bring our proof and the universal approximation theorem in context; that is, the argument applies if ⃗w_j and θ_j, j = 1, ..., N, are allowed to vary over all of R^m and R, respectively. RQNNs are constrained to θ_j < ∥⃗w_j∥²/(4ξ_j) in Equation 2.7, and thus Theorem 2.5 does not apply to them. Interestingly, Theorem 4.6 does apply and allows to approximate functions in L¹(R^n) (even with rates).

Corollary 2.6 (Universal approximation properties of SBQNNs and CUNNs) Let the discriminatory function σ : R → R be Lipschitz continuous. Then all families of neural network functions from Definition 2.1 satisfy the universal approximation property on [0,1]^n.

Proof The proof follows from Theorem 2.5, noting that all our functions f_j, j = 1, ..., N, defined in Definition 2.1 are injective. □

Motivation
3.1. Motivation 1: The Shepp-Logan phantom. Almost all tomographic algorithms are tested with the Shepp-Logan phantom (see [30]), a piecewise constant superposition of ten ellipses. It is obvious that it can be exactly represented by a GQNN with 10 terms (one per ellipse), while we conjecture that it can be exactly represented neither by an ALNN nor by a DNN with a finite number of coefficients. This observation extends immediately to all functions that contain "localized" features. In this sense, sparse approximations with affine linear neural networks would be more costly in practice.
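The following toy sketch illustrates the representation principle: with a Heaviside-type activation, each GQNN neuron is the indicator of an ellipse, so a piecewise constant image made of finitely many ellipses is reproduced exactly. The two ellipses below are invented for illustration and are not the actual Shepp-Logan parameters:

```python
import numpy as np

# Toy illustration only: each GQNN neuron alpha * H(theta - (x-c)^T A (x-c)) with
# a Heaviside-type activation H is the indicator of an ellipse, so a piecewise
# constant image built from finitely many ellipses is represented exactly.
H = lambda t: (t >= 0).astype(float)

ellipses = [  # (alpha, center c, diagonal of A = (1/a^2, 1/b^2)); made-up values
    (1.0,  np.array([0.0, 0.0]),  np.array([1 / 0.7**2, 1 / 0.9**2])),
    (-0.8, np.array([0.0, -0.1]), np.array([1 / 0.6**2, 1 / 0.8**2])),
]

xx, yy = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
img = np.zeros_like(xx)
for alpha, c, d in ellipses:
    q = d[0] * (xx - c[0])**2 + d[1] * (yy - c[1])**2   # (x-c)^T A (x-c)
    img += alpha * H(1.0 - q)                           # one neuron per ellipse
print(sorted(set(np.round(img.ravel(), 6))))            # few constant gray values
```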

3.2. Motivation 2: The Gauss-Newton iteration. Let us assume that F : X → Y is a linear operator. We consider the solution p of Equation 1.2 to be an element of an ALNN or DNN, as defined in Definition 1.1. Therefore p can be parametrized by a vector ⃗p as in Equation 1.4 (for shallow linear neural networks) or Equation 1.6 (for deep neural networks). For GQNNs the adequate parametrization can be read off from Definition 2.1. In other words, we can write every searched for function p via an operator Ψ which maps a parameter vector ⃗p to a function Ψ[⃗p] ∈ P. Therefore, we aim for solving the nonlinear operator equation

N(⃗p) := F(Ψ[⃗p]) = y. (3.1)

For ALNNs we have proven in [28] that the Gauss-Newton method (see Equation 3.7 below) is locally, quadratically convergent under conditions which guarantee that during the iteration the gradients of Ψ do not degenerate, i.e., that P is a finite-dimensional manifold. Let us now calculate derivatives of the radial neural network operators Ψ (see Equation 2.5), which are the basis to prove that also the Gauss-Newton method with RQNNs is convergent:

Example 3.1 Let σ : R → R be a continuous activation function whose derivatives up to order 2 are uniformly bounded, i.e., in formulas

∥σ^(i)∥_{L^∞(R)} < ∞ for i = 0, 1, 2, (3.2)

such as the sigmoid function. Then the derivatives of a radial neural network (RQNN) Ψ as in Equation 2.5 with respect to the coefficients of ⃗p are given by the following formulas, where ν_s is defined in Equation 2.6:

• Derivative with respect to α_s, s = 1, ..., N:

∂Ψ/∂α_s [⃗p](⃗x) = σ(ν_s(⃗x)). (3.3)

• Derivative with respect to w_s^(t), where s = 1, ..., N and t = 1, ..., n:

∂Ψ/∂w_s^(t) [⃗p](⃗x) = α_s σ′(ν_s(⃗x)) x_t. (3.4)

• Derivative with respect to θ_s, where s = 1, ..., N:

∂Ψ/∂θ_s [⃗p](⃗x) = α_s σ′(ν_s(⃗x)). (3.5)

• Derivative with respect to ξ_s, where s = 1, ..., N:

∂Ψ/∂ξ_s [⃗p](⃗x) = α_s σ′(ν_s(⃗x)) ∥⃗x∥². (3.6)

Note that all the derivatives above are functions of ⃗x. For formulating a convergence result for the Gauss-Newton method (see Equation 3.7 below) we need to specify the following assumptions, which were postulated first in [28] to prove local convergence of a Gauss-Newton method on a set of shallow affine linear network functions.
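The formulas of Example 3.1 can be checked against finite differences; the sketch below does this for a single radial neuron with a sigmoid activation (all parameter values are arbitrary):

```python
import numpy as np

# Finite-difference check of Example 3.1 for a single radial neuron
# alpha * sigma(xi ||x||^2 + w^T x + theta); parameter values are arbitrary.
sigma = lambda t: 1 / (1 + np.exp(-t))
dsigma = lambda t: sigma(t) * (1 - sigma(t))

x = np.array([0.4, -0.3])
alpha, xi, w, theta = 1.5, 0.7, np.array([0.2, -0.5]), 0.1
nu = lambda xi, w, theta: xi * x @ x + w @ x + theta
psi = lambda alpha, xi, w, theta: alpha * sigma(nu(xi, w, theta))

# Analytic derivatives, Equations 3.3 - 3.6:
d_alpha = sigma(nu(xi, w, theta))                     # d/d alpha
d_w1    = alpha * dsigma(nu(xi, w, theta)) * x[0]     # d/d w^(1)
d_theta = alpha * dsigma(nu(xi, w, theta))            # d/d theta
d_xi    = alpha * dsigma(nu(xi, w, theta)) * (x @ x)  # d/d xi

h, e1 = 1e-6, np.array([1e-6, 0.0])
assert np.isclose(d_alpha, (psi(alpha + h, xi, w, theta) - psi(alpha, xi, w, theta)) / h, atol=1e-5)
assert np.isclose(d_w1,    (psi(alpha, xi, w + e1, theta) - psi(alpha, xi, w, theta)) / h, atol=1e-5)
assert np.isclose(d_theta, (psi(alpha, xi, w, theta + h) - psi(alpha, xi, w, theta)) / h, atol=1e-5)
assert np.isclose(d_xi,    (psi(alpha, xi + h, w, theta) - psi(alpha, xi, w, theta)) / h, atol=1e-5)
print("all derivative formulas confirmed")
```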
Here we verify convergence of the Gauss-Newton method on a set of RQNNs, as defined in Equation 2.5. The proof is completely analogous to that in [28].
Assumption 3.2

• Let F : X → Y be a linear, bounded operator with trivial nullspace and dense range.

• Let Ψ be a shallow RQNN generated by a strictly monotonic activation function σ which satisfies Equation 3.2, like a sigmoid function.

• All derivatives in Equation 3.3 - Equation 3.6 are locally linearly independent functions.

Now, we recall a local convergence result:

Theorem 3.3 (Local convergence of the Gauss-Newton method with RQNNs) Let N, as in Equation 3.1, be the composition of a linear operator F and the RQNN network operator Ψ as defined in Equation 2.5, which satisfy Assumption 3.2. Moreover, let ⃗p_0 ∈ D(Ψ), which is open, be the starting point of the Gauss-Newton iteration

⃗p_{k+1} = ⃗p_k − N′(⃗p_k)† (N(⃗p_k) − y), k ∈ N_0, (3.7)

where N′(⃗p_k)† denotes the Moore-Penrose inverse of N′(⃗p_k). Moreover, let ⃗p† ∈ D(Ψ) be a solution of Equation 3.1, i.e.,

N(⃗p†) = y. (3.8)

Then the Gauss-Newton iterations are locally, that is if ⃗p_0 is sufficiently close to ⃗p†, quadratically converging.
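A minimal numerical sketch of the iteration Equation 3.7 follows, for a toy one-dimensional problem in which F is simply the identity matrix on a sampling grid and Ψ is a single RQNN neuron. All data are made up; the example only illustrates the local convergence behavior stated in Theorem 3.3:

```python
import numpy as np

# Sketch of the Gauss-Newton iteration (Equation 3.7) for N(p) = F(Psi[p]) = y.
sigma = lambda t: 1 / (1 + np.exp(-t))
dsigma = lambda t: sigma(t) * (1 - sigma(t))

xs = np.linspace(0, 1, 50)                  # sampling of [0, 1]
F = np.eye(50)                              # toy linear forward operator

def Psi(p):                                 # alpha * sigma(xi x^2 + w x + theta)
    alpha, xi, w, theta = p
    return alpha * sigma(xi * xs**2 + w * xs + theta)

def dPsi(p):                                # Jacobian; columns = Equations 3.3 - 3.6
    alpha, xi, w, theta = p
    nu = xi * xs**2 + w * xs + theta
    ds = dsigma(nu)
    return np.column_stack([sigma(nu), alpha * ds * xs**2, alpha * ds * xs, alpha * ds])

p_true = np.array([1.2, -2.0, 1.0, 0.3])
y = F @ Psi(p_true)                         # attainable data, cf. Equation 3.8
p = p_true + 0.05                           # start close to the solution
for _ in range(10):
    residual = F @ Psi(p) - y
    p = p - np.linalg.pinv(F @ dPsi(p)) @ residual   # Moore-Penrose step
print(np.linalg.norm(p - p_true))           # should be ~0 (local convergence)
```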

Remark 2
The essential condition in Theorem 3.3 is that all derivatives of Ψ with respect to ⃗p are linearly independent, which is a nontrivial research question. In [28] we studied convergence of the Gauss-Newton method on a set of shallow affine linear neural networks, where the convergence result required linear independence of the derivatives specified in Equation 3.3 - Equation 3.5.
Here, for RQNNs, in addition, linear independence of the second order moment function Equation 3.6 needs to hold. Even linear independence of neural network functions (and their derivatives) is still a challenging research topic (see [17]).
Under our assumptions the ill-posedness of F does not affect convergence of the Gauss-Newton method: the generalized inverse completely annihilates F in the iteration. The catch, however, is that the data has to be attained in a finite-dimensional space spanned by neurons (that is, Equation 3.8 holds). This assumption is in fact restrictive: numerical tests show that convergence of Gauss-Newton methods becomes arbitrarily slow if it is violated. Here the ill-posedness of F enters. The proof of Theorem 3.3 is based on the affine covariant condition (see for instance [8]).
In the following we show the complexity of the convergence conditions of a Gauss-Newton method in the case when deep affine linear neural network functions are used for the parametrization.

Example 3.4 (Convergence conditions of deep neural networks)
We restrict our attention to the simple case n = 1. We consider a 4-layer DNN (consisting of two internal layers) with σ_1 = σ_2 = σ, which reads as follows:

Ψ[⃗p](x) = Σ_{j=1}^{N_2} α_j σ(Σ_{i=1}^{N_1} w_{j,i,2} σ(p_{i,1}(x)) + θ_{j,2}). (3.9)

Now we calculate, by the chain rule, ∂Ψ/∂w_{1,1}[⃗p](x). For this purpose we abbreviate the inner decision functions p_{i,1}(x) := w_{i,1} x + θ_{i,1} and obtain

∂Ψ/∂w_{1,1}[⃗p](x) = Σ_{j=1}^{N_2} α_j σ′(Σ_{i=1}^{N_1} w_{j,i,2} σ(p_{i,1}(x)) + θ_{j,2}) w_{j,1,2} σ′(p_{1,1}(x)) x. (3.10)
The complicating issue in a potential convergence analysis concerns the last identity, Equation 3.10, where evaluations of σ′ at different arguments appear simultaneously, which makes it complicated to analyze and interpret linear independence of the derivatives of Ψ with respect to the single entries of ⃗p. Note that for proving convergence of the Gauss-Newton method, according to Equation 3.10, we need to verify linear independence of the functions

x ↦ σ′(Σ_{i=1}^{N_1} w_{j,i,2} σ(p_{i,1}(x)) + θ_{j,2}) σ′(p_{1,1}(x)) x, j = 1, ..., N_2,

together with the analogous derivatives with respect to the remaining entries of ⃗p (see the numerical sketch below). In other words, the potential manifold of deep neural networks is extremely complex.
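The linear independence question can at least be probed numerically: the following sketch samples the derivative functions from Equation 3.10 for a toy two-hidden-layer network with random weights and inspects the rank of their Gram matrix. This is an illustration only, not a substitute for an analytical verification:

```python
import numpy as np

# Numerical probe of the linear-independence issue in Example 3.4.
sigma, dsigma = np.tanh, lambda t: 1 - np.tanh(t)**2

rng = np.random.default_rng(3)
N1, N2 = 3, 3
w1, t1 = rng.normal(size=N1), rng.normal(size=N1)         # first internal layer
W2, t2 = rng.normal(size=(N2, N1)), rng.normal(size=N2)   # second internal layer
alpha = rng.normal(size=N2)

x = np.linspace(-2, 2, 200)
inner = sigma(np.outer(w1, x) + t1[:, None])               # (N1, len(x))
outer_arg = W2 @ inner + t2[:, None]                       # (N2, len(x))

# dPsi/dw_{i,1}: sigma' evaluated at two *different* arguments is multiplied.
derivs = np.stack([
    (alpha[:, None] * dsigma(outer_arg) * W2[:, [i]]).sum(axis=0)
    * dsigma(w1[i] * x + t1[i]) * x
    for i in range(N1)
])
print(np.linalg.matrix_rank(derivs @ derivs.T))            # full rank for this draw
```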
The conclusion of this section is that the analysis of iterative regularization methods, like the Gauss-Newton method, gets significantly more complicated if deep neural networks are used as ansatz functions. In contrast, using neural networks with higher order neurons (such as radial ones) results in transparent moment conditions. So the research question discussed in the following is whether shallow higher order neural networks have approximation properties similar to those of deep neural network functions, which would reveal clear benefits of such networks for analyzing iterative methods for solving ill-posed problems.

Convergence rates for universal approximation of RQNNs
In the following we prove convergence rates of RQNNs (as defined in Equation 2.5) in the L¹-norm. To be precise, we specify a subclass of RQNNs for which we prove convergence rates results. This is a much finer result than the standard universal approximation result, Theorem 2.5, since it operates on a subclass and provides convergence rates. However, it is the only approximation result so far which we can provide. We recall that the constraints ξ_j ≠ 0 and κ_j/ξ_j < 0 (see Equation 2.7) induce that the level sets of the neurons are compact (circles) and not unbounded, as they are for shallow affine linear neurons. Therefore we require a different analysis (and different spaces) than in the most advanced and optimal convergence rates results for affine linear neural networks, as for instance in [32,31]. In fact the analysis is closer to an analysis of compact wavelets.
The convergence rate will be expressed in the following norm:

Definition 4.1 L¹ denotes the space of square integrable functions on [0,1]^n which satisfy (see [2, (1.10)])

∥f∥_{L¹} := inf { Σ_{g∈D} |c_g| : f = Σ_{g∈D} c_g g } < ∞.

Here the c_g are the coefficients of a wavelet expansion and D is a countable, general set of wavelet functions. A discrete wavelet basis (see, for example, [6,12,19,4]) is a (not necessarily orthonormal) basis of the space of square integrable functions, where the basis elements are given by dilating and translating a mother wavelet function.
Remark 3 Notice that the notation L¹ does not refer to the common L¹-function space of absolutely integrable functions and depends on the choice of the wavelet system. For more properties and details on this space see [29, Remark 3.11].

Definition 4.2 (Circular scaling function and wavelets) Let r > 0 be a fixed constant and let σ be a discriminatory function as defined in Definition 2.2 such that

φ(⃗x) := C_n σ(r² − ∥⃗x∥²) (4.1)

is integrable, where C_n is a normalizing constant such that ∫_{R^n} φ(⃗x) d⃗x = 1. Then we define for k ∈ Z the radial scaling functions and wavelets

S_k^C(⃗x, ⃗y) := 2^k φ(2^{k/n}(⃗x − ⃗y)) and ψ_k^C(⃗x, ⃗y) := S_k^C(⃗x, ⃗y) − S_{k−1}^C(⃗x, ⃗y). (4.2)

Often ⃗y ∈ R^n is considered a parameter and we write synonymously

S_{k,⃗y}^C(⃗x) := S_k^C(⃗x, ⃗y) and ψ_{k,⃗y}^C(⃗x) := ψ_k^C(⃗x, ⃗y). (4.3)

In particular, this notation means that S_{k,⃗y}^C and ψ_{k,⃗y}^C are considered solely functions of the variable ⃗x.
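The construction can be visualized numerically. The sketch below uses the error-function sigmoid σ(t) = (1 + erf(t))/2 in dimension n = 1 and checks that the scaling functions integrate to one and the wavelets have a vanishing integral; the concrete dilation convention follows the reconstruction of Equation 4.2 above and should be read as illustrative:

```python
import numpy as np
from scipy.special import erf

# Numerical look at Definition 4.2 for n = 1, r = 1, sigma(t) = (1 + erf(t))/2.
n, r = 1, 1.0
sigma = lambda t: 0.5 * (1 + erf(t))

x = np.linspace(-20, 20, 200001)
phi_un = sigma(r**2 - x**2)               # localized: sigma decays fast at -infinity
C = 1 / np.trapz(phi_un, x)               # normalizing constant C_n
phi = lambda u: C * sigma(r**2 - u**2)

S = lambda k, y: 2.0**k * phi(2.0**(k / n) * (x - y))
psi = lambda k, y: S(k, y) - S(k - 1, y)  # circular wavelet

for k in (0, 1, 2):
    print(k,
          np.trapz(S(k, 0.0), x),         # ~1: AtI normalization, item (iii)
          np.trapz(psi(k, 0.0), x))       # ~0: vanishing integral of the wavelet
```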
We consider the following subset of RQNNs: the functions S_{k,⃗y}^C and ψ_{k,⃗y}^C from Definition 4.2 with k ∈ Z and ⃗y ranging over a countable dyadic grid {⃗y_{k,⃗j} : ⃗j ∈ Z^n}. This is different to standard universal approximation theorems, where an uncountable number of displacements is considered. Moreover, we define the discrete wavelet space

W_d^C := {ψ_{k,⃗y_{k,⃗j}}^C : k ∈ Z, ⃗j ∈ Z^n}.

According to Theorem A.3, in order to prove the following approximation result we only require to verify that the kernels (S_k^C)_{k∈Z} satisfy the conditions for a symmetric AtI and the double Lipschitz condition (see Definition A.1), which originate from [7, Def. 3.4].

Corollary 4.3 W_d^C is a frame, and for every function f ∈ L¹(R^n) there exists a linear combination f_N of N elements of W_d^C satisfying the estimate Equation A.14.

For proving the approximation to the identity (AtI) property of the functions S_k^C, k ∈ Z, we use the following basic inequality.

Lemma 4.4 Let h : R^n → R be a twice differentiable function which can be expressed in the following way:

h(⃗x) = h_s(∥⃗x∥²) with h_s : [0, ∞) → R twice differentiable.

Then the spectral norm of the Hessian of h can be estimated as follows:

∥∇²h(⃗x)∥₂ ≤ max{ 2|h_s′(∥⃗x∥²)|, |4∥⃗x∥² h_s″(∥⃗x∥²) + 2 h_s′(∥⃗x∥²)| }. (4.7)

Proof Since ∇²h(⃗x) is a symmetric matrix, its operator norm is equal to its spectral radius, namely the largest absolute value of an eigenvalue. By routine calculation we see that

∇²h(⃗x) = 2 h_s′(∥⃗x∥²) I + 4 h_s″(∥⃗x∥²) C with C := ⃗x ⃗x^T.

Moreover, C = ⃗x ⃗x^T is a rank one matrix, and thus its spectral values are 0, with multiplicity n − 1, and ∥⃗x∥². This in turn shows that the eigenvalues of the Hessian are 2h_s′(∥⃗x∥²) (with multiplicity n − 1) and 4∥⃗x∥² h_s″(∥⃗x∥²) + 2h_s′(∥⃗x∥²), which proves Equation 4.7. □
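The eigenvalue computation in the proof is easy to confirm numerically, for example for h_s(s) = e^{−s} (an arbitrary test choice):

```python
import numpy as np

# Check of Lemma 4.4 for h(x) = h_s(||x||^2) with h_s(s) = exp(-s): the Hessian
# 2 h_s' I + 4 h_s'' x x^T has eigenvalues 2 h_s' (multiplicity n - 1) and
# 4 ||x||^2 h_s'' + 2 h_s'.
n = 4
rng = np.random.default_rng(4)
x = rng.normal(size=n)
s = x @ x
hs_p, hs_pp = -np.exp(-s), np.exp(-s)        # h_s'(s), h_s''(s)

Hess = 2 * hs_p * np.eye(n) + 4 * hs_pp * np.outer(x, x)
eigs = np.sort(np.linalg.eigvalsh(Hess))
predicted = np.sort(np.r_[np.full(n - 1, 2 * hs_p), 4 * s * hs_pp + 2 * hs_p])
assert np.allclose(eigs, predicted)
print(np.linalg.norm(Hess, 2), np.abs(predicted).max())   # spectral norms agree
```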
In the following lemma we prove that the kernels (S_k^C)_{k∈Z} are an AtI.

Lemma 4.5 Let r > 0 be fixed. Suppose that the activation function σ : R → R is monotonically increasing, has bounded derivatives up to order 2, and satisfies for the i-th derivative (i = 0, 1, 2) the decay condition

|σ^(i)(t)| ≤ C_σ (1 + |t|^n)^{−1−(2i+1)/n} for all t ≤ 0. (4.8)

Then the kernels (S_k^C)_{k∈Z} as defined in Equation 4.2 form an AtI as defined in Definition A.1 which also satisfies the double Lipschitz condition Equation A.4.
Proof We verify the three conditions from Definition A.1 as well as Equation A.4.

• Verification of item (i) in Definition A.1: Equation 4.1 and Equation 4.8 imply the size estimate of item (i), which therefore holds with ϵ = 1/n, C_ρ = 1 and C = C_n C_σ.

• Verification of item (ii) in Definition A.1 with C_ρ = 1 and C_A = 2^{−n}: Because σ is monotonically increasing, it follows from Equation 4.1, from S_0^C(⃗x, ⃗y) = φ(⃗x − ⃗y) (see Equation 4.2) and from Definition 4.2, together with Equation 4.9, that the scaling functions are suitably bounded. From the definition of S_k^C(⃗x, ⃗y), the mean value theorem, Equation 4.11 and Equation A.6, and noting that (1 − 2^{−n})/2^{−n} ≥ 1, the regularity estimate of item (ii) follows.

• Verification of item (iii) in Definition A.1: From the definition of S_k^C (see Equation 4.2) it follows that ∫_{R^n} S_k^C(⃗x, ⃗y) d⃗y = 1 for every k ∈ Z and every ⃗x ∈ R^n.

• Verification of the double Lipschitz condition Equation A.4 in Definition A.1: By using the integral version of the mean value theorem, the double difference of kernels splits into the two terms (4.14), which are estimated separately. From Equation 4.9 and Equation A.7 we obtain the bounds (4.15), and combining them shows Equation A.4.

The limit result is an easy consequence of the fact that t ↦ e^{−t²} and its derivative decay to zero faster than the functions t ↦ (1 + |t|^n)^{−1−(2i+1)/n}. □

Therefore, it follows from Theorem A.3 that the set W_d^C is a wavelet frame and satisfies the estimate Equation A.14. We summarize this important result:

Theorem 4.6 (L¹-convergence of radial wavelets) Let σ be an activation function that satisfies the conditions in Lemma 4.5. Then, for every function f ∈ L¹(R^n) (see Definition 4.1 for a definition of the space) and every N ∈ N, there exists a function f_N ∈ span_N(W_d^C), where span_N denotes linear combinations of at most N terms of the set, such that the estimate Equation A.14 holds; in particular, f_N is an RQNN.

Proof The frame property and the estimate follow from Theorem A.3 via Lemma 4.5. It remains to note that f_N is an RQNN: expanding the wavelets via Equation 4.2 and Equation 4.1, the arguments where σ is evaluated are in general different, and thus we get a representation f_N(⃗x) = Σ_j α_j σ(ξ_j ∥⃗x∥² + ⃗w_j^T ⃗x + θ_j), from which the coefficients can be read off. Note that, for the sake of simplicity, we have not explicitly stated the exact index in the subscript of the coefficients (⃗α, ⃗θ, ⃗ξ, ⃗y); in other words, we use the formal relation j = (k, ⃗j) to indicate the dependencies. This shows the claim if we identify θ_j = r² and ⃗w_j^T = 2^{1+2k/n} ⃗y^T. □

Remark 4 Our results in Section 4 are proved with methods "similar" to those of [29], which however cannot be applied directly, because the main ingredient of the proof is the construction of localizing functions serving as wavelets. With affine linear decision functions this task requires two layers to accomplish, since linear combinations of functions of the form ⃗x ↦ φ(⃗x) := C_n σ(⃗w^T ⃗x + θ) are not localized; they cannot satisfy item (iii) in Definition A.1, as they are not integrable on R^n. In comparison, some quadratic decision functions, such as those used in Section 4, naturally give localized functions and can therefore work as wavelets on their own. We mention that there exists a zoo of approximation rate results for neural network functions (see for instance [32,31], where best-approximation results in dependence of the space dimension n have been proven). We have based our analysis on the convergence rates results of [29] and [7]. This choice is motivated by the fact that the AtI property from [7] is universal and allows a natural extension of their results to quadratic neural network functions. The motivation here is not to give the optimal convergence rates result for quadratic neural networks, but to verify that convergence rates results are possible. We believe that AtI is a very flexible and elegant tool.
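To close the section, a small numerical illustration of the N-term approximation behind Theorem 4.6 in dimension one: a dictionary of radial wavelets, built exactly as in the sketch after Definition 4.2, approximates a Gaussian bump with an L¹-type error that decreases as more terms are allowed. The grid and the greedy selection rule are ad hoc demo choices:

```python
import numpy as np
from scipy.special import erf

# N-term approximation demo: greedy selection from a radial wavelet dictionary
# (differences of scaled RQNN bumps), followed by a least-squares refit.
sigma = lambda t: 0.5 * (1 + erf(t))
x = np.linspace(-4, 4, 2000)

def atom(k, y):
    S = lambda kk: 2.0**kk * sigma(1 - (2.0**kk * (x - y))**2)
    return S(k) - S(k - 1)

atoms = [atom(k, y) for k in range(-1, 4) for y in np.arange(-3, 3, 2.0**-k)]
D = np.column_stack([a / np.linalg.norm(a) for a in atoms])   # normalized columns
f = np.exp(-3 * (x - 0.5)**2)                                 # target function

for N in (5, 20, 80):
    idx = np.argsort(-np.abs(D.T @ f))[:N]             # N best-correlated atoms
    c, *_ = np.linalg.lstsq(D[:, idx], f, rcond=None)
    err = np.trapz(np.abs(f - D[:, idx] @ c), x)       # L1-type error
    print(N, err)                                      # typically decreasing in N
```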
