Mean-square asymptotical synchronization control and robust analysis of discrete-time neural networks with time-varying delay

ABSTRACT This paper investigates the controller design problem for mean-square asymptotic synchronization of discrete-time neural networks with time-varying delay. We propose a synchronization controller design method that takes the nonlinearity of the controller input into account. Based on the designed controller, a delay-dependent synchronization criterion is derived and formulated in the form of linear matrix inequalities (LMIs) by applying the Lyapunov function method. The result is then extended to delayed discrete-time neural networks with uncertainty. Two numerical examples are presented to illustrate the effectiveness of the proposed method.


Introduction
A better understanding of the working principles of the human brain is regarded as one of the most important challenges of this century. Over the past three decades, recurrent neural networks (RNNs) have been extensively studied because of their wide applications in areas such as computational intelligence, associative memory, combinatorial optimization and machine learning [1,2]. Owing to the finite speed of signal transmission among neurons and to electronic implementation, time delays are often encountered in neural networks. It is well known that time delays are often sources of instability, divergence and poor performance [3][4][5]. Therefore, the stability of neural networks with time delay has been investigated by many researchers over the last few decades, and many interesting results have been reported [6][7][8][9][10][11][12][13][14][15][16]. In [8], delay-dependent criteria guaranteeing the asymptotic stability of neural networks with time-varying delays are derived in terms of linear matrix inequalities (LMIs) by constructing a newly augmented Lyapunov function. In [13], two novel sufficient conditions, also presented in the form of LMIs, are established to guarantee the exponential stability of discrete-time neural networks with distributed delay.
Since Pecora and Carroll introduced the concept of synchronization in their pioneering work [17], much attention has been devoted to the master-slave synchronization of chaotic delayed neural networks [17][18][19][20][21][22][23][24][25]. In [17], criteria for the lag synchronization of coupled chaotic delayed neural networks are derived based on adaptive control with a linear feedback update law. In [21], time-delay feedback control techniques are proposed to guarantee the exponential synchronization of two identical chaotic delayed neural networks with stochastic perturbation using Lyapunov stability theory. In [25], a sampled-data controller that ensures the slave systems synchronize with the master systems is designed using the LMI approach. Notice that all the synchronization controllers designed in [17][18][19][20][21][22][23][24][25] are effective under the ideal premise that the input is linear. This premise sometimes fails to hold, which motivates the work of this paper. Input nonlinearities are commonly encountered in engineering and appear in various forms, such as input saturation, dead zone, backlash, hysteresis and signal quantization. Among these, input saturation is one of the most important non-smooth nonlinearities in sensors and actuators, and it is the nonlinearity considered in this paper. It is well known that input nonlinearity can cause instability or failure of a system [26][27][28][29], so it is important to take the nonlinearity of the controller input into account. A synchronization condition under dead-zone nonlinearity is discussed for delayed neural networks in [26]. Synchronization criteria under sector-bounded nonlinearity are considered for neural networks with time delay in [27,28]. Local synchronization of chaotic neural networks subject to saturating actuators is investigated in [29].
It should be pointed out that the synchronization criteria above are derived in the continuous-time case for delayed neural networks subject to input nonlinearity; the corresponding results for delayed discrete-time neural networks are few. In fact, discrete-time neural networks play an even more important role than their continuous-time counterparts in the digital age. Moreover, parameter uncertainties often exist in practical applications owing to modelling inaccuracies and parameter changes; in particular, the weight coefficients of neurons often suffer from unavoidable uncertainties in neural network systems. Robust analysis of neural networks is investigated in [30][31][32].
In this article, we study the synchronization control problem of neural networks with time-varying delay. We first give the controller design method; a sufficient condition is then derived to ensure asymptotic synchronization of the described master-slave system by constructing a series of Lyapunov functions. Finally, two numerical examples are presented to illustrate the effectiveness of the proposed theoretical results. The contributions of this article can be summarized as follows: (1) In contrast to [17][18][19][20][21][22][23][24][25], the controller input nonlinearity is taken into account in the discrete-time case. (2) The design of the feedback controller avoids intense fluctuation of the controller input compared with [33]: in [33], μ(k) = 0 when θ(k) = 0, whereas in this paper μ(k) = σ(Ke(k)) when θ(k) = 0.
(3) The obtained results are extended to uncertain neural networks with time-varying delay and input nonlinearity.
Notation: Throughout the paper, R denotes the set of real numbers and N the set of non-negative integers. R^n stands for the n-dimensional real vector space and R^{n×m} is the set of n × m real matrices. ‖x‖ denotes the Euclidean norm of a vector x ∈ R^n, and E{x} denotes the expectation of a stochastic variable x. A real matrix P > 0 (≥ 0) means that P is positive definite (positive semi-definite), and A > B (A ≥ B) means A − B > 0 (A − B ≥ 0). I and 0 denote the identity matrix and a zero matrix of proper dimensions, respectively. The superscript 'T' represents the transpose, the symmetric terms in a symmetric matrix are denoted by '∗', and diag{· · ·} denotes a block-diagonal matrix. Matrices, if not explicitly stated otherwise, are assumed to have compatible dimensions.

Problem formulation
Consider the master discrete-time neural network with time-varying delay described by

Master: x(k + 1) = Ax(k) + W_1 f(x(k)) + W_2 f(x(k − τ(k))), (1)

where the time-varying delay τ(k) satisfies τ_m ≤ τ(k) ≤ τ_M, with τ_m and τ_M the lower and upper bounds of the allowed time-varying delay, respectively. A = diag{a_1, a_2, . . . , a_n} is the state feedback coefficient matrix (|a_i| < 1), and W_1 = (b_{ij})_{n×n} and W_2 = (c_{ij})_{n×n} denote the connection weight matrix and the delayed connection weight matrix, respectively. The nonlinear function f(x(k)) = [f_1(x_1(k)) f_2(x_2(k)) · · · f_n(x_n(k))]^T ∈ R^n is the activation function satisfying the following assumption:

Assumption 2.1: Each activation function f_i(·) is continuous and bounded, and there exist constants α_i^− and α_i^+ such that

α_i^− ≤ (f_i(μ) − f_i(υ))/(μ − υ) ≤ α_i^+, i = 1, 2, . . . , n,

for any μ, υ ∈ R with μ ≠ υ.
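As a concrete illustration of the master dynamics, the following sketch iterates the delayed recurrence numerically. It assumes the standard form x(k + 1) = Ax(k) + W_1 f(x(k)) + W_2 f(x(k − τ(k))) with a tanh activation (which satisfies Assumption 2.1 with sector bounds 0 and 1); all parameter values are illustrative, not those of the paper.

```python
import numpy as np

# Hypothetical parameters for a 2-neuron network; the paper's own
# values are not reproduced here.
A  = np.diag([0.4, 0.3])            # |a_i| < 1, state feedback coefficients
W1 = np.array([[0.2, -0.1],
               [0.1,  0.3]])        # connection weight matrix
W2 = np.array([[0.1,  0.05],
               [-0.05, 0.1]])       # delayed connection weight matrix
f  = np.tanh                        # activation satisfying Assumption 2.1
tau_m, tau_M = 1, 3                 # delay bounds

rng = np.random.default_rng(0)

def simulate_master(x_hist, steps):
    """Iterate x(k+1) = A x(k) + W1 f(x(k)) + W2 f(x(k - tau(k))).

    x_hist holds the initial history on [-tau_M, 0].
    """
    xs = list(x_hist)
    for _ in range(steps):
        tau_k = rng.integers(tau_m, tau_M + 1)   # time-varying delay in [tau_m, tau_M]
        x, x_d = xs[-1], xs[-1 - tau_k]
        xs.append(A @ x + W1 @ f(x) + W2 @ f(x_d))
    return np.array(xs)

traj = simulate_master([np.array([0.5, -0.5])] * (tau_M + 1), steps=200)
print(traj[-1])
```

With these (illustrative) contractive weights, the trajectory stays bounded; with the chaotic parameters typically used in the synchronization literature, the same recurrence produces the chaotic attractors targeted by the controller below.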
The slave system for (1) is given as

Slave: y(k + 1) = Ay(k) + W_1 f(y(k)) + W_2 f(y(k − τ(k))) + u(k), (3)

where u(k) ∈ R^n is the control input of the slave system, ψ_2(k) is the initial condition, and e(k) = y(k) − x(k) is the synchronization error.
Subtracting (1) from (3) yields the following error system:

e(k + 1) = Ae(k) + W_1 g(e(k)) + W_2 g(e(k − τ(k))) + u(k), (4)

where g(e(k)) = f(y(k)) − f(x(k)). According to Assumption 2.1, it follows that g(·) satisfies the same sector bounds as f(·). The input nonlinearity σ(·) is assumed to satisfy [σ(v) − H_1 v]^T [σ(v) − H_2 v] ≤ 0 for all v ∈ R^n. In this case, we say that σ belongs to the sector [H_1, H_2]. The control input u(k) takes the following form:

u(k) = θ(k)Ke(k) + (1 − θ(k))σ(Ke(k)), (5)

where K ∈ R^{n×n} is the control gain matrix to be determined. The value of θ(k) is 0 or 1: θ(k) = 1 means that u(k) is a linear input, and θ(k) = 0 means that u(k) is a nonlinear input.
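The controller structure just described (linear input when θ(k) = 1, nonlinear input when θ(k) = 0) can be sketched as follows, assuming the input nonlinearity σ is componentwise saturation; the gain K and the saturation level are hypothetical values chosen only for illustration.

```python
import numpy as np

K = np.array([[-0.3, 0.0],
              [0.0, -0.3]])   # control gain (hypothetical)
u_max = 0.5                   # saturation level (hypothetical)

def sigma(v):
    """Saturation nonlinearity; sector-bounded in the sense of the paper."""
    return np.clip(v, -u_max, u_max)

def control(e, theta):
    """u(k) = theta(k) K e(k) + (1 - theta(k)) sigma(K e(k)).

    theta = 1: linear input; theta = 0: saturated (nonlinear) input.
    Note that u(k) = sigma(K e(k)) != 0 when theta(k) = 0, which avoids
    the abrupt drop of the control signal discussed in the Introduction.
    """
    v = K @ e
    return theta * v + (1 - theta) * sigma(v)

e = np.array([3.0, -0.1])
print(control(e, 1))   # linear input K e
print(control(e, 0))   # first component clipped to the saturation level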

Assumption 2.2:
In this paper, we assume that the probability distribution of θ(k) can be observed and satisfies Prob{θ(k) = 1} = θ̄ and Prob{θ(k) = 0} = 1 − θ̄, where θ̄ ∈ [0, 1] is a known constant. Substituting (5) into (4) yields the following closed-loop system:

e(k + 1) = Ae(k) + W_1 g(e(k)) + W_2 g(e(k − τ(k))) + θ(k)Ke(k) + (1 − θ(k))σ(Ke(k)). (7)

Definition 2.2: The master system (1) and the slave system (3) under the controller (5) are said to be asymptotically synchronized in the mean square if the error of system (7) satisfies lim_{k→+∞} E{‖e(k)‖²} = 0.

Lemma 2.1: Let X and Y be n-dimensional real vectors and P an n × n positive-definite matrix. Then the following matrix inequality holds: 2X^T Y ≤ X^T PX + Y^T P^{−1}Y.

Lemma 2.2 (Schur complement): A given symmetric matrix S = [S_11 S_12; ∗ S_22] < 0 is equivalent to any one of the following conditions: (i) S_11 < 0 and S_22 − S_12^T S_11^{−1} S_12 < 0; (ii) S_22 < 0 and S_11 − S_12 S_22^{−1} S_12^T < 0.
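Definition 2.2 can be illustrated by Monte Carlo simulation: averaging ‖e(k)‖² over many independent runs of the Bernoulli sequence θ(k) estimates E{‖e(k)‖²}, which should tend to zero under a synchronizing gain. The system matrices, gain, sector-bounded g and saturation level below are illustrative stand-ins, not the paper's values.

```python
import numpy as np

A  = np.diag([0.4, 0.3])
W1 = np.array([[0.2, -0.1], [0.1, 0.3]])
W2 = np.array([[0.1, 0.05], [-0.05, 0.1]])
K  = -0.3 * np.eye(2)                       # hypothetical stabilizing gain
g  = np.tanh                                # an admissible sector-bounded g
sigma = lambda v: np.clip(v, -0.5, 0.5)     # input saturation
theta_bar, tau, steps, runs = 0.8, 2, 120, 200

rng = np.random.default_rng(1)
sq_err = np.zeros(steps + tau + 1)          # accumulates ||e(k)||^2 over runs
for _ in range(runs):
    es = [np.array([1.0, -1.0])] * (tau + 1)   # constant initial history
    for _ in range(steps):
        theta = rng.random() < theta_bar       # Bernoulli theta(k)
        e, e_d = es[-1], es[-1 - tau]
        v = K @ e
        u = theta * v + (1 - theta) * sigma(v)  # controller (5)
        es.append(A @ e + W1 @ g(e) + W2 @ g(e_d) + u)
    sq_err += np.array([ek @ ek for ek in es])
sq_err /= runs                               # estimate of E{||e(k)||^2}
print(sq_err[0], sq_err[-1])
```

The initial mean-square error equals ‖e(0)‖² = 2; under these contractive illustrative parameters the estimate decays toward zero, matching the limit required by Definition 2.2.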

Main result
In this section, a sufficient condition is derived to ensure that the error system (7) is asymptotically stable in the mean square. Based on this stability criterion, the master system (1) and the slave system (3) are asymptotically synchronized in the mean square.

Theorem 3.1: Under Assumptions 2.1 and 2.2, the error system (7) is asymptotically stable in the mean square with K = G^{−1}X if there exist positive-definite matrices G > 0, Q_1 > 0, Q_2 > 0, Q_3 > 0, Q_4 > 0, matrix X, and diagonal positive-definite matrices F_1 > 0, F_2 > 0 such that the following LMIs hold.

Proof: We construct a Lyapunov-Krasovskii functional V(k) for system (7) and compute its forward difference along the trajectories of (7). On the other hand, it is clear from Assumption 2.1 that the sector condition on g(·) holds, and according to (6) the corresponding inequality holds for the input nonlinearity. Next, substituting K = G^{−1}X into (21) and using (11), we obtain from (15), (17)-(21), (23) and (24) that E{ΔV(k)} < 0 for ω(k) ≠ 0, where ω(k) = [e^T(k) g^T(e(k)) g^T(e(k − τ(k))) σ^T]^T. So we can conclude that E{‖e(k)‖²} is convergent and lim_{k→+∞} E{‖e(k)‖²} = 0. The proof is completed.

In engineering, modelling uncertainty is inevitable. Therefore, it is of great importance to discuss the synchronization control problem for the master-slave system with modelling uncertainties. When modelling uncertainties occur, the master and slave systems can be rewritten as (26) and (27), from which the error system (28) is easily obtained. We assume that the matrices A(k), W_1(k), W_2(k) satisfy:
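The LMI of Theorem 3.1 is not reproduced here. As a much simpler sanity check, consider the special case θ̄ = 1 (purely linear input) with the delayed and nonlinear terms dropped: the closed loop reduces to e(k + 1) = (A + K)e(k), whose mean-square asymptotic stability is equivalent to the solvability of a discrete Lyapunov equation with positive-definite solution. The matrices below are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Simplified closed loop e(k+1) = (A + K) e(k): stable iff there exist
# P > 0, Q > 0 with (A+K)^T P (A+K) - P = -Q.
A = np.diag([0.4, 0.3])
K = -0.3 * np.eye(2)      # hypothetical gain
A_cl = A + K
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) returns X with a X a^T - X + q = 0,
# so pass a = A_cl.T to obtain A_cl^T P A_cl - P = -Q.
P = solve_discrete_lyapunov(A_cl.T, Q)

print("spectral radius:", max(abs(np.linalg.eigvals(A_cl))))   # < 1
print("P positive definite:", bool(np.linalg.eigvalsh(P).min() > 0))
```

The full criterion of Theorem 3.1 plays the same role as this Lyapunov test, but it additionally accounts for the delayed terms, the sector-bounded nonlinearities and the Bernoulli input switching, which is why it takes the form of an LMI in several coupled matrix variables.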

where the matrices appearing in (29) and (30) are known constant matrices with proper dimensions and the time-varying uncertainty is norm-bounded.
In order to simplify the equations, we write A(k), W_1(k), W_2(k) as A, W_1, W_2 in the sequel.

Theorem 3.2:
Under Assumption 2.1, (29) and (30), the error system (28) is asymptotically stable in the mean square with K = G^{−1}X if there exist positive-definite matrices G > 0, Q_1 > 0, Q_2 > 0, Q_3 > 0, Q_4 > 0, matrix S, and diagonal positive-definite matrices F_1 > 0, F_2 > 0, such that the following LMIs hold.

Proof: During the course of the derivation, we use a tilde to denote the corresponding terms with uncertainties. Replacing A, W_1 and W_2 in (13) by their uncertain counterparts yields the perturbed forward difference. It follows from Lemma 2.1 and (34) that the first uncertain cross term can be bounded; similarly, we obtain bounds for the remaining cross terms and, similar to (38), for the delayed terms. On the other hand, it is easy to verify the following. Considering the symmetry of the (1,2) and (2,1) blocks together with (29), and with regard to θ̄Y_4, the terms e^T(k)θ̄X^T ΔW_1 g(e(k)) + g^T(e(k))θ̄(ΔW_1)^T Xe(k) can be bounded. Considering the symmetry of the (1,3) and (3,1) blocks, it is easy to bound e^T(k)(ΔA)^T GW_2 g(e(k − τ(k))) and its transpose. The uncertainties in the (1,4) and (4,1) blocks, the symmetric (2,3) and (3,2) blocks, the (2,4) and (4,2) blocks, and the (3,4) and (4,3) blocks are bounded in the same manner. Denoting by Ṽ(k) the functional V(k) with uncertainties, it follows from (37)-(55) that E{ΔṼ(k)} < 0. So we can conclude that E{‖e(k)‖²} is convergent, lim_{k→+∞} E{‖e(k)‖²} = 0, and the master system is synchronized with the slave system. Hence, Theorem 3.2 is obtained.
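The uncertainty structure (29)-(30) is not reproduced in the surviving text. The sketch below assumes the common norm-bounded form A(k) = A + M F(k) N_a with F^T(k)F(k) ≤ I, and checks numerically that every admissible perturbation obeys ‖ΔA‖₂ ≤ ‖M‖₂‖N_a‖₂, which is the bound exploited via Lemma 2.1 in proofs of this kind. All matrices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
A   = np.diag([0.4, 0.3])
M   = 0.1 * np.eye(2)                      # uncertainty input matrix (hypothetical)
N_a = np.array([[1.0, 0.0], [0.5, 1.0]])   # uncertainty output matrix (hypothetical)

def random_contraction(n):
    """Draw F with F^T F <= I by normalising a random matrix."""
    F = rng.standard_normal((n, n))
    return F / max(1.0, np.linalg.norm(F, 2))

bound = np.linalg.norm(M, 2) * np.linalg.norm(N_a, 2)
for _ in range(100):
    F = random_contraction(2)
    dA = M @ F @ N_a                       # admissible perturbation Delta A
    # Every admissible perturbation obeys ||dA||_2 <= ||M||_2 ||N_a||_2.
    assert np.linalg.norm(dA, 2) <= bound + 1e-12
print("all sampled perturbations respect the norm bound", bound)
```

This is why the LMI conditions of Theorem 3.2 can absorb the unknown F(k): only the known factors M and N_a enter the inequalities.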

Numerical examples
In this section, we provide two numerical examples to illustrate the usefulness of the proposed methods. The master-slave system without uncertainty is discussed in the first example, and the master-slave system with uncertainty in the second.

Example 4.1:
Consider the master-slave system (1) and (3) with the following parameters: The corresponding results are given in Figures 1-3. Figure 1 shows the value of θ(k), and Figure 2 the time-varying delay. From the error curve (Figure 3) we can conclude that the whole system is stable under the designed controller despite the time-varying delay.

Example 4.2:
The uncertainty parameters of the discrete-time recurrent neural network system (26) and (27) are as follows. The value of θ(k) is shown in Figure 4, the time-varying delay is plotted in Figure 5, and the error curve is displayed in Figure 6. Obviously, the master system is synchronized with the slave system.

Conclusions
In this paper, we have studied the mean-square asymptotic synchronization control problem for discrete-time neural networks with time-varying delay. A controller design method is formulated, and a sufficient criterion is derived to ensure that the error system is mean-square asymptotically stable, so that the master system is synchronized with the slave system. The result is extended to the case of systems with uncertainty. Finally, two numerical examples are provided to show the effectiveness of the proposed methods.
Recently, much attention has been paid to stability analysis for delayed neural networks by means of the delay-partitioning approach, owing to its effectiveness in deriving less conservative results and improving stability criteria; its application to stability conditions for uncertain discrete-time recurrent neural networks with time-varying delays is studied in [34]. This approach deserves further study.

Disclosure statement
No potential conflict of interest was reported by the authors.