New result on the mean-square exponential input-to-state stability of stochastic delayed recurrent neural networks

ABSTRACT In this paper, we solve the mean-square exponential input-to-state stability problem for a class of stochastic delayed recurrent neural networks with time-varying coefficients. With the aid of stochastic analysis theory and a Lyapunov-Krasovskii functional, we derive a novel criterion that ensures the given system is mean-square exponentially input-to-state stable. Furthermore, the new criterion generalizes and improves some known results. Finally, two examples and their numerical simulations are provided to demonstrate the theoretical results.


Introduction
Since the seminal studies of Hopfield (1982, 1984) on neural networks, recurrent neural networks have been successfully applied in many fields, including pattern recognition, aerospace, telecommunications, signal and image processing, associative memory and optimization (Cichocki & Unbehauen, 1993; Joya, Atencia, & Sandoval, 2002; Shitonga, Duana, Mina, & Dewenc, 2007). It is well known that neural networks, as complex systems, are often affected by external perturbations and time delays. In consideration of these two important factors, Haykin (1994) pointed out that one should view synaptic transmission in real neural networks as a noisy process introduced by random fluctuations resulting from the release of neurotransmitters and other probabilistic causes. Furthermore, owing to the finite switching speed of amplifiers and information processing, time delays are inevitably encountered in hardware implementations, and they may cause neural networks to oscillate, diverge or even become unstable. Therefore, Liao and Mao (1996a, 1996b) initiated research on stochastic delayed neural networks.
The stability problem of stochastic delayed recurrent neural networks (SDRNNs) is of primary practical importance, because stability means that a system can run regularly and reliably. Fortunately, many articles on the stability of SDRNNs have been published, such as Rakkiyappan and Balasubramaniam (2008), Balasubramaniam and Rakkiyappan (2008), Chen, Gaans, and Lunel (2014), Meng, Tian, and Hu (2011), Chen, Li, Shi, Gansa, and Lunel (2018), Peng and Huang (2008), Huang, He, and Wang (2008), Yu and Cao (2007), Zhu, Luo, and Shen (2014), Zhu and Cao (2014), Zhou and Liu (2017) and the references therein. For example, the authors of Rakkiyappan and Balasubramaniam (2008) and Balasubramaniam and Rakkiyappan (2008) employed a linear matrix inequality approach and a Lyapunov-Krasovskii functional to study the global asymptotic stability of SDRNNs. In Chen et al. (2014), Meng et al. (2011) and Chen et al. (2018), pth moment exponential and almost sure exponential stability were discussed for a class of SDRNNs with impulses and discrete, distributed and unbounded delays. Using the Razumikhin technique and the Dini derivative, Peng and Huang (2008) and Huang et al. (2008) investigated the exponential stability of SDRNNs with time-varying delays. Robust control and robustness analysis were addressed in Yu and Cao (2007) for a class of uncertain SDRNNs. It is important to note that all these studies focus mainly on traditional stability criteria, such as asymptotic stability, almost sure stability and exponential stability, which require the states of the SDRNNs to converge to an equilibrium point as time tends to infinity. However, in many practical systems, such as stock markets, pendulums, air temperature and financial markets, the states need not converge to an equilibrium point. For such systems, Zhu and Cao (2014) introduced and studied a more general notion of stability, mean-square exponential input-to-state stability. Three years later, Zhou and Liu (2017) studied the mean-square exponential input-to-state stability of SDRNNs with multi-proportional delays. Recently, further results on the input-to-state stability of SDRNNs have been reported (Yang, Zhou, & Huang, 2014; Zhu & Shen, 2013).

Besides time delays and stochastic perturbations, neural networks are also likely to exhibit time-varying coefficient effects, which are more common in practice than constant coefficients. Indeed, the variation of the environment (e.g. temperature, moisture, pressure, seasonal effects of weather, reproduction, food supplies, mating habits, etc.) plays an important role in many neural networks, so some classic neural network models (Shu, Liu, Wang, & Qiu, 2018; Xu, Luo, Zhong, & Zhu, 2014; Zhou, Teng, & Xu, 2015) have been generalized to non-autonomous differential equations with time-varying coefficients and delays. Unfortunately, to the best of our knowledge, few scholars have investigated the mean-square exponential input-to-state stability of SDRNNs with time-varying coefficients, so it is important to study their dynamical behaviours.
Inspired by the above discussion, in the present study we consider the mean-square exponential input-to-state stability of SDRNNs with time-varying coefficients, a class that includes stochastic delayed Hopfield neural networks, stochastic delayed cellular neural networks and SDRNNs with constant coefficients as special cases. It should be emphasized that an SDRNN with time-varying coefficients is a non-autonomous system, so the approaches developed for autonomous systems do not apply to it. The main purpose of this paper is to overcome this difficulty.
The remaining part of this paper consists of four sections. In Section 2, we describe the model of SDRNNs with time-varying coefficients, together with the necessary assumptions and definitions. Section 3 provides the main results and their proofs. In Section 4, two examples and their numerical simulations are provided to illustrate the effectiveness of the results. Finally, concluding remarks are given in Section 5.

Model description and preliminaries
In this paper, we consider the following SDRNNs with time-varying coefficients:
$$
\mathrm{d}x_i(t) = \Bigg[ -d_i(t)x_i(t) + \sum_{j=1}^{n} a_{ij}(t)f_j\big(x_j(t)\big) + \sum_{j=1}^{n} b_{ij}(t)g_j\big(x_j(t-\delta_j(t))\big) + u_i(t) \Bigg]\mathrm{d}t + \sum_{j=1}^{n} c_{ij}(t)h_j\big(x_j(t-\delta_j(t))\big)\,\mathrm{d}w_j(t), \quad t \ge t_0,\ i \in J := \{1, 2, \ldots, n\}, \tag{2.1}
$$
where x_i(t) is the state variable of the ith neuron, and d_i(t) denotes the self-feedback connection weight strength of the ith unit at time t; a_ij(t), b_ij(t) and c_ij(t) are the connection weight strengths of the jth unit on the ith unit at time t; f_j(·), g_j(·) and h_j(·) are the neuron activation functions of the jth unit; δ_j(t) is the time-varying transmission delay, assumed bounded by a constant δ > 0; u_i(t) is the control input of the ith neuron at time t; w(t) = (w_1(t), . . . , w_n(t))^T is an n-dimensional Brownian motion defined on a complete probability space (Ω, F, P); and E stands for the corresponding expectation operator with respect to the given probability measure P.
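For readers who wish to reproduce simulations of systems of the form (2.1), the following is a minimal Euler-Maruyama sketch. The dimension, coefficient functions, activation functions, delay and input below are all illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np

# Minimal Euler-Maruyama sketch for a system of the form (2.1).
# All choices (n, delay, weights, activations, input) are illustrative.
rng = np.random.default_rng(0)
n, dt, T = 2, 1e-3, 10.0
steps = int(T / dt)
lag = int(0.5 / dt)                           # constant delay delta_j(t) = 0.5 (assumed)

def d(t):  return np.array([3.0, 3.0])        # self-feedback d_i(t)
def A(t):  return 0.2 * np.array([[1.0, -1.0], [1.0, 1.0]]) * (1 + 0.1 * np.sin(t))
def B(t):  return 0.1 * np.ones((n, n))
def C(t):  return 0.1 * np.eye(n)
f = g = h = np.tanh                           # Lipschitz activations vanishing at 0
def u(t):  return 0.05 * np.array([np.sin(t), np.cos(t)])  # bounded control input

x = np.zeros((steps + 1, n))
x[: lag + 1] = 1.0                            # constant initial segment phi
for k in range(lag, steps):
    t = k * dt
    xd = x[k - lag]                           # delayed state x(t - delta)
    drift = -d(t) * x[k] + A(t) @ f(x[k]) + B(t) @ g(xd) + u(t)
    dw = rng.normal(0.0, np.sqrt(dt), n)      # Brownian increments dw_j(t)
    x[k + 1] = x[k] + drift * dt + C(t) @ (h(xd) * dw)
print("final state:", x[-1])
```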
It is convenient to introduce some notation. For a bounded continuous function g defined on R, let
$$ g^{-} = \inf_{t \in \mathbb{R}} g(t), \qquad g^{+} = \sup_{t \in \mathbb{R}} g(t), \qquad |g|^{+} = \sup_{t \in \mathbb{R}} |g(t)|. $$
Moreover, it will be assumed that the following conditions (A1), (A2) and (A3) hold.
(A1) For all x, y ∈ R and i ∈ J, there exist positive constants L_i, M_i and N_i such that
$$ |f_i(x) - f_i(y)| \le L_i |x - y|, \qquad |g_i(x) - g_i(y)| \le M_i |x - y|, \qquad |h_i(x) - h_i(y)| \le N_i |x - y|. $$
(A2) For all x, x', y, y', z, z' ∈ R and i, j ∈ J, there exist positive constants μ_ij, ν_ij and υ_ij such that, for all t ≥ t_0,
$$ |a_{ij}(t)|\,\big|f_j(x) - f_j(x')\big| \le \mu_{ij}|x - x'|, \qquad |b_{ij}(t)|\,\big|g_j(y) - g_j(y')\big| \le \nu_{ij}|y - y'|, \qquad |c_{ij}(t)|\,\big|h_j(z) - h_j(z')\big| \le \upsilon_{ij}|z - z'|. $$
(A3) f_j(0) = g_j(0) = h_j(0) = 0 for all j ∈ J.

It is obvious that (A1) and (A2) ensure that the local Lipschitz and linear growth conditions are satisfied, so system (2.1) has a unique solution by Theorem 5.2.9 in Mao (1997). Let x(t; t_0, ϕ) denote the solution of (2.1) with initial data x(t_0 + s) = ϕ(s), s ∈ [−δ, 0], where ϕ belongs to the space L²_{F_{t_0}}([−δ, 0]; R^n) of F_{t_0}-measurable square-integrable functions. It is easy to see that, under (A3), system (2.1) admits a trivial solution (zero solution) x(t; t_0, 0) ≡ 0 corresponding to the initial value ϕ = 0; for instance, the choice f_j = g_j = h_j = tanh satisfies (A1) and (A3) with L_j = M_j = N_j = 1.

Following Zhu and Cao (2014), system (2.1) is said to be mean-square exponentially input-to-state stable if there exist positive constants λ, C_1 and C_2 such that, for every initial datum ϕ and every bounded input u,
$$ \mathbb{E}|x(t; t_0, \phi)|^2 \le C_1 e^{-\lambda (t - t_0)} \sup_{-\delta \le s \le 0} \mathbb{E}|\phi(s)|^2 + C_2 \sup_{t_0 \le s \le t} |u(s)|^2, \qquad t \ge t_0. $$
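As a quick numerical sanity check of (A1) for a candidate activation, one can estimate its Lipschitz constant on a grid. The sketch below uses tanh, an illustrative assumption rather than an activation prescribed by the paper:

```python
import numpy as np

# Numerically estimate the Lipschitz constant of an activation on a grid.
# tanh is an illustrative choice; (A1) holds for it with constant 1.
xs = np.linspace(-5.0, 5.0, 2001)
ys = np.tanh(xs)
# max slope over adjacent grid points approximates sup |f(x)-f(y)| / |x-y|
slopes = np.abs(np.diff(ys)) / np.diff(xs)
print(f"estimated Lipschitz constant: {slopes.max():.4f}")  # close to 1.0
```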

Main results
In this section, we prove our main results under assumptions (A 1 )-(A 3 ).

Theorem 3.1: Assume that (A1)-(A3) hold and that the matrix ϒ associated with system (2.1) is symmetric positive semi-definite. Then system (2.1) is mean-square exponentially input-to-state stable.
Proof: For simplicity, let x(t; t_0, ϕ) = x(t) = (x_1(t), . . . , x_n(t))^T and w(t) = (w_1(t), . . . , w_n(t))^T. We construct a suitable Lyapunov-Krasovskii functional V(t, x(t)). Applying Itô's formula to V along the solutions of (2.1) yields the relations (3.1) and (3.2), in which the martingale part is driven by (dw_1(t), . . . , dw_n(t))^T and L denotes the generator associated with system (2.1). In light of (3.2) and the fact that ϒ is a symmetric positive semi-definite matrix, we obtain (3.3) from (3.1). Now, denote a stopping time (or Markov time) by η_k = inf{s ≥ t_0 : |x(s)| ≥ k}. Integrating both sides of (3.2) from t_0 to t ∧ η_k and taking expectations gives (3.4). On the other hand, it follows from the definition of V(t, x(t)) that (3.5) holds. Combining (3.4) and (3.5) yields the required mean-square exponential input-to-state stability estimate. This completes the proof of Theorem 3.1.
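To make the concluding step explicit, the following is a minimal sketch of how estimates of the type (3.4) and (3.5) combine, assuming they deliver bounds of the standard form shown below; the constants c_1, c_2, c_3 and the rate λ are illustrative placeholders, not the quantities actually derived in the proof. Suppose that, after letting k → ∞ (so that t ∧ η_k → t almost surely), (3.4) gives E[e^{λ(t−t_0)} V(t, x(t))] ≤ c_1‖ϕ‖² + c_2 e^{λ(t−t_0)} sup_{t_0≤s≤t}|u(s)|², while (3.5) gives V(t, x(t)) ≥ c_3|x(t)|². Then
$$
c_3\, e^{\lambda(t-t_0)}\, \mathbb{E}|x(t)|^2
  \le \mathbb{E}\big[ e^{\lambda(t-t_0)} V(t, x(t)) \big]
  \le c_1 \|\phi\|^2 + c_2\, e^{\lambda(t-t_0)} \sup_{t_0 \le s \le t} |u(s)|^2,
$$
so that
$$
\mathbb{E}|x(t)|^2 \le \frac{c_1}{c_3}\, e^{-\lambda(t-t_0)} \|\phi\|^2 + \frac{c_2}{c_3} \sup_{t_0 \le s \le t} |u(s)|^2, \qquad t \ge t_0,
$$
which is an estimate of exactly the form required by the definition in Section 2.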

Theorem 3.2:
Under the conditions of Theorem 3.1, the trivial solution of system (2.1) with u(t) ≡ 0 is mean-square exponentially stable.
Remark 3.1: In Theorems 3.1 and 3.2, we derive new sufficient conditions for the mean-square exponential input-to-state stability of SDRNNs with time-varying coefficients. As far as we know, there has been almost no study of the input-to-state stability of SDRNNs with time-varying coefficients in the literature. It should be stressed that the model in Zhu and Cao (2014) is a special case of the model in this paper (i.e. SDRNNs with constant coefficients). Moreover, the condition in Theorem 3.1 that ϒ is a symmetric positive semi-definite matrix improves on the three inequalities (2)-(4) of Theorem 1 in Zhu and Cao (2014). In the next section, we will provide an example to illustrate this point.
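In practice, the positive semi-definiteness condition on a symmetric matrix such as ϒ can be checked numerically via its eigenvalues. A minimal sketch follows; the matrix used below is a hypothetical placeholder, not the ϒ of Theorem 3.1:

```python
import numpy as np

def is_symmetric_psd(m: np.ndarray, tol: float = 1e-10) -> bool:
    """Check symmetry and positive semi-definiteness via eigenvalues."""
    if not np.allclose(m, m.T, atol=tol):
        return False
    # eigvalsh is intended for symmetric matrices and returns real eigenvalues
    return bool(np.linalg.eigvalsh(m).min() >= -tol)

# Hypothetical placeholder matrix, not the Upsilon of Theorem 3.1
upsilon = np.array([[2.0, -1.0],
                    [-1.0, 2.0]])
print(is_symmetric_psd(upsilon))  # True: eigenvalues are 1 and 3
```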

Two illustrative examples
In this section, we give two examples with numerical simulations to demonstrate the results obtained in the previous sections.

Example 4.1: Consider an SDRNN of the form (2.1) with constant coefficients and time-varying delays δ_j(t), denoted by (4.1), whose parameters are chosen so that they satisfy three inequalities running counter to conditions (2)-(4) of Theorem 1 in Zhu and Cao (2014).

Example 4.2: Consider an SDRNN of the form (2.1) with genuinely time-varying coefficients, denoted by (4.2).
Remark 4.1: Obviously, the three inequalities in example (4.1) run counter to formulas (2)-(4) of Theorem 1 in Zhu and Cao (2014). Although example (4.1) involves an SDRNN with constant coefficients, which belongs to the class of neural networks considered in Zhu and Cao (2014), all the results obtained in Zhu and Cao (2014) are invalid for this case because of those inequalities. Fortunately, Theorem 3.1 of this paper works in this case. Thus, the results of this paper generalize and improve Theorem 1 of Zhu and Cao (2014).
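As a complement, the following is a minimal Monte-Carlo sketch of how simulations of this kind can be produced for a scalar instance of (2.1); all coefficient values, the delay and the input are illustrative assumptions and are not the data of examples (4.1) or (4.2).

```python
import numpy as np

# Monte-Carlo estimate of E|x(t)|^2 for a scalar instance of (2.1).
# All parameters (coefficients, delay, activations, input) are assumptions
# made for illustration; they are not the data of examples (4.1) or (4.2).
rng = np.random.default_rng(1)
dt, T, paths = 1e-3, 6.0, 500
steps, lag = int(T / dt), int(0.5 / dt)       # constant delay 0.5 (assumed)

x = np.full((steps + 1, paths), 1.0)          # constant initial segment phi = 1
for k in range(lag, steps):
    t = k * dt
    xd = x[k - lag]                           # delayed state x(t - 0.5)
    drift = -4.0 * x[k] + 0.5 * np.tanh(x[k]) + 0.5 * np.tanh(xd) + 0.1 * np.sin(t)
    dw = rng.normal(0.0, np.sqrt(dt), paths)  # Brownian increments
    x[k + 1] = x[k] + drift * dt + 0.3 * np.tanh(xd) * dw

ms = (x ** 2).mean(axis=1)                    # sample estimate of E|x(t)|^2
print(f"E|x(0)|^2 ~= {ms[0]:.3f},  E|x(T)|^2 ~= {ms[-1]:.4f}")
```

Under these choices the self-feedback dominates the Lipschitz and noise terms, and the estimated mean square decays to a small neighbourhood of zero whose size is governed by the bounded input, in line with an input-to-state estimate of the type given in Theorem 3.1.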

Remark 4.2:
To the best of our knowledge, this is the first study to focus on the mean-square exponential input-to-state stability of the trivial solution for SDRNNs with time-varying coefficients. All results obtained in Rakkiyappan and Balasubramaniam (2008), Zhu and Shen (2013) and Yang et al. (2014) are concerned with SDRNNs with constant coefficients, so they are invalid for example (4.2). The method used in this paper provides an approach to analysing the mean-square exponential input-to-state stability of the trivial solution for SDRNNs with time-varying coefficients, which is the core of our study.

Concluding remarks
In this paper, we have derived novel sufficient conditions for the mean-square exponential input-to-state stability of SDRNNs with time-varying coefficients, which generalize and improve the corresponding results of Zhu and Cao (2014). The advantage of model (2.1) lies in its time-varying coefficients and delays, so that many stochastic delayed neural networks fall within this framework. The main advantage of the method in this paper is the weaker condition on the matrix ϒ in Theorem 3.1, which is only required to be symmetric positive semi-definite. Furthermore, two numerical examples and their simulations have been presented to demonstrate the theoretical results. Motivated by the work of Zhou and Liu (2017), we will next study the mean-square exponential input-to-state stability of SDRNNs with time-varying coefficients and multi-proportional delays, which is an interesting and challenging task.