Theoretical analysis of garden balsam optimization algorithm

Garden balsam optimization (GBO) is a new proposed evolutionary algorithm based on swarm intelligence. Convergence and time complexity analyses are very important in evolutionary computation, but the research on GBO is still blank. Same as other evolutionary algorithms, the optimization process of the GBO algorithm can be regarded as a Markov process. In this paper, a Markov stochastic model of the GBO algorithm is defined and used to prove the convergence of GBO algorithm. Finally, the approximation region of the estimated convergence time of GBO algorithm is calculated, which characterizes the evolution of the evolutionary process of the proposed algorithm.


Introduction
Evolution algorithm (EA) is a kind of adaptive search algorithm that is generated by a natural evolution process. With the fruitful results of evolutionary algorithms in the application of various types of optimization problems, the theoretical study of such algorithms has increasingly attracted the attention of scholars.
As a new evolution algorithm, garden balsam optimization (GBO) was proposed by . The GBO algorithm was inspired by the unique propagation process of garden balsam seeds. The fruit of the garden balsam is called a capsule. When the capsules are ripe, they crack on their own. The mechanical force of the bursting process scatters the seeds around the parent body. The number and distribution of seeds are directly related to the environment in which the parent grows. The seed that falls to the ground has the opportunity to move under the force of nature. To simulate this process, the GBO algorithm generates seeds by a mechanical propagator operator and a second propagator operator to search for an optimal solution in the problem space.
The flow chart of GBO algorithm is shown in Figure 1. It can be seen that the algorithm first needs to initialize the population, and then, the mechanical transmission operator and the second transmission operator are executed in turn. When the seed crosses the boundary, it needs to be pulled back according to the mapping rules. If the number of seeds is too large, a selection strategy should be implemented to eliminate them. This cycle iterates CONTACT Shengpu Li sunyz_dhu@163.com until the termination condition is met, that is, the accuracy requirement of the problem is met or the maximum number of iterations is reached . Although the mechanism of GBO algorithm is simple, it has been proved that it can converge effectively on constrained optimization problems and get the optimal solution . The ability of the algorithm to solve multi-dimensional functions was also proved by . The GBO algorithm is also used to solve practical problems, for example, Li et al. (2019) used it to optimize the parameters of the adaptive-network-based fuzzy inference system (ANFIS).
As a new kind of population-based evolutionary algorithm, the GBO algorithm has not been analyzed for its running time, since it was proposed. The analysis of runtime is a hot topic in the theoretical study of evolutionary algorithms in recent years (Han et al., 2008;Oliveto et al., 2007). Intuitively, the goal of runtime analysis is to find at least one optimal solution or a good approximate optimal solution in operation. The runtime can be measured by the time it first arrives at a particular set of states in the dependent process (Doerr et al., 2012). Due to the random nature of evolutionary algorithms, the computational time analysis of such algorithms is not easy. The runtime is helpful to deepen the understanding of the evolutionary algorithm, evaluate the efficiency of the algorithm, and improve it.
Early studies focused on the runtime of (1 + 1)EA and other simple evolutionary algorithms to solve pseudo- Boolean functions, which usually have good structural properties (Droste et al., 2002). These studies show some useful mathematical methods and tools and lead to some theoretical results related to some examples. At present, the computational time analysis of (1 + 1)EA has gradually expanded from simple pseudo-Boolean functions to combinatorial optimization problems with a practical application background. Oliveto et al. analyzed the computing time of some instances of (1 + 1)EA solving vertex covering problems (Oliveto et al., 2009). Lehre et al. selected several examples of computing unique inputoutput sequences and analyzed the computing time of (1 + 1)EA (Lehre & Yao, 2014). Zhou et al. carried out a series of approximate performance analysis for some examples of (1 + 1)EA solving the following combinatorial optimization problems: minimum label spanning tree problem (Lai et al., 2014), multi-processor scheduling problem , maximum cutting problem , and maximum leaf spanning tree problem (Xia et al., 2015), and achieved fruitful theoretical results.
With the development of (1 + 1) EA theory research, many mathematical methods and tools have been proposed, such as Markov chain (He & Yao, 2003), absorbing and absorbing Markov chain (Yu & Zhou, 2008), switch analysis (Yu et al., 2015), and a method based on adaptive value partitioning (Sudholt, 2013). Drift analysis (He & Yao, 2001), introduced by He et al., has proved to be a powerful technique for evolutionary algorithms to runtime analysis. He and Yao (2002) used the Markov Modal and first hitting time theory to study the first hit probability of the (N + N) evolutionary algorithm, and found that it is feasible to appropriately increase the population size. A new method for estimating the expected first hit time was proposed by Yang and Zhou (2008), which is also used to analyze the evolutionary algorithm of different configurations. Based on the absorption Markov process, the convergence of ACO algorithm was studied by (Huang et al., 2009). Chen et al. (2010) analyzed the time complexity of a simple EDA to further understand its complexity. Yi et al. (2011) drawed a conclusion that QEA converges in odds under some loose assumptions. Ding and Yu (2012) introduced some time complexity analysis techniques of EAs based on limited search space. The precise analytical expression of the average first hit is obtained, that is, evolutionary algorithms reach the optimum solution.
In this paper, the Markov process and expected convergence time are used to analyze the convergence and time complexity of GBO algorithm. First, the Markov stochastic process of GBO is given and its theoretical model is established. Then, combining with the basic mechanism of GBO algorithm, its convergence will be studied. Finally, the expected convergence time of GBO will be discussed in detail.
The remainder of this paper is organized as follows. Section 2 constructs the Markov modal of the GBO algorithm. The global convergence of GBO algorithm is proved in Section 3. Section 4 analyzes the time complexity of GBO algorithm. Section 5 concludes this paper.

The random modal of GBO
The GBO algorithm is mainly used to solve continuous optimization problems (take the global minimization problem for example) as follows: where the objective function f (x) = const maps from problem space S to state space R and d is the number of dimensions. In combination with the GBO algorithm flow described in the previous section, its mathematical model is given here.

Definition 2.2:
In accordance with this definition, for the objective function f (x) to have a solution, it has to satisfy υ (R ε In accordance with the above definition, it means that the optimum parent in the optimum condition ξ which means that the odds of (τ + 1) -th condition occurring are not related to the odds of τ -th condition occurring.
In accordance with Definition 2.4, when the state of the garden balsam optimization can search the optimum state space, there is a seed in the optimum region R ε and the optimum position of search space has been fined by the GBO. After that, the optimal solution must always be in the optimal region. Definition 2.5: Given a Markov random process is called an absorbing Markov course.
, is an absorbing Markov course. The proof is completed.

Convergence of GBO
Convergence is an important index to measure the performance of evolutionary algorithms. Because there are many kinds of optimization problems, it is impossible for any optimization algorithm to converge quickly on all optimization problems. Without the loss of generality, this paper discusses the convergence of GBO when dealing with simple continuous optimization problems. A detailed derivation is given below.
The proof is completed.
The GBO algorithm has a second transmission operator. In the algorithm implementation process, the secondary propagation operator can use Gaussian variation and so on. Here, for simplicity, simple random variation is used.

Proof:
The GBO algorithm has a second transmission operator. After the second transmission, the probability of the seed falling into R ε can be expressed as where υ(·) denotes Lebegue measure, n sec denotes the number of seeds in the second transmission. We know that υ(R ε ) > 0, so P se (τ ) > 0 for any τ ≥ 0. In terms of the random Markov course {ξ(τ )} +∞ τ =0 of GBO, it holds that where P me (τ ) denotes the probability of the seed falling into R ε by mechanical transmission. So, P{ξ(τ ) ∈ U * |ξ(τ − 1) / ∈ U * } ≥ P se (τ ) > 0. Hence, because the course {ξ(τ )} +∞ τ =0 of GBO is an absorbing Markov course, then Consequently, the Markov course of GBO will converge to the optimum condition U * .
The proof is completed.
When the expected value E γ is small, it means that the convergence of GBO algorithm is faster.
The calculation method of E γ is as follows: ). The proof is completed.
In accordance with Theorem 3.1, it is difficult to compute E γ because it is hard to acquire the value of λ(τ ). Therefore, its estimation is as follows. The proof of the following theorems can be referred to (Wegener, 2002).
The proof is completed.

From Theorem 3.3,
In the same way: The proof is completed.
The time complexity of GBO algorithm needs to calculate E γ . Based on the above corollary, the expression P{ξ(τ ) ∈ U * |ξ(τ − 1) / ∈ U * } represents the probability that the seed of GBO algorithm will find the global optimal solution from the non-optimum condition. The estimation of expected convergence time E γ can be estimated from the range of the expression P{ξ(τ ) ∈ U * |ξ(τ − 1) / ∈ U * }. GBO includes two important operations: a mechanical propagator and a second propagator. In this section, the equation is further analyzed to obtain the time complexity of GBO.

Theorem 4.4: Let GBO's Markov course {ξ(τ )} +∞
τ =0 and optimum condition space U * ⊂ U, then GBO is such that Where υ(R ε ), υ(S), υ(L i ) are the Lebegue measure values of R ε ,S and L i , respectively. L i is the seed diffusion range of i-th parent.
Proof: In terms of the steps of GBO, it includes two operations to generate the seeds: a mechanical propagator and a second propagator. So the following equation is obtained: where, P(mec) represents the probability that the seeds generated by all parents fall into the optimal region through the mechanical propagation operator, and the expression for P(mec) is as follows: where, L i denotes the diffusion range of seeds produced by the i-th parent; z i is the number of seeds that the i-th parent generates.
The proof is completed.
Because the actual formula is difficult to calculate, the above theorem gives a rude result. To make the calculation result more accurate, the formula of P(mec) can be changed as follows: As we know, the formulas υ(G i ∩ R ε ) and z i in Equation (7) play a central role because they change dynamically as the algorithm runs, and the formula υ(G i ∩ R ε ) is related to the parent location. According to the selection strategy of GBO, seeds with better fitness are more likely to enter the next population, so it can be assumed that only one parent can stay in the optimum region R ε in the same time, and there are the highest odds for the best seed to fall into the optimum region R ε .

Corollary 4.3: GBO algorithm's ECTE γ such that
It can be seen from this that the ECT of GBO algorithm is not only related to S, but also to population size n, the number of secondary propagation seeds n sec , especially to the optimal individuals. It should be noted that the above results are obtained under certain assumptions. Therefore, to propose a more accurate analysis method, some equations of GBO need to be analyzed in detail.

Conclusion
The GBO algorithm is a novel swarm intelligence optimization algorithm. Compared with the classical evolutionary algorithm, this algorithm has its own advantages in searching for a target space through seed propagation. In this paper, the risk of convergence is also carried on the preliminary theoretical analysis, using the algorithm of swarm intelligence and analysis of other same Markov course; at the same time, the cluster algorithm convergence theorem is given. The basic concept of the Markov random process is defined and the global convergence of the algorithm is proved. In addition, the expected convergence time of the impatiens GBO algorithm in an approximate region is provided in this paper.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This work is partially supported by Key specialized research and development breakthrough of Henan province [22210232 0456], National Key R&D Program of China [2018YFB1308800].

Data availability statement
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.