Dual sub-swarm interaction QPSO algorithm based on different correlation coefficients

ABSTRACT A novel quantum-behaved particle swarm optimization (QPSO) algorithm, the dual sub-swarm interaction QPSO algorithm based on different correlation coefficients (DCC-QPSO), is proposed by constructing master-slave sub-swarms with different potential well centres. In the novel algorithm, the master sub-swarm and the slave sub-swarm perform different functions during the evolutionary process through separate information processing strategies. The master sub-swarm is conducive to maintaining population diversity and enhancing the global search ability of particles. The slave sub-swarm accelerates the convergence rate and strengthens the particles' local search ability. Using the critical information contained in the search space and the results of the basic QPSO algorithm, the new algorithm avoids the rapid disappearance of swarm diversity and enhances search ability through collaboration between sub-swarms. Experimental results on six test functions show that DCC-QPSO outperforms the traditional QPSO algorithm in the optimization of multimodal functions, with enhancement in both convergence speed and precision.


Introduction
Optimization techniques are mathematically based methods used to search for optimal or satisfactory solutions to various engineering problems [1]. The Particle Swarm Optimization (PSO) algorithm proposed by James Kennedy and Russell Eberhart [2] is a recent implementation of these techniques. The essential idea of PSO is to emulate the flocking behaviour of birds [3]. PSO is initialized with a population of random solutions, and it then searches for optima by updating generations [4]. Unlike some other metaheuristic algorithms, the standard PSO sustains a global search strategy, avoids complex evolutionary operations and offers good convergence capability. Therefore, the PSO algorithm has drawn broad attention in various application fields. The random inertia weight Particle Swarm Optimization (RNW-PSO) has been introduced for trajectory tracking of wheeled mobile robots [5] and for coefficient optimization of proportion integration differentiation (PID) controllers [6]. An improved PSO algorithm has been used to deal with the problem of controlling a class of uncertain nonlinear systems in the presence of external disturbances [7]. A powerful optimization algorithm known as chaotic accelerated PSO (CAPSO) has been used for determining the coefficients of the proportional-integral controller of a dynamic voltage restorer [8].
Nevertheless, some problems remain to be solved in the PSO algorithm. One that has been raised and proven by van den Bergh [9] is that PSO is not guaranteed to converge to the global optimal solution. Although a significant amount of work [10][11][12] has been done in recent years to modify and improve PSO, the state of the art shows that some limitations remain. Aiming at PSO's convergence bottleneck, and drawing on a comparison between human learning processes and particles' behaviour in quantum spaces, Sun [13] proposed the quantum-behaved PSO (QPSO) algorithm, which leverages the aggregation tendency of collective intelligence in a population. In the QPSO model, individuals are represented as particles in a quantum space, which iterate continuously according to characteristics seen in human society, such as self-organization and collaboration. Theoretical proofs have shown that QPSO is a globally convergent algorithm. Thus, it has become an active area of research in several fields [13][14][15].
Although the standard QPSO outperforms the original PSO in search ability, its significant drawback is premature convergence. When one particle finds a local optimum, the others quickly converge toward it because of the single potential well centre. If the particles cannot find any better locations on the way to the local optimum, the algorithm stagnates prematurely. In order to escape from local optima, several improved variants have been presented in practical applications, such as QPSO with extremum-disturbed arithmetic operators [16], chaos QPSO [17], and QPSO with a mutation operator [18]. However, these variants increase the complexity of the algorithm while avoiding premature convergence. In order to further improve the performance of the QPSO algorithm without introducing such complexity, this paper proposes a novel algorithm building on existing research results: the dual sub-swarm interaction DCC-QPSO. First, the new algorithm divides the entire population into two sub-populations of equal size. Then, each sub-group applies a different processing method in order to separate the potential well centres. The collaboration between the master group and the slave group exploits effective information, avoids rapid loss of population diversity and prevents particles from becoming trapped in local optima by transferring information between the two sub-groups with different potential well centres. Experimental results on test functions show that DCC-QPSO outperforms the traditional QPSO algorithm in the optimization of multimodal functions, with enhancement in both convergence speed and precision.

Binary correlation QPSO algorithm
The standard QPSO algorithm
In the PSO optimization algorithm, the solution space is abstracted as the birds' foraging space, where each bird is abstracted as a massless, size-free particle flying at a certain speed. Each particle has a fitness value determined by the function to be optimized, and carries out a random search according to this fitness value. In every iteration, each particle updates itself by tracking two optima: the first is the best solution found by the particle itself, commonly referred to as the personal optimum p_best; the other is the best solution found by the entire population, commonly referred to as the global optimum g_best. For simplicity, in this article, P_i = (p_{i1}, p_{i2}, ..., p_{iD}) and G = (p_{g1}, p_{g2}, ..., p_{gD}) denote the personal optimum of particle i and the global optimum in the D-dimensional search space, respectively. Particle i's personal best position p_best is determined by Equation (1):

P_i(t) = X_i(t), if f(X_i(t)) < f(P_i(t - 1)); otherwise P_i(t) = P_i(t - 1)   (1)

The index g of the global best position G = (p_{g1}, p_{g2}, ..., p_{gD}) is determined by Equation (2):

g = argmin_{1 <= i <= N} f(P_i(t))   (2)

Reference [19] has demonstrated that standard PSO is not guaranteed to converge on the global optimum with probability 1, which is a major shortcoming of the traditional PSO. In order to achieve global convergence, building on previous studies of the particles' convergence behaviour, the QPSO algorithm was proposed based on the δ potential well by representing the PSO system as a quantum space [13].
According to the analysis of particle trajectories in the PSO algorithm by Clerc and Kennedy [20], a δ potential well can be established at the local attraction point p_i = (p_{i1}, p_{i2}, ..., p_{iD}) to attract the particles in the population, whose coordinates are:

p_{i,j}(t) = [c_1 r_{1,i,j}(t) P_{i,j}(t) + c_2 r_{2,i,j}(t) G_j(t)] / [c_1 r_{1,i,j}(t) + c_2 r_{2,i,j}(t)]   (3)

In this equation, r_1 and r_2 are random numbers independently and uniformly distributed on the interval [0, 1], called random factors; c_1 is the individual cognitive acceleration coefficient, whereas c_2 is the global cognitive acceleration coefficient.
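As an illustration, the local attractor of Equation (3) can be sketched in Python (a minimal sketch of our own, not the authors' code; the function and variable names are assumptions):

```python
import random

def local_attractor(p_best, g_best, c1=2.0, c2=2.0):
    """Compute the local attraction point p_i per dimension (Equation (3)).

    p_best, g_best: lists holding P_i(t) and G(t); c1 and c2 are the
    individual and global cognitive acceleration coefficients. r1 and r2
    are drawn independently from U(0, 1) for each dimension.
    """
    p = []
    for pb, gb in zip(p_best, g_best):
        r1, r2 = random.random(), random.random()
        denom = c1 * r1 + c2 * r2
        # Guard against the (measure-zero) degenerate draw r1 = r2 = 0.
        w = c1 * r1 / denom if denom > 0.0 else 0.5
        p.append(w * pb + (1.0 - w) * gb)  # convex combination of P and G
    return p
```

Because the weight w lies in [0, 1], each coordinate of the attractor lies between the corresponding coordinates of P_i and G.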

Equation (3) can be simplified as

p_{i,j}(t) = phi_j(t) P_{i,j}(t) + (1 - phi_j(t)) G_j(t)

where

phi_j(t) = c_1 r_{1,i,j}(t) / (c_1 r_{1,i,j}(t) + c_2 r_{2,i,j}(t))

In our representation, particles move in a quantum space, so a particle's state can be described by the wave function ψ(X, t). From the point of view of dynamics, the convergence process of a particle can be described as follows: the particle continuously approaches the local attractor p_i with decreasing speed, and eventually coincides with p_i. The steady state of a particle in the δ potential well can be expressed using the Schrödinger equation.
By solving the above equation, the probability distribution function of each particle's position in every dimension can be obtained:

F(X_{i,j}(t + 1)) = exp(-2 |p_{i,j}(t) - X_{i,j}(t + 1)| / L_{i,j}(t))
And the position-updating equation of every particle in each generation of QPSO can be deduced as:

X_{i,j}(t + 1) = p_{i,j}(t) ± (L_{i,j}(t) / 2) ln(1 / u_{i,j}(t))   (6)

where u_{i,j}(t) ~ U(0, 1) and L_{i,j}(t) is the length of the potential well, evaluated using the following equation:

L_{i,j}(t) = 2α |C_j(t) - X_{i,j}(t)|

where C(t) is the average of all personal best positions, known as the gravity position of the population, and is evaluated as follows:

C(t) = (C_1(t), C_2(t), ..., C_D(t)) = (1/N) Σ_{i=1}^{N} P_i(t)   (10)

Therefore, the position evolution equation (6) is finally written as:

X_{i,j}(t + 1) = p_{i,j}(t) ± α |C_j(t) - X_{i,j}(t)| ln(1 / u_{i,j}(t))

In the above equation, α is referred to as the contraction-expansion coefficient, which is the only parameter in the algorithm apart from the population size and iteration count. In the iteration process, convergence performance is controlled by fine-tuning α. The value of α can be fixed or decreased linearly.
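The mean best position C(t) and the position update above can be sketched as follows (an illustrative sketch with names of our own choosing; the ± branch is chosen with probability 1/2, as is conventional in QPSO implementations):

```python
import math
import random

def mean_best(P):
    """C(t): per-dimension average of all personal best positions."""
    n, d = len(P), len(P[0])
    return [sum(P[i][j] for i in range(n)) / n for j in range(d)]

def qpso_update(x, p_attr, C, alpha=0.75):
    """One QPSO position update: X = p +/- alpha*|C - X|*ln(1/u), u ~ U(0,1)."""
    new_x = []
    for xj, pj, cj in zip(x, p_attr, C):
        u = random.random()
        if u == 0.0:                        # guard: ln(1/0) is undefined
            u = 1e-12
        L = 2.0 * alpha * abs(cj - xj)      # potential-well length L_ij(t)
        step = 0.5 * L * math.log(1.0 / u)  # = alpha*|C_j - X_ij|*ln(1/u)
        new_x.append(pj + step if random.random() < 0.5 else pj - step)
    return new_x
```

Note that as the swarm contracts, |C_j(t) - X_{i,j}(t)| shrinks and the step size decreases, which is what drives convergence.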

Binary correlation QPSO algorithm
The coordinate formula of the potential well centre, p_i = (p_{i1}, p_{i2}, ..., p_{iD}), can be divided into two components:
(1) the individual cognitive component, [c_1 r_{1,i,j}(t) / (c_1 r_{1,i,j}(t) + c_2 r_{2,i,j}(t))] P_{i,j}(t), which reflects the particle's own experience;
(2) the social cognitive component, [c_2 r_{2,i,j}(t) / (c_1 r_{1,i,j}(t) + c_2 r_{2,i,j}(t))] G_j(t), which represents the information shared among the particle population.
Under the combined effect of the two components above, the QPSO algorithm seeks the optimal solution, constantly adjusting the position of p_i in the solution space according to the shared information and the experience of each particle.
The coefficients c_1 and c_2 are the stochastic weights of the particle's acceleration, reflecting the information exchange within the particle swarm. Setting a large c_1 causes the particles to wander in a local area because of undue reliance on their own experience, while a large c_2 causes the particles to converge to a local optimum prematurely [19].
As these are important parameters in the standard PSO algorithm, there are many studies on how best to set the values of the acceleration coefficients c_1 and c_2 [1,12,20-24]. These policies have yielded some improvement in the PSO algorithm; however, they do not take into account the impact of the random factors r_1 and r_2 on algorithm performance.
The independence assumption between r_1 and r_2 in the formula for p_i means the algorithm cannot differentiate its use of p_best from its use of g_best. At present there are few studies on the effects of the parameters r_1 and r_2 on the algorithm. However, it is necessary to analyse the random factors in order to study further how the use of the particles' own experience and of community-shared information each affects the performance of the QPSO algorithm.
To analyse the connection between r_1 and r_2 in QPSO, reference [20] suggested the concept of binary correlation factors and proposed the binary correlation QPSO algorithm, referred to as BC-QPSO. BC-QPSO constructs the relation between r_1 and r_2 using the bivariate normal Copula function Φ_ρ(Φ^{-1}(r_1), Φ^{-1}(r_2)), the Fréchet-Hoeffding lower bound W(u, v) = max(u + v - 1, 0), the Fréchet-Hoeffding upper bound M(u, v) = min(u, v) and the product Copula Π(u, v) = uv. The particle's position evolution equation of BC-QPSO is finally defined as follows:

X_{i,j}(t + 1) = p_{i,j}(t) ± α |C_j(t) - X_{i,j}(t)| ln(1 / u_{i,j}(t)), with p_{i,j}(t) given by Equation (3) and (r_1, r_2) drawn jointly from H   (13)

where H is the joint distribution function of the binary correlation factors r_1 and r_2; C is the bivariate normal Copula function; ρ is the specified correlation coefficient, an indicator of dependence strength that reflects the linear correlation between the variables r_1 and r_2; Φ_ρ is the two-dimensional standard normal distribution function with correlation coefficient ρ; and Φ^{-1} is the inverse of the one-dimensional standard normal distribution function.
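Correlated random factors with uniform margins can be generated with a Gaussian copula, for example as below (an illustrative sketch, not the authors' implementation; `statistics.NormalDist` from the Python standard library supplies the standard normal CDF Φ):

```python
import math
import random
from statistics import NormalDist

_nd = NormalDist()  # standard normal distribution

def correlated_uniforms(rho):
    """Draw (r1, r2) uniform on [0, 1] with Gaussian-copula correlation rho.

    Sample a bivariate normal pair (z1, z2) with correlation rho, then map
    each margin through Phi: the margins stay uniform while the copula
    carries the dependence between r1 and r2.
    """
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(max(0.0, 1.0 - rho * rho)) * random.gauss(0.0, 1.0)
    return _nd.cdf(z1), _nd.cdf(z2)
```

At the extremes the copula degenerates: for ρ = 1 this reduces to r_2 = r_1 (the upper bound M), and for ρ = -1 to r_2 = 1 - r_1 (the lower bound W), while ρ = 0 recovers independent factors (the product Copula Π).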

The dual sub-swarm interaction QPSO algorithm based on different correlation coefficients
The BC-QPSO proposed by reference [20] enhances optimization performance to some extent. However, this algorithm builds the potential well on a single position, so the global search ability is compromised by the optimum's attraction on the particles. The multi-group interaction paradigm has proved an effective way to enhance the overall performance of BC-QPSO. In order to further enhance BC-QPSO's search ability and convergence precision, on the basis of our previous work, a novel interactive QPSO is proposed in this paper: the dual sub-swarm interaction DCC-QPSO. This new algorithm separates the single potential well into two, and the entire population is divided into two sub-swarms that adopt different information processing strategies, respectively, to expand the search range. Meanwhile, the waiting effect [15] among the particles prevents excessive aggregation of the population. Therefore, the new algorithm achieves superior convergence performance.

Learning strategies
The analysis of BC-QPSO by reference [20] shows that the diversity of the population decreases as the correlation coefficient ρ (-1 <= ρ <= 1) of the binary correlation factors r_1 and r_2 (see Equation (13)) increases. In the BC-QPSO model with ρ = -1, the particle's unbalanced use of p_best and g_best in determining the position of the potential well centre pushes the potential well away from g_best, producing a trend of population expansion, which helps extend the search space as well as the population diversity. In the BC-QPSO model with ρ = 1, particles use p_best and g_best in a highly balanced manner, making the potential well move rapidly towards the extremum and causing a quick contraction of the population, which helps to speed up convergence. From this analysis we conclude that the higher the correlation between the binary correlation factors r_1 and r_2, the faster the convergence of the algorithm; conversely, the lower the correlation, the greater the diversity of the population and the higher the precision of convergence. In order to guarantee both the desired convergence precision and efficiency, by combining the fully positively correlated BC-QPSO (ρ = 1) and the fully negatively correlated BC-QPSO (ρ = -1), we derive a new QPSO algorithm, DCC-QPSO. Suppose that the entire population is S and the size of the population is N; the master group is denoted S_1, with population size N_{S1} and global best g_best1; the slave group is S_2, with population size N_{S2} and global best g_best2; thus S_1 ∪ S_2 = S and N_{S1} + N_{S2} = N.
The basic ideas of DCC-QPSO are as follows: in a randomized, quantum-behaved particle swarm, we divide the whole population into two fully independent sub-groups. One group adopts the fully negative correlation strategy over p_best and g_best when determining its potential well; that is, it carries out an iterated search of the solution space with ρ = -1 (the correlation coefficient between r_1 and r_2). This group is referred to as the master group S_1. The other group adopts the fully positive correlation strategy over p_best and g_best when determining its potential well; that is, it carries out an iterated search of the solution space with ρ = 1. This group is referred to as the slave group S_2.
The evolution equation of each particle in DCC-QPSO based on the δ potential well is the BC-QPSO position update of Equation (13), with the correlation coefficient set to ρ = -1 for particles in the master group S_1 and ρ = 1 for particles in the slave group S_2. Due to the different learning strategies, the two sub-groups play different roles in the evolution of the swarm. The master group's learning strategy over the existing information helps maintain diversity and enhances the global search ability of each particle. The slave group's learning strategy over the existing information helps accelerate convergence and enhances the local search ability of each particle. By enabling information exchange between the two sub-groups, DCC-QPSO takes advantage of both learning strategies and compensates for the shortcomings of each.
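The two extreme strategies admit a very simple implementation, since ρ = 1 forces r_2 = r_1 and ρ = -1 forces r_2 = 1 - r_1. A one-dimensional sketch of the resulting attractor (names and defaults are our assumptions):

```python
import random

def attractor(pb, gb, rho, c1=2.0, c2=2.0):
    """Local attractor for one dimension under the two DCC-QPSO strategies.

    rho = -1 (master group S1): r2 = 1 - r1, fully negative correlation.
    rho = +1 (slave group S2):  r2 = r1, fully positive correlation.
    """
    r1 = random.random()
    r2 = r1 if rho == 1 else 1.0 - r1
    denom = c1 * r1 + c2 * r2
    if denom == 0.0:                # degenerate draw: fall back to midpoint
        return 0.5 * (pb + gb)
    w = c1 * r1 / denom
    return w * pb + (1.0 - w) * gb
```

With c_1 = c_2, the slave group's attractor collapses to the fixed midpoint of p_best and g_best (rapid contraction), while the master group's attractor is spread uniformly along the segment between them (sustained diversity), which mirrors the analysis above.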

Interaction between sub-swarms
Information exchange between the two sub-groups is achieved through their respective global best fitness values. At the end of each iteration, the fitness values corresponding to the current best positions of S_1 and S_2 are compared. If g_best2 is better than g_best1, then g_best2 is assigned to g_best1; otherwise, g_best1 is assigned to g_best2. The essence of this operation is to update the whole swarm's best position, which helps the sub-groups escape from local optima. Thus, the whole swarm does not become trapped in a local optimum even if a "super individual" appears.
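This end-of-iteration exchange can be sketched as follows (assuming a minimization problem; the function name is ours):

```python
def exchange_gbest(gbest1, f1, gbest2, f2):
    """End-of-iteration exchange between sub-swarms (minimization).

    Whichever sub-swarm holds the better global best passes it to the
    other, so both groups share one overall best position afterwards.
    Returns the common (position, fitness) pair.
    """
    if f2 < f1:
        return gbest2, f2   # g_best2 is better: assign it to g_best1
    return gbest1, f1       # otherwise g_best1 is assigned to g_best2
```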
Through the mutually complementary and collaborative evolution of the master group S_1 and the slave group S_2, advantageous information is utilized to prevent the swarm's evolution from stalling in local optima. This is done without increasing the swarm size, adding parameters, complicating the algorithm or compromising convergence. In addition, the mutual collaborative iteration during the search process maintains higher diversity and global search ability, while avoiding excessive aggregation of the swarm particles.

Algorithm execution process
Based on the designs and definitions discussed above, the execution process of the DCC-QPSO algorithm is as follows: Step 1: Parameter setting. The parameters to be set include the individual cognitive acceleration coefficient c_1, the global cognitive acceleration coefficient c_2, the contraction-expansion factor α, the swarm population size N, and the maximum number of iterations iterMax or the required error precision of the fitness. Meanwhile, the whole population is divided into two fully independent sub-groups. The sub-group adopting the fully negative correlation (i.e. correlation coefficient ρ = -1) is referred to as the master group S_1, with population size N_{S1}; the other sub-group, adopting the fully positive correlation, is referred to as the slave group S_2, with population size N_{S2}.
Step 2: Initialization of the population. For S_1, initialize the position of every particle in the solution space by randomly generating X_{i,j}(0), and let it be the personal best position P_{i,j}(0) = X_{i,j}(0), where i ∈ {1, ..., N_{S1}}, j ∈ {1, ..., D}. For S_2, initialize the position of every particle in the same way and let P_{i,j}(0) = X_{i,j}(0), where i ∈ {N_{S1} + 1, ..., N}, j ∈ {1, ..., D}. Step 3: Calculate the fitness values of all particles in S_1 and S_2. Supposing the optimization problem to be solved is a minimization one, assign the position corresponding to the smallest fitness value to the global best position of each sub-group, i.e. g_best1 = {X_i | min f(X_i), i ∈ {1, ..., N_{S1}}}, g_best2 = {X_i | min f(X_i), i ∈ {N_{S1} + 1, ..., N}}.
Step 4: Calculate the average best positions of the entire population C(t) according to Equation (10) and evaluate the parameters L i,j (t) of the master and slave sub-groups, respectively.
Step 5: Update the position of each particle i (1 <= i <= N); that is, calculate new positions for all particles according to the position evolution equation.
Step 6: Update the personal best positions using Equation (1) for the master and slave sub-groups, respectively: if f(X_i(t)) < f(P_i(t - 1)), then let P_i(t) = X_i(t); otherwise, P_i(t) = P_i(t - 1).
Step 7: If i ∈ S_1 and the fitness value of P_i(t) is better than that of the global best position of the whole swarm P_g(t - 1), i.e. f(P_i(t)) < f(P_g(t - 1)), then P_i(t) is saved as the global best position of the master group S_1, denoted P_gs1(t); otherwise, P_gs1(t) = P_g(t - 1). If i ∈ S_2 and f(P_i(t)) < f(P_g(t - 1)), then P_i(t) is saved as the global best position of the slave group S_2, denoted P_gs2(t); otherwise, P_gs2(t) = P_g(t - 1).
Step 8: Inter-group information exchange. Compare the fitness values of P_gs1(t) and P_gs2(t) and assign the better position to both sub-groups, updating the whole swarm's best position P_g(t).
Step 9: Termination determination. If the maximum number of iterations iterMax has been reached or the required error precision of the fitness value has been achieved, stop the search and output the results. Otherwise, let t = t + 1 and repeat Steps 3-9.
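The steps above can be condensed into a compact sketch (our own simplified implementation, not the authors' code; the bound clamping, parameter defaults and equal group split are assumptions):

```python
import math
import random

def dcc_qpso(f, dim, bounds, n=20, iters=200, c1=2.0, c2=2.0):
    """Minimal DCC-QPSO sketch following Steps 1-9 (minimization).

    f: objective; bounds = (lo, hi) for every dimension. The first n//2
    particles form the master group S1 (rho = -1), the rest the slave
    group S2 (rho = +1). alpha decreases linearly from 1.0 towards 0.5.
    """
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    P = [x[:] for x in X]                       # personal bests (Step 2)
    fP = [f(x) for x in P]
    half = n // 2
    groups = [range(0, half), range(half, n)]   # S1 (master), S2 (slave)
    gb, fgb = [], []
    for grp in groups:                          # Step 3: per-group g_best
        g = min(grp, key=lambda i: fP[i])
        gb.append(P[g][:])
        fgb.append(fP[g])
    for t in range(iters):
        alpha = 1.0 - 0.5 * t / iters           # contraction-expansion factor
        C = [sum(P[i][j] for i in range(n)) / n for j in range(dim)]  # Step 4
        for k, grp in enumerate(groups):
            rho = -1 if k == 0 else 1
            for i in grp:
                for j in range(dim):            # Step 5: position update
                    r1 = random.random()
                    r2 = r1 if rho == 1 else 1.0 - r1
                    denom = c1 * r1 + c2 * r2
                    w = c1 * r1 / denom if denom > 0 else 0.5
                    p = w * P[i][j] + (1.0 - w) * gb[k][j]   # attractor
                    u = random.random() or 1e-12
                    step = alpha * abs(C[j] - X[i][j]) * math.log(1.0 / u)
                    X[i][j] = p + step if random.random() < 0.5 else p - step
                    X[i][j] = min(hi, max(lo, X[i][j]))      # clamp to bounds
                fx = f(X[i])                    # Step 6: p_best update
                if fx < fP[i]:
                    P[i], fP[i] = X[i][:], fx
                    if fx < fgb[k]:             # Step 7: group g_best update
                        gb[k], fgb[k] = X[i][:], fx
        if fgb[1] < fgb[0]:                     # Step 8: g_best exchange
            gb[0], fgb[0] = gb[1][:], fgb[1]
        else:
            gb[1], fgb[1] = gb[0][:], fgb[0]
    best = 0 if fgb[0] <= fgb[1] else 1         # Step 9: output best result
    return gb[best], fgb[best]
```

For example, `dcc_qpso(lambda x: sum(v * v for v in x), 5, (-10.0, 10.0))` minimizes a 5-dimensional Sphere function.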
The execution flow chart of the DCC-QPSO algorithm is as shown in Figure 1.

Experimental design
The performance and efficiency of the intelligent algorithms tend to be affected by the experimental parameter settings [16]. How to determine the parameters that achieve optimal performance is in itself a very complex optimization problem. In order to obtain reasonable experimental results, a set of benchmark functions, including the Sphere function, Rosenbrock function, Rastrigin function, Griewank function, Ackley function and Schaffer function, were adopted to test QPSO, BC-QPSO, DIR-QPSO and DCC-QPSO in a performance comparison.

Benchmark functions
Benchmark functions with various characteristics are a major tool for performance evaluation in evolutionary algorithms. Unimodal and multimodal problems are commonly seen in engineering projects, thus, unimodal and multimodal functions are used as testing functions in this paper. Expressions, search range of variables, initialization range, optimal solution and optimal values are given in Tables 1 and 2.
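For reference, the commonly used definitions of several of the named benchmarks can be written as follows (these are the standard formulations from the benchmarking literature; the exact constants and ranges used in the experiments should be checked against Tables 1 and 2):

```python
import math

def sphere(x):
    """Unimodal: f(x) = sum x_i^2, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: many regularly spaced local minima, global minimum 0 at origin."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def griewank(x):
    """Multimodal with a product term coupling the dimensions; minimum 0 at origin."""
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for i, v in enumerate(x, 1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1.0

def ackley(x):
    """Deep global 'valley' at the origin surrounded by regular local optima."""
    d = len(x)
    a = -20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / d))
    b = -math.exp(sum(math.cos(2.0 * math.pi * v) for v in x) / d)
    return a + b + 20.0 + math.e
```

All four functions attain their global minimum value 0 at the origin, which makes convergence precision easy to read off directly from the achieved fitness value.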

Experimental parameter configuration
The parameter α decreases linearly from 1 to 0.5 in every algorithm tested, and the correlation parameter ρ is set to -1 in the BC-QPSO algorithm. The dimensionality of the benchmark functions is set to 20 and the maximum number of iterations to 1000. The swarm population size is set to 50, with the master group and the slave group each of size 25; every benchmark function is tested in 30 independent runs, and the average value over these runs is reported for each test function.

Experimental results
The mean fitness value and the central processing unit (CPU) time per iteration of QPSO, BC-QPSO, DIR-QPSO [23] and DCC-QPSO on the benchmark functions are shown in Table 3.
We see from Table 3 that, compared with QPSO, DIR-QPSO and BC-QPSO, DCC-QPSO finds the optimal value for the Rastrigin function through the interaction between sub-groups. In addition, it achieves the best optimization results for the Sphere function, the multimodal Ackley function and the Expanded Schaffer function. DIR-QPSO also finds the optimal value for the Rastrigin function. The standard QPSO offers the best optimization precision on the Rosenbrock function and the non-linear multimodal Griewank function, and it also attains the optimal value on the Ackley function. Figure 2 depicts the convergence curves of the unimodal and multimodal test functions for the QPSO, DIR-QPSO, DCC-QPSO and BC-QPSO algorithms when the population size is 50 and the problem dimensionality is 20. The comparison shows that the δ-potential-well-based DCC-QPSO algorithm not only attains better optimization precision, but also converges faster when processing unimodal functions, including the Sphere and Rosenbrock functions, illustrated in Figure 2(a,b), respectively. The convergence curve of the Rastrigin function is shown in Figure 2(c). This function is a non-linear multimodal function with many local optima; it is therefore hard to find the global optimum and easy to become trapped in a local one. In solving the 20-dimensional Rastrigin function, the two single-population QPSO algorithms become trapped in local optima, whereas the two interactive dual-group QPSO algorithms perform better owing to the separation of potential well centres and the interaction between the master group and the slave group.
Figure 2(d) shows the convergence curve of the Griewank function. The Griewank function is a strongly non-linear multimodal function with strong interference among its product terms; its local optima are distributed in a predictable manner, and their number decreases as the dimensionality of the problem increases. The experimental results show that DIR-QPSO, DCC-QPSO and BC-QPSO exhibit similar performance when solving the 20-dimensional Griewank function, with DIR-QPSO slightly outperforming the others.
The convergence curve of the Ackley function is shown in Figure 2(e). The optimal solution of Ackley lies deep down in a "valley", surrounded by regularly distributed local optima. The DCC-QPSO algorithm proposed in this paper gives the highest convergence precision, whereas the standard QPSO shows an advantage in convergence speed on the Ackley function. Figure 2(f) illustrates the convergence curve of the Expanded Schaffer function. The optimal value of this function is surrounded by a number of concentrically distributed local minima. In addition, the strong fluctuation of the function surface makes it even harder to find the true optimal solution. All four algorithms discussed in this section fall into local optima during the iteration process. The standard QPSO converges after 650 iterations and DCC-QPSO after 750 iterations.
In summary, the single-population QPSO algorithms (the standard QPSO and BC-QPSO) can efficiently solve the simpler unimodal functions with a single optimal solution, because they maintain the diversity of the population while driving the average best position C towards the optimum. Meanwhile, multi-population interactive algorithms add unnecessary complexity to the optimization process and offer no advantage on unimodal functions. The dual-group QPSO algorithms exhibit better convergence performance when dealing with the various multimodal functions. Compared with the other QPSO algorithms, DCC-QPSO constructs two potential wells for the master and slave groups, each using a strategy based on a different correlation coefficient. This policy gives the novel algorithm outstanding global search ability through the sharing of best positions between S_1 and S_2 via mutual complementation and co-evolution. The search pattern in which the two sub-groups search simultaneously and learn from each other increases the probability of finding the optimal solution, and offers higher convergence precision and speed within a limited number of iterations.
Although superior performance can be gained from multi-group interactive algorithms when solving multimodal functions, the size of the population needs to be expanded. For single-population QPSO algorithms, the population size can be chosen between 20 and 60, while for multi-population interactive algorithms the population should not be too small, otherwise optimization may be compromised [24]. For instance, in this experiment, the population sizes of DIR-QPSO and DCC-QPSO are both set to 50, divided equally into 25 for the master and slave groups, respectively. The energy-consuming problem introduced by multiple groups can be addressed with parallel computing techniques, so the multi-group approach is a rational and effective route towards performance enhancement.

Conclusion
This paper proposed DCC-QPSO, a new quantum-behaved PSO algorithm. The new algorithm divides the entire population into two sub-populations of equal size and applies different processing methods to these sub-groups in order to separate the potential well centres. The collaboration between the master and slave sub-groups makes possible more thorough mutual learning between particles, which enhances global search ability, avoids premature convergence to local optima and improves the convergence performance of QPSO. Through comparisons on unimodal and multimodal functions, DCC-QPSO proves to be a promising global optimization algorithm. It exhibits higher convergence precision and speed when dealing with multimodal functions compared with traditional QPSO algorithms, and shows broad application potential in related areas where high solution precision is required.

Disclosure statement
No potential conflict of interest was reported by the authors.