A hybrid differential evolution for multi-objective optimisation problems

To use differential evolution (DE) effectively for multi-objective optimisation problems, its search ability must be ensured; however, that ability depends on the related parameters and the mutation mode. Based on decomposition, this paper proposes a hybrid differential evolution (HMODE/D) for solving multi-objective optimisation problems. First, when the generation counter satisfies a certain condition, a local optimum is selected using the objective values of neighbouring individuals to produce mutation offspring. Second, a heuristic crossover operator is established by a uniform design method to produce better crossover individuals. Third, an external archive is set for each individual to store individuals that are beneficial to the objective functions, and individuals are then selected from this archive to generate mutation offspring. In addition, since the performance of DE is determined by its parameters, a self-adaptive adjustment strategy driven by objective-space information is adopted for the relevant parameter. Finally, a series of test functions with 5, 10, and 15 objectives is used in the experiments to evaluate HMODE/D. The results show that HMODE/D solves multi-objective optimisation problems very well.


Introduction
Multi-objective optimisation problems (MOPs) widely exist in real life and in industrial applications, such as the design of complex systems (Ramirez et al., 2016), water reservoir operation (Giuliani et al., 2014), risk design optimisation (Gang & Hao, 2014), shop scheduling (J. Li et al., 2020), environmental-economic load dispatch (Yalcinoz & Rudion, 2020), and the optimisation of engineering systems (Falahiazar & Shah-Hosseini, 2018). Such a problem usually consists of several objectives that conflict with each other. In general, the model of a multi-objective optimisation problem (MOP) is

minimise F(x) = (f_1(x), f_2(x), . . . , f_m(x))^T, subject to x ∈ Ω,

where m is the number of objectives, x is an n-dimensional decision variable vector, and Ω is the decision (variable) space. Unlike a single-objective optimisation problem, an MOP does not seek the optimum of a single objective but a reasonable trade-off among all objectives in the objective space, producing a set of solutions. This set is called the Pareto set (PS), and the set of all objective vectors corresponding to the PS is called the Pareto front (PF) (Mardle & Miettinen, 1999). It is challenging to optimise conflicting objectives concurrently with traditional methods. Evolutionary algorithms can produce a set of solutions in parallel, and a set of approximate Pareto optima can be obtained through multiple iterations. Therefore, multi-objective evolutionary algorithms (MOEAs) are widely used to solve MOPs. Classical MOEAs include NSGAII and MOEA/D (Q. Zhang & Li, 2007). NSGAII uses a non-dominated sorting approach to select offspring, which helps ensure a uniform distribution of Pareto optima. However, the proportion of non-dominated solutions grows with the number of objectives, which weakens the selection pressure and worsens the convergence of the algorithm. To address this loss of selection pressure, NSGAII variants have been proposed to solve MOPs in Deb and Jain (2014).
Moreover, MOPs have also been solved by adjusting the Pareto dominance relationship to enlarge the dominance region.
MOEA/D decomposes a multi-objective optimisation problem into several scalar optimisation subproblems and optimises them simultaneously. Compared with NSGAII, MOEA/D has better convergence as the number of objectives increases; however, the uniformity of the PF is relatively worse. Weighing the advantages and disadvantages of MOEA/D, many improved algorithms based on it have been proposed to solve MOPs in Y. Yuan et al. (2016), Lotfi and Karimi (2017), Hui et al. (2018) and Farias and Araújo (2019).
The following two principal aspects should be considered when using MOEAs to solve MOPs: (1) how to ensure the convergence of the algorithm so that the objective values of individuals converge as closely as possible to the true PF; (2) how to ensure the diversity of the population so that the PF is uniformly distributed in the objective space. Researchers have designed algorithms that analyse the above problems from different perspectives; the specific work is summarised as follows.
In recent years, some researchers have utilised user preferences or reference points to select offspring individuals during the search. For example, a preference-based weight vector was designed in X. Zhang et al. (2016) to enable the algorithm to search in the region of interest of decision-makers. In contrast, Zhang considered that, without any preference between objectives, a point selected from a set of non-inferior solutions, called the knee point, can accelerate the convergence of the population; therefore, an evolutionary algorithm based on the knee point was designed in X. Zhang et al. (2015). However, Cheng pointed out that some algorithms, including those above, face the problem that the population might not converge to the true PF as the number of objectives increases. Therefore, a multi-objective evolutionary algorithm based on reference vector guidance was proposed in Cheng et al. (2016), which used the angle penalty distance to balance the convergence and diversity of solutions. Moreover, a convergence-diversity balanced fitness evaluation mechanism was proposed in J. Liu et al. (2019), which can evaluate each solution's quality and thereby increase the selection pressure. A novel interactive preference-based multi-objective evolutionary algorithm was proposed in Guo et al. (2019) to solve the bolt supporting network problem, which dynamically updates the preference regions based on satisfactory candidates during the evolution.
In order to ensure that the algorithm converges as closely as possible to the Pareto optima, different optimisation operators have been designed to improve performance, such as particle swarm optimisation in Lin et al. (2018), ant colony optimisation in Ning et al. (2018), genetic algorithms in He et al. (2017), and differential evolution (DE) in Elsayed et al. (2013). DE was proposed by Price and Storn (Storn & Price, 1997) in 1997 and can both exploit and explore during the population search. Therefore, DE has attracted attention in recent years for solving single-objective problems, as in Tang et al. (2015), X. Wang and Tang (2016) and C. Wang et al. (2019). Zhang proposed an algorithm (MOEA/D-DE) in 2009, which uses DE to produce offspring and combines it with multi-objective decomposition to solve MOPs (Hui & Zhang, 2009). Considering that the performance of DE for MOPs is related to the mutation operator and the associated parameters, adaptive differential evolution and decomposition with variable neighbourhood size are combined in Z. Liu et al. (2014), where the neighbourhood size is determined by an ensemble of neighbourhood sizes and the differential strategy is selected from a strategy pool. Similarly, the DE strategy is chosen from a candidate pool according to a probability that depends on its previous experience in Venske et al. (2014). Moreover, based on the decomposition approach, modified mutation and crossover operations of DE are presented in Jin and Tan (2015) to solve continuous MOPs. A novel DE is proposed in Xie et al. (2020), in which the mutation operator and parameter can be adaptively and dynamically selected using the search information of the current population and the population evolution state. Nayak et al. (2018) present an elitism-based multi-objective DE for feature selection, and a multi-objective clustering algorithm based on DE has been proposed in Nayak et al. (2019) to solve data clustering. In addition, other algorithms have been combined with DE to build hybrid evolutionary algorithms that can solve MOPs, such as X. Zhang et al. (2016), X. Zhang et al. (2018) and Guerrero-Peña and Araújo (2019).
However, in Tanabe and Ishibuchi (2019), the influence of different mutation strategies on MOEA/D-DE is analysed in detail, and it is pointed out that reasonable mutation strategies and parameters can improve the performance of MOEA/D-DE, which means that the convergence speed of DE is affected by the mutation operation and parameters. Therefore, it is essential for DE to select a reasonable mutation operation and relevant parameters to solve MOPs. Although some optimisers use different strategies to adjust the parameter F of DE, it is rare to use the size of the decision space to adjust this parameter. In addition, according to the characteristics of MOEA/D-DE and the feedback information of the function values, an external archive and a rank sum are established, and the local optimum is selected to produce offspring individuals accordingly; this process can improve the convergence of DE. It is worth noting that the binomial crossover operation of DE is the recombination of a parent and an offspring individual, which has certain randomness. Clearly, if each component of the crossover individual is selected according to some heuristic information, this operation is better than binomial crossover. Therefore, based on MOEA/D-DE, this paper makes the following contributions.
• The norm values of vectors satisfying certain conditions are used to self-adaptively adjust parameter F.
• The gambling (roulette) wheel is used to select a local optimal individual for the mutation operator, which helps ensure the convergence of DE.
• An external archive is set for each individual, formed by individuals whose weight values are relatively small; an individual is then selected from the external archive to execute the mutation operator.
• A heuristic crossover operator is established based on a uniform design.
An improved DE based on decomposition is developed by incorporating different mutation operations, a heuristic crossover, and the self-adaptive adjustment of the relevant parameter; it can be applied to real-world industrial optimisation problems such as the operation of distributed energy systems, the operation of natural gas pipeline networks, crash safety design of vehicles, greenhouse gas emissions from ships, and land use allocation in high flood risk areas. The rest of this paper is organised as follows. Section 2 briefly explains the related work. Section 3 introduces the improved schemes, including selecting the local optimum for the mutation operator, the heuristic crossover operation based on a uniform design, the self-adaptive adjustment of the relevant parameter, the mutation operation using an external archive, and the framework of HMODE/D. The simulation and the comparison between HMODE/D and other algorithms are presented in Section 4. Finally, the conclusion is presented in Section 5.

Related work
This section introduces some basic algorithmic approaches, including classical DE and the multi-objective differential evolution algorithm based on decomposition.

Classical DE
The classical DE was proposed by Price and Storn to effectively solve single-objective optimisation problems; it uses mutation, crossover, and selection operators to evolve the population. The basic procedure of DE is described as follows.
Mutation is performed on the parent individuals to produce offspring, and different modes of mutation have distinct characteristics. Some common mutation operations are listed below:

• DE/rand/1: v_i = x_{r1} + F(x_{r2} − x_{r3})
• DE/rand/2: v_i = x_{r1} + F(x_{r2} − x_{r3}) + F(x_{r4} − x_{r5})
• DE/best/1: v_i = x_best + F(x_{r1} − x_{r2})
• DE/best/2: v_i = x_best + F(x_{r1} − x_{r2}) + F(x_{r3} − x_{r4})
• DE/current-to-best/1: v_i = x_i + F(x_best − x_i) + F(x_{r1} − x_{r2})

where the indices r1, r2, . . . , r5 are distinct integers chosen randomly from {1, 2, . . . , NP} \ {i}, x_best denotes the best individual in the current population, and the parameter F controls the step length of the difference vector.
DE/rand/1 and DE/rand/2 use random vectors to provide the mutation direction for the population, which can almost cover the region of the potential optimal solution. However, the random search of these two mutation methods lacks exploitation ability, which cannot guarantee the convergence of DE. In contrast, DE/best/1 and DE/best/2 use the global optimal solution to provide the mutation direction, which can improve the convergence of DE. However, these two mutation methods lack exploration ability, which makes DE prone to falling into local optima. Different from the above methods, DE/current-to-best/1 and DE/current-to-rand/1 achieve a better compromise between the convergence of DE and the diversity of the population.
Crossover selects each component from the parent and mutation individuals with a certain probability CR; this process is named binomial crossover, i.e.

u_{i,j} = v_{i,j} if rand(0, 1) ≤ CR or j = j_rand, otherwise u_{i,j} = x_{i,j},  (8)

where i = 1, 2, . . . , NP; j = 1, 2, . . . , D; j_rand is an integer randomly chosen in [1, D]; rand(0, 1) is a random number in [0, 1]; and the parameter CR is called the crossover probability. Selection is a one-to-one comparison between parent and trial individuals: the fitness values of each parent individual and the corresponding crossover individual are compared to select the next-generation individual.
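Putting the three operators together, the classical DE loop can be sketched as follows. This is a minimal illustration on a sphere function; the test function, bounds, and parameter values are illustrative choices, not taken from the paper.

```python
import random

def de_optimize(f, dim, bounds, np_=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimal classical DE (DE/rand/1/bin): mutation, binomial crossover,
    one-to-one greedy selection."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # DE/rand/1: three distinct indices, all different from i
            r1, r2, r3 = rng.sample([k for k in range(np_) if k != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
            # binomial crossover: at least one component comes from v (j_rand)
            jrand = rng.randrange(dim)
            u = [v[j] if (rng.random() <= CR or j == jrand) else pop[i][j]
                 for j in range(dim)]
            u = [min(max(xj, lo), hi) for xj in u]   # repair boundary violations
            fu = f(u)
            if fu <= fit[i]:                          # greedy one-to-one selection
                pop[i], fit[i] = u, fu
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(xi * xi for xi in x)
x_best, f_best = de_optimize(sphere, dim=5, bounds=(-5.0, 5.0))
```

With these standard settings the loop drives the sphere function close to its minimum at the origin.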

Multi-objective DE based on decomposition (MOEA/D-DE)
DE can overcome the defect of simulated genetic operators generating inferior offspring. Therefore, Hui and Zhang proposed a multi-objective differential algorithm based on decomposition in Hui and Zhang (2009), which uses DE to produce offspring. The detailed procedure of MOEA/D-DE is described as follows.
Step 2 Update: for i = 1 : N do. Step 2.1 Select mating range:

P = B(i) if rand ≤ δ; otherwise P = {1, 2, . . . , N},

where rand is a random value uniformly distributed on [0, 1] and δ is the probability that parent solutions are selected from the neighbourhood.
Step 2.2 Perform mutation and crossover: randomly select two indexes r1 and r2 from P and generate an individual v_i using

v_i = x_i + F(x_{r1} − x_{r2}).  (9)

Then, the crossover operation is executed on v_i to produce a new individual u_i by Equation (8), and any component of u_i that exceeds the boundary is repaired.
Step 2.3 Update z: for each j = 1, 2, . . . , m, if z j > f j (u i ), then f j (u i ) replaces z j .
Step 2.4 Update individuals: set c = 0 and let n_r be a parameter. (1) If c ≤ n_r, randomly pick an index q from P; (2) if g(u_i | λ_q, z) ≤ g(x_q | λ_q, z), set x_q = u_i, F(x_q) = F(u_i) and c = c + 1; (3) remove q from P and go back to (1) until the condition is no longer satisfied.
Here g(x | λ_q, z) = max_{1≤j≤m} λ_{qj} |f_j(x) − z_j| with weight vector λ_q = {λ_{q1}, . . . , λ_{qm}}; this is named the Tchebycheff approach (Q. Zhang & Li, 2007). Some algorithms, including MOEA/D-DE and WVMOEAP (X. Zhang et al., 2016), use Equation (9) to generate the mutation individual, which is a random search mechanism. If this mutation operation is adopted throughout the evolutionary process, the convergence of DE may worsen. Moreover, some potential solutions may not be obtained when the parameters F and CR are invariant. Therefore, to solve the above problems, this paper makes several improvements to DE, which are described in detail in the next section.
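The Tchebycheff comparison used in the update step can be sketched as follows. The objective vectors, weight vector, and ideal point below are made-up illustrative numbers.

```python
def tchebycheff(fx, lam, z):
    """Tchebycheff aggregation g(x | lambda, z) = max_j lambda_j * |f_j(x) - z_j|."""
    return max(l * abs(fj - zj) for l, fj, zj in zip(lam, fx, z))

# MOEA/D-style replacement test: offspring u replaces neighbour x_q
# when g(u | lambda_q, z) <= g(x_q | lambda_q, z).
z = [0.0, 0.0]                              # ideal point
lam_q = [0.5, 0.5]                          # weight vector of subproblem q
g_u = tchebycheff([1.0, 1.2], lam_q, z)     # offspring objective vector
g_q = tchebycheff([2.0, 3.0], lam_q, z)     # neighbour objective vector
replace = g_u <= g_q                        # True: offspring wins subproblem q
```

A smaller aggregated value means the solution is better for that subproblem, which is exactly the test applied in Step 2.4.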

Handling method and proposed algorithm
This section mainly introduces some handling techniques to improve the convergence of DE, including adjustment strategy of related parameter, mutation operation based on the local optimum, external archive, and heuristic crossover operation based on a uniform design.

Adjustment for mutation factor
In order to ensure that DE has a strong searching ability, the mutation individuals should be distributed as widely as possible in the decision space in the early stage of evolution, while individuals of higher quality should be produced in the late stage. To solve this problem, it is feasible to adjust the mutation factor using the size of the decision space. If the difference vector (x_{r1} − x_{r2}) provides the search step, the mutation factor F is set so that the resulting step length is ρ‖x_max − x_min‖₂, where ρ is a parameter and x_max and x_min are the decision variables that attain the maximum Euclidean distance in the decision space. Obviously, if ρ is larger, offspring individuals may be close to the decision boundary and explore unknown areas of the decision space; if ρ is smaller, the individual generated by the mutation operation may be close to the best individual.
MOEA/D-DE generates mutation individual by making use of the neighbour individuals, in which the scaling factor keeps constant in whole iterations. In this paper, the mutation scheme is improved, in which the information of neighbourhood individuals is used to adaptively tune the scaling factor. In addition, in order to save the computational cost, only part of the objective functions are selected randomly to determine the extreme variable values, x max and x min . The specific process is described as follows.
First, randomly select an objective function in the objective space for individual x_i. Then, calculate the objective values of the neighbour individuals of x_i and find the individual sets corresponding to the maximum and minimum values of that objective function among the neighbour individuals, denoted X_max and X_min, respectively.
Then, calculate the Euclidean distances between individuals in the sets X_max and X_min, and find the individuals x_max and x_min that attain the largest Euclidean distance.
Next, randomly select two indexes r1 and r2 from the neighbour individuals, the corresponding individuals are noted as x r1 and x r2 .
Finally, calculate the norms of the vectors (x_max − x_min) and (x_r1 − x_r2), denoted ‖x_max − x_min‖₂ and ‖x_r1 − x_r2‖₂, and set the mutation parameter F_i for x_i using the above norm values:

F_i = ρ ‖x_max − x_min‖₂ / ‖x_r1 − x_r2‖₂,  (10)

where the control parameter ρ is set to ρ = τ e^{−(G/G_max)²}; τ is a number in [0, 1]; and G and G_max represent the current and maximum generation, respectively.
It is worth noting that, to ensure the rationality of the parameter values, the parameter is adjusted as

F_i = min(max(F_i, F_min), F_max),  (11)

where F_min is the lower bound of F_i (if F_min is too small, the algorithm easily falls into a local optimum) and F_max is the upper bound of F_i (if F_max is too large, the algorithm approximates a random search, which affects convergence).
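The adjustment of Equations (10) and (11) can be sketched as below. The exact algebraic form of Equation (10) is an assumption based on the two norm values described above, and the parameter values are illustrative.

```python
import math

def mutation_factor(x_max, x_min, x_r1, x_r2, G, G_max,
                    tau=0.8, F_min=0.1, F_max=0.9):
    """Self-adaptive mutation factor F_i (a sketch of Eqs. (10)-(11)).

    rho = tau * exp(-(G/G_max)^2) shrinks F_i as evolution proceeds; the
    ratio of the two norms (assumed form of Eq. (10)) scales the difference
    vector (x_r1 - x_r2) to roughly rho * ||x_max - x_min||_2. The final
    clamp to [F_min, F_max] is Eq. (11).
    """
    norm = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    rho = tau * math.exp(-(G / G_max) ** 2)
    d_rand = norm(x_r1, x_r2)
    Fi = rho * norm(x_max, x_min) / d_rand if d_rand > 0 else F_max
    return min(max(Fi, F_min), F_max)       # clamp: Eq. (11)

F_early = mutation_factor([1.0, 0.0], [0.0, 0.0], [0.5, 0.0], [0.0, 0.0],
                          G=1, G_max=100)
F_late = mutation_factor([1.0, 0.0], [0.0, 0.0], [0.5, 0.0], [0.0, 0.0],
                         G=100, G_max=100)
```

Early in the run the factor saturates at F_max (wide exploration); late in the run it shrinks toward smaller steps.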

Mutation operation based on local optimum and external archive
Two different mutation methods are proposed in this section to improve the convergence of DE. One is to select the local optimal individual from the neighbourhood individuals to generate an offspring individual, and the other is to select an individual from the external archive to generate an offspring individual.

Mutation operation based on local optimum
MOEA/D-DE decomposes an MOP into a group of single-objective optimisation subproblems and evolves subpopulations to obtain the optimal solutions of the original problem. If a single mutation strategy is used throughout the evolution of DE, it is difficult to balance the exploration and exploitation abilities of DE. To overcome this shortcoming, an improved mutation operator based on the rank of objective values is developed. It differs from DE/best/1 in that the best individual is selected by roulette wheel selection. The procedure is as follows. First, for individual x_i, the corresponding subpopulation is set to the neighbourhood X_i = {x_{i1}, x_{i2}, . . . , x_{iT}}. Then, rank all individuals of the subpopulation X_i along each dimension of the objective vector; in this way, each individual obtains an order vector. The sum of each order vector is called the rank value, denoted R(x_{ij}), j = 1, 2, . . . , T.
Finally, the selection probability is obtained from the rank values by Equations (12) and (13), with individuals of smaller rank receiving larger probabilities. Once the selection probabilities are determined, a potentially better individual x_b can be obtained by roulette wheel selection, and the new mutation operation is executed as

v_i = x_b + F_i(x_{r1} − x_{r2}),  (14)

where x_r1 and x_r2 are different individuals selected from the subpopulation X_i and F_i satisfies Equation (10). Obviously, an individual with a smaller rank value is closer to the PF within the neighbourhood; therefore, such an individual should have a greater probability of being selected as the local optimum.
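The rank-sum computation and roulette ("gambling") wheel selection of the local optimum can be sketched as follows. The inverse-rank form of the probabilities is an assumption; the text only requires that smaller rank values receive larger selection probabilities.

```python
import random

def rank_sum_select(objs, rng=None):
    """Select a local optimum x_b from a neighbourhood by rank sum.

    Ranks the individuals along every objective, sums the per-objective
    ranks into R(x_ij), then runs a roulette wheel whose probabilities are
    inversely related to R (assumed form of Eqs. (12)-(13)).
    """
    rng = rng or random.Random(0)
    T, m = len(objs), len(objs[0])
    R = [0] * T
    for j in range(m):
        order = sorted(range(T), key=lambda i: objs[i][j])
        for rank, i in enumerate(order, start=1):
            R[i] += rank                    # accumulate rank value R(x_ij)
    w = [1.0 / r for r in R]                # smaller rank -> larger weight
    total = sum(w)
    u, acc = rng.random(), 0.0              # spin the roulette wheel
    for i, wi in enumerate(w):
        acc += wi / total
        if u <= acc:
            return i, R
    return T - 1, R

b, ranks = rank_sum_select([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
```

On this tiny neighbourhood the first individual dominates on both objectives, so its rank sum is smallest and it is the most likely local optimum.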

Mutation operation based on external archive
MOEA/D compares the weight values of neighbour individuals with those of the offspring to update parent individuals using the Tchebycheff approach. If the offspring replaces a neighbour individual, i.e. the weight value of the offspring does not exceed the corresponding weight value of the neighbour individual, this suggests that the offspring is beneficial for optimising some objective functions; otherwise, the parent individual is more favourable. Therefore, an external archive is set for each weight vector to store individuals whose corresponding weight values are relatively small. An individual can then be selected from the corresponding external archive to perform a mutation operation, which benefits the convergence of the algorithm. The detailed process is described as follows.
At first, an empty external archive X archive (i) with a certain size is set for each weight vector λ i to store individuals and weight values.
Then, the weight value g(x_q | λ_q, z) of each neighbour of the ith individual x_i and the weight value g(u | λ_q, z) of the offspring are compared one by one, where q is selected from the neighbourhood indexes P. If g(u | λ_q, z) ≤ g(x_q | λ_q, z), the individual u and its weight value g(u | λ_q, z) are stored in the external archive X_archive(q) of the corresponding weight vector λ_q. Otherwise, the individual x_q and the weight value g(x_q | λ_q, z) are stored in X_archive(q).
At last, randomly select two indexes r1 and r2 from the neighbourhood indexes P, and randomly select an individual x_b^archive from the external archive X_archive(i) to generate the mutation offspring, i.e.

v_i = x_b^archive + F_i(x_{r1} − x_{r2}).  (15)
Unfortunately, the calculation cost is high when every group of weights is compared in order to store the relatively good individuals. Therefore, only some groups are selected in this paper for comparison, which ensures that the external archive of each individual is not empty while reducing the calculation cost.
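The archive update rule can be sketched as follows. The bound max_size and the first-in-first-out eviction are assumptions; the paper only says each archive has a certain size.

```python
def update_archive(archives, q, u, g_u, x_q, g_q, max_size=5):
    """Update the external archive X_archive(q) for weight vector lambda_q.

    Stores whichever of the offspring u and the parent x_q has the smaller
    Tchebycheff weight value, together with that value. max_size and the
    FIFO eviction policy are illustrative assumptions.
    """
    winner = (u, g_u) if g_u <= g_q else (x_q, g_q)
    archives.setdefault(q, []).append(winner)
    if len(archives[q]) > max_size:
        archives[q].pop(0)                  # keep the archive bounded
    return archives

# Offspring u wins subproblem q=0 because its weight value is smaller.
archives = update_archive({}, q=0, u=[1.0, 1.0], g_u=0.5,
                          x_q=[2.0, 2.0], g_q=0.9)
```

An individual drawn from such an archive is, by construction, one that recently performed well on the corresponding subproblem, which is why Equation (15) uses it as the base vector.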

Heuristic crossover operation based on uniform design
The binomial crossover operation of DE recombines the parent and mutation individuals based on the crossover probability CR to maintain the diversity of the population, which means that the binomial crossover operator is not heuristic. Considering that uniform design has the advantage of capturing global information in some optimisation problems, a heuristic crossover operator is established in this section using uniform design to produce better offspring; i.e. the parent individual x_i and the mutation individual v_i perform the following recombination process. First, generate a 0-1 orthogonal matrix L_N(Q^D) = [a_{i,j}]_{N×D} using a Latin square, where a_{i,j} = 0 indicates that the jth component of the ith recombined individual is selected from the parent individual x_i, and a_{i,j} = 1 indicates that it is selected from the mutation individual v_i. In this way, N recombinant individuals are formed.
Then, randomly select an index j from neighbourhood indexes P and calculate the weight values of above N recombinant individuals.
Next, select some recombined individuals to form optimal individual set using the weight values and correspondingly select ranks from matrix L N (Q D ).
Finally, calculate the probability p_i of 1s appearing in the ith column of the matrix L_N(Q^D). If p_i ≤ 0.6, select the ith component of the crossover individual from the parent individual; otherwise, select it from the mutation individual, where i = 1, 2, . . . , D.
However, if the crossover operator is designed by the above process, the number of fitness evaluations of individual function values is 2^D, which is computationally expensive. Therefore, in order to preserve the effectiveness of the crossover operator while reducing the amount of computation, only some components of the individual, instead of the whole individual, are processed by the heuristic crossover operator in this paper; the rest are generated using the binomial crossover operator.
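The heuristic crossover can be sketched as follows. The 0/1 design matrix here is a simple pseudo-random stand-in for the Latin-square construction L_N(Q^D), and the weight function g (standing in for the decomposed subproblem objective) is passed in as a parameter; only the "keep the better half, then vote column-wise with threshold 0.6" structure follows the text.

```python
import random

def heuristic_crossover(x, v, g, N=8, seed=3):
    """Heuristic crossover sketch in the spirit of the uniform-design operator.

    Builds an N x D 0/1 matrix, forms the N recombinants (0 -> component
    from parent x, 1 -> component from mutant v), keeps the half with the
    best weight values under g, then picks each child component from x when
    the frequency p_i of 1s in column i among the kept rows is <= 0.6, and
    from v otherwise.
    """
    rng = random.Random(seed)
    D = len(x)
    A = [[rng.randrange(2) for _ in range(D)] for _ in range(N)]
    recomb = [[v[j] if a[j] else x[j] for j in range(D)] for a in A]
    best_rows = sorted(range(N), key=lambda i: g(recomb[i]))[: N // 2]
    child = []
    for j in range(D):
        p1 = sum(A[i][j] for i in best_rows) / len(best_rows)
        child.append(x[j] if p1 <= 0.6 else v[j])   # threshold from the text
    return child

sphere = lambda y: sum(yi * yi for yi in y)
child = heuristic_crossover([1.0] * 4, [0.0] * 4, g=sphere)
```

Each child component is copied from either the parent or the mutant, so the operator needs only N extra evaluations per use rather than an exhaustive enumeration.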

The proposed algorithm
A hybrid differential evolution, HMODE/D, is introduced in this section based on the framework of MOEA/D-DE. In the proposed algorithm, the iteration is governed by two criteria: the evolutionary generation and the maximum number of function evaluations (MaxFEs). The first case is that the evolutionary generation G satisfies G mod 10 == 0 and FEs ≤ MaxFEs; HMODE/D then performs the mutation operation based on the local optimum described in Section 3.2.1 and the heuristic crossover operation based on a uniform design described in Section 3.3. Because frequent use of the heuristic crossover in every iteration would increase the computation of HMODE/D, it is executed only periodically during the iteration process; this obtains a trade-off between computational cost and potentially better individuals. The second case is that the evolutionary generation G does not satisfy G mod 10 == 0; HMODE/D then performs the mutation operation described in Section 3.2.2 and the classical crossover operation. In addition, to ensure the effectiveness of HMODE/D, the self-adaptive adjustment strategy of Section 3.1 is adopted throughout the evolution. If the stop criterion is satisfied, the optima are output. The procedure of HMODE/D is presented in the algorithm listing below.
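The generation-based operator switching described above can be sketched as:

```python
def choose_operators(G, FEs, max_FEs):
    """Operator schedule in HMODE/D's main loop.

    Every generation with G mod 10 == 0 (while FEs <= MaxFEs) uses the
    local-optimum mutation of Section 3.2.1 plus the uniform-design
    heuristic crossover; all other generations use the archive-based
    mutation of Section 3.2.2 plus classical binomial crossover.
    Returns None when the evaluation budget is exhausted.
    """
    if FEs > max_FEs:
        return None                         # stopping criterion met
    if G % 10 == 0:
        return ("local_optimum_mutation", "uniform_design_crossover")
    return ("archive_mutation", "binomial_crossover")
```

Running the heuristic crossover only on every tenth generation is what keeps its extra fitness evaluations from dominating the overall cost.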

Experimental study
Twenty-two acknowledged test functions, DTLZ1-DTLZ7 and MaF1-MaF15, with diverse properties are used to test the performance of HMODE/D. The properties of these test functions include linear, concave, multimodal, degenerate, nonseparable, etc. The numbers of objectives for the above test functions are 5 (M = 5), 10 (M = 10), and 15 (M = 15), respectively. The number of variables is set to M + 4 for DTLZ1, M + 19 for DTLZ7, and M + 9 for the other DTLZ problems. The number of variables for MaF1-MaF15 is related to the numbers of position variables and distance variables. A detailed description of these test functions can be found in Cheng et al. (2017). Six state-of-the-art EMO algorithms, MOEA/D-DE (Hui & Zhang, 2009), WVMOEAP (X. Zhang et al., 2016), AMPDEA, KnEA (X. Zhang et al., 2015), RVEA (Cheng et al., 2016) and CVEA3, are involved in the comparison, where MOEA/D-DE, WVMOEAP, and AMPDEA are based on DE and the rest belong to the non-DE category. RVEA and CVEA3 ranked first and second in the CEC2018 competition. The relevant codes are obtained from PlatEMO, and detailed materials are listed in Tian et al. (2017).

Parameter settings and performance metric
For fair comparisons, all compared algorithms adopt their original parameter values. The relevant parameters of the comparative algorithms are introduced briefly in this section. The parameters of MOEA/D-DE are δ = 0.9, n_r = 2, F = 0.5, and CR = 1, where δ is the probability that parent solutions are selected from the neighbourhood; n_r is the maximum number of solutions replaced by each offspring; and F and CR are the mutation factor and the crossover probability. The parameter of WVMOEAP is b = 0.05, which is the extent of the preferred region. The parameters of AMPDEA are nPer = 50 and nPerGroup = 35,

Algorithm: HMODE/D
Initialisation: the initialisation process is the same as in Section 2.2.
1: while FEs ≤ MaxFEs
2:   for i = 1 to N do
3:     if rand ≤ δ
4:       P = B(i)
5:     else
6:       P = {1, . . . , N}
7:     end
8:     randomly select two different indexes r1 and r2 from the neighbourhood indexes P; the corresponding individuals are denoted x_r1 and x_r2
9:     randomly select an objective function and find the individual sets X_max and X_min corresponding to its maximum and minimum values, respectively
10:    calculate the Euclidean distances between individuals in X_max and X_min, and find the individuals x_max and x_min with the largest Euclidean distance
11:    calculate the norms of the vectors (x_max − x_min) and (x_r1 − x_r2); then use Equation (10) to set the mutation parameter F_i and Equation (11) to adjust it
12:    if G = 1
13:      use Equations (9) and (8) to generate offspring
         . . .
17:    else
18:      randomly select an individual from X_archive(i) and use Equations (15) and (8) to generate offspring

Algorithm: mutation based on local optimum
Input: neighbourhood individuals X(P) = {x_i1, x_i2, . . . , x_iT}, x_r1 and x_r2; F_i: mutation parameter.
1: calculate the objective values of the neighbourhood individuals F(X(P)) = (f_1(X(P)), f_2(X(P)), . . . , f_m(X(P)))^T
2: for i = 1 : m
3:   rank the ith objective values f_i(X(P)) and record the ranking of each neighbourhood individual
4: end
5: calculate the total ranking of each neighbourhood individual: R(x_i1), R(x_i2), . . . , R(x_iT)
6: for j = 1 : T
7:   use Equations (12) and (13) to set the probability value for each neighbourhood individual
8: end
9: select an individual as the local optimum x_b by the gambling wheel
10: use Equation (14) to generate the mutation individual

To compare the performance of the different algorithms, the inverted generational distance (IGD) (Coello & Cortes, 2005) and hypervolume (HV) (Zitzler & Thiele, 1999) are used to evaluate these algorithms.
A lower IGD value implies that the obtained Pareto front is close to the true Pareto front, while the hypervolume metric measures the size of the region dominated by the obtained Pareto front, so a higher HV value is preferred. In addition, two statistical methods, the Friedman test and the multiple-problem Wilcoxon test, are used to analyse the performance of these algorithms. Finally, convergence figures for some test functions are illustrated to verify the effectiveness of the compared algorithms.

Experimental results and analysis on DTLZ1-DTLZ7
This paper improves the MOEA/D-DE algorithm by the following four schemes, i.e.
• adjustment for the mutation factor;
• mutation operation based on the local optimum;
• mutation operation based on the external archive;
• heuristic crossover operation based on uniform design.
In order to test the influence of the above schemes on the performance of HMODE/D, several comparison algorithms are constructed by removing the corresponding schemes from HMODE/D. The HMODE/D version without the adjustment for the mutation factor, in which F is fixed at 0.5, is called HMODE/D-1. The version that replaces the mutation operation based on the local optimum with DE/rand/1 is called HMODE/D-2. The version that replaces the mutation operation based on the external archive with DE/rand/1 is called HMODE/D-3. The version that replaces the heuristic crossover operation based on uniform design with binomial crossover is called HMODE/D-4.
Tables 1 and 2, respectively, show the average value (Mean) and standard deviation (Std) of IGD and HV over 20 independent runs on DTLZ1-DTLZ7. The values in bold represent the best performance among all compared algorithms, and the last three rows of these tables summarise the comparison results of HMODE/D against each compared algorithm, where "+", "−", and "≈" indicate that the corresponding competing algorithm performs better than, worse than, or comparably to HMODE/D, respectively. Moreover, the results of the Friedman test are summarised in Table 3, in which HMODE/D and the other algorithms are ranked on DTLZ1-DTLZ7 in terms of the Mean values of IGD and HV. Finally, the results of the multiple-problem Wilcoxon test are summarised in Tables 4 and 5, which test whether HMODE/D is significantly different from the comparison algorithms.
The results in Tables 1 and 2 clearly highlight the superiority of HMODE/D. Moreover, HMODE/D achieves the first rank in the Friedman test on DTLZ1-DTLZ7 for the Mean values of both IGD and HV in Table 3. Finally, all the R+ values are larger than the R− values in Table 4, and all the R+ values are smaller than the R− values in Table 5, which reflects that HMODE/D outperforms the four competitors. The better experimental results obtained by HMODE/D are mainly attributed to the four schemes proposed in this paper, which improve the exploitation and exploration ability of DE. However, the performance of HMODE/D is almost equal to that of HMODE/D-4. The main reason is that the heuristic crossover operator requires a larger amount of computation; therefore, to balance the performance of HMODE/D against the computational cost, the number of executions of the heuristic crossover operator proposed in this paper is kept finite.
The following section evaluates the advantage of HMODE/D over MOEA/D-DE, WVMOEAP, AMPDEA, KnEA, RVEA, and CVEA3 on the test functions DTLZ1-DTLZ7. As shown in Table 6, HMODE/D outperforms the compared algorithms on DTLZ2, DTLZ3, and DTLZ6, except for DTLZ3 (10M), and the last row of Table 6 shows that HMODE/D performs better than MOEA/D-DE, WVMOEAP, AMPDEA, KnEA, RVEA, and CVEA3 on 16, 20, 13, 15, 10, and 15 test functions, respectively. Moreover, compared with RVEA, which is ranked second, HMODE/D performs better on DTLZ2, DTLZ3, and DTLZ6, where DTLZ2 tests the uniformity of the Pareto optimal distribution; DTLZ3 tests an algorithm's ability to converge to the global Pareto optimal front; DTLZ5 tests an algorithm's ability to converge to a degenerated curve; and DTLZ6 tests an algorithm's ability to maintain subpopulations in different Pareto optimal regions. The strong performance of HMODE/D on these test functions is attributed to the different mutation strategies used in the evolution process, which ensure that the algorithm has a strong search ability. However, the results in Table 6 also show that HMODE/D performs poorly on DTLZ4, which tests an algorithm's ability to maintain a good distribution of solutions; the main reason is that the mutation strategies of HMODE/D cannot maintain the diversity of the population well.

As shown in Table 7, HMODE/D outperforms the competing algorithms on 7 functions in terms of HV, and performs better than or equal to MOEA/D-DE, WVMOEAP, AMPDEA, KnEA, RVEA, and CVEA3 on 11, 17, 13, 16, 12, and 13 test functions, respectively. In addition, HMODE/D stands out on DTLZ2, DTLZ3, and DTLZ6, whereas RVEA stands out on DTLZ1 and DTLZ4.
As shown in Table 8, HMODE/D achieves the first rank in the Friedman test on DTLZ1-DTLZ7 for the Mean values of both IGD and HV. In addition, to further examine the convergence of these algorithms, Figures 1-4 depict the convergence speed of the compared algorithms when solving DTLZ1 (5M), DTLZ3 (5M), DTLZ5 (5M), and DTLZ7 (5M), respectively. HMODE/D clearly converges faster than the other compared algorithms on DTLZ3 (5M) and DTLZ5 (5M).
The above results show that HMODE/D is better than other comparative algorithms, which means that HMODE/D can solve complex multi-objective optimisation problems.
Although HMODE/D performs worse than CVEA3 on some functions, the results in Table 11 show that HMODE/D achieves the first rank in the Friedman test for both metrics, which means that the algorithm proposed in this paper has a strong overall ability to deal with various complex multi-objective optimisation problems.

Conclusion
In order to solve multi-objective optimisation problems effectively with DE, a hybrid multi-objective differential evolution based on decomposition (HMODE/D) is proposed in this paper. The handling schemes of HMODE/D are as follows. (1) A probability value is set for every neighbour individual by using the rank sum, and the local optimum is selected by roulette-wheel selection; one mutation operator is then designed around the local optimum. In addition, a heuristic crossover operator is constructed by using the uniform design method.
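The roulette-wheel step in scheme (1) can be sketched as follows. The exact mapping from rank sums to selection probabilities is not reproduced here, so the inverse-rank weighting below is an assumption used purely for illustration.

```python
import random

def roulette_select(ranks):
    """Roulette-wheel selection over neighbour individuals.
    `ranks` holds one rank per neighbour (rank 1 = best).  Here the
    selection probability is assumed inversely proportional to the
    rank, so better-ranked neighbours are chosen more often.
    Returns the index of the chosen neighbour."""
    weights = [1.0 / r for r in ranks]
    pick = random.uniform(0.0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if pick <= acc:
            return i
    return len(ranks) - 1  # guard against floating-point round-off
```

Compared with always taking the best-ranked neighbour, this stochastic choice preserves some diversity while still biasing mutation towards promising neighbours.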
(2) The search ability of HMODE/D is improved by a self-adaptive adjustment strategy: the mutation factor is adjusted according to the norms of objective-space vectors that satisfy certain conditions. (3) Parent individuals are updated using decomposition, during which an external archive is constructed for each individual: the aggregated objective values of neighbourhood individuals are compared with those of the offspring, and the individual with the smaller aggregated value is stored in the archive of the corresponding individual. An individual beneficial to the convergence of DE is then selected from the external archive to execute the mutation operation. The effectiveness of HMODE/D is verified by comparing it with state-of-the-art algorithms, and the related experimental results show that HMODE/D is competitive with these algorithms.
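The decomposition-based archive update in scheme (3) can be sketched with the Tchebycheff aggregation commonly used in MOEA/D-style algorithms; the choice of aggregation function, the archive size, and the replacement rule below are assumptions for illustration, not the paper's exact settings.

```python
def tchebycheff(f, weight, z_star):
    """Tchebycheff aggregation used in decomposition-based MOEAs:
    g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.  Smaller is better."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, z_star))

def update_archive(archive, neighbour_f, offspring_f, weight, z_star,
                   max_size=5):
    """Sketch of the archive update described above (the exact rule in
    the paper may differ): the objective vector with the smaller
    aggregated value, i.e. the one more beneficial to the subproblem,
    is stored in the individual's archive, which is kept bounded."""
    better = min(neighbour_f, offspring_f,
                 key=lambda f: tchebycheff(f, weight, z_star))
    archive.append(better)
    if len(archive) > max_size:
        archive.pop(0)  # drop the oldest entry
    return better
```

Mutation can then draw a parent from this archive, biasing the search towards individuals that improve the aggregated subproblem value.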

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
The research