A centerline symmetry and double-line transformation based algorithm for large-scale multi-objective optimization

The search space of large-scale multi-objective optimization problems (LSMOPs) is huge because hundreds or even thousands of decision variables are involved. It is very challenging to design efficient algorithms for LSMOPs that search the whole space effectively while balancing convergence and diversity. In this paper, to tackle this challenge, we develop a new algorithm based on a weighted optimization framework with two effective strategies. The weighted optimization framework transforms an LSMOP into multiple small-scale multi-objective optimization problems via a problem transformation mechanism, effectively reducing the dimensionality of the search space. To further improve its effectiveness, we first propose a centerline symmetry strategy for selecting the reference solutions used to transform the LSMOP. It takes not only some non-dominated solutions but also their centerline symmetric points as reference solutions, which enhances the population diversity and prevents the algorithm from falling into local minima. Then, a new double-line transformation function is designed to expand the search range of the transformed problem, further improving convergence and diversity. With these two strategies, more widely distributed potential search areas are provided and the optimal solutions can be found more easily. To demonstrate the effectiveness of the proposed algorithm, numerical experiments are conducted on widely used benchmarks; the statistical results show that the proposed algorithm is competitive and outperforms state-of-the-art algorithms for solving LSMOPs.

However, with the development of technology, more and more decision variables are involved in the multi-objective optimisation models established for real-world engineering problems. For example, flight safety system optimisation (Everson & Fieldsend, 2006) involves more than 1500 decision variables, EEG time-series data optimisation (Goh et al., 2015) involves more than 4800 decision variables, and in deep neural network architecture optimisation the number of decision variables increases exponentially with the number of layers (Tian et al., 2021). MOPs with more than 100 decision variables are called LSMOPs. As the number of decision variables increases, the search space of an MOP grows exponentially, and existing MOEAs do not perform well on LSMOPs because of this huge search space. Therefore, LSMOPs have attracted many scholars in recent years, and numerous related algorithms have been developed, which can be classified into three categories.
The algorithms in the first category apply the divide-and-conquer technique to solve LSMOPs (Antonio & Coello, 2013; Cao et al., 2017, 2020; H. Chen et al., 2020; Ma et al., 2016; A. Song et al., 2016; X. Zhang et al., 2018). For example, Antonio and Coello divide the decision variables into several equal-length subcomponents using the existing random grouping method; each subproblem then evolves collaboratively with the others based on a differential evolution algorithm (Antonio & Coello, 2013). Song et al. propose a dynamic random grouping method, which determines the number of groups dynamically according to the quality of the solutions obtained in each cycle (A. Song et al., 2016). After grouping, a decomposition-based MOEA, i.e. MOEA/D (Q. Zhang & Li, 2007), is used to optimise each subgroup. Because of the conflicts among the objectives, the grouping methods proposed for large-scale single-objective optimisation problems cannot be applied directly to LSMOPs, so decision variable analysis techniques and dedicated grouping strategies have been proposed. Ma et al. propose an MOEA based on decision variable analysis (MOEA/DVA). In MOEA/DVA, the decision variables are classified into three types, i.e. convergence-related, diversity-related, and mixed variables, according to their contributions to the convergence and diversity of the algorithm. An existing large-scale global optimisation grouping method is then used to further divide the convergence-related variables into subgroups, and the resulting small-scale sub-MOPs are optimised collaboratively (Ma et al., 2016). To improve the grouping accuracy, Cao et al. apply the recursive differential grouping method to MOEA/DVA to adjust the misclassified variables among the three types (Cao et al., 2017). Based on the main idea of MOEA/DVA, Zhang et al. group the decision variables into only convergence-related and diversity-related variables by using a k-means clustering strategy to reduce the dimensionality of LSMOPs (X. Zhang et al., 2018). However, these decision variable interaction analysis methods consume a large number of computational resources, leading to low computational efficiency.
The algorithms in the second category propose effective strategies to accelerate the search for LSMOPs (Gu & Wang, 2020; He et al., 2022; Hong et al., 2019; Tian et al., 2020; Yi et al., 2018, 2020). The evolutionary algorithm based on direction-guided adaptive offspring generation (DGEA) (He et al., 2022) uses an adaptive reproduction mechanism to guide the population evolution, producing promising offspring from selected parents with strong convergence and diversity. Hong et al. present a diversity enhancement strategy that guides the population towards different regions of the Pareto front to prevent the algorithm from falling into local minima (Hong et al., 2019). The large-scale multi-objective competitive swarm optimiser algorithm (LMOCSO) (Tian et al., 2020) designs a two-stage particle updating mechanism to update the position of each particle for LSMOPs with complicated landscapes; the particles to be updated are determined by the competitive mechanism of the competitive swarm optimiser. However, since these algorithms do not apply any dimensionality reduction strategy, the scale of the problem remains large and a large number of function evaluations is required to search the whole decision space.
The algorithms belonging to the third category are based on non-grouping dimensionality reduction strategies, e.g. problem transformation and fuzzy techniques (He et al., 2019; R. Liu et al., 2020; Qin et al., 2021; Tang et al., 2020; Zille et al., 2018). For example, a large-scale multi-objective optimisation framework via problem reformulation, called LSMOF, is proposed to reduce the dimension of the decision variables (He et al., 2019). It first presents a bi-directional weight variable association mechanism that associates two weight vectors with each selected solution to specify search directions and accelerate the population evolution; then, using a performance indicator, it reformulates an LSMOP into a small-scale SOP. In this way, the search space is narrowed from the whole space to several straight lines, which accelerates the convergence of the algorithm for solving LSMOPs. However, its performance is easily affected by the selection of reference points. Qin et al. design a directed sampling strategy based on LSMOF, which samples solutions along uniformly distributed search directions to reformulate the original problem and improve the population diversity (Qin et al., 2021). Nevertheless, the sharp reduction of the search space still affects the population diversity of this kind of algorithm to a certain extent. Zille et al. propose a weighted optimisation framework (WOF) that transforms an LSMOP into several small-scale MOPs (Zille et al., 2018). It first divides the n-dimensional decision variables into g (g < n) subgroups by adopting an existing grouping method, e.g. random grouping (Omidvar et al., 2010), ordered grouping (W. Chen et al., 2010), or differential grouping (Omidvar et al., 2014). The variables in one subgroup are assigned the same weight variable.
By using a transformation function, an original n-dimensional LSMOP can thus be converted into a g-dimensional MOP with g weight variables as decision variables, reducing the dimensionality of the original LSMOP. Yang et al. present a fuzzy decision variables framework for LSMOPs to reduce the decision variable search space, which consists of two stages: fuzzy evolution and precise evolution (Yang et al., 2021). In the fuzzy evolution stage, the decision variables are blurred to reduce the search space and accelerate the convergence of the algorithm; existing MOEAs are then used to optimise the original problem and maintain population diversity in the precise evolution stage. The algorithms in this category are generally more effective than those of the other two categories, and among them WOF is a very effective algorithm. However, its effectiveness is also affected by the selection of reference points, and the form of the transformation function has a great impact on the search range of the transformed problem, thus affecting the performance of WOF. This motivates us to study effective strategies to enhance WOF and improve its ability to solve LSMOPs.
Based on the analysis of the related works, it can be seen that research on large-scale multi-objective optimisation algorithms (LSMOAs) is still at an early stage. In this paper, to alleviate the limitations of WOF and improve the accuracy of solving LSMOPs, we present a new algorithm with two effective strategies based on WOF. The main innovations are summarised as follows: (1) A new centreline symmetry strategy for selecting reference solutions is presented to improve the diversity of WOF.
(2) A new transformation function is designed to expand the search range for the selected reference solutions. (3) A new algorithm with the above two strategies is proposed and numerical experiments are carried out on widely used LSMOP benchmarks to verify its performance.
The remainder of this paper is organised as follows. Section 2 introduces the basic idea of WOF and the motivation of this paper. Our proposed algorithm is elaborated in Section 3. Section 4 presents the numerical experiments and the analysis of the results. Finally, Section 5 concludes the paper.

Preliminary and motivation
The purpose of LSMOAs based on problem transformation is to achieve dimensionality reduction, and among them WOF is a very effective algorithm. Because our algorithm is designed based on WOF, in this section we elaborate on the main idea of WOF and then analyse its limitations, before presenting the motivation of this paper.

The main idea of WOF
There are two stages in WOF. In the first stage, the original large-scale problem F is transformed into several low-dimensional MOPs to reduce the search space. In the second stage, one of the existing MOEAs is used to optimise the original problem F. Specifically, in the first stage, suppose x* is an optimal solution of the n-dimensional problem F. For any solution x′ and the linear transformation function ψ(w, x) = (w1x1, w2x2, . . . , wnxn), there exists an optimal real-valued weight vector w* such that ψ(w*, x′) = x*. Then, for an arbitrary but fixed solution x′, we can optimise w instead of optimising x. To reduce the dimensionality of the original problem, WOF decomposes the n decision variables into g groups, where g < n, and assigns the same weight wi to all variables in group Gi, where i = 1, 2, . . . , g, i.e.
ψ(w, x) = (w1x1, . . . , w1xl, w2xl+1, . . . , w2x2l, . . . , wgxn−l+1, . . . , wgxn), where l = n/g denotes the size of each group. Then, by taking a fixed x′, the problem F of optimising the n-dimensional x is transformed into the problem F′ of optimising the g-dimensional w, thereby reducing the dimensionality of the LSMOP. In WOF, to keep the population diversity, m solutions in the first non-dominated front are selected as reference solutions according to the crowding distance, and then the m transformed low-dimensional MOPs are optimised. Finally, the m optimal weight vectors W* = {W1, W2, . . . , Wm} are used to update the individuals of the original problem according to the transformation function. Geometrically speaking, the result of this problem transformation is to narrow the whole search space down to m straight lines. In Figure 1(a), the blue curve denotes the Pareto optimal set (PS) and s is a selected reference solution; o and t denote the origin and the upper boundary point, respectively. After applying a transformation function, e.g. the linear transformation function, the search range is reduced from the whole 2D space to one straight line. Searching along this line can reach the optimal or a sub-optimal solution much more quickly than searching the entire two-dimensional space.
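As an illustration, the grouped linear transformation above can be sketched in a few lines of Python (a minimal sketch; the function and variable names are ours, not from WOF itself):

```python
def weighted_transform(w, x, g):
    """Grouped linear transformation psi(w, x): the n variables are split
    into g contiguous groups of size l = n // g, and every variable in
    group j is multiplied by the same weight w[j].  Illustrative sketch."""
    n = len(x)
    l = n // g  # group size; assumes g divides n for simplicity
    return [w[min(i // l, g - 1)] * x[i] for i in range(n)]
```

Optimising the g-dimensional w for a fixed x then stands in for optimising the n-dimensional x directly.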
From the principle of the problem transformation method, it follows that the search on the transformed problem F′ focuses on some potential areas, which may result in faster convergence. However, due to the drastic reduction of the search space, the population diversity deteriorates. Therefore, in the second stage of WOF, to make up for the diversity loss caused by optimising the transformed problem, the original problem F is optimised by an existing MOEA. Optimising the original problem F in the whole search space can, in principle, reach any solution, which helps restore the population diversity. For details of the algorithm, please refer to WOF (Zille et al., 2018).

Motivation
According to the principle of WOF, the factors affecting the search range are the choice of grouping method, the selection of reference solutions, and the transformation function. Since the correlations between the decision variables of LSMOPs are complex, there is still no effective large-scale multi-objective grouping method that groups the decision variables of LSMOPs reasonably. At present, the grouping methods commonly used in WOF are those proposed for large-scale global optimisation problems, e.g. random grouping, linear grouping, ordered grouping, and differential grouping. The influence of these four grouping methods on WOF was investigated in the original paper, and the results show that WOF with the ordered grouping method is the most competitive. Therefore, our proposed algorithm also uses the ordered grouping method to group the decision variables. In this paper, we focus on the influence of the latter two factors, i.e. the selection of reference solutions and the transformation function.
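For reference, ordered grouping can be sketched as follows (our reading of the method of W. Chen et al. (2010) as used in WOF: variable indices are sorted by their values in a chosen solution, and contiguous runs of the sorted order form the groups; the names are ours):

```python
def ordered_grouping(x, g):
    """Sort the variable indices by their value in the solution x and
    split the sorted order into g contiguous groups (the last group
    absorbs any remainder).  Illustrative sketch, not WOF's exact code."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    size = len(x) // g
    groups = [order[j * size:(j + 1) * size] for j in range(g - 1)]
    groups.append(order[(g - 1) * size:])
    return groups
```

With g = 4, as in the experiments below, each group then receives a single weight variable in the transformed problem.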
As shown in Figure 1(b), if the selected reference solutions are located in a small area of the search space, the transformed search range may be concentrated in a certain region, as shown by the six straight lines. This search range is far away from the PS, which may cause the algorithm to fail to find the optimal solutions. Thus, selecting appropriate reference solutions to expand the search coverage of the algorithm and improve the population diversity is a key problem in enhancing the performance of WOF.
Additionally, the transformation function determines the search directions for the selected reference solutions. As shown in Figure 1, the linear transformation function transforms the search space into one straight line per reference solution. If the LSMOP is complicated and the optimal or sub-optimal solutions are not located near this line, the efficiency of WOF suffers. Thus, designing a new transformation function that adds potential search directions is an effective way to improve the performance of WOF in solving LSMOPs.

The proposed algorithm
In this section, we propose a new weighted optimisation algorithm with a centreline symmetry strategy for selecting reference solutions and a double-line transformation function, referred to as CSDT, for solving LSMOPs. The main framework is shown in Algorithm 1. CSDT begins with population initialisation, followed by two stages. In the first stage, according to the selection mechanism proposed in Section 3.1, m solutions with a relatively wide distribution are selected as reference solutions for the problem transformation. Subsequently, an existing grouping method is used to group the decision variables into g subgroups. For each reference solution, the original problem F is transformed into a g-dimensional problem F′ based on the transformation function designed in Section 3.2. The transformed problem F′ is then optimised by an existing MOEA to obtain promising weight vectors, which are used to reproduce new promising offspring from the original population by applying the transformation function. Optimising the transformed problems reduces the search space and concentrates the search on potential areas, thereby significantly accelerating the population convergence. Subsequently, in the second stage, an existing MOEA is used to optimise the original problem F to obtain solutions with good diversity.
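The control flow of Algorithm 1 can be outlined as follows, with the problem-specific components injected as callables (a structural sketch only; every callable is a placeholder for a step described above, and all names are ours):

```python
def csdt(init_pop, select_refs, group_vars, optimise_transformed,
         optimise_original, budget, delta=0.5):
    """Two-stage CSDT skeleton.  Stage 1 transforms the problem once per
    reference solution and optimises the low-dimensional versions; stage 2
    optimises the original problem to restore diversity."""
    pop = init_pop()
    stage1 = int(delta * budget)              # delta = 0.5 in the paper
    refs = select_refs(pop)                   # centreline symmetry selection
    groups = group_vars()                     # e.g. ordered grouping
    per_ref = stage1 // max(len(refs), 1)     # split stage-1 budget evenly
    for ref in refs:
        pop = optimise_transformed(pop, ref, groups, per_ref)
    return optimise_original(pop, budget - stage1)
```

How the stage-1 budget is divided among the m transformed problems is our assumption; the paper only fixes the stage split δ.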
In the following, we will introduce the details of the proposed selection strategy and transformation function.

A new reference solutions selection strategy
In this section, we design a strategy to select widely distributed solutions for transforming the original problem, so as to obtain widely distributed search lines. For convenience of description, we define the diagonal line connecting the upper and lower boundary points of the decision variable space as the centreline. The details are as follows.
First, the m/2 best solutions are selected using an existing environmental selection method, e.g. that of NSGA-II, where m is the number of reference solutions to be selected. Then, m/2 centreline symmetry points are generated from these m/2 best solutions. The method of generating centreline symmetry points is described in the following.
As shown in Figure 2(a), suppose that s is one of the selected best solutions and p is its projection point on the centreline. For convenience, in this paper t and the origin o denote the upper and lower boundary points of the decision variable space, respectively. Since p lies on the straight line through o and t, it can be written as p = kt for some scalar k; because s − p is orthogonal to t, we obtain k = (s · t)/(t · t), and the centreline symmetric point of s is r = 2p − s. According to this method, m/2 centreline symmetry points can be generated. These points, together with the m/2 best solutions selected previously, are taken as the reference solutions. After transformation, the search space of the transformed problem is thus expanded to widely distributed straight lines.
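The construction above can be sketched as follows (a minimal sketch of our reading of Section 3.1; the bounds are passed explicitly, and the result is clipped to the box, which the paper does not spell out):

```python
def centreline_symmetric_point(s, lower, upper):
    """Reflect solution s about the centreline, i.e. the line joining the
    lower boundary point o and the upper boundary point t.  p is the
    orthogonal projection of s onto that line and r = 2p - s."""
    d = [u - lo for u, lo in zip(upper, lower)]        # direction o -> t
    sv = [si - lo for si, lo in zip(s, lower)]         # s relative to o
    k = sum(a * b for a, b in zip(sv, d)) / sum(a * a for a in d)
    p = [lo + k * di for lo, di in zip(lower, d)]      # projection point
    r = [2 * pi - si for pi, si in zip(p, s)]          # reflection of s
    return [min(max(ri, lo), u) for ri, lo, u in zip(r, lower, upper)]
```

A solution lying on the centreline maps to itself, so the reflected points genuinely spread out only the solutions that sit away from the diagonal.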
Take the 2D space in Figure 2(b) as an example and suppose that m is 6. s1–s3 are the reference solutions selected by NSGA-II. The three straight lines passing through s1–s3 form the search range of the transformed problem under the linear transformation function, which is far away from the PS. According to the reference solution generation method, we can obtain the centreline symmetry points of s1–s3, i.e. r1–r3. Then, s1–s3 and r1–r3 are all taken as reference solutions to transform the original problem. From Figure 2(b), we can see that after the transformation, the search range of the transformed problem is enlarged by the three straight lines passing through r1–r3. The six lines are relatively widely distributed in the decision variable space, and searching along them, the algorithm can find solutions on the PS more easily.

A new transformation function
In WOF, three transformation functions are proposed to transform the original problem. Specifically, the product transformation ψ1 and the interval-intersection transformation ψ3 have basically the same form: they narrow the whole search subspace down to the straight line connecting the origin o and the corresponding selected reference solution, denoted by l1 in Figure 3. The p-value transformation ψ2 narrows the search space to the straight line passing through the corresponding selected reference solution and parallel to the centreline, displayed as l2 in Figure 3.
Here, we design a double-line transformation function to increase the search range and the number of search directions for each selected reference solution. The new transformation function is ψ′(w, x)i = wj xi if wj ≤ 1, and ψ′(w, x)i = xi + (wj − 1)(ti − xi) if wj > 1, where xi is the value of the ith dimension of x, wj is the weight assigned to the jth group of decision variables to which xi belongs, and ti is the upper bound of the ith variable. Based on this transformation function, the search range of the original problem is narrowed down to two straight lines, i.e. l1 and l3 as shown in Figure 3. If wj ≤ 1, the algorithm searches along the line l1; otherwise it searches along the line l3. From Figure 3, we can see that the search range is expanded to more promising areas, which may make it easier to find promising solutions quickly. This can further improve the efficiency of existing MOEAs in solving LSMOPs.
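A sketch of a transformation with this two-line behaviour follows (our reading of Section 3.2; the weight range [0, 2] used by WOF is assumed, and `upper` is the upper boundary point t):

```python
def double_line_transform(w, x, groups, upper):
    """For w[j] <= 1 the variables of group j move along l1, the line from
    the origin through the reference solution x (plain scaling); for
    w[j] > 1 they move along l3, from x towards the upper boundary t."""
    y = list(x)
    for j, group in enumerate(groups):
        for i in group:
            if w[j] <= 1.0:
                y[i] = w[j] * x[i]                              # along l1
            else:
                y[i] = x[i] + (w[j] - 1.0) * (upper[i] - x[i])  # along l3
    return y
```

Both pieces agree at wj = 1 (they return xi), so the mapping is continuous in the weight.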

Experiments and results
In this section, we conduct numerical experiments on the widely used multi-objective benchmarks UF1-UF10 (Q. Zhang et al., 2008), WFG1-WFG9 (Huband et al., 2006), and LSMOP1-LSMOP9 to empirically demonstrate the effectiveness of CSDT, and compare it with MOEA/DVA (Ma et al., 2016), LSMOF (He et al., 2019), DGEA (He et al., 2022), and WOF (Zille et al., 2018). All the numerical experiments are carried out on PlatEMO (Tian et al., 2017) in Matlab 2015b on a computer with an Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20 GHz and 128 GB of RAM. IGD (Zitzler et al., 2003) is used to comprehensively assess the convergence and diversity of the obtained non-dominated solution set; the smaller the IGD value, the better the algorithm performs.
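For reference, the IGD indicator is the mean distance from each point of a sampled reference Pareto front to its nearest obtained solution, which can be computed as follows (a minimal sketch):

```python
import math

def igd(reference_front, obtained):
    """Inverted Generational Distance: average, over the sampled reference
    Pareto front, of the Euclidean distance to the closest obtained
    solution.  Smaller values indicate better convergence and diversity."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(r, s) for s in obtained)
               for r in reference_front) / len(reference_front)
```

Because the average runs over the reference front, an obtained set must cover the whole front, not just converge to part of it, to score well.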

Parameter settings
For each test problem, the number of objectives is set to 2 and 3, and the dimension of the decision variables is set to 200, 500, and 1000, respectively. Since our proposed algorithm is an improved version of WOF and the main framework of WOF is unchanged, the values of the shared parameters in CSDT and WOF are consistent with those set in WOF (Zille et al., 2018). For example, the number of reference solutions m is set to 10. The ratio of function evaluations spent in the first stage, δ, is 0.5, i.e. 50 percent of the total function evaluations is allocated to each stage. The ordered grouping method is used in both algorithms and the number of groups g is 4. In WOF, the p-value transformation function is applied to transform the original problems, since it is the best of the three proposed functions. The optimisation algorithm used in each stage is particle swarm optimisation in both CSDT and the original WOF. As for the other three compared algorithms, i.e. MOEA/DVA, DGEA, and LSMOF, the parameters are set to their default values. To ensure the fairness of the comparison, the population size is set to 100 for all compared algorithms, and to ensure their convergence, the maximum number of function evaluations (FEs) is set to 500,000 for all algorithms.
On all instances, every algorithm is run independently 20 times. The Wilcoxon rank-sum test with a significance level of 0.05 (Carrasco et al., 2020) is applied to the results of the compared algorithms. In the statistical results, "+", "=", and "−" indicate that the compared algorithm is statistically better than, comparable to, and worse than the proposed algorithm, respectively.

Investigation of effectiveness of the two strategies
To investigate the effectiveness of the two proposed strategies, we conduct numerical experiments on the UF and WFG benchmarks. In the experiments, WOF-CS denotes WOF with the reference solution selection strategy embedded, and WOF-DT denotes WOF with the new transformation function embedded. We compare the performance of WOF-CS, WOF-DT, and the proposed CSDT with that of the original WOF; the IGD results are shown in Tables 1 and 2. First, comparing the results of WOF-CS and WOF in Tables 1 and 2, we can see that the results obtained by WOF-CS are statistically better than those obtained by WOF on the two benchmark suites. Specifically, on the UF benchmarks, WOF-CS wins on 11 out of 30 instances and ties on the remaining instances; it performs significantly better than WOF mainly on the UF1, UF2, UF5, UF8, and UF9 instances. On the WFG benchmarks, WOF-CS wins on 25 out of 54 instances and loses on only 1. The better results obtained by WOF-CS are concentrated on all instances of WFG2, WFG3, WFG4, and WFG7, and the bi-objective instances of WFG1, WFG6, WFG8, and WFG9. This indicates that the proposed reference solution selection strategy is effective. Selecting widely distributed solutions to transform the original problem provides widely distributed search lines, along which the algorithm can obtain solutions with better diversity; this helps WOF escape local minima and find better solutions while still accelerating the population convergence.
Second, comparing the results of WOF-DT and WOF, we can see that the performance of WOF-DT is significantly better than that of WOF on the UF and WFG benchmarks. To be specific, on the UF benchmarks, WOF-DT performs statistically better than WOF on 19 instances; it obtains better results than WOF on all UF instances except UF5 and the 200-D UF7. On the WFG benchmarks, WOF-DT performs statistically better than WOF on 32 out of 54 instances and worse on only 1 instance. The better results obtained by WOF-DT are concentrated on all instances of WFG1, WFG2, WFG3, WFG4, WFG6, WFG7, and WFG9, and the bi-objective instances of WFG5 and WFG8. This demonstrates that the designed transformation function is more competitive. As described in Section 3.2, the proposed transformation function narrows the search space down to two straight lines per reference solution and provides more potential search areas and directions than the existing ones. It can help the algorithm find more promising solutions quickly, thereby improving the efficiency of solving LSMOPs.
Furthermore, the performance of CSDT, which combines the two proposed strategies in WOF, is also investigated. From Tables 1 and 2, CSDT outperforms WOF with 22 wins, 1 loss, and 7 ties on the UF benchmarks, and with 32 wins, 1 loss, and 21 ties on the WFG benchmarks. This demonstrates that the proposed algorithm is more efficient than the original WOF for solving LSMOPs. Moreover, CSDT obtains more best results than either algorithm embedded with a single strategy: on the 30 UF instances, WOF-CS obtains 5 best results, WOF-DT obtains 6, and CSDT finds 17; on the 54 WFG instances, WOF-CS gets 14 best results, WOF-DT gets 13, and CSDT obtains 24. This shows that CSDT exploits the advantages of both strategies and greatly improves the ability of WOF to solve LSMOPs.

Comparisons with state-of-the-arts
To demonstrate the performance of CSDT, we compare it with three state-of-the-art LSMOAs, i.e. MOEA/DVA, LSMOF, and DGEA, on the UF, WFG, and LSMOP benchmarks, respectively. The statistical IGD results achieved by the four compared algorithms are shown in Tables 3 to 5. From Table 3, we can find that CSDT obtains 18 best results, while MOEA/DVA and LSMOF each obtain 6 best results on the 30 UF instances. Specifically, CSDT outperforms MOEA/DVA with 20 wins, 8 losses, and 2 ties; it performs significantly better than LSMOF on all instances except UF5 and UF10; and on all instances, its performance is statistically better than that of DGEA.
On the 54 WFG instances, it can be seen from Table 4 that CSDT obtains 45 best results and LSMOF obtains 13. To be specific, CSDT outperforms MOEA/DVA and DGEA on all WFG test instances. CSDT performs statistically better than LSMOF on 37 instances and worse on 9. It performs very well on almost all test instances of WFG2, WFG3, WFG7, and WFG8, and also performs well on the bi-objective WFG6, the tri-objective WFG5, and most instances of WFG1.
From Table 5, we can see that among the four compared algorithms, CSDT obtains 37 best results, MOEA/DVA obtains 6, LSMOF obtains 6, and DGEA obtains 5 on the 54 LSMOP instances. Specifically, CSDT outperforms MOEA/DVA with 47 wins, 5 losses, and 2 ties. It performs significantly better than LSMOF on 41 instances and worse on 5; compared to LSMOF, it finds better solutions on all instances of LSMOP1, LSMOP2, LSMOP4, LSMOP5, and LSMOP8, and on the bi-objective LSMOP6. As for DGEA, CSDT outperforms it with 42 wins, 7 losses, and 5 ties.
Overall, CSDT is the best of the four compared algorithms for solving LSMOPs. The poor performance of MOEA/DVA may be due to the numerous FEs consumed by its decision variable correlation analysis, which is not required in CSDT; as a result, the sub-problems obtained by grouping variables cannot be fully optimised, so better solutions cannot be found. DGEA needs to search the entire decision variable space of an LSMOP during optimisation, which makes it difficult to find optimal solutions quickly, whereas CSDT only needs to search along several potential straight lines in the first stage, which significantly accelerates the population convergence; therefore, the optimisation efficiency of CSDT is much higher than that of DGEA. As for LSMOF, it narrows the entire search space down to several straight lines, which accelerates convergence significantly; however, this search space is very small compared to the whole decision space, which may cost population diversity and trap the algorithm in local minima. CSDT pays attention to preserving population diversity while reducing the search space: it searches in more areas with both convergence and diversity potential, which enhances the diversity of the obtained offspring while ensuring their convergence, thereby finding better solutions than LSMOF.
To illustrate the effectiveness of CSDT more intuitively, we also plot the IGD convergence curves obtained by MOEA/DVA, LSMOF, DGEA, and CSDT on the 500-dimensional bi-objective UF2.
Table 3. IGD values of the four compared algorithms on the UF benchmarks, where the best result on each test instance is highlighted.

Parameter sensitivity analysis
To analyse the effect of the number of reference solutions m, in this section we conduct a parameter sensitivity analysis on the UF and WFG benchmarks. The value of m is set to 4, 10, 20, and 50, respectively. The experimental results are displayed in Tables 6 and 7.
Overall, we can conclude that when m takes a moderate value, such as 10 or 20, CSDT performs well. When m is relatively large, such as 50, or relatively small, such as 4, the performance of CSDT is worse than when m is 10 or 20. When m is relatively large, the algorithm has to optimise more transformed problems in the first stage; since the computing resources allocated to each stage are fixed, fewer resources can be allocated to each transformed problem, resulting in insufficient optimisation of each transformed problem and relatively poor solutions. Conversely, when m is relatively small, there are fewer transformed problems; although each can be optimised sufficiently, the search range is smaller than when m is 10 or 20.

Conclusion
In this paper, we present a new algorithm based on WOF for LSMOPs, named CSDT. To alleviate the limitations of WOF, in the first stage we use centreline symmetry points to replace half of the reference solutions chosen in WOF for transforming the problem. This prevents the search range of the transformed problem from being concentrated in a small area of the search space, thereby improving the search efficiency of the algorithm. Then, to enhance the diversity of the algorithm while ensuring its convergence, a double-line transformation function is designed to add more potential search areas. Subsequently, in the second stage, PSO is applied to search the whole space of the original problem to improve the diversity of CSDT. To investigate the effectiveness of the two strategies and the general performance of CSDT, we conducted numerical experiments on three widely used benchmark suites and compared CSDT with the original WOF, MOEA/DVA, LSMOF, and DGEA. The statistical results indicate that CSDT significantly improves the performance of WOF and outperforms the other four compared algorithms.
However, since our proposed algorithm pays more attention to diversity than WOF, its performance may be slightly inferior to that of WOF on problems for which convergent solutions are difficult to obtain. Moreover, since our algorithm is built on WOF, its performance is affected by the results of decision variable grouping. In the future, an important task is to study advanced multi-objective grouping techniques to reasonably group the decision variables of LSMOPs. Furthermore, finding potential areas and directions to guide the search is another effective approach to solving LSMOPs that we plan to study.