Large-scale consensus with endo-confidence under probabilistic linguistic circumstance and its application

Abstract In real decision-making problems, decision makers (DMs) usually need to select the most promising project from several candidates. However, because they come from various backgrounds and have different experiences, they unconsciously show different confidence levels during the decision-making process, which affects the decision results. Moreover, the probabilistic linguistic term set, which not only includes the linguistic expressions DMs use in daily life but also attaches a probability to each linguistic term, can well portray DMs' real perceptions of the projects. Furthermore, large-scale consensus has gradually become a popular way to solve complex decision-making problems effectively. In this paper, we are therefore dedicated to constructing a large-scale consensus model that considers the confidence levels of DMs under a probabilistic linguistic circumstance. Firstly, the endo-confidence is defined and measured from each DM's probabilistic linguistic information. Then, the DMs are clustered according to the similarities of both their evaluation information and their endo-confidence levels. The evaluation of a non-consensus cluster, together with the evaluation integrated from the clusters whose endo-confidence levels are higher than that of the non-consensus cluster, is used as the reference to adjust its evaluation information. Then, a case study and a comparative analysis are carried out. Finally, some conclusions and future work are given.


Introduction
In our daily life, decision makers (DMs) often evaluate the available projects, which is the main part of the decision-making process. Sometimes, many aspects must be considered due to the complexity of the decision objects (Tian et al., 2021). To evaluate the projects comprehensively, DMs need to master different types of knowledge. However, with the development of the economy, the social division of labor has been increasingly refined, and professional knowledge and skills continue to be strengthened. Thus, a single DM can be specialised in only one major and be proficient in only one field. Group decision making (GDM) has therefore become a general way to overcome the disadvantages caused by professionalisation in real decision-making situations. In the GDM process, the DMs come from various professional fields and have different backgrounds, and thus they may have different opinions about the same project. Sometimes those opinions differ greatly. If evaluations with large differences are used to make decisions, the decision results may be unreasonable and distorted. Therefore, reaching consensus is the precondition for making a reasonable decision in GDM. Moreover, for complex decision-making problems, a large number of DMs need to be invited to participate in decision making. Considering the situation above, large-scale consensus has become a hot topic in recent years.

CONTACT Xiaoli Tian tianxiaolitxl@126.com
In real decision-making cases, some attributes are not suitable to be expressed in numerical form, and natural language is the usual form used by DMs in their evaluation processes. To deal with natural language, linguistic expressions (Zadeh, 1975) were developed and are widely adopted as the basic evaluation information in decision-making methods. As experts may hesitate among several values when assessing a project, the hesitant fuzzy linguistic term set (HFLTS) was proposed by Rodriguez et al. (2012) to handle uncertain environments. However, in an HFLTS, all possible values provided by the DMs have equal importance or weight, which may lead to unreasonable decision results (Pang et al., 2016). That is because (1) the occurrence frequencies of evaluation values are likely to differ across a group of DMs, and (2) for a single DM, the reliabilities or preference degrees of the evaluation values are also probably different (Xu, He, et al., 2019). The probabilistic linguistic term set (PLTS) (Pang et al., 2016), which is a widespread description tool in existing decision-making methods (Wei et al., 2020), can solve the above problem: it not only includes the hesitant linguistic representation but also reflects the different preference degree for each linguistic term. Hence, in this paper, we will use PLTSs to represent the evaluation information of DMs.
Moreover, DMs are professional individuals in their special fields, and they may have different perceptions of the same aspect of the projects. Then, the confidence levels of their evaluation information are also different. Thus, it is essential to consider the effect of confidence on the decision-making results. Consensus models with self-confidence have been presented, and Ding et al. (2019) argued that the self-confidence level indeed affects the consensus reaching process. Liu et al. (2017) and Liu et al. (2018) proposed iteration-based consensus models with self-confident multiplicative preference relations. Gou, Xu, Wang, et al. (2021) introduced a self-confidence factor into double hierarchy linguistic preference relations and presented a consensus model based on priority ordering theory. Zhang et al. (2021) introduced a self-confidence factor into comparative linguistic expressions and gave an optimisation consensus model with minimum information loss. Other work focussed on the self-confidence-based consensus model and proposed a novel feedback mechanism that chooses the expert with minimal self-confidence to adjust his/her evaluation information.
The consistency of preference relations in consensus models should also be considered (Gou et al., 2019). It is easy to notice that some studies based on preference relations with self-confidence neglect the consistency of the preference relations, which has a direct influence on the final decision results (Liu, Xu, Montes, Dong, et al., 2019). To solve this problem, Liu, Xu, Montes, Dong, et al. (2019) gave a novel method to measure the additive consistency level by considering both the fuzzy preference values and the self-confidence levels. However, additive consistency sometimes cannot capture the consistency of a fuzzy preference relation (Zhang, Kou, et al., 2020), and Zhang, Kou, et al. (2020) proposed two algorithms to improve the multiplicative consistency level. Bashir et al. (2018) defined the hesitant fuzzy preference relation with self-confidence and the hesitant multiplicative preference relation with self-confidence, along with the corresponding additive and multiplicative consistency.
With the development of technology, large-scale decision making has become more and more popular. In Xu, Du, et al. (2019), the combination of the levels of both rationality and non-cooperation is used to measure the confidence level, which is then utilised as an adjustment coefficient to modify the evaluation value. Building on Liu et al. (2017), a large-scale consensus model that considers the overconfidence behaviours of DMs has also been studied. There, the grey clustering algorithm was used to distinguish the experts by combining the similarities of both fuzzy preference values and self-confidence levels, an overconfidence measurement was given to detect the experts' overconfidence behaviours in the consensus model, and a dynamic weight punishment mechanism was implemented to manage overconfidence behaviour. However, that clustering method causes the problem that experts with lower similarities of fuzzy preference values may be classified into one cluster because they have higher similarities of self-confidence levels. These experts with large differences in fuzzy preference are then used together to calculate the weights and further to obtain the overall preference value, which may lead to unreasonable decision-making results.
As far as we know, in the existing literature, the self-confidence level is directly given by DMs, and there is no consensus research that measures confidence from the evaluation information itself. Because there is also no consensus research considering the confidence of DMs with PLTSs, our research, which targets consensus with PLTSs for multi-attribute large-scale GDM (LSGDM) problems, has four main goals:
1. To give the DMs' confidence levels more accurately and objectively, and to distinguish them from the self-confidence given directly by DMs in existing studies, one aim is to define the endo-confidence, which is measured from a DM's evaluation information rather than given by the DM beforehand.
2. To solve the problem of confidence-based clustering whereby DMs with lower similarities of evaluations may be classified into one cluster, one purpose is to define a novel clustering procedure based on both the evaluation information and the endo-confidence level.
3. In the decision-making process, DMs with higher endo-confidence levels may influence the opinions of others whose confidence levels are lower, and DMs with higher endo-confidence tend to have a greater impact on the final results than those with lower endo-confidence. Hence, one objective is to simulate this phenomenon and present a new feedback adjustment mechanism.
4. When assigning the missing probabilities to linguistic terms in incomplete PLTSs, the total probability of the linguistic terms given by DMs should be considered. Moreover, the DMs' original preferences among linguistic terms should also be considered when normalising the PLTSs. To reflect the opinions of DMs more accurately and avoid information loss, the other goal is to provide a suitable method to normalise the PLTSs.
To sum up, in this paper, we construct a large-scale consensus model with probabilistic linguistic information that considers the confidence behaviours of DMs (named endo-confidence levels in this paper) derived from the PLTS itself. Although large-scale consensus models considering self-confidence behaviours already exist, our model differs from them, as the above review shows. The innovations of this paper can be summarised as follows:
1. The endo-confidence is defined, and it can be obtained from three aspects: (a) the probabilistic information in the original evaluation given by DMs; (b) DMs' hesitation in the original evaluation information; and (c) DMs' preference among linguistic terms in the original evaluation information.
2. Inspired by the similarity-based clustering algorithm, we give a new bi-clustering process considering both the evaluation information and the endo-confidence level. The optimal classification threshold is determined by both the similarities and a threshold given by DMs.
3. We use the evaluation information of both the cluster itself and the clusters with higher endo-confidence levels as the reference to adjust the evaluation information of a non-consensus cluster. Meanwhile, an endo-confidence-based method to determine the weights is proposed, in which a function is given to obtain the weights.
4. We propose a new method to normalise the PLTSs, which considers both the certainty degree of the probabilistic information and the preference among the linguistic terms in it.
The outline of this paper is as follows: Section 2 presents the preliminaries, including the form of the basic evaluation information and its operators. The consensus reaching process is given in Section 3, including how to determine the endo-confidence level, how to normalise the PLTSs and determine the weights of DMs, the clustering process, the consensus measurement, the feedback mechanism and the selection process. Then, a case study and a comparative analysis are presented in Section 4. Finally, Section 5 ends with some conclusions.

Preliminaries
In this section, we will introduce the basic knowledge proposed in previous studies, such as the linguistic term set, the probabilistic linguistic term set and its score function, distance and similarity measurements, etc., which will be used in this paper.

The probabilistic linguistic term set
Language is generally used in our daily life to express DMs' opinions. The linguistic variable (Zadeh, 1975) is close to natural or artificial language and has been widely used in the decision-making field. Let $S = \{s_0, s_1, \dots, s_\varsigma, \dots, s_\tau\}$ be a linguistic term set (LTS), where $s_\varsigma$ represents the $\varsigma$-th linguistic term in $S$, and $\tau + 1$ is the cardinality of the LTS $S$. Then, the PLTS (Pang et al., 2016), which attaches a probability to each linguistic term, is defined as:
$$L(p) = \left\{ L^{(k)}(p^{(k)}) \,\middle|\, L^{(k)} \in S,\ p^{(k)} \ge 0,\ k = 1, 2, \dots, \#L(p),\ \sum_{k=1}^{\#L(p)} p^{(k)} \le 1 \right\} \quad (1)$$
where $L^{(k)}(p^{(k)})$ is a linguistic term $L^{(k)}$ associated with the probability $p^{(k)}$, and $\#L(p)$ is the number of all different linguistic terms in $L(p)$. When $\sum_{k=1}^{\#L(p)} p^{(k)} < 1$, we should normalise it so that $\sum_{k=1}^{\#L(p)} p^{(k)} = 1$ before the aggregation processes. In this paper, we give a new normalisation method in Section 3.2.
In order to use data more efficiently and express semantics flexibly, Wang et al. (2014) proposed the linguistic scale function $f$, a monotonically increasing function with $f: s_\varsigma \to \theta_\varsigma$, $f^{-1}: \theta_\varsigma \to s_\varsigma$, $\theta_\varsigma \in [0, 1]$ (Equation (2)). Wu and Liao (2019) introduced the linguistic scale to measure the distance $d(L_1(p), L_2(p))$ between two PLTSs (Equation (4)). According to Wu et al. (2018), we can obtain the similarity $\varrho(L_1(p), L_2(p))$ of two PLTSs from their distance $d(L_1(p), L_2(p))$:
$$\varrho(L_1(p), L_2(p)) = 1 - d(L_1(p), L_2(p)) \quad (5)$$

Consensus decision making with endo-confidence based on the probabilistic linguistic information
In this section, we give a new procedure to reach consensus considering the endo-confidence levels of DMs. Firstly, we show how to determine the endo-confidence level of each DM from his/her evaluation information. Then, a new method to normalise the PLTSs is given. After that, a similarity-based clustering algorithm is used to distinguish the DMs into several clusters via bi-clustering. Since the weight of each DM is an important issue in decision making, we define a method to measure the weight of a DM based on his/her endo-confidence level. Subsequently, a consensus measurement is carried out, and the identification and adjustment procedures are designed to achieve consensus. Finally, when the consensus level is reached, the selection process is presented. Please see Table A1 for the meanings of the notations used in the proposed model.

How to determine the endo-confidence level of each DM
As aforementioned, the experts' confidence levels greatly influence the final decision-making results, and they should be considered in the decision-making process. As a key step, the measurement of the confidence level should be further discussed. However, few studies measure the confidence level according to the evaluation information (Guha & Chakraborty, 2011), especially through probabilistic linguistic information. As a result, in this paper, we propose a method to determine the experts' original confidence levels, named endo-confidence levels, based on probabilistic linguistic information from the following three aspects:
(1) The probabilistic information in the original evaluation given by experts. The original probabilities given by experts can reflect their endo-confidence levels: the greater the total probability of an expert's original assessment is, the higher the expert's endo-confidence level for the corresponding assessment. For instance, if an expert ($e_1$) gives the assessment $L_1(p) = \{s_3(0.2), s_4(0.1)\}$ while another expert ($e_2$) gives $L_2(p) = \{s_3(0.6), s_4(0.3)\}$, we can find that $e_2$ gives more information than $e_1$; that is, $e_2$ is more confident in his/her evaluation information than $e_1$. Hence, we can calculate the endo-confidence level from the perspective of total probability by $ec_p = \sum_{k=1}^{\#L(p)} p^{(k)}$. The first part of the endo-confidence level of $e_1$/$e_2$ is thus $ec^1_p = 0.3$/$ec^2_p = 0.9$.
(2) Experts' hesitation in the original evaluation information. Experts may express various degrees of hesitation when facing a decision-making problem. A greater degree of hesitation means that the expert is less confident. Furthermore, the endo-confidence level should be related to the number of linguistic terms in the PLTS.
For instance, consider two PLTSs $L_1(p) = \{s_0(0.1), s_1(0.2), s_2(0.3), s_3(0.2), s_4(0.1)\}$ and $L_2(p) = \{s_3(0.6), s_4(0.3)\}$ given by experts $e_1$ and $e_2$, respectively. Intuitively, $e_1$ is more hesitant than $e_2$; that is, $e_1$ is less confident about his/her evaluation information than $e_2$. Hence, the endo-confidence level based on hesitation is defined as $ec_h = 1 - \frac{\#L(p) - 1}{\tau}$:
1. When $\frac{\#L(p) - 1}{\tau} = 0$, the expert gives only one linguistic term. Under this circumstance, the expert does not hesitate about his/her evaluation, and $ec_h = 1$.
2. When $\frac{\#L(p) - 1}{\tau} = 1$, i.e., $\#L(p) = \tau + 1$, the expert believes that all linguistic terms in $S$ may be possible evaluations, and thus $ec_h = 0$.
3. In other cases, the endo-confidence level of the expert lies between the above two cases, that is, $0 < ec_h < 1$.
(3) Experts' preference among linguistic terms in the original evaluation information. One of the most important characteristics of probabilistic linguistic information is that it can reflect the preferences of DMs among the linguistic terms in PLTSs. We can obtain another part of the expert's endo-confidence level from the probability distribution over the different linguistic terms.
Example 1. Suppose the experts $e_1$, $e_2$ and $e_3$ give their assessments as $L_1(p) = \{s_2(0.1), s_3(0.2)\}$, $L_2(p) = \{s_2(0.3), s_3(0.6)\}$ and $L_3(p) = \{s_2(0.4), s_3(0.4)\}$, respectively. We can find that $e_3$ does not know which of the linguistic terms $s_2$ and $s_3$ is better, while $e_2$ thinks that $s_3$ describes his/her perception better than $s_2$. Motivated by the role of the standard deviation in depicting the degree of dispersion, a smaller standard deviation of the probabilities means that the probabilities are closer together and the expert's endo-confidence level is lower, while a larger standard deviation indicates that the expert has a stronger preference for some linguistic terms and is thus more confident. Hence, the standard deviation of the probabilities can be used as a measure of the endo-confidence level. It is worth noting that $e_1$ and $e_2$ have the same preference between the terms $s_2$ and $s_3$, but the standard deviations of their probabilities differ because of the difference in total probabilities: the standard deviation of the probabilities in $L_1(p)$ is 0.05, while that in $L_2(p)$ is 0.15.
In order to solve this problem, inspired by Pang et al. (2016), we first adjust the total probability to 1 according to the given information. In particular, the associated PLTS $L'(p)$ is defined by $L'(p) = \{L^{(k)}(p'^{(k)}) \mid k = 1, 2, \dots, \#L'(p)\}$, where $p'^{(k)} = p^{(k)} / \sum_{k=1}^{\#L(p)} p^{(k)}$. We can then calculate the standard deviation level $std$ of the probabilities $p'^{(k)}$:
$$std = \sqrt{\frac{1}{\#L'(p)} \sum_{k=1}^{\#L'(p)} \left( p'^{(k)} - \bar{p}' \right)^2}, \qquad \bar{p}' = \frac{1}{\#L'(p)} \sum_{k=1}^{\#L'(p)} p'^{(k)} \quad (6)$$
where $\#L'(p)$ is the number of all different linguistic terms in $L'(p)$. Specially, for a PLTS with only one term, we add a term to it with probability zero; the standard deviation of the probabilities is then 0.5.
Then, if there is a finite set of experts $E = \{e_1, e_2, \dots, e_t\}$ ($T = \{1, 2, \dots, t\}$, $\alpha \in T$, $t \ge 20$) and the evaluation information of an expert $e_\alpha$ is given as a PLTS, the endo-confidence level $ec^\alpha_d$ of $e_\alpha$ from the perspective of preference among linguistic terms can be obtained by Equation (7):
$$ec^\alpha_d = \frac{std_\alpha - \min\{std_t, \forall t \in T\}}{\max\{std_t, \forall t \in T\} - \min\{std_t, \forall t \in T\}} \quad (7)$$
where $\min\{std_t, \forall t \in T\}$ is the minimum standard deviation level of the probabilities given by all the experts, and $\max\{std_t, \forall t \in T\}$ denotes the corresponding maximum one.
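The three endo-confidence components above might be sketched as follows. This is a minimal sketch under stated assumptions: a PLTS is represented as a dict mapping term index to probability, `tau` is the largest subscript of the LTS, and Equation (7) is read as min-max normalisation (its body is not legible in this copy); all names are our own.

```python
import math

def ec_p(plts):
    """(1) Total probability: more stated probability -> more confident."""
    return sum(plts.values())

def ec_h(plts, tau):
    """(2) Hesitation: ec_h = 1 - (#L(p) - 1) / tau."""
    return 1 - (len(plts) - 1) / tau

def std_level(plts):
    """Population std of the renormalised probabilities; a single-term
    PLTS is padded with a zero-probability term (so its std is 0.5)."""
    total = sum(plts.values())
    probs = [p / total for p in plts.values()]
    if len(probs) == 1:
        probs.append(0.0)
    mean = sum(probs) / len(probs)
    return math.sqrt(sum((p - mean) ** 2 for p in probs) / len(probs))

def ec_d(stds):
    """(3) Preference: min-max normalisation of std levels across all
    experts -- our reading of Equation (7)."""
    lo, hi = min(stds), max(stds)
    return [(s - lo) / (hi - lo) for s in stds]

# Examples from the text (LTS s_0..s_4, tau = 4): ec_p gives ~0.3 for e_1
# and ~0.9 for e_2; a PLTS using all five terms gives ec_h = 0.
e1_p = ec_p({3: 0.2, 4: 0.1})
e2_p = ec_p({3: 0.6, 4: 0.3})
full_h = ec_h({0: 0.1, 1: 0.2, 2: 0.3, 3: 0.2, 4: 0.1}, 4)

# Example 1: after renormalisation, e_1 and e_2 share the same std level
# (same preference pattern), while e_3's equal probabilities give std 0.
stds = [std_level(L) for L in ({2: 0.1, 3: 0.2},
                               {2: 0.3, 3: 0.6},
                               {2: 0.4, 3: 0.4})]
```

Renormalising first removes the total-probability artefact noted in Example 1: before renormalisation the std levels of $L_1(p)$ and $L_2(p)$ are 0.05 and 0.15, while afterwards both equal 1/6.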
Let $A = \{A_1, A_2, \dots, A_n\}$ ($N = \{1, 2, \dots, n\}$, $i \in N$, $n \ge 2$) be a finite set of potential alternatives and $C = \{c_1, c_2, \dots, c_m\}$ ($M = \{1, 2, \dots, m\}$, $j \in M$, $m \ge 2$) be a finite set of attributes. Suppose that the decision-making matrix from the expert $e_\alpha$ is given as $X^\alpha = [x^\alpha_{ij}]_{n \times m}$,
where $x^\alpha_{ij}$ ($i \in N$, $j \in M$, $\alpha \in T$) is represented by the PLTS $L^\alpha_{ij}(p^\alpha_{ij})$, a piece of probabilistic linguistic information for the alternative $A_i$ on the attribute $c_j$ given by the expert $e_\alpha$. Then, we can obtain the endo-confidence matrix of the expert $e_\alpha$ as $EC^\alpha = [ec^\alpha_{ij}]_{n \times m}$, where $ec^\alpha_{ij}$ is calculated by Equation (8).

How to normalise the PLTSs
Many scholars have proposed methods for the normalisation of PLTSs. Mi et al. (2020) divided these methods into five categories according to how they assign the unknown information: (1) average assignment, (2) full-set assignment, (3) pow-set assignment, (4) envelope assignment, and (5) attitude-based assignment. Furthermore, Zhang, Liao, et al. (2020) gave a new method to normalise an incomplete PLTS into a complete one by assigning the unknown probabilities evenly to all the linguistic terms in $S$. Although previous studies have given various methods to assign the ignorance of probabilistic information, these methods have some unreasonable points. For example, the average assignment method assumes that if a linguistic term $s_\varsigma$ does not appear in $L(p)$, then it should not appear in the normalised PLTS. However, experts cannot accurately know which linguistic terms are associated with the ignorance of probabilistic information, so this assumption imposes too many restrictions on it. The full-set assignment method (Fang et al., 2020) and the method proposed by Zhang, Liao, et al. (2020) assign the unknown probabilities to all the linguistic terms, which avoids the overly strict restriction mentioned above. For $\{s_4(0.4), s_5(0.5)\}$, the average assignment method gives $\{s_4(0.44), s_5(0.56)\}$; the full-set assignment method gives $\{s_4(0.4), s_5(0.5), \{s_0, s_1, s_2, s_3, s_4, s_5, s_6\}(0.1)\}$; and the method of Zhang, Liao, et al. (2020) gives $\{s_0(0.014), s_1(0.014), s_2(0.014), s_3(0.014), s_4(0.415), s_5(0.515), s_6(0.014)\}$. Compared with the average assignment method, the latter two methods assign the unknown information to more terms of the PLTSs. However, these methods do not consider the attitude of DMs.
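The comparison above can be made concrete in a few lines. This sketch reproduces two of the quoted schemes for $\{s_4(0.4), s_5(0.5)\}$ over the LTS $s_0, \dots, s_6$; the dict representation (term index → probability) and function names are our own conventions, not the paper's.

```python
def average_assignment(plts):
    """Average assignment: scale the stated probabilities so they sum
    to 1; no new linguistic terms are introduced."""
    total = sum(plts.values())
    return {k: p / total for k, p in plts.items()}

def even_assignment(plts, tau):
    """Even assignment in the spirit of Zhang, Liao, et al. (2020):
    spread the missing probability mass evenly over every term of the
    LTS s_0..s_tau."""
    missing = 1 - sum(plts.values())
    share = missing / (tau + 1)
    return {k: plts.get(k, 0.0) + share for k in range(tau + 1)}

avg = average_assignment({4: 0.4, 5: 0.5})    # 0.4/0.9 ~ 0.444, 0.5/0.9 ~ 0.556
even = even_assignment({4: 0.4, 5: 0.5}, tau=6)
```

Note that `even_assignment` computes $0.4 + 0.1/7 \approx 0.414$ for $s_4$; the figures 0.415/0.515 quoted in the text are the paper's rounding.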
Although the attitudes of DMs (such as optimistic, pessimistic and neutral) are considered in the attitude-based assignment method (Song & Li, 2019), it is not easy to judge the attitudes of DMs in a real decision-making process. As a result, inspired by the score function $E(L(p))$ of $L(p)$, we propose a novel method to normalise incomplete PLTSs.
Owing to the limitations of knowledge and ability, some DMs give less complete assessments, which leads to the ignorance of probabilistic information in the decision-making process. Clearly, the more unknown the assessment is, the less accurate the information provided by the DM will be. For instance, consider two PLTSs $L_1(p) = \{s_4(0.4), s_5(0.5)\}$ and $L_2(p) = \{s_4(0.1), s_5(0.2)\}$. Both experts deem that the assessment should lie between $s_4$ and $s_5$, but the corresponding probabilities show that the certainty of the assessment $L_1(p)$ reaches 0.9, while the certainty of $L_2(p)$ is only 0.3. Different normalisation methods should therefore be adopted according to the ignorance of probabilistic information. If the total probability of the PLTS is at a higher level, we tend to believe that the expert is more certain about the assessment, and thus the remaining probabilities should be distributed around the original information. If the total probability of the PLTS is at a lower level, the assessment may not be accurate; in other words, the remaining probabilities should also be assigned to linguistic terms farther from the original linguistic information. Hence, we use the value of the ignorance of probabilities to determine the terms that appear in the normalised PLTS. The specific method is as follows. We first calculate the lower and upper bounds of the normalised PLTS, denoted as $s_l$ and $s_u$ respectively, where $l$ and $u$ are obtained by Equations (11) and (12). Notice that:
1. When $1 - \sum_{k=1}^{\#L(p)} p^{(k)} = 0$, the PLTS is complete. In this case, $l = \min_k \varsigma^{(k)}$ and $u = \max_k \varsigma^{(k)}$, so the lower and upper bounds of the normalised PLTS are the same as those of the original one.
2. When $1 - \sum_{k=1}^{\#L(p)} p^{(k)} = 1$, all of the assessments are unknown, and thus the normalised LTS should cover all the linguistic terms; in other words, $s_l = s_0$ and $s_u = s_\tau$. In this condition, we can take a virtual PLTS $\{s_{\tau/2}(0)\}$ and utilise Equations (11) and (12) to obtain the corresponding lower and upper bounds.
3. Specifically, if the bounds of the normalised PLTS exceed the definition of the LTS, then we set $s_l = s_0$ and $s_u = s_\tau$.
Subsequently, we fill in all integer terms between $s_l$ and $s_u$ to get the normalised LTS $L = \{s_l, \dots, s_\varsigma, \dots, s_u\}$. Then, we get the corresponding expanded original probabilities $p_1^{(k')}$ according to Equation (13) and obtain the PLTS $L(p_1)$, where $\#L(p_1)$ is the number of all different linguistic terms in $L(p_1)$. When normalising the probabilistic linguistic information, we should consider the original opinions of the experts. A term that is closer to the score $E(L(p))$ is more in line with the original opinion of the expert, and we should assign a greater probability to this term. When $\varsigma^{(k')}$ coincides with the score (see Equation (2)), the proportion $h(p^{(k')})$ of the probability distribution should be maximised. In this paper, we use an exponential function to obtain $h(p^{(k')})$ (Equation (14)), and Equation (15) is used to get the normalised proportion of the probability distribution. Then, we can obtain the normalised probabilities $p^{(k')}$ by Equation (16), where $\#L(p)$ is the number of all different linguistic terms in the normalised PLTS $L(p)$. To help readers understand the proposed normalisation method, we give the corresponding calculation process in Example 3 in the Appendix.
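Since Equations (11)–(16) are not legible in this copy, the following sketch only illustrates the stated idea, with loudly labelled assumptions: the bound-widening rule (the `spread` parameter) and the exact form of the exponential decay are our own stand-ins, not the paper's formulas.

```python
import math

def normalise_plts(plts, tau, spread=2):
    """Hedged sketch of the proposed normalisation. The bounds s_l, s_u
    widen with the unknown mass (our stand-in for Equations (11)-(12):
    widen each side by round(spread * missing)); the missing probability
    is then shared over s_l..s_u with weights exp(-|k - E|) that peak at
    the score E (the exponential-decay idea of Equations (14)-(16))."""
    missing = 1 - sum(plts.values())
    if missing <= 0:
        return dict(plts)          # complete PLTS: bounds unchanged
    # score-like index: probability-weighted mean of the renormalised PLTS
    total = sum(plts.values())
    score = sum(k * p for k, p in plts.items()) / total
    widen = round(spread * missing)
    lo = max(0, min(plts) - widen)         # clip to the LTS: s_0..s_tau
    hi = min(tau, max(plts) + widen)
    weights = {k: math.exp(-abs(k - score)) for k in range(lo, hi + 1)}
    wsum = sum(weights.values())
    return {k: plts.get(k, 0.0) + missing * weights[k] / wsum
            for k in range(lo, hi + 1)}

# High certainty (0.9): terms stay near s_4, s_5.
high = normalise_plts({4: 0.4, 5: 0.5}, tau=6)
# Low certainty (0.3): the missing mass spreads to farther terms.
low = normalise_plts({4: 0.1, 5: 0.2}, tau=6)
```

The sketch reproduces the qualitative behaviour described in the text: the certain assessment keeps its two terms, while the uncertain one spreads over more terms, with larger shares landing nearer the score.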

The clustering procedure of the large-scale experts
Clustering analysis, which can not only reduce the complexity of LSGDM problems and the cost of decision making but also help find common opinion patterns to identify a spokesman who represents each cluster (Tang & Liao, 2021), has attracted much attention and become the most commonly used tool in LSGDM problems. Some traditional clustering algorithms (e.g., K-means, fuzzy C-means) have been used to distinguish participants into different subgroups. Since the results of these two clustering methods are affected by the number of clusters determined by the DMs, other clustering methods, including similarity-measure-based clustering and fuzzy equivalence clustering, have recently been introduced into LSGDM problems. The similarity-based clustering, which is one of the most widespread methods used to distinguish DMs into different clusters, is utilised in this paper (Gou et al., 2018).
Furthermore, we introduce the endo-confidence factor into the similarity-based clustering. Although earlier work utilised the grey clustering algorithm to classify the experts, it combined the similarity of fuzzy preference values and self-confidence levels to cluster the experts. This algorithm may cause the problem that experts with lower similarities of fuzzy preference values are placed into one cluster because they have higher similarities of self-confidence levels, which may lead to unreasonable decision results. As a result, we utilise the similarity-based clustering algorithm twice to reduce the dimensionality of the decision information, according to the assessments and the endo-confidence levels respectively. Firstly, we obtain the clusters of DMs based on the similarities of their evaluation information, recorded as $G^{ij, L(p)} = \{G^{ij, L(p)}_1, G^{ij, L(p)}_2, \dots, G^{ij, L(p)}_g, \dots, G^{ij, L(p)}_{n_{L(p)}}\}$. In this process, the similarities of PLTSs (Equation (5)) are used. Then, we obtain the clusters of DMs based on the similarities of their endo-confidence levels (Equation (18)), represented as $G^{ij, ec} = \{G^{ij, ec}_1, G^{ij, ec}_2, \dots, G^{ij, ec}_g, \dots, G^{ij, ec}_{n_{ec}}\}$. The intersection of the clusters gives the final result, that is, $G^{ij} = \{G^{ij}_1, G^{ij}_2, \dots, G^{ij}_g, \dots, G^{ij}_r\}$. The specific clustering steps are as follows.
1. Clustering the experts based on the similarities of evaluation information
Step 1.
Establish the similarity matrix of the evaluation information between the experts, $SE_{ij} = [se^{\alpha\beta}_{ij}]_{t \times t}$, for the alternative $A_i$ over the attribute $c_j$ (Equation (17)), where $se^{\alpha\beta}_{ij}$ is calculated by Equation (5) and expresses the similarity degree between the evaluation information $x^\alpha_{ij}$ and $x^\beta_{ij}$ from the corresponding experts $e_\alpha$ and $e_\beta$. According to Equation (5), $se^{\alpha\beta}_{ij} = se^{\beta\alpha}_{ij}$ and $se^{\alpha\alpha}_{ij} = 1$.
Step 2. Choose the classification threshold. Because the similarity matrix is symmetric with diagonal elements equal to 1, we can choose the classification threshold by ranking the values of the upper triangular elements of the similarity matrix (excluding the diagonal) as $\eta^{ij, L(p)}_1 \ge \eta^{ij, L(p)}_2 \ge \dots \ge \eta^{ij, L(p)}_q \ge \dots \ge \eta^{ij, L(p)}_{t(t-1)/2}$ and denote the threshold as $\eta^{ij, L(p)} = \eta^{ij, L(p)}_q$.
Step 3. Determine the optimal classification threshold $\eta^{*}_{ij, L(p)}$ and obtain the clustering results. Gou et al. (2018) determined the threshold according to the rate of threshold change, which may leave some experts outside every cluster. To solve this problem, we define $\eta^{0}_{ij, L(p)}$ as the classification threshold produced when every expert is involved in some cluster. That is to say, we record the maximum similarity between each expert and the other $t - 1$ experts, and then take the minimum of these $t$ values as the threshold $\eta^{0}_{ij, L(p)}$. Notice that, in some real decision-making processes, some DMs may want to set the threshold themselves. Such a threshold artificially limits the lowest threshold level to avoid incomplete dimensionality reduction of the large-scale experts.
As a result, if the similarity $\eta^{ij, L(p)}_q$ is less than a parameter $v_{ij, L(p)}$ given by the DMs, we can say that the similarity is at such a low level that the clustering procedure can be terminated. It is worth noting that the value of the threshold is related to the degree of difficulty of the clustering process: the larger the threshold, the stricter the classification of experts, and vice versa. Hence, we define the optimal classification threshold $\eta^{*}_{ij, L(p)} = \max(\eta^{0}_{ij, L(p)}, v_{ij, L(p)})$. If the similarity of the experts $e_\alpha$ and $e_\beta$ for the $i$-th alternative over the attribute $c_j$ is not less than the optimal threshold, that is, $se^{\alpha\beta}_{ij} \ge \eta^{*}_{ij, L(p)}$, then the experts $e_\alpha$ and $e_\beta$ are placed in one cluster. We thus obtain the clusters $G^{ij, L(p)}_1, G^{ij, L(p)}_2, \dots, G^{ij, L(p)}_g, \dots, G^{ij, L(p)}_{n_{L(p)}}$. Furthermore, if some expert $e_\alpha$ appears in more than one cluster, in other words $G^{ij, L(p)}_g \cap G^{ij, L(p)}_{g'} \ne \emptyset$, we combine these clusters into one and obtain the final clustering results. In this paper, we set $v_{ij, L(p)} = 0.9$ ($i \in N$, $j \in M$).
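Steps 1–3 might be sketched as follows, under our own conventions: the similarity matrix is a plain $t \times t$ list of lists with unit diagonal, and merging overlapping clusters is implemented as taking connected components of the graph whose edges are expert pairs with similarity at least $\eta^{*}$.

```python
def cluster_by_similarity(sim, v):
    """Similarity-based clustering sketch. eta0 is the largest threshold
    at which every expert still joins some cluster: the minimum over
    experts of their best off-diagonal similarity. The optimal threshold
    is eta* = max(eta0, v), where v is the DM-given floor."""
    t = len(sim)
    eta0 = min(max(sim[a][b] for b in range(t) if b != a) for a in range(t))
    eta_star = max(eta0, v)
    # Merge overlapping clusters = connected components, via union-find.
    parent = list(range(t))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a in range(t):
        for b in range(a + 1, t):
            if sim[a][b] >= eta_star:
                parent[find(a)] = find(b)
    groups = {}
    for a in range(t):
        groups.setdefault(find(a), set()).add(a)
    return eta_star, sorted(map(frozenset, groups.values()), key=min)

# Hypothetical 4-expert similarity matrix (symmetric, unit diagonal):
sim = [[1.00, 0.95, 0.40, 0.35],
       [0.95, 1.00, 0.45, 0.30],
       [0.40, 0.45, 1.00, 0.92],
       [0.35, 0.30, 0.92, 1.00]]
eta_star, clusters = cluster_by_similarity(sim, v=0.9)
```

Here each expert's best off-diagonal similarity is 0.95, 0.95, 0.92, 0.92, so $\eta^0 = 0.92$ and $\eta^* = \max(0.92, 0.9) = 0.92$, yielding the clusters $\{e_1, e_2\}$ and $\{e_3, e_4\}$.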

Clustering the experts based on the similarities of their endo-confidence levels
The similarities of the DMs' endo-confidence levels are used in the clustering process. In this paper, the similarity of the endo-confidence levels of the experts $e_a$ and $e_b$ for the alternative $A_i$ over the attribute $c_j$ can be calculated by Equation (18). By definition, $sec_{ij}^{ab} = sec_{ij}^{ba}$ and $sec_{ij}^{aa} = 1$. Similar to the steps mentioned above, we first obtain the similarity matrix $SEC_{ij} = [sec_{ij}^{ab}]_{t\times t}$ of the endo-confidence levels between each pair of experts $e_a$ and $e_b$ for the alternative $A_i$ over the attribute $c_j$.
Then, we choose the optimal classification threshold $\gamma_{ij,ec}^{*}$ and obtain the clustering results. We define the parameter $v_{ij,ec}$, obtain the threshold $\gamma_{ij,ec}^{0}$, and then calculate the optimal classification threshold as $\gamma_{ij,ec}^{*} = \max(\gamma_{ij,ec}^{0}, v_{ij,ec})$. We obtain the clusters $G_{ij,1}^{ec}, G_{ij,2}^{ec}, \ldots, G_{ij,g}^{ec}, \ldots, G_{ij,n}^{ec}$ and combine the clusters with common experts, which yields the final clustering results, denoted $G_{ij}^{ec} = \{G_{ij,1}^{ec}, G_{ij,2}^{ec}, \ldots, G_{ij,g}^{ec}, \ldots, G_{ij,n}^{ec}\}$. In this paper, we set $v_{ij,ec} = 0.94$ $(i \in N, j \in M)$.
When experts give similar evaluation information and similar endo-confidence levels, we consider that they have similar characteristics and should be assigned to one cluster. In other words, we take the intersection of the above clustering results $G_{ij}^{L(p)}$ and $G_{ij}^{ec}$ and use it as the new clustering result. Then, we obtain the final clustering results $G_{ij} = \{G_{ij,1}, G_{ij,2}, \ldots, G_{ij,g}, \ldots, G_{ij,r}\}$, where $r$ is the number of clusters $G_{ij,g}$; obviously, $r \le n^{L(p)} \cdot n^{ec}$. To better express the clustering mechanism, we provide the clustering process in Example 4 in the Appendix.
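The intersection of the two partitions can be sketched as follows; the function name and the representation of a partition as a list of index lists are assumptions for illustration. Two experts end up in the same final cluster only if they share a cluster in both input partitions, so the number of final clusters is at most the product of the two cluster counts:

```python
def intersect_partitions(part_a, part_b):
    """Combine two partitions of the same expert set by intersection: two
    experts share a final cluster only if they share a cluster in BOTH the
    evaluation-based and the endo-confidence-based partitions."""
    # label each expert with the index of its cluster in each partition
    label_a = {e: k for k, cl in enumerate(part_a) for e in cl}
    label_b = {e: k for k, cl in enumerate(part_b) for e in cl}
    # experts with the same pair of labels fall into the same final cluster
    merged = {}
    for e in label_a:
        merged.setdefault((label_a[e], label_b[e]), []).append(e)
    return sorted(sorted(cl) for cl in merged.values())
```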

Weight determination with endo-confidence
The weights of the DMs indicate their importance levels in the group, which strongly affect the final decision-making results (Zha et al., 2019). Hence, many weight-determining methods for GDM have been developed. In most circumstances, scholars determine the weights based on the majority principle, which may ignore differences in inner characteristics (Tang & Liao, 2021). Confidence level is an important inner characteristic of a DM. The higher the confidence level of an expert, the more reliable he/she believes his/her assessment to be (Hinsz, 1990), and a greater degree of importance should then be assigned to him/her. Thus, confidence should be considered when determining the weight of a DM. Some researchers calculate experts' weights as the proportion of each DM's self-confidence level in the collective one (Liu, Xu, Ge, et al., 2019; Liu, Xu, Montes, Dong, et al., 2019; Ureña et al., 2015). With this method, the weight changes linearly with the self-confidence level. However, under different confidence levels, the speeds of weight change should clearly be different. Ding et al. (2019) gave a weight determination model in which the weight changes non-linearly with the self-confidence level. However, if the experts' weights are calculated in this way, only the evaluations provided by the experts with high self-confidence levels are considered, while the weights of the other experts are very low and the influence of their opinions on the decision-making results is ignored. As a result, the weight determination mechanism should be further discussed.
In this paper, we assume that when an expert's endo-confidence level is close to the average level, the weight increases slowly as the endo-confidence level rises, whereas when the expert's endo-confidence level is far above or below the average, the weight changes rapidly as the endo-confidence level increases. Motivated by the hyperbolic sine function,^1 which portrays this behaviour well, we use Equation (20) to simulate the relationship between the weight and the endo-confidence level. The weight determination method with endo-confidence is as follows. First, we obtain the weight $w_{ij,G_{ij,g}}^{a}$ of the expert $e_a$ in the cluster $G_{ij,g}$ in two steps:

Step 1. Simulate the weight of the expert $e_a$ in the cluster $G_{ij,g}$ by Equation (20).

Step 2. Normalise the weight within the cluster $G_{ij,g}$ by Equation (22) and obtain $w_{ij,G_{ij,g}}^{a}$, where $\#NE_{G_{ij,g}}$ denotes the number of experts in the cluster $G_{ij,g}$.

Then, we obtain the weight $w_{ij,G_{ij,g}}$ of the cluster $G_{ij,g}$ from the number of experts it contains, by Equation (23), where $r$ is the number of clusters $G_{ij,g}$. Example 5 in the Appendix illustrates the weight-determining method with endo-confidence.
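Since Equation (20) is not reproduced here, the following is only a plausible sketch of a hyperbolic-sine-based weight rule consistent with the assumption above: the raw weight grows with the endo-confidence level, slowly near the cluster average (where the slope of sinh is smallest) and rapidly at the extremes, and is then normalised within the cluster. The function name and the scaling parameter are illustrative:

```python
import math

def endo_confidence_weights(ec_levels, lam=1.0):
    """Normalised expert weights from endo-confidence levels via sinh.

    ec_levels -- endo-confidence levels of the experts in one cluster, in [0, 1]
    lam       -- hypothetical scaling parameter of the sinh curve
    """
    mean = sum(ec_levels) / len(ec_levels)
    # sinh grows slowest at the mean and fastest at the extremes, matching the
    # assumed behaviour; sinh(lam) is added so every raw weight stays positive
    raw = [math.sinh(lam * (c - mean)) + math.sinh(lam) + 1e-9
           for c in ec_levels]
    total = sum(raw)
    return [r / total for r in raw]       # weights sum to 1 within the cluster
```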

Consensus measurement
Consensus measurement is a crucial part of the consensus reaching process. A significant point in defining consensus is to select an appropriate distance or similarity measure to obtain the consensus index among experts (Wang et al., 2020). In this paper, the overall consensus index is calculated as follows. First, we obtain the rearranged PLTSs $L_{ij,G_{ij,g}}^{a}(p)^{*}$. The collective evaluation information $L_{ij,G_{ij,g}}(p)$ of the cluster $G_{ij,g}$ can be calculated by Equation (24), where $\#L_{ij,G_{ij,g}}(p)$ is the number of all different linguistic terms in $L_{ij,G_{ij,g}}(p)$; obviously, $\#L_{ij,G_{ij,g}}(p) = \#L_{ij,G_{ij,g}}^{a}(p)^{*}$, $k' = 1, 2, \ldots, \#L_{ij,G_{ij,g}}(p)$; $w_{ij,G_{ij,g}}^{a}$ is the weight of the expert $e_a$ in the cluster $G_{ij,g}$ obtained in Section 3.4; and the summation runs over the terms at the same position $k'$ in the rearranged PLTSs.

Then, according to the weight-determining method in Section 3.4, the weight of the cluster $G_{ij,g}$ is calculated as $w_{ij,G_{ij,g}}$. We obtain the rearranged PLTSs $L_{ij,G_{ij,g}}(p)^{*} = \{L_{ij,G_{ij,g}}^{(k')}(p)^{*} \mid k' = 1, 2, \ldots, \#L_{ij,G_{ij,g}}(p)^{*}\}$ of the collective evaluation information $L_{ij,G_{ij,g}}(p)$ according to the method in Section 2.2, where $\#L_{ij,G_{ij,g}}(p)^{*}$ is the number of all different linguistic terms in $L_{ij,G_{ij,g}}(p)^{*}$. Similar to Equations (24) and (25), we can calculate the overall evaluation $L_{ij}(p)$ by Equation (26), where $\#L_{ij}(p)$ is the number of all different linguistic terms in $L_{ij}(p)$, and $\#L_{ij}(p) = \#L_{ij,G_{ij,g}}(p)^{*}$, $k' = 1, 2, \ldots, \#L_{ij}(p)$.

Subsequently, we calculate the similarity $\rho(L_{ij,G_{ij,g}}(p), L_{ij}(p))$ by Equation (5). The overall consensus index $OCI_{ij}$ can then be calculated by Equation (28). Obviously, the larger the value of $OCI_{ij}$, the higher the similarity of the evaluation information for the alternative $A_i$ over the attribute $c_j$. If $OCI_{ij} \ge \varepsilon_j$ for all $i, j$, then the consensus level is acceptable. The parameter $\varepsilon_j$, which is given by the DMs, is called the consensus threshold. When the consensus level is reached, we can select the best alternative by ranking the alternatives according to the scores of the overall assessments $L_{ij}(p)$; otherwise, some non-consensus clusters should adjust their assessments.
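Under two simplifying assumptions, the consensus measurement above can be sketched in a few lines: each rearranged PLTS is represented as a probability vector over the same ordered linguistic terms, and the similarity of Equation (5) is replaced by an illustrative stand-in (one minus half the total variation distance). The overall evaluation is the weighted combination of the cluster PLTSs, and the overall consensus index is taken here as the weighted average similarity between each cluster and the overall evaluation:

```python
import numpy as np

def similarity(p, q):
    """Illustrative stand-in for the paper's Equation (5): one minus half the
    total variation distance between two probability vectors."""
    return 1.0 - 0.5 * np.abs(np.asarray(p, float) - np.asarray(q, float)).sum()

def overall_consensus_index(cluster_pltss, cluster_weights):
    """Weighted average similarity between each cluster's collective PLTS
    and the overall (weighted) evaluation."""
    overall = sum(w * np.asarray(p, float)
                  for w, p in zip(cluster_weights, cluster_pltss))
    return sum(w * similarity(p, overall)
               for w, p in zip(cluster_weights, cluster_pltss))
```

Identical cluster opinions give an index of 1, and the index drops as the clusters diverge.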
3.6. Feedback adjustment mechanism based on endo-confidence level

To achieve the predefined consensus level, a feedback mechanism should be further discussed. The feedback mechanism generates suggestions that help DMs adjust their evaluation information and finally reach the consensus level. Identification and direction rules are the general procedures of a feedback mechanism: they identify the DMs who need to revise their evaluations and provide suggestions for adjusting those evaluations to facilitate group consensus. First, we identify which clusters need to adjust their evaluations. We rank the clusters $G_{ij,g}$ in ascending order of their similarities $\rho(L_{ij,G_{ij,g}}(p), L_{ij}(p))$ (denoted $\rho_{ij,G_{ij,g}}$) as $\rho_{ij,G'_{ij,1}} \le \rho_{ij,G'_{ij,2}} \le \cdots \le \rho_{ij,G'_{ij,r}}$. We then choose the clusters $G'_{ij,1}, G'_{ij,2}, \ldots$ in turn to adjust their evaluations until the consensus is reached. Notice that, in order to avoid excessive loss of original information, we stipulate that each cluster $G_{ij,g}$ can adjust its evaluation only once.
In daily life, experts' opinions are often influenced by other experts who are more capable than themselves. As mentioned above, experts' abilities can be reflected by their endo-confidence levels. Hence, we assume that in the adjustment process, a cluster refers both to its own original assessment and to the opinions of the clusters that are more confident than it. Moreover, experts with higher endo-confidence levels tend to be more difficult to influence. Thus, in the process of adjusting opinions, the degree of acceptance of others' opinions is related to the endo-confidence level $ec_{ij,G_{ij,g}}$, which is obtained by Equation (29). The following adjustment rules are then given. For a cluster $G_{ij,g}$, we define the clusters whose endo-confidence levels are not less than $ec_{ij,G_{ij,g}}$ as the suggested adjustment set $AS_{ij,G_{ij,g}} = \{G_{ij,g'} \mid ec_{ij,G_{ij,g'}} \ge ec_{ij,G_{ij,g}}, g' \neq g\}$. We calculate the collective assessment of the suggested adjustment set $AS_{ij,G_{ij,g}}$ and use it as an important reference for revising the evaluation information of the cluster $G_{ij,g}$. Firstly, using the weight-determining method in Section 3.4, we calculate the weight $w_{ij,G_{ij,g'}}$ of each cluster $G_{ij,g'}$ in $AS_{ij,G_{ij,g}}$. Then, we obtain the rearranged PLTSs $L_{ij,G_{ij,g'}}(p)^{*} = \{L_{ij,G_{ij,g'}}^{(k')}(p)^{*} \mid k' = 1, 2, \ldots, \#L_{ij,G_{ij,g'}}(p)^{*}\}$ of the collective evaluation information $L_{ij,G_{ij,g'}}(p)$ in the suggested adjustment set $AS_{ij,G_{ij,g}}$ according to the method in Section 2.2, where $\#L_{ij,G_{ij,g'}}(p)^{*}$ is the number of all different linguistic terms in $L_{ij,G_{ij,g'}}(p)^{*}$. Subsequently, we utilise Equation (30) to obtain the reference part $L_{ij}^{AS_{ij,G_{ij,g}}}(p)$ of the revised evaluation information of the cluster $G_{ij,g}$, where $\#L_{ij}^{AS_{ij,G_{ij,g}}}(p)$ is the number of all different linguistic terms in $L_{ij}^{AS_{ij,G_{ij,g}}}(p)$, $\#L_{ij}^{AS_{ij,G_{ij,g}}}(p) = \#L_{ij,G_{ij,g'}}(p)^{*}$, $k' = 1, 2, \ldots, \#L_{ij}^{AS_{ij,G_{ij,g}}}(p)$, and $\#AS_{ij,G_{ij,g}}$ is the number of clusters in the suggested adjustment set $AS_{ij,G_{ij,g}}$. Then, the updated evaluation information $L_{ij,G_{ij,g}}(p)^{(1)}$ of the cluster $G_{ij,g}$ in the first adjustment round can be obtained by Equation (31), with $\#L_{ij}^{AS_{ij,G_{ij,g}}}(p)^{*(0)} = \#L_{ij,G_{ij,g}}(p)^{*(0)} = \#L_{ij,G_{ij,g}}(p)^{(1)}$. After adjusting the evaluation information of the cluster $G_{ij,g}$, we need to modify its endo-confidence level $ec_{ij,G_{ij,g}}$. To accomplish this, we first calculate the average endo-confidence level $ec'_{ij,G_{ij,g}}$ of the clusters $G_{ij,g'}$ in the suggested adjustment set $AS_{ij,G_{ij,g}}$ according to Equation (34). Then, the updated endo-confidence level $ec_{ij,G_{ij,g}}^{(1)}$ of the cluster $G_{ij,g}$ in the first round is obtained by Equation (35). For the other clusters $G_{ij,g''}$ that do not need to adjust their evaluation information and endo-confidence levels, the following rule is used: $L_{ij,G_{ij,g''}}(p)^{(1)} = L_{ij,G_{ij,g''}}(p)$ and $ec_{ij,G_{ij,g''}}^{(1)} = ec_{ij,G_{ij,g''}}$. Similar to the method presented in Section 3.4, we utilise the endo-confidence level $ec_{ij,G_{ij,g}}^{(1)}$ of each cluster $G_{ij,g}$ to obtain the weight $w_{ij,G_{ij,g}}^{(1)}$. Then, we go back to Section 3.5 to carry out the consensus process. If the consensus has been reached, we go to Section 3.7.
If all the clusters have adjusted their information and the consensus has not been reached yet, the consensus fails and the decision-making process should be terminated.
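The direction rule can be sketched as a convex combination, under the assumption (consistent with the description above, though Equations (30)-(31) are not reproduced here) that a cluster keeps a share of its own opinion equal to its endo-confidence level and accepts the remainder from the weighted collective opinion of its suggested adjustment set. PLTSs are again represented as aligned probability vectors, and the function name is illustrative:

```python
import numpy as np

def adjust_cluster(own_pltss, own_ec, suggested_pltss, suggested_weights):
    """One adjustment round for a non-consensus cluster.

    own_pltss         -- the cluster's own collective PLTS (probability vector)
    own_ec            -- its endo-confidence level in [0, 1]
    suggested_pltss   -- PLTSs of the clusters in its suggested adjustment set
    suggested_weights -- their weights (summing to 1)
    """
    # collective opinion of the more confident clusters
    reference = sum(w * np.asarray(p, float)
                    for w, p in zip(suggested_weights, suggested_pltss))
    # the more confident the cluster, the less it accepts from others
    return own_ec * np.asarray(own_pltss, float) + (1.0 - own_ec) * reference
```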

Selection process with the consensus evaluation information
Once the consensus level among experts is reached, the selection process is conducted to generate the final overall ranking of the alternatives with the consensus evaluation information. Let $X^{(\phi)} = (x_{ij}^{(\phi)})_{n\times m}$ be the consensus decision matrix after $\phi$ rounds, where $x_{ij}^{(\phi)}$ is the consensus PLTS for the alternative $A_i$ over the attribute $c_j$. Applying the weighted averaging operator to fuse all the evaluations in the $i$-th row of $X^{(\phi)}$, the overall evaluation $L_i(p)$ of the alternative $A_i$ can be generated by Equation (37). We obtain the rearranged PLTSs $L_{ij}(p)^{*(\phi)}$, where $\#L_i(p)$ is the number of all different linguistic terms in $L_i(p)$, $\#L_i(p) = \#L_{ij}(p)^{*(\phi)}$, and $y_j$ is the weight of the attribute $c_j$, with $\sum_{j=1}^{m} y_j = 1$ and $y_j \ge 0$. We can then compare the overall evaluations $L_i(p)$. According to the method proposed in Section 2.1, we calculate the score $E(L_i(p))$ of each overall evaluation $L_i(p)$. If $E(L_{i_0}(p)) = \max_i E(L_i(p))$, $i = 1, 2, \ldots, n$, then $A_{i_0}$ is recognised as the best choice.

Figure 1. The visual procedure of the consensus decision-making model with endo-confidence. Source: calculated by the methods using the original data.
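The selection step can be sketched as follows, under the simplification that the score $E(L_{ij}(p))$ of each consensus PLTS has already been computed, so that each alternative reduces to a vector of attribute scores; the weighted averaging over attributes and the descending ranking follow the description above, and the function name is illustrative:

```python
import numpy as np

def rank_alternatives(score_matrix, attr_weights):
    """Aggregate attribute scores and rank the alternatives.

    score_matrix[i][j] -- score of alternative A_i on attribute c_j after consensus
    attr_weights       -- attribute weights y_j, non-negative and summing to 1
    Returns the overall scores and the alternative indices, best first.
    """
    overall = np.asarray(score_matrix, float) @ np.asarray(attr_weights, float)
    order = np.argsort(-overall)          # descending order of overall score
    return overall, order.tolist()
```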
In summary, the detailed consensus decision-making process for LSGDM with endo-confidence can be described as follows. First, the similarity-based clustering algorithm is utilised to assign the experts to different clusters according to the similarities of both the evaluation information and the endo-confidence levels. Then, the weights of the experts $e_a$ in each cluster $G_{ij,g}$ and the weights of the clusters $G_{ij,g}$ are obtained by the method in Section 3.4. Subsequently, the consensus measurement is carried out; if the consensus level is unacceptable, the adjustment mechanism helps the clusters adjust their evaluation information and endo-confidence levels. Finally, once the consensus level is reached, the selection process is conducted to choose the best alternative. If the consensus level has still not been reached in the end, the consensus fails. This process is presented in Algorithm I.

Algorithm I (Decision making for LSGDM considering the endo-confidence in PLTSs)
Input: the evaluation information $X^a = [x_{ij}^a]_{n\times m}$, where $x_{ij}^a$ is represented by a PLTS, together with $y_j$, $\alpha$, $\beta$, $\varepsilon_j$, $v_{ij,L(p)}$ and $v_{ij,ec}$.
Output: the score $E(L_i(p))$ of each overall evaluation $L_i(p)$ and the ranking of the alternatives.
Step 1. Set $\phi = 0$. Calculate the endo-confidence of each expert $e_a$ from his/her original evaluation and establish the corresponding endo-confidence matrix by Equation (10) based on Section 3.1. Then, normalise the evaluation information $x_{ij}^a$ according to Section 3.2.
Step 2. Calculate the similarities of the evaluation information by Equation (17) and of the endo-confidence levels by Equation (19), and obtain the final clustering results by carrying out the clustering procedure in Section 3.3.
Step 3. Obtain the weight $w_{ij,G_{ij,g}}^{a}$ of each expert $e_a$ in each cluster $G_{ij,g}$ and the weight $w_{ij,G_{ij,g}}$ of each cluster $G_{ij,g}$ by the method in Section 3.4.
Step 4. Compute the overall consensus index $OCI_{ij}$ according to Section 3.5 and judge whether the consensus has been reached. If the evaluation information from the DMs does not reach consensus, go to Step 5 to revise the evaluation information; if all the clusters have already adjusted their evaluation information and the consensus has still not been reached, the consensus fails and the decision-making process is terminated. Otherwise, go to Step 6.
Step 5. Find the cluster that needs to revise its evaluation information and correct it according to the suggestions in Section 3.6. Subsequently, modify the endo-confidence level of the cluster $G_{ij,g}$ by applying Equation (35) and utilise it to obtain the weight $w_{ij,G_{ij,g}}^{(\phi)}$ according to the method proposed in Section 3.4. Then, let $\phi = \phi + 1$ and repeat Step 4.
Step 6. If the consensus among experts is reached, obtain the consensus evaluation information and the overall evaluation $L_i(p)$ based on Equation (37). Further, calculate the score $E(L_i(p))$ of each overall evaluation $L_i(p)$, rank the alternatives and find the best one.
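The control flow of Algorithm I can be summarised in the following sketch, with the section-specific computations abstracted as caller-supplied functions (all names here are placeholders, not the paper's notation). It returns the selection result once the consensus index reaches the threshold, and None when every cluster has already adjusted once and consensus still fails:

```python
def algorithm_one(evaluations, eps, cluster, weigh, consensus, adjust, select):
    """Control-flow skeleton of Algorithm I; each callable stands in for one
    of the procedures of Sections 3.3-3.7."""
    clusters = cluster(evaluations)           # Step 2: similarity-based clustering
    weights = weigh(clusters)                 # Step 3: endo-confidence weights
    adjusted = set()
    while True:
        oci = consensus(clusters, weights)    # Step 4: overall consensus index
        if oci >= eps:
            return select(clusters, weights)  # Step 6: selection process
        if len(adjusted) == len(clusters):
            return None                       # every cluster adjusted once: fail
        # Step 5: one adjustment round for a not-yet-adjusted cluster,
        # then re-derive the weights from the modified endo-confidence levels
        adjusted.add(adjust(clusters, weights, adjusted))
        weights = weigh(clusters)
```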
In order to intuitively reflect the consensus model based on endo-confidence, a visual procedure of the proposed model is presented in Figure 1.

Decision-making process with the proposed method
In order to demonstrate the feasibility and efficiency of the decision-making model with the endo-confidence factor proposed in this paper, a case study is given. In this case, there are four alternatives, denoted $A = \{A_1, A_2, A_3, A_4\}$, which the experts evaluate over four attributes $C = \{c_1, c_2, c_3, c_4\}$. The corresponding attribute weights are $y_1 = 0.15$, $y_2 = 0.10$, $y_3 = 0.55$, $y_4 = 0.20$. Twenty experts $E = \{e_1, e_2, \ldots, e_{20}\}$ participate in the decision-making process, giving the corresponding probabilistic linguistic information $x_{ij}^a$; the decision matrices $X^a = [x_{ij}^a]_{4\times 4}$ $(a = 1, 2, \ldots, 20)$ are generated randomly. In this case, we set $\alpha = 0.4$, $\beta = 0.3$, $v_{11,L(p)} = 0.9$, $v_{11,ec} = 0.94$ and $\varepsilon_1 = 0.88$, $\varepsilon_2 = 0.87$, $\varepsilon_3 = 0.84$, $\varepsilon_4 = 0.85$.

Step 1. Set $\phi = 0$. We establish the endo-confidence level $ec_{ij}^{a}$ of each expert $e_a$ and the corresponding endo-confidence matrices based on Section 3.1.

Step 2. We obtain the normalised PLTSs by applying the method proposed in Section 3.2 and calculate the similarities of the evaluation information among the experts. Ranking the upper triangular elements of the similarity matrix (excluding the diagonal) gives $1 \ge 0.9745 \ge 0.9647 \ge \cdots \ge 0.2650$. The optimal classification threshold $\gamma_{11,L(p)}^{*} = 0.9011$ is determined and the clustering results $G_{11}^{L(p)}$ are obtained. Subsequently, the similarity matrix of the endo-confidence levels is established according to Equation (18), and the ranked upper triangular elements (excluding the diagonal) are $1 \ge 1 \ge 1 \ge \cdots \ge 0.5062$. Thus, the optimal classification threshold is $\gamma_{11,ec}^{*} = 0.94$ and the clustering results $G_{11}^{ec}$ are obtained. The final clustering results $G_{11}$, based on the two clustering results $G_{11}^{L(p)}$ and $G_{11}^{ec}$, are then determined. The clustering results based on the similarities of the evaluation information, those based on the similarities of the endo-confidence levels, and the final clustering results are shown in Table 1.
Step 3. The weight of each cluster $G_{11,g}$ is shown in Table 1. To save space, the weights of the individual experts $e_a$ in each cluster $G_{11,g}$ are omitted here.
Step 4. We obtain the collective evaluation information of each cluster $G_{11,g}$ and the overall evaluation $L_{11}^{(0)}(p)$. The overall consensus index $OCI_{11}^{(0)} = 0.8348$ is then worked out. Compared with the consensus threshold $\varepsilon_1 = 0.88$, the consensus level is not reached, so some clusters need to adjust their evaluation information.
Step 5. We rank the clusters $G_{11,g}$ according to their similarities and obtain $G'_{11}$. Then, we choose the cluster $G_{11,6}$ to adjust its evaluation information. After calculating the suggested adjustment set $AS_{11,G_{11,6}} = \{G_{11,3}, G_{11,5}, G_{11,7}\}$, we obtain the updated evaluation information and endo-confidence level of the cluster $G_{11,6}$.

Step 6. Go back to Step 4. We calculate the overall evaluation and the overall consensus index, and compare the index with the consensus threshold $\varepsilon_1 = 0.88$ to judge whether the consensus level is reached. The overall consensus index for each iteration is given in Table 2. Due to space limitations, the results of the suggestions and the updated evaluation information are omitted.

The decision-making method without considering the endo-confidence of DMs
In this paper, we propose a decision-making model for LSGDM considering endo-confidence in PLTSs. In order to demonstrate the influence of the endo-confidence factor on the decision-making process, we also give a decision-making model without endo-confidence. The detailed steps are as follows:

Step 1. Set $\phi = 0$. Convert the original evaluation information into complete PLTSs according to the method proposed in Section 3.2.
Step 2. Compute the similarities of the evaluation information by Equation (5) and use them to cluster the experts based on Section 3.3. We obtain the clusters $G_{ij}^{L(p)} = \{G_{ij,1}^{L(p)}, G_{ij,2}^{L(p)}, \ldots, G_{ij,g}^{L(p)}, \ldots, G_{ij,n}^{L(p)}\}$ and take them as the final clustering results, denoted $G_{ij} = \{G_{ij,1}, G_{ij,2}, \ldots, G_{ij,g}, \ldots, G_{ij,r}\}$.

Step 3. Based on the majority principle, calculate the weight $w_{ij,G_{ij,g}}^{a}$ of each expert $e_a$ in each cluster $G_{ij,g}$. Further, according to Equation (23), obtain the weight $w_{ij,G_{ij,g}}$ of each cluster $G_{ij,g}$.

Step 4. Obtain the collective and overall evaluation information by Equations (24)-(27), and compute the overall consensus index $OCI_{ij}$. Judge whether the consensus has been reached by comparing $OCI_{ij}$ with the threshold $\varepsilon_j$. If the consensus level is achieved, go to Step 6; otherwise, go to Step 5. If all the clusters have adjusted their evaluation information and the consensus has still not been reached, the consensus fails and the decision-making process is terminated.
Step 5. Rank the clusters $G_{ij,g}$ according to the similarities $\rho(L_{ij,G_{ij,g}}(p), L_{ij}(p))$ and choose the cluster with the smallest similarity to adjust its evaluation until the consensus is reached. The evaluation information is adjusted with reference to the collective information $L_{ij}(p)$. Notice that each cluster $G_{ij,g}$ can adjust its evaluation only once.
After $\phi$ iterations, we obtain the rearranged PLTS $L_{ij,G_{ij,g}}(p)^{*(\phi)}$. Then, the updated evaluation information of the cluster $G_{ij,g}$ can be calculated by Equation (40). For the other clusters $G_{ij,g''}$ that do not need to adjust their evaluation information, we set $L_{ij,G_{ij,g''}}(p)^{(\phi+1)} = L_{ij,G_{ij,g''}}(p)^{(\phi)}$. We keep the weight of each cluster unchanged, that is, $w_{ij,G_{ij,g}}^{(\phi+1)} = w_{ij,G_{ij,g}}^{(\phi)}$. Then, let $\phi = \phi + 1$ and repeat Step 4.
Step 6. Obtain the consensus evaluation information and the overall evaluation $L_i(p)$ based on Equation (37). Then, calculate the score $E(L_i(p))$ of each overall evaluation $L_i(p)$, rank the alternatives and find the best one.

Comparative analysis
In order to show that the model proposed in Section 3 reflects the influence of endo-confidence on the decision-making process, we change the values of the parameters $\alpha$ and $\beta$ to examine the effect of different importance degrees of the endo-confidence measurement on the decision results. We compare the clustering results, the final weights of the clusters $G_{11,g}$, the overall consensus index $OCI_{11}$ and the final decision results, using the same data and other parameters as in Section 4.1. We first set $\alpha = 0.4$, $\beta = 0.3$ and denote this as Model I with Case I. Then, the parameters become $\alpha = 0.2$ and $\beta = 0.5$, which is named Model I with Case II. The detailed results of the two cases are given in Table 3.
According to Table 3, in Case I the expert $e_{10}$ is clustered with the experts $e_3$, $e_5$, $e_8$ and $e_{20}$, while in Case II the expert $e_{10}$ is assigned to one cluster with the experts $e_{11}$ and $e_{16}$. In the two cases, the similarities of the evaluation information do not change; hence, the difference in the clustering results is caused entirely by the difference in the similarities of the endo-confidence levels. In other words, the similarity-based clustering algorithm takes both the evaluation information and the endo-confidence levels into consideration, which makes the clustering results more reasonable.
Moreover, as shown in Table 3, the updated weights $w_{11,G_{11,g}}^{(\phi)}$ differ owing to the different clustering results and the different modified endo-confidence levels of the clusters $G_{11,g}$. Furthermore, the initial overall consensus indexes of Case I and Case II are $OCI_{11}^{(0)} = 0.8348$ and $OCI_{11}^{(0)} = 0.8327$, respectively. We can conclude that, under different endo-confidence levels, the overall consensus index differs because the collective evaluation information of each cluster and the corresponding weights differ. The overall consensus indexes of Case I and Case II after modification are $OCI_{11}^{(5)} = 0.8850$ and $OCI_{11}^{(6)} = 0.8898$, respectively. Notice that the changes in the overall consensus index, $\Delta OCI_{11} = OCI_{11}^{(\phi)} - OCI_{11}^{(0)}$, are significantly different under Case I and Case II, namely $\Delta_{I} OCI_{11} = 0.0502$ and $\Delta_{II} OCI_{11} = 0.0571$. Thus, the feedback mechanism that considers both the evaluation information and the endo-confidence levels makes sense. The final ranking in both Case I and Case II is $A_1 \succ A_4 \succ A_2 \succ A_3$. Although the two cases lead to the same ranking, their scores of the overall evaluations are different. We can therefore conclude that the decision result is sensitive to the parameters $\alpha$ and $\beta$.
Furthermore, we compare Model I (with Case I and Case II) and Model II proposed in Section 4.2. In the clustering process, Model II does not consider the similarities of the endo-confidence levels. Hence, experts with different characteristics are divided into one cluster (see the experts $e_3$, $e_5$, $e_8$, $e_9$, $e_{10}$, $e_{11}$, $e_{16}$, $e_{18}$, $e_{20}$), which may lead to biased decision results. The weight $w_{11,G_{11,g}}^{(3)}$ in Model II is always determined by the number of experts in the cluster $G_{11,g}$. On the one hand, this weight determination ignores the psychological factor; on the other hand, the weights cannot be flexibly adjusted during the interaction process. After three iterations, the overall consensus index is $OCI_{11}^{(3)} = 0.8839$ and the consensus level is reached. Although Model II reaches consensus in fewer rounds, it causes more information loss in each round of adjustment. Notice that the final ranking of Model II is $A_4 \succ A_1 \succ A_2 \succ A_3$, which differs from the result of Model I. It is obvious that endo-confidence has an impact on the decision-making results. Thus, it is necessary to consider the experts' endo-confidence levels when making decisions, and the model proposed in this paper has practical value and significance.

Conclusions
In this paper, we propose a decision-making process under PLTSs that considers the endo-confidence levels of the DMs. A method to determine the endo-confidence level of each DM from his/her evaluation information is given, along with a novel way to normalise PLTSs that retains more of the original information. The similarity-based clustering algorithm is utilised to assign the experts to different clusters according to the similarities of both the evaluation information and the endo-confidence levels. Then, the relationship between endo-confidence and weight is discussed, and, motivated by the hyperbolic sine function, a weight determination method is proposed. Subsequently, we present the consensus measurement for calculating the overall consensus index and the feedback adjustment mechanism, which helps clusters adjust their evaluation information to reach the consensus level. Finally, we give the selection process for choosing the best alternative with the consensus evaluation information.
Although the proposed consensus model with endo-confidence has its advantages, a single form of evaluation information sometimes cannot properly describe all the attributes of the alternatives. Hence, consensus based on endo-confidence with heterogeneous evaluation information should be discussed in the future. Non-cooperative behaviour is also a hot topic in consensus problems, and self-confidence has an influence on the degree of non-cooperative behaviour; as a result, considering the effect of endo-confidence on non-cooperative behaviour in the consensus model is another topic for our future study. Moreover, it is sometimes not easy to directly obtain sufficient eloquent data in the form of PLTSs. Hence, in the future, we may study methods for extracting PLTSs from natural language by using natural language processing technology, and give further applications of the decision method presented in this paper.

Note
1. It is worth mentioning that the hyperbolic sine function is not the only function that satisfies these conditions.

Disclosure statement
No potential conflict of interest was reported by the authors.