Similarity Measure of Hesitant Fuzzy Sets Based on Implication Function and Clustering Analysis

Hesitant fuzzy set (HFS) permits several possible values as the membership degree of an element to a set, so as to express the decision makers' hesitance. Since its appearance, the HFS has been extensively applied in multi-attribute decision making, group decision making and evaluation processes. The similarity measure of hesitant fuzzy sets (HFSs) is an important index in intelligent systems, and the implication function can describe many subtle differences, which makes it well suited to dealing with hesitant fuzzy information. In this paper, we merge the implication function with the HFS to investigate the similarity measure of HFSs, propose some new formulas to calculate similarity measures of HFSs that differ from the existing similarity measures of HFSs based on the distance measure, and carry out a comparative analysis. Meanwhile, we introduce the union and intersection operations of HFSs, the hesitant fuzzy similar relation and the hesitant fuzzy equivalent relation, and develop a hesitant fuzzy clustering algorithm. Finally, three numerical examples are used to illustrate the effectiveness and validity of the proposed method.


I. INTRODUCTION
Since Zadeh [37] introduced the fuzzy set in 1965, fuzzy set theory has achieved great success in many fields such as decision making, approximate reasoning and fuzzy control. In practical applications, however, it is sometimes difficult to establish the membership function of a fuzzy set for various reasons; therefore, new approaches and theories were proposed as extensions of fuzzy set theory, such as the intuitionistic fuzzy set [1], the type-2 fuzzy set [6], the interval-valued fuzzy set [38] and the fuzzy multiset [34]. Recently, Torra [23] and Torra and Narukawa [24] introduced the concept of the hesitant fuzzy set (HFS), which permits the membership degree to be a set of possible values. Because the HFS can reflect human hesitancy more objectively than the other extensions of the fuzzy set, it has become a useful tool for dealing with uncertainty and has attracted the attention of many researchers in a short period of time, especially in decision making and evaluation processes.
(The associate editor coordinating the review of this manuscript and approving it for publication was Khalid Aamir.)
Chen et al. [5] investigated the correlation coefficient of hesitant fuzzy sets (HFSs) and applied it in clustering analysis, Liao et al. [14] studied a novel correlation coefficient of HFSs and applied it in decision making, Tyagi [25] presented the correlation coefficient of dual hesitant fuzzy sets and its application, Wei [28] investigated hesitant fuzzy prioritized operators, Yu et al. [36] proposed the generalized hesitant fuzzy Bonferroni mean operator, Rodríguez et al. [20] investigated hesitant fuzzy linguistic term sets for decision making, and Xu [31] developed hesitant fuzzy set theory and its application in decision making. Wang et al. [26] proposed dual hesitant fuzzy power aggregation operators based on Archimedean t-conorms and t-norms and applied them in multiple attribute group decision making, while Qin et al. [19] and Tan et al. [22] investigated the Frank aggregation operators and the hesitant fuzzy Hamacher aggregation operators of HFSs, respectively, and applied them in multiple criteria decision making. For more details, please refer to Xu [31] and Rodríguez et al. [21].
The distance and similarity measures are two important indexes in fuzzy set theory and have been extensively applied in fields such as decision making, pattern recognition, machine learning, approximate reasoning and market prediction. For example, Wang [27] first introduced the concept of the similarity measure of fuzzy sets, Zwick et al. [48] investigated geometric distances and Hausdorff metrics and gave a comparative analysis of similarity measures of fuzzy sets, and Zeng and Li [39] investigated the axiomatic definitions of the inclusion measure, the similarity measure and the fuzziness of fuzzy sets, together with their relationships. In the past decade, the similarity measure has been extended to hesitant fuzzy sets from different points of view. Xu and Xia [33] investigated the distance and similarity measures of HFSs, Peng et al. [18] proposed the generalized hesitant fuzzy synergetic weighted distance measure and applied it in multiple criteria decision making, Zhang and Xu [45] investigated some novel distance and similarity measures of HFSs and applied them in clustering analysis, Farhadinia [7], [8] studied the distance, similarity and information measures of HFSs and extended them to interval-valued hesitant fuzzy sets and higher order hesitant fuzzy sets, Li et al. [12], [13] introduced the concept of the hesitancy degree and presented some new formulas to calculate the similarity measures of HFSs, Zeng et al. [41] proposed similarity measures of HFSs based on the hesitancy degree of the hesitant fuzzy element and applied them in pattern recognition, and Liao et al. [15] investigated the cosine distance and similarity measures of hesitant fuzzy linguistic term sets and applied them in qualitative decision making. It needs to be pointed out that, until now, most of the existing similarity measures of HFSs have been proposed based on the distance.
VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Considering the complexity and diversity of practical problems, together with the simplicity of the calculation methods of the existing similarity measures, decision makers may run into trouble in practical applications; hence it is necessary to propose more similarity measures and calculation methods so that they can be applied in various scientific fields such as pattern recognition, approximate reasoning and clustering analysis. It is well known that the implication function plays a fundamental role in approximate reasoning, fuzzy control, fuzzy relational equations, the fuzzy DI-subsethood (inclusion) measure and image processing. Thus the implication function has been extensively studied by many researchers in both theoretical research and practical applications. For example, Bustince [4] investigated the indicator of inclusion grade for interval-valued fuzzy sets based on the implication function, Baets and Kerre [2] investigated fuzzy inclusion and its inverse problem, Mas et al. [16] studied the law of importation for two types of implication functions, Pei [17] investigated the unified full implication inference algorithm of fuzzy reasoning, Jin et al. [11] investigated the certainty rule base and its inference method, Zhou et al. [47] characterized the intuitionistic fuzzy rough set based on intuitionistic fuzzy implication functions, Beliakov et al. [3] investigated properties relating to consensus measures and proposed two general models built component-wise from aggregation functions and implication functions, Zhai et al. [42] investigated the semantic and syntactic characteristics of fuzzy decision implication functions, Klir and Yuan [10] studied the important properties of implication functions and applied them to geographical analysis under uncertainty, and Jayaram and Mesiar [9] simplified the expression of the implication function and obtained some special implication functions. Zeng et al. [40] studied the similarity measure for vague sets based on the implication function, and Wen et al. [29] investigated the hesitant fuzzy Lukasiewicz implication operator and realized a direct clustering algorithm. Motivated by Zeng et al. [40] and Wen et al. [29], and considering that the implication function can describe many subtle differences, in this paper we investigate the implication function for hesitant fuzzy elements and put forward some new formulas to calculate the similarity measures of HFSs based on the implication function. Furthermore, we apply the novel similarity measures of HFSs to develop a hesitant fuzzy clustering algorithm, and use three numerical examples to illustrate the effectiveness and validity of the proposed method.
The rest of our work is organized as follows. In Section 2, we review some basic notions of HFSs and some basic operations of hesitant fuzzy elements. In Section 3, we review the implication function, propose some formulas to calculate the similarity measures of HFSs based on the implication function, and carry out a comparative analysis with some existing similarity measures of HFSs. In Section 4, we introduce the hesitant fuzzy similar relation and the hesitant fuzzy equivalent relation, develop the hesitant fuzzy clustering algorithm, and use three numerical examples to illustrate the effectiveness and validity of the proposed method. The conclusion is given in the last section.

II. PRELIMINARIES
Throughout this paper, we use X = {x_1, x_2, ..., x_n} to denote the discourse set; HFS and HFE stand for hesitant fuzzy set and hesitant fuzzy element, respectively; H(X) stands for the set of all hesitant fuzzy sets on X; H(x) stands for the set of all hesitant fuzzy elements for x; and h and h(x) denote a hesitant fuzzy set and a hesitant fuzzy element on x, respectively.

Definition 1 ([23]): Given a fixed set X, a hesitant fuzzy set (HFS) on X is defined in terms of a function that, when applied to X, returns a subset of [0, 1].
For convenience, the HFS is often expressed by the mathematical symbol of Xia and Xu [30], namely E = {⟨x, h_E(x)⟩ : x ∈ X}, where h_E(x) is a set of values in [0, 1] denoting the possible membership degrees of the element x ∈ X to the set E.
Xia and Xu [30] also gave other forms of (3) and (4). To establish an order between HFEs, Xia and Xu [30] introduced the score function of an HFE and proposed a comparison law.
Definition 3 ([30]): For a hesitant fuzzy element h(x), the score function is s(h(x)) = (1/l(h(x))) Σ_{γ∈h(x)} γ, where l(h(x)) is the number of values in h(x); for two HFEs h_1(x) and h_2(x), if s(h_1(x)) > s(h_2(x)), then h_1(x) > h_2(x), and if s(h_1(x)) = s(h_2(x)), then h_1(x) = h_2(x).

It is noted that the comparison law is proposed under two assumptions: 1) the values of every HFE are arranged in ascending order; 2) the HFEs have the same length when they are compared. If two HFEs have different lengths, the shorter one is extended by adding its minimum value, its maximum value, or any value in it until both HFEs have the same length. The selection of this value mainly depends on the decision makers' risk preference: optimists anticipate desirable outcomes and may add the maximum value, while pessimists expect unfavorable outcomes and may add the minimum value. Although the results may differ if the shorter HFE is extended with different values, this is reasonable because the decision makers' risk preferences can directly influence the final decision. In this paper, we denote by l(h(x)) the number of elements in h(x), and extend the shorter HFE by adding its minimum value until it has the same length as the longer one.
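The extension rule above can be sketched in a few lines of Python (the function name and signature are ours, not the paper's):

```python
def extend_hfe(h, length, optimist=False):
    """Extend a hesitant fuzzy element (a list of membership values)
    to the given length by repeating its minimum (pessimistic) or
    maximum (optimistic) value, then return it sorted ascending."""
    h = sorted(h)
    pad = max(h) if optimist else min(h)
    return sorted(h + [pad] * (length - len(h)))

# Pessimistic extension, as adopted in this paper:
print(extend_hfe([0.4, 0.2], 4))        # [0.2, 0.2, 0.2, 0.4]
# Optimistic extension:
print(extend_hfe([0.4, 0.2], 4, True))  # [0.2, 0.4, 0.4, 0.4]
```

Both variants keep the ascending order required by the comparison law; only the padding value reflects the decision maker's risk attitude.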
This yields a contradictory result. Through the above analysis, we find that the comparison of the score functions of h_1(x) and h_2(x) is carried out in two different spaces, namely a 3-dimensional space and a 5-dimensional space, respectively. Therefore, it is meaningless to compare the score functions directly.
So that the calculation results are comparable, we hold that multiple HFEs should be compared as points in the same space, not as points in different spaces. Namely, we should extend the shorter HFEs of H(x) to a common length. For a given element x ∈ X = {x_1, x_2, ..., x_n}, let l_x be the maximum number of elements among the HFEs of H(x), where l(h(x)) is the number of elements in h(x). We extend each shorter HFE of H(x) to the length l_x by adding that HFE's minimum value in the calculation process; thus we have the following property.
By Definition 2, the proof of Property 1 can be completed.
In the following, we give the union and intersection operations of HFSs.
Definition 5: For two HFSs h_1 and h_2 on X, their union and intersection are defined, for each x ∈ X, by h_1(x) ∪ h_2(x) = ∪_{γ_1∈h_1(x), γ_2∈h_2(x)} {γ_1 ∨ γ_2} and h_1(x) ∩ h_2(x) = ∪_{γ_1∈h_1(x), γ_2∈h_2(x)} {γ_1 ∧ γ_2}.

By Definitions 4 and 5, the proof of Property 2 can be completed.
Concerning the similarity measure and distance of HFSs, some scholars have investigated the relationship between the distance and the similarity measure of HFSs from different points of view.
Definition 6 ([41]): For three HFSs h_1, h_2, h_3, S is called a similarity measure of HFSs if it satisfies the following properties:

Definition 7 ([41]): For three HFSs h_1, h_2, h_3, D is called a distance of HFSs if it satisfies the following properties:

Here we list some existing distances D and similarity measures S of HFSs h_1 and h_2 from Xu and Xia [33] and Zeng et al. [41], based on X = {x_1, x_2, ..., x_n}.
is the similarity measure of HFSs h 1 and h 2 .
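As a reference point for the later comparison, the widely used construction S = 1 − D with the normalized hesitant Hamming distance of Xu and Xia [33] can be sketched in Python. The sketch assumes that the HFEs at each x_i have already been extended to a common length and sorted ascending; the function name is ours:

```python
def hamming_similarity(h1, h2, weights=None):
    """Similarity S = 1 - D built from the normalized hesitant Hamming
    distance: average the absolute value differences within each HFE,
    then take a (weighted) average over the elements of X."""
    n = len(h1)
    w = weights or [1.0 / n] * n  # equal weights by default
    d = 0.0
    for i in range(n):
        l = len(h1[i])
        d += w[i] * sum(abs(a - b) for a, b in zip(h1[i], h2[i])) / l
    return 1.0 - d

# Two HFSs over X = {x_1, x_2}, HFEs pre-extended to length 3:
h1 = [[0.2, 0.4, 0.6], [0.5, 0.5, 0.7]]
h2 = [[0.2, 0.5, 0.6], [0.4, 0.6, 0.7]]
print(hamming_similarity(h1, h2))  # close to 0.95
```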

III. SIMILARITY MEASURE BASED ON IMPLICATION FUNCTION
The implication function ''if ..., then ...'' plays an important role in approximate reasoning and fuzzy control. Baets and Kerre [2] applied the implication function to investigate the similarity measure of fuzzy sets, Bustince [4] investigated the inclusion measure of interval-valued fuzzy sets based on the implication function, and Zeng et al. [40] investigated the similarity measure of vague sets based on the implication function. In this section, we first investigate the similarity measure of HFSs based on the implication function, and then list some properties related to the implication function.
(1) Neutrality of truth (NT)

The Lukasiewicz implication function R_Lu(a, b) = (1 − a + b) ∧ 1 satisfies all of the above properties and is often used to investigate the similarity measure of fuzzy sets.
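The Lukasiewicz implication and the properties NT, IP and OP can be checked numerically; this small sketch (function name ours) verifies them on a grid of values:

```python
def r_lu(a, b):
    """Lukasiewicz implication R_Lu(a, b) = min(1 - a + b, 1)."""
    return min(1.0 - a + b, 1.0)

# Neutrality of truth (NT): R(1, b) = b
assert all(abs(r_lu(1.0, b / 10) - b / 10) < 1e-12 for b in range(11))
# Identity principle (IP): R(a, a) = 1
assert all(r_lu(a / 10, a / 10) == 1.0 for a in range(11))
# Ordering property (OP): R(a, b) = 1 iff a <= b
assert r_lu(0.3, 0.7) == 1.0 and r_lu(0.7, 0.3) < 1.0
```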
Hence, we complete the proof of Lemma 1.

Theorem 1: For HFSs h_1, h_2 on X = {x_1, x_2, ..., x_n}, let T be a t-norm in fuzzy set theory, and let the weight of the element x_i ∈ X be ω_i; then S_1ω(h_1, h_2) is a similarity measure of HFSs h_1 and h_2, where ''*'' is the product operation, and S_1ω(h_1, h_2) is also called the weighted similarity measure of HFSs.
Proof: By the operational rules of the t-norm and property (I1) of the function I(·, ·), (S1) and (S2) of the similarity measure of HFSs h_1 and h_2 are obvious. (S3) By property (I3) of the function I(·, ·) and Lemma 1, and by applying the above-mentioned property (I4), we obtain the desired inequality. Hence, we complete the proof of Theorem 1.
Let ω_i, i = 1, 2, ..., n, be the weights, and let I(·, ·) be defined as in Lemma 1; then the resulting measure is a similarity measure of HFSs h_1 and h_2.

Theorem 2: For HFSs h_1, h_2 on X = {x_1, x_2, ..., x_n}, let the weight of the element x_i ∈ X be ω_i, i = 1, 2, ..., n; then S_2ω(h_1, h_2) is a similarity measure of HFSs h_1 and h_2, where ''*'' is the product operation, and S_2ω(h_1, h_2) is also called the weighted similarity measure of HFSs h_1 and h_2. The proof is similar to that of Theorem 1.

Remark 1: S_1ω(h_1, h_2) and S_2ω(h_1, h_2) are the weighted similarity measures of HFSs h_1 and h_2.
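In the spirit of Theorems 1 and 2, an implication-based similarity can be sketched as follows: combine the two directed implication values I(a, b) and I(b, a) with a t-norm (the product, matching the ''*'' in the text), average over the values of each HFE, and take a weighted sum over X. Since the displayed formulas did not survive extraction, this is a plausible sketch under those assumptions rather than the paper's exact formula, and the function names are ours:

```python
def implication_similarity(h1, h2, impl, tnorm=lambda a, b: a * b,
                           weights=None):
    """Sketch of an implication-based similarity of two HFSs: for each
    pair of values, combine I(a, b) and I(b, a) with a t-norm, average
    within each HFE, and take a weighted sum over the elements of X.
    HFEs are assumed pre-extended to a common length and sorted."""
    n = len(h1)
    w = weights or [1.0 / n] * n
    s = 0.0
    for i in range(n):
        pairs = list(zip(h1[i], h2[i]))
        s += w[i] * sum(tnorm(impl(a, b), impl(b, a))
                        for a, b in pairs) / len(pairs)
    return s

r_lu = lambda a, b: min(1.0 - a + b, 1.0)  # Lukasiewicz implication
h = [[0.2, 0.4], [0.6, 0.8]]
print(implication_similarity(h, h, r_lu))  # close to 1.0 for identical HFSs
```

By construction the measure is symmetric in h_1 and h_2, since the t-norm is applied to both directed implications.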

B. COMPARISON ANALYSIS
In the following, we will propose some new formulas to calculate the similarity measure of HFSs based on Theorem 2. Firstly, we give three classical implication functions which satisfy NT, IP, and OP.
(2) Implication function R_0. By applying the above-mentioned similarity measure S_{R_0} to the sample h and the patterns h_1 and h_2, we have S_{R_0}(h, h_1) = 0.8320 and S_{R_0}(h, h_2) = 0.7300; thus the sample h belongs to the pattern h_1. By applying the similarity measure S_{R_e}, we reach the same conclusion.
Remark 2: If we apply the similarity measure S_{R_Lu} in the above example, then S_{R_Lu}(h, h_1) = 0.8980 and S_{R_Lu}(h, h_2) = 0.8980; namely, the similarity measure S_{R_Lu} is invalid in this example, since it cannot distinguish the two patterns.
In the following, we further analyze the structure of the similarity measure S_{R_Lu}(h_1, h_2). Firstly, by Zeng et al. [41] and Xu and Xia [33], if D is a distance of HFSs h_1 and h_2, then S(h_1, h_2) = 1 − D(h_1, h_2) is a similarity measure of HFSs h_1 and h_2. Here we choose the normalized Hamming distance of Xu and Xia [33]; hence the similarity measure S(h_1, h_2) of HFSs h_1 and h_2, shown at the bottom of the next page, is based on the Hamming distance.
On the other hand, the similarity measure of HFSs based on the Lukasiewicz implication function, S_{R_Lu}(h_1, h_2), is shown at the bottom of the next page.
Consequently, we find that the structure of the similarity measure S_{R_Lu}(h_1, h_2) is similar to the structure of the similarity measure based on the Hamming distance. Therefore, we will focus our attention on selecting effective and reasonable implication functions to establish the similarity measures of HFSs.
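This structural connection can be verified numerically: if the two directed Lukasiewicz implications are combined with the minimum (one natural choice; the aggregation used in the paper may differ), the per-value term collapses to 1 − |a − b|, which is exactly the per-value term of the Hamming-distance-based similarity:

```python
import random

def r_lu(a, b):
    """Lukasiewicz implication R_Lu(a, b) = min(1 - a + b, 1)."""
    return min(1.0 - a + b, 1.0)

# min(R_Lu(a, b), R_Lu(b, a)) = 1 - |a - b| for all a, b in [0, 1]:
# whichever direction has a > b contributes 1 - (a - b), the other is 1.
random.seed(0)
for _ in range(1000):
    a, b = random.random(), random.random()
    assert abs(min(r_lu(a, b), r_lu(b, a)) - (1.0 - abs(a - b))) < 1e-9
```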
In the following, we compare our proposed similarity measures of HFSs with some existing similarity measures of HFSs from Zeng et al. [41] and Xu and Xia [33]. The calculation results are listed in Table 1 and Table 2.
From Tables 1 and 2, we find that the similarity measures S_{R_Lu}, S_{R_0} and S_{R_e} are consistent with our intuitive analysis.

IV. CLUSTERING ALGORITHM
Clustering analysis is an important modelling method. Some scholars have investigated this topic and extended it to intuitionistic fuzzy sets and hesitant fuzzy sets. For example, Wang [27] and Yao et al. [35] investigated clustering analysis algorithms for fuzzy sets and applied them in production prediction, assessment processes and decision making. Xu et al. [32] proposed a clustering algorithm for intuitionistic fuzzy sets based on their association coefficients and extended it to interval-valued intuitionistic fuzzy sets. Chen et al. [5] investigated the correlation coefficient of HFSs and applied it in the clustering process. Farhadinia [7] investigated the relationships among the information measures for HFSs and applied the similarity measure of HFSs in clustering analysis. Wen et al. [29] investigated the hesitant fuzzy Lukasiewicz implication operator and realized a direct clustering analysis algorithm. In the following, we use the similarity measure of HFSs based on the implication function to establish the hesitant fuzzy similar relation and the hesitant fuzzy equivalent relation, and apply them in clustering analysis. Firstly, we recall several definitions and theorems on the clustering algorithm for HFSs [7].

Definition 9 ([7]): Let h_i (i = 1, 2, ..., m) be HFSs. R = (r_ij)_{m×m} is called a hesitant fuzzy similar relation (matrix), where r_ij = S(h_i, h_j) denotes the similarity measure of HFSs h_i and h_j and satisfies: (1) 0 ≤ r_ij ≤ 1, i, j = 1, 2, ..., m; (2) r_ii = 1, i = 1, 2, ..., m; (3) r_ij = r_ji, i, j = 1, 2, ..., m.

Definition 10 ([7]): Let R = (r_ij)_{m×m} be a hesitant fuzzy similar relation (matrix), and let R^2 = R ∘ R = (r^(2)_ij)_{m×m} denote its max-min composition with itself; if R^2 = R, then R is called a hesitant fuzzy equivalent relation (matrix).

Remark 3: If the universe X = {x_1, x_2, ..., x_n} is a finite set, then the hesitant fuzzy (similar, equivalent) relation becomes the hesitant fuzzy (similar, equivalent) matrix, respectively.
Theorem 3 ([7]): Let R = (r_ij)_{m×m} be a hesitant fuzzy similar matrix; then for any non-negative integers m_1 and m_2, the composition matrix R^{m_1+m_2} = R^{m_1} ∘ R^{m_2} is also a hesitant fuzzy similar matrix.
Theorem 4: Let R = (r_ij)_{m×m} be a hesitant fuzzy similar matrix. Then after a finite number of composition operations R → R^2 → R^4 → · · · → R^{2^k} → · · · , there must exist a positive integer k such that R^{2^k} = R^{2^{k+1}}, and R^{2^k} is called the hesitant fuzzy equivalent matrix.
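The repeated-squaring construction of Theorem 4 (the max-min transitive closure) can be sketched directly; function names are ours:

```python
def compose(r1, r2):
    """Max-min composition of two square fuzzy matrices."""
    m = len(r1)
    return [[max(min(r1[i][k], r2[k][j]) for k in range(m))
             for j in range(m)] for i in range(m)]

def equivalent_matrix(r):
    """Square a hesitant fuzzy similar matrix (R -> R^2 -> R^4 -> ...)
    until the fixed point R^(2^k) = R^(2^(k+1)) is reached; the fixed
    point is the hesitant fuzzy equivalent matrix."""
    while True:
        r2 = compose(r, r)
        if r2 == r:
            return r
        r = r2

R = [[1.0, 0.8, 0.3],
     [0.8, 1.0, 0.5],
     [0.3, 0.5, 1.0]]
print(equivalent_matrix(R))
# [[1.0, 0.8, 0.5], [0.8, 1.0, 0.5], [0.5, 0.5, 1.0]]
```

Since the entries of R^{2^k} are non-decreasing in k and drawn from the finite set of entries of R, the loop always terminates.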
Remark 4: If the universe X = {x_1, x_2, ..., x_n} is a finite set and R = (r_ij)_{m×m} is the hesitant fuzzy (similar, equivalent) matrix, then the λ-cut matrix R_λ = (r^λ_ij)_{m×m} of the hesitant fuzzy (similar, equivalent) matrix is a Boolean (similar, equivalent) matrix, respectively; namely, it is a Boolean (reflexive and symmetric; reflexive, symmetric and transitive) matrix, respectively. We now present the hesitant fuzzy clustering algorithm.

Clustering algorithm
Step 1: Calculate the similarity measures of the HFSs and construct the hesitant fuzzy similar matrix. Let {h_1, h_2, ..., h_m} be a set of hesitant fuzzy sets on X = {x_1, x_2, ..., x_n}; we calculate the similarity measure of each pair of HFSs and establish the hesitant fuzzy similar matrix R = (r_ij)_{m×m}, where r_ij = S(h_i, h_j).

Step 2: Calculate the hesitant fuzzy equivalent matrix. Compute the composition matrices of the hesitant fuzzy similar matrix, R → R^2 → R^4 → · · · → R^{2^k}, until R^{2^k} = R^{2^{k+1}} holds. By Theorem 4, R^{2^k} is the hesitant fuzzy equivalent matrix.
Step 3: For a cut level λ, a λ-cut matrix R_λ = (r^λ_ij)_{m×m} is established using Definition 11 to classify the hesitant fuzzy sets h_j, j = 1, 2, ..., m. If all elements of the ith row (column) of R_λ are the same as the corresponding elements of the jth row (column) of R_λ, then the hesitant fuzzy sets h_i and h_j are of the same type; by this clustering principle, all m hesitant fuzzy sets h_j (j = 1, 2, ..., m) can be classified.
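The λ-cut classification step can be sketched as follows: build the Boolean λ-cut matrix and group together the indices whose rows coincide (function names are ours):

```python
def lambda_cut(r, lam):
    """Boolean lambda-cut matrix of a hesitant fuzzy equivalent matrix:
    entry 1 where r_ij >= lambda, else 0."""
    return [[1 if v >= lam else 0 for v in row] for row in r]

def classify(r, lam):
    """Group indices whose rows in the lambda-cut matrix coincide;
    identical rows mean the corresponding HFSs fall in one cluster."""
    cut = lambda_cut(r, lam)
    clusters = {}
    for i, row in enumerate(cut):
        clusters.setdefault(tuple(row), []).append(i)
    return list(clusters.values())

Req = [[1.0, 0.8, 0.5],
       [0.8, 1.0, 0.5],
       [0.5, 0.5, 1.0]]
print(classify(Req, 0.7))  # [[0, 1], [2]]
print(classify(Req, 0.9))  # [[0], [1], [2]]
```

Raising λ refines the partition: every cluster at a higher λ is contained in some cluster at a lower λ, which is what produces the step-by-step (hierarchical) results used in the examples below.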
Considering that the cut level λ is determined by the elements of the hesitant fuzzy equivalent matrix, once the λ-cut level is chosen, the λ-cut matrix of the hesitant fuzzy equivalent matrix can be obtained. Therefore, a key research question is how to select an appropriate cut level λ so as to obtain reasonable clustering results.
In the following, we give three numerical examples to illustrate the practicability of our proposed algorithm, where the similarity measures of HFSs are obtained based on implication function.
Example 3 (Adapted from Chen et al. [5]): Software in a computer-integrated manufacturing (CIM) environment plays an increasingly important role in human activities such as industrial production and business administration. The major function of CIM software is production planning, production control and monitoring. In order to select the most appropriate CIM software, we should evaluate the products offered on the market. Suppose that there are seven kinds of CIM software h_i (i = 1, 2, ..., 7) to be evaluated, and four attributes to be considered: x_1: functionality; x_2: usability; x_3: portability; and x_4: maturity. Because the invited experts have different backgrounds, skills, experience and levels of knowledge, they will give different evaluations for the same attribute. To clearly reflect the different opinions of the experts, the evaluation data are represented by HFSs and listed in Table 3. Here, we apply the similarity measures S_{R_Lu}, S_{R_e} and S_{R_0} of HFSs, respectively, establish the hesitant fuzzy similar matrix R = (r_ij)_{m×m} by Steps 1-2, and calculate the hesitant fuzzy equivalent matrices R^4_1, R^4_2 and R^4_3, respectively, as shown at the bottom of the next page.
Step 3: Choose the λ-cut level, calculate the λ-cut matrix R^4_{iλ} of the hesitant fuzzy equivalent matrix R^4_i, i = 1, 2, 3, and classify. The clustering results of h_j (j = 1, 2, ..., 7), including those for S_{R_e}, are listed in Table 4, Table 5 and Table 6, respectively.
Remark 5: Our classification conclusions coincide with those of Chen et al. [5].
Example 4: The decision making on a ship's general scheme is an important part of the ship's general design process. The design process involves many subjects, and the cost of construction and maintenance is huge. Therefore, it is necessary to evaluate the design schemes in the general design stage. In addition, because ship design involves many kinds of knowledge and contains many qualitative and uncertain factors, the selection of a ship design scheme requires many experienced experts to make a group decision.
In the following, we utilize the idea of TOPSIS to carry out clustering analysis, where we use the similarity measure S_{R_e}(h_1, h_2) of HFSs h_1 and h_2.

Step 1: Construct the hesitant fuzzy elements h_ij by combining the evaluation results of plan A_i from the different experts, and obtain the evaluation results of plans A_1, A_2, A_3, A_4 and A_5 based on hesitant fuzzy elements (see Table 11).

Step 2: Construct the positive ideal plan A^+ and the negative ideal plan A^-, setting A_6 = A^+ and A_7 = A^- (see Table 11).
Step 3: Add A_6 and A_7 to the evaluation results of the five plans and carry out the clustering analysis with our proposed algorithm. The clustering results are listed in Table 12-Table 14. From the above results, we find that A_2 is the best plan because A_2 is the first to be grouped with A_6.
Example 5 (Adapted from Zhang et al. [43]): In order to complete an operational mission, a military committee has been set up to provide assessment information on six kinds of operational plans. Two attributes are considered: (1) x_1, the effectiveness of the operational organization; and (2) x_2, the effectiveness of the operational command; the hesitant fuzzy information is listed below. We extend each shorter HFE by adding its minimum value until it has the same length as the longer one, and thus obtain the hesitant fuzzy data in the following. We then use the similarity measure S_{R_e}(h_1, h_2) to establish the hesitant fuzzy similar matrix and apply our clustering algorithm. To illustrate the effectiveness of our algorithm, we compare it with some other algorithms, namely the HFMST clustering algorithm [44], the IFMST clustering algorithm [46] and the FMST clustering algorithm [44]. By adjusting the threshold of each algorithm, we can obtain different numbers of clusters, namely 6, 5, 4, 3, 2 and 1, respectively; the comparison results are listed in Tables 15 and 16.
Remark 6: From Tables 15 and 16, A_2 is the last sample clustered by the HFMST algorithm, while A_1 is the last sample clustered by both the FMST and IFMST algorithms. Our result is identical to those of the IFMST and FMST algorithms and also fits people's intuition. Meanwhile, compared with the IFMST algorithm, our algorithm provides a complete clustering result, whereas the IFMST algorithm provides a null clustering result when the number of classes equals 3, as is easy to see from Table 15. Moreover, our algorithm provides a step-by-step result, splitting one sample at a time into different classes, and thus simulates the hierarchical agglomerative clustering process better than the FMST algorithm. Hence, our algorithm has better comprehensive performance.

V. CONCLUSION
The hesitant fuzzy set is a powerful tool for dealing with imprecise information. Considering that the similarity measure of HFSs is an important index in intelligent systems, and that some existing similarity measures based on the distance measure have flaws in pattern recognition and decision making, it is necessary to provide more similarity measures of HFSs and apply them in real life. In this paper, we merge the implication function with the hesitant fuzzy set to propose some new formulas to calculate the similarity measures of HFSs. It needs to be pointed out that although some distance-based similarity measures may need improvement in real applications, we cannot discard them or simply replace them with others: each similarity measure has its own reason for existence and its own characteristics, having been proposed to satisfy different criteria and requirements. Therefore, in the future we will pay more attention to how to select the most appropriate similarity measure according to the characteristics of the actual problem.
On the other hand, we introduce the hesitant fuzzy similar relation and the hesitant fuzzy equivalent relation, and develop the hesitant fuzzy clustering algorithm based on the transitive closure. Furthermore, we use three numerical examples to illustrate the effectiveness and validity of the proposed method. Our proposed similarity measures are more reasonable and effective than some existing similarity measures and have desirable characteristics, including intuitiveness and logicality; thus, we believe that they can be extensively applied in fields such as pattern recognition, approximate reasoning, decision making systems and classification. Meanwhile, it also needs to be pointed out that we should develop other algorithms to establish the hesitant fuzzy equivalent matrix, including the optimal hesitant fuzzy equivalent matrix based on decomposition construction and on perturbation analysis; these will become important topics in hesitant fuzzy cluster analysis in the future.
RONG MA is currently pursuing the Ph.D. degree with the School of Artificial Intelligence, Beijing Normal University, Beijing, China. His current research interests include image segmentation, cluster analysis, and decision making.
QIAN YIN is currently an Associate Professor with the School of Artificial Intelligence, Beijing Normal University, Beijing, China. Her major was computer science. She has published more than 90 academic articles and many books and textbooks. She has undertaken and participated in 863 national projects and National Natural Science Foundation projects. Her main research areas are image processing, computational intelligence, and deep learning.

He is also the Chief Editor of the Scholars Journal of Economics, Business and Management, and an Associate Editor of the IEEE Transactions on Fuzzy Systems, Information Sciences, and the International Journal of Fuzzy Systems. He is a member of the Advisory Board of Knowledge-Based Systems and Granular Computing, and of the Editorial Boards of more than thirty professional journals. He has contributed more than 550 journal articles to professional journals.