Social Network Search for Global Optimization

In this paper, a novel metaheuristic algorithm called Social Network Search (SNS) is developed for solving optimization problems. The SNS algorithm simulates the attempts of social network users to gain more popularity by modeling the moods in which they express their opinions. These moods, named Imitation, Conversation, Disputation, and Innovation, reflect real-world behaviors of users in social networks. They are used as optimization operators and model how users are affected and motivated to share new views. To evaluate the performance of the SNS algorithm, two comparative studies with different properties were conducted. In the first step, 210 mathematical functions were chosen, including 120 fixed-dimension, 60 N-dimension, and 30 CEC 2014 problems. Seven metaheuristics were selected from the literature, and the statistical results of these methods were calculated and analyzed. Also, to provide a valid judgment about the performance of the new algorithm, four nonparametric statistical tests were used. In the next step, the performance of the proposed algorithm is compared to some state-of-the-art algorithms in dealing with the CEC 2017 problems. According to the results, the SNS method achieves better results than the other metaheuristics in 101 cases (48%) and performs comparably on the remaining problems.


I. INTRODUCTION
Optimization is part of the nature of human endeavors. Expressing issues as optimization problems and then attempting to solve them is a very old task that dates back to the 4th century BC, when Euclid raised the problem of maximizing the area of a parallelogram inscribed in a triangle. Today, optimization is known as a branch of applied mathematics, and, as with other issues, mathematics was the first tool used to solve optimization problems. The establishment of mathematical methods for solving optimization problems is attributed to the development of the calculus of variations. Gradient-based methods are one class of these mathematical methods. They utilize the gradient of the objective function for solving optimization problems, and this requirement is the main drawback of this type of solver [1]. These days, optimization problems have become more complex, and their formulations are difficult to handle with gradient-based methods. Besides, some problems have an implicit objective function whose gradient cannot be calculated easily. Therefore, many classical techniques based on mathematics are inadequate for producing satisfactory results in a reasonable time [2].
The drawbacks of classical methods encouraged researchers to create new methods, and metaheuristic algorithms were thus invented [3]. The intrinsic and natural behavior of organisms in nature is the main source of these metaheuristic methods. Most natural phenomena are performed with a specific heuristic ordering. This heuristic may have evolved over millions of years, or the laws of nature may have formulated it. The heuristic rules in these phenomena are organized in such a way that the processes are performed in their simplest form. In other words, these processes may have a very complex appearance, but they follow simple logical rules. By studying the logic governing the heuristics of these systems, their behavior can be modeled as efficient computational methods, and one can take advantage of their inherent benefits. Based on the practical heuristics of these phenomena, their intelligence can be used for various purposes such as simulation, modeling, and optimization [4].
Metaheuristic methods are optimization tools that try to combine basic heuristic methods with randomization and rule-based theories, which are usually taken from natural phenomena such as evolution and swarm intelligence. Adding randomness brings the performance of the heuristic rules to a higher level [5]. Almost every metaheuristic algorithm follows a general process, as shown in Fig. 1. The algorithm steps significantly affect the performance of algorithms; in other words, the algorithm steps describe the unique operators of each method by which new solutions are created. The operators of each algorithm refer to the optimal process of the particular phenomenon that they imitate.
Fogel, Rechenberg, and Schwefel published their primary studies on Evolutionary Programming (EP) [6] and Evolutionary Strategies (ES) [7] in the late 60s and 70s. ES was designed for numerical optimization and is one of the earliest foundations for studies on Evolutionary Algorithms (EA) in the branch of bio-inspired computation. In 1975, another foundation was laid by Holland with the publication of his book on Genetic Algorithms (GA) [8]. This work of Holland is among the most famous in the field of optimization methods. In parallel with EAs, Swarm Intelligence (SI) algorithms are inspired by the collective intelligence of a population of agents with simple behavioral patterns for communication and cooperation. In the early 90s, the fundamental concepts of Particle Swarm Optimization (PSO) [9] and Ant Colony Optimization (ACO) [10] formed the basic ideas of SI algorithms. Numerous SI methods have been introduced ever since by imitating intelligent patterns found in different phenomena in nature. The category of SI methods contains three branches. The first source of inspiration is the behavioral models of animals, such as the Artificial Bee Colony (ABC) [11] or Firefly Algorithm (FA) [12]. The second branch includes algorithms based on physical laws, such as Charged System Search (CSS) [2]. The last one contains algorithms that mimic various optimal behaviors of humans in different conditions; Teaching-Learning Based Optimization (TLBO) [13] is one of the human-based algorithms.
Evolutionary and swarm-based algorithms are the main branches of metaheuristic methods. However, many algorithms use both evolutionary and swarm operators. Cuckoo Search (CS) [14] is one of these algorithms: the first phase of CS is a swarm operator whose goal is to move towards the best agent, while in the second phase, crossover is integrated with mutation and a new solution is generated through an evolutionary operator.
In recent decades, a huge number of metaheuristic algorithms have been developed, and the study of these methods is very popular among researchers from different fields. Simplicity, flexibility, and robustness are the main reasons for their popularity. Some of the most famous algorithms are presented in Table 1.
Metaheuristic algorithms are approximate, but their results have high accuracy and are very close to the global optimum [37]. These methods perform a global search in the space of the problem with an appropriate speed by employing different operators. Also, they find the optimal solution by comparing a limited number of candidate results based on their rules.
Studies on metaheuristics are classified into two main categories: theoretical and practical works. In practical works, optimization techniques are used to find optimal solutions, while developing, modifying, improving, and hybridizing algorithms are the most common theoretical works. New metaheuristic methods are developed to find the optimal solution of complex problems in less time and with higher accuracy than previous ones. These aims are satisfied by developing more robust algorithms that have a better ability to search the space of problems. This property arises from more powerful operators that relate to a heuristic phenomenon. The operators used in each algorithm express the relationships of the agents of the imitated phenomenon as simple mathematical equations; in other words, these operators simulate a search style in the space of problems. Given this simulation, each algorithm can behave differently when dealing with different problems, so one particular algorithm may not solve all problems. Therefore, it is necessary to create new high-performance optimization algorithms that are able to solve more types of problems with better accuracy and in less time compared to previous methods.
This paper proposes a novel intelligence algorithm called Social Network Search (SNS) that simulates human behavior as users of a social network. Social network users can influence the opinions of other users on the network by sharing their views, opinions, and thoughts. Here, each agent is considered as a user and is influenced by its interactions with other network users. Each user can also share their thoughts in the form of posts on the network and affect other people's opinions. In other words, the SNS simulates special moods in which the views and opinions of users are influenced by their communications and their efforts to increase their level of popularity on the network.
Two steps are considered to evaluate the ability of the novel SNS algorithm in solving optimization problems. In the first step, a set of 210 mathematical problems (120 fixed-dimension, 60 N-dimension, and 30 CEC 2014 special session [38] problems) is used, and the performance of the SNS algorithm is compared with seven classical and novel metaheuristic methods chosen from the literature. The statistical results of the SNS and these metaheuristics provide a suitable dataset to be analyzed by nonparametric statistical methods. In the second step, the SNS is compared to some state-of-the-art algorithms in dealing with the complicated problems presented in the CEC 2017 [39] special session. The attained results show that the SNS algorithm is better than the other methods in most cases.
The remainder of this paper is organized as follows: Section II describes the inspiration and mathematical model of the proposed SNS algorithm. Section III studies the performance of the SNS algorithm in dealing with different types of optimization problems. Section IV analyzes the behavior of the SNS algorithm from different perspectives. Finally, conclusions are given in Section V.

II. SOCIAL NETWORK SEARCH (SNS)
Human beings are a social species that constantly seeks to communicate with others. Social networks are virtual tools that were created for this purpose with the advent of technology. The proposed SNS algorithm simulates the interactive behavior among users in social networks to achieve more popularity. In this section, we first discuss how to model an optimization algorithm from the behavior of users in social networks, and then the implementation of the algorithm is presented.

A. BASIC PRINCIPLES OF BEHAVIOR IN SOCIAL NETWORKS
Social networks are platforms where users can interact virtually with other users. In social networks, users can follow their favorite persons and get to know their thoughts and views, so interacting with other users of the network may affect their opinions. The process of interacting with and influencing other users of the network follows an optimal process, since users are always trying to increase their level of popularity on the network. This optimization process is the basis of the proposed algorithm. Fig. 2 shows a general model of a social network.
In recent years, various social networks such as ResearchGate, Facebook, Twitter, Instagram, and so on have been developed. Each of these networks is designed for a specific purpose, but it can be said that the behavior of users on these networks is more or less the same. During the interactions between users, they become familiar with other views from network users. If the newly encountered views are better than the current one, users accept the new views and improve their own. Then, by sharing the improved views on the network, they strive to improve their position in the network.

B. DECISION MOODS AND MATHEMATICAL MODEL
The user's viewpoint can be affected by other views in different moods, namely: Imitation, Conversation, Disputation, and Innovation. Imitation means that the views of other users are attractive, and users usually try to imitate each other in expressing their opinions. Conversation says that users can communicate with each other and use the other views. In Disputation, users can dispute with a group of users and talk about their opinions. Finally, Innovation indicates that sometimes a topic that users share on the networks comes from their new experiences and thoughts. Almost all metaheuristic algorithms apply a set of operations to generate new solutions. In the SNS algorithm, the new solution is achieved by one of the four moods, which resemble real-world social behaviors. The description and mathematical modeling of these operators (moods) are as follows:

1) MOOD 1: IMITATION
The main property of social networks is that users can follow each other and if a person shares a new post, followers of that person may be informed about the shared topic. This feature (propagation of views) has turned networks into powerful tools for promoting information and ideas.
Users in social networks follow their relatives and some famous persons whom they like, and they are then notified of the opinions of the people they follow about new events. If a new event has challenging concepts, they strive to post a topic about it by imitating the view of another person. The mathematical formulation of this mood can be expressed as:

X_i,new = X_j + rand(-1, 1) × R
R = rand(0, 1) × r
r = X_j - X_i    (1)

where X_j represents the vector of the jth user's view (position), which is selected randomly with i ≠ j, and X_i is the vector of the ith user's view. Also, rand(-1, 1) and rand(0, 1) are two random vectors in the intervals [-1, 1] and [0, 1], respectively. In this mood, the new solution is generated according to the imitation space (Fig. 3), and this space is created using the shock and popularity radii. The shock radius (R) reflects the amount of influence of the jth user, and its magnitude is considered as a multiple of r. The value of r is the popularity radius of the jth user, which is calculated based on the difference in the opinions of the ith and jth users. Also, the final effect of the shock radius is reflected by multiplying its value by a random vector in the interval [-1, 1]: if the components of the random vector are positive, the shared view agrees with the jth opinion, and vice versa. The process of the Imitation mood is illustrated in Fig. 3. As can be seen, by using (1), the imitation space is formed, and then a point in this space is found as a new view to share on the network.
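To make the Imitation operator concrete, the following minimal Python/NumPy sketch implements (1) as described above; the function name and arguments (x_i, x_j) are illustrative and not part of the original implementation, and the random vectors follow the definitions given in the text.

import numpy as np

def imitation(x_i, x_j):
    # popularity radius r: difference between the jth and ith views
    r = x_j - x_i
    # shock radius R: a random multiple of r, as in Eq. (1)
    R = np.random.rand(x_i.size) * r
    # rand(-1, 1) decides agreement (+) or disagreement (-) with the jth view
    return x_j + np.random.uniform(-1.0, 1.0, x_i.size) * R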

2) MOOD 2: CONVERSATION
In social networks, users can interact with each other virtually and converse about different issues. Conversation is a state in which users learn from each other and increase their information about events in the form of private chats. In Conversation, users gain a sight of events through other views, and finally, due to the differences in opinions, they can draw a new vision of the issue according to (2):

X_i,new = X_k + R
R = rand(0, 1) × D
D = sign(f_i - f_j) × (X_j - X_i)    (2)

where X_k is the vector of the issue that is randomly chosen to speak about, R is the effect of the chat, which is based on the differences of opinion and represents the change in beliefs about the issue (X_k), and D is the difference between the views of the users. rand(0, 1) is a random vector in the interval [0, 1], X_j is the vector of a randomly selected user's view for the chat, and X_i is the vector of the view of the ith user; it should be noted that i ≠ j ≠ k, where j and k are selected randomly. In addition, sign is the sign function, and sign(f_i - f_j) determines the moving direction of X_k by comparing f_i and f_j. The process of this decision mood is shown in Fig. 4. As can be noted, the user's view about the issue changes as a result of conversations with the jth user. The changed opinion is considered as a new view to share with others, and changing the user's view about the event is considered as the relocation of the event.
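As a rough illustration of (2), the sketch below assumes f_i and f_j are the objective values of the ith and jth users; names are illustrative only.

import numpy as np

def conversation(x_i, x_j, x_k, f_i, f_j):
    # D: difference of opinions, directed towards the better of the two users
    D = np.sign(f_i - f_j) * (x_j - x_i)
    # R: effect of the chat, a random fraction of D, as in Eq. (2)
    R = np.random.rand(x_k.size) * D
    # the randomly chosen issue x_k is relocated by the chat
    return x_k + R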

3) MOOD 3: DISPUTATION
The Disputation mood imagines a state in which users explain their views about events to some other people and defend their opinions. In social networks, this is done in different manners, for instance in the comments sections and in groups. In the comments section, users see different views from other persons and may be influenced by the expressed reasons. Besides, users can have friendly relationships with others, so they create a virtual group to discuss their opinions on a specific subject.
In modeling this mood, a random number of users is considered as commenters or members of a group, and the new affected view in Disputation is obtained as:

X_i,new = X_i + rand(0, 1) × (M - AF × X_i)
M = (Σ_t X_t) / N_r
AF = 1 + round(rand)    (3)

where X_i is the view vector of the ith user, rand(0, 1) is a random vector in the interval [0, 1], and M is the mean of the views of the commenters or friends in the group. AF is the Admission Factor, which indicates the insistence of users on their own opinion in discussions with other persons and is a random integer that can be either 1 or 2; round(.) is a function that rounds its input to the nearest integer, and rand is a random number in the interval [0, 1]. N_r is the number of commenters or the group size and is a random number between 1 and N_user, where N_user is the number of users of the network (network size). This process is illustrated in Fig. 5: at first, N_r users are selected randomly, then M is determined, and finally, by using (3), a new view is generated.
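A hedged sketch of (3) is given below; here population is assumed to be the N_user-by-D matrix of current views, and the group is drawn uniformly at random, following the description above.

import numpy as np

def disputation(x_i, population):
    n_user = population.shape[0]
    # random group size N_r between 1 and N_user
    n_r = np.random.randint(1, n_user + 1)
    group = population[np.random.choice(n_user, n_r, replace=False)]
    M = group.mean(axis=0)                  # mean view of the commenters/group
    AF = 1 + round(np.random.rand())        # admission factor: 1 or 2
    return x_i + np.random.rand(x_i.size) * (M - AF * x_i)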

4) MOOD 4: INNOVATION
Sometimes what users share is the product of their own thoughts and experiences. In other words, when a person thinks about a specific issue, they may look at that issue in a novel way, understand the nature of that problem more accurately, or find a completely different view about it. A particular subject may have distinct features, and each of them affects the understanding of the problem. As a result, by changing the idea about one of them, the general concept of the subject changes, and a novel view is achieved. This concept is employed to formulate the new opinion through the Innovation mood as follows:

n_new^d = lb_d + rand_1 × (ub_d - lb_d)
x_i,new^d = t × x_j^d + (1 - t) × n_new^d,   t = rand_2    (4)

where d is the index of a variable that is selected randomly in the interval [1, D], and D is the number of the problem's variables. rand_1 and rand_2 are two random numbers in the interval [0, 1]. Also, ub_d and lb_d are the maximum and minimum values of the dth variable. n_new^d represents the new idea about the dth dimension of the problem. x_j^d is the current idea about the dth variable presented by another user (the jth user, which is selected randomly with i ≠ j), and the ith user wants to change it because of the new idea (n_new^d). Finally, the new view about the dth dimension is created as x_i,new^d, which is an interpolation between the current idea (x_j^d) and the new idea (n_new^d). The change in one dimension (x_i,new^d) causes a general change in the main concept and can be considered as a new view to share. This process can be modeled as follows:

X_i,new = [x_i^1, x_i^2, ..., x_i,new^d, ..., x_i^D]    (5)

As seen from (5), x_i,new^d is a new insight into the issue under consideration from the dth viewpoint and replaces the current value (x_i^d). The outline of the construction of the new view is shown in Fig. 6.
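The following sketch illustrates (4) and (5); lb and ub are the bound vectors, and x_j is the view of a randomly selected user with j ≠ i (names are illustrative).

import numpy as np

def innovation(x_i, x_j, lb, ub):
    x_new = x_i.copy()
    d = np.random.randint(x_i.size)                     # randomly selected dimension
    n_new = lb[d] + np.random.rand() * (ub[d] - lb[d])  # completely new idea, Eq. (4)
    t = np.random.rand()
    x_new[d] = t * x_j[d] + (1.0 - t) * n_new           # interpolation of ideas
    return x_new                                        # Eq. (5): only dimension d changes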

C. CHOOSING A DECISION MOOD TO CREATE THE NEW VIEW
In many algorithms that define several models to create new solutions, each agent of the algorithm must experience all of these models repeatedly. In contrast, in the SNS algorithm, only one of the four pre-defined models, the so-called decision moods, is selected and executed randomly for each user in each iteration of the algorithm. In other words, all of the moods described here are real-world behaviors of users in social networks, and it seems that the correct assumption is that only one of these moods occurs at a specific time (iteration) for each user. As a result, the chance of occurrence of these moods is considered to be equal by using a random procedure with a uniform distribution, as shown in Fig. 7.
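A one-line sketch of this uniform selection is shown below; the mood labels are illustrative strings that a full implementation would map to the operators above.

import numpy as np

MOODS = ("imitation", "conversation", "disputation", "innovation")

def select_mood():
    # each mood is chosen with equal probability 0.25
    return MOODS[np.random.randint(4)]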

D. NETWORK RULES (CLAMPING THE ANSWERS)
Each social network defines a set of rules for its users, and all users must respect these rules in their shared views. Network rules in optimization algorithms correspond to the limitations (LB and UB) of the problem's variables. Limiting the views of users is performed according to:

x_i = lb_i,  if x_i < lb_i
x_i = ub_i,  if x_i > ub_i
x_i = x_i,   otherwise    (6)

In (6), x_i is the ith variable of X_i,new (new view), and ub_i and lb_i are the ith components of UB and LB of the problem.
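In practice, (6) is a component-wise clamp, which can be sketched as follows (a minimal illustration, not the authors' code):

import numpy as np

def apply_network_rules(x_new, lb, ub):
    # clamp every out-of-range variable of the new view to its bound
    return np.clip(x_new, lb, ub)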

E. PUBLISHING RULE (REPLACEMENT STRATEGY)
Due to the different moods of decision-making and their processes, the opinion of each user changes, and a new view is obtained. However, whether or not a new view can be shared depends on its worth; in other words, if the new view is better than the current one, it is accepted and shared, otherwise it is rejected. Therefore, to determine the value of the new view, the objective function of X_i,new must be calculated and then compared to the value of the current view (X_i) according to (7), for a minimization problem:

X_i = X_i,new,  if f(X_i,new) < f(X_i)
X_i = X_i,      otherwise    (7)

F. THE TERMINATING CRITERION
In metaheuristic algorithms, the search process is finished according to one or a combination of some terminating criteria, and the best result is reported. Some of these criteria are explained here:
• The mean of the variation of the objective function across the entire network is less than a specified tolerance.
• The best objective function value remains unchanged over a specified number of function evaluations (NFEs).
• The best result reaches a specified value. This value can be the global solution determined in the literature or a threshold value defined based on the required precision.
• After a maximum number of NFEs. This maximum value can be determined based on the required computational effort of problems.
• The value of the objective function does not change during a specified period of time, i.e., the time in which the objective function does not change across the entire network.
• The optimization process time has reached the predetermined value. The process time is calculated using the CPU time and its threshold is defined based on the specifications of computer systems and objective function complexity.

G. IMPLEMENTATION OF ALGORITHM
The flowchart of the SNS algorithm is illustrated in Fig. 8, and according to the basic principles of social behavior in networks, the SNS algorithm is implemented in three levels including initialization, increasing popularity, and checking terminating conditions as follows:

1) LEVEL 1: INITIALIZATION
• Create the initial network: To create an initial network, at first, the number of users, the maximum number of iterations, and the limits of the variables are determined. Then the initial view for each user is created as:

X_0 = LB + rand(0, 1) × (UB - LB)    (8)

where X_0 is the initial view vector of each user, and rand(0, 1) is a random vector in the interval [0, 1]. UB and LB are the vectors of the maximum and minimum values of the variables, respectively. Then, the objective function of each user's view is calculated (see the sketch below).
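A minimal sketch of (8) is given below, assuming lb and ub are vectors of length D and n_user is the network size (names are illustrative).

import numpy as np

def create_initial_network(n_user, lb, ub):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # one random initial view per user, uniformly inside the bounds (Eq. (8))
    return lb + np.random.rand(n_user, lb.size) * (ub - lb)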

2) LEVEL 2: INCREASING POPULARITY
For each user in each iteration of the algorithm, repeat the following steps:
• Select and implement a decision mood: Select randomly one of the four moods with a uniform distribution and then follow the procedure of the selected mood.
• Control the limits of the new view: Control the new view by the network rules according to (6).
• Evaluate the new view: Calculate the objective function for the new view.
• Check the publishing rule (replacement strategy): If the new view is better than the current one, publish it according to (7); otherwise, the new view is rejected.

3) LEVEL 3: CHECKING TERMINATING CONDITIONS
• Terminating conditions: Repeat the increasing-popularity level until a terminating criterion is satisfied. A compact sketch of the whole procedure is given below.
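The compact sketch below ties the three levels together; it reuses the illustrative helper functions sketched in Section II (imitation, conversation, disputation, innovation, apply_network_rules, create_initial_network) and, for brevity, uses only the maximum-NFE criterion for termination. It is a reading aid under these assumptions, not the authors' implementation.

import numpy as np

def sns(objective, lb, ub, n_user=50, max_evals=150_000):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # Level 1: initialization
    views = create_initial_network(n_user, lb, ub)
    fitness = np.array([objective(v) for v in views])
    count_eval = n_user
    # Level 2: increasing popularity, repeated until termination (Level 3)
    while count_eval < max_evals:
        for i in range(n_user):
            others = [t for t in range(n_user) if t != i]
            j, k = np.random.choice(others, 2, replace=False)
            mood = select_mood()
            if mood == "imitation":
                x_new = imitation(views[i], views[j])
            elif mood == "conversation":
                x_new = conversation(views[i], views[j], views[k],
                                     fitness[i], fitness[j])
            elif mood == "disputation":
                x_new = disputation(views[i], views)
            else:
                x_new = innovation(views[i], views[j], lb, ub)
            x_new = apply_network_rules(x_new, lb, ub)   # network rules, Eq. (6)
            f_new = objective(x_new)
            count_eval += 1
            if f_new < fitness[i]:                       # publishing rule, Eq. (7)
                views[i], fitness[i] = x_new, f_new
            if count_eval >= max_evals:
                break
    best = int(np.argmin(fitness))
    return views[best], fitness[best]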

III. VALIDATION
This section investigates the performance of the proposed SNS in dealing with different types of optimization problems. Two comparative procedures are considered based on the properties of the utilized problems. In the first one, the algorithm is tested using traditional benchmark functions and compared with some successful methods from the literature, while in the second one, the performance of the SNS algorithm is compared to some state-of-the-art algorithms in dealing with the problems of the CEC 2017 special session.

A. TRADITIONAL BENCHMARK TEST FUNCTIONS
In this subsection, at first, the description of the 210 mathematical benchmark problems is presented. Then, the selected metaheuristic methods and their settings are reviewed. In the next subsection, the evaluation criteria and results are explained, and finally, nonparametric statistical methods are used to evaluate the performance of the new algorithm.
1) MATHEMATICAL BENCHMARK PROBLEMS
The No Free Lunch (NFL) theorem [40] has logically proved that no algorithm can solve all types of problems with different characteristics. To evaluate the capability of the proposed SNS in solving various sets of benchmark functions with different properties, a set of 210 mathematical problems has been used. Based on their dimensions and types, these problems are categorized into three groups: Fixed-dimension, N-dimension, and CEC 2014 special session problems. These 210 benchmark functions include most of the well-known mathematical functions and are used here to show the capability of the SNS in solving more problems compared to other algorithms. Another purpose of these problems is to create a suitable dataset for nonparametric statistical methods to examine the performance of the proposed algorithm more carefully. Among these functions, F1 to F120 are fixed-dimension functions; the first 92 of them have two dimensions, while the other 28 have dimensions of 3 to 10. The second group of benchmark functions consists of 60 test cases in which the dimensions are free; they are called N-dimensional test functions, and in this study their dimensions are set to 30 (thirty-dimensional (30D) functions) as F121 to F180. The third group consists of 30 difficult mathematical functions of the CEC 2014 special session. In CEC 2014, three rotated, thirteen shifted and rotated, six hybrid, and eight composite functions are considered, named F181 to F210. It should be noted that the error values are reported for F181 to F210, and the dimension of these benchmarks is set to 30 as well. The details of the mathematical functions in these groups are presented in Tables 11, 12, and 13 in Appendix A. In these tables, C, NC, D, ND, S, NS, Sc, NSc, U, and M denote Continuous, Non-Continuous, Differentiable, Non-Differentiable, Separable, Non-Separable, Scalable, Non-Scalable, Unimodal, and Multimodal, respectively. In addition, R, D, and Min describe the variable range, the variable dimension, and the global minimum of the functions.

2) METAHEURISTIC ALGORITHMS FOR COMPARISON
To evaluate the overall performance of the SNS algorithm, various optimization methods are utilized as comparative strategies to provide a valid study. The selected metaheuristics for this purpose are the CS, TLBO, GWO, SOS, CSA, WOA, and CGO algorithms. Some of these algorithms are newly introduced, and the most recent and improved versions of these algorithms are utilized here. CS is a method that inherited its operators from GA, DE, and PSO, and in recent years it has been recognized as a convenient optimization tool. TLBO, GWO, WOA, and SOS are newly developed methods that have introduced new operators for solving optimization problems and have shown worthy performance in dealing with various optimization issues. CSA is a method that derived its operator from the PSO algorithm, in which a global search version of PSO is combined with a mutation operator. Finally, CGO is a robust algorithm that introduced a novel approach for solving optimization problems, and its results showed that it is capable of outperforming various metaheuristics in dealing with different optimization problems. According to this description, these algorithms seem to be proper choices for comparing the performance of the SNS algorithm. Also, some of these algorithms have specific parameters that play a vital role in their performance, and they should be tuned carefully. Among the selected methods, CS and CSA have such parameters, and a summary of these parameters is presented in Table 2. It is worth mentioning that these parameters have been selected based on previously published works or by performing sensitivity analyses for selected examples, and our simulation results show that these parameter values can be used with a high level of confidence. The other utilized metaheuristics are parameter-free. One of the features of parameter-free algorithms is that they solve problems independently of the characteristics of the search space; in other words, parameter-free algorithms are closer to the definition of black-box methods. In parametric algorithms, the parameters should be tuned based on the characteristics of the search space, and this task takes them away from this concept. In addition to this disadvantage, the process of parameter tuning requires more effort and computational cost: to estimate the proper value of a parameter, different values should be tested in a specific interval (for example, 10 different options). Therefore, each problem needs to be investigated 10 times, and consequently the computational cost increases in the same proportion, so the total effort for each problem becomes roughly 10 × (the cost of one run).

3) NUMERICAL RESULTS
This section presents the results obtained by the SNS algorithm and the other methods in solving the mathematical test functions. Due to the random nature of metaheuristic algorithms, the results obtained from one run are not sufficient to evaluate the performance of an algorithm. Therefore, each of the algorithms used in this study is run 50 times independently for each problem. Also, the population size for all of the algorithms is set to 50. We believe that the performance of the algorithm should not be affected by this value; however, since some results are reported from the literature, using a different value would not be fair. In other words, all of the algorithms use the same population size for all problems to ensure a fair comparison. In addition, tuning the population size for each algorithm on each problem would be very tedious due to the extra computational cost. Obviously, a number of different runs is necessary because of the stochastic nature of these methods; if the algorithms behave more stably, performing a small number of runs becomes sufficient. Since the number of runs does not affect the performance of the algorithms and is only used to create a dataset of their performance, we use the same number reported in the literature.
The termination criterion is a combination of the third and fourth criteria presented in Section II.F. In fact, in the prepared codes for implementing the selected algorithms, we use a counter of evaluations (CountEval), incremented just after each function evaluation, and a while structure is used to control this counter, as in Algorithm 1. Therefore, the number of evaluations can be controlled in all conditions. The maximum number of function evaluations (MaxEval) is considered as 150,000 for all of the metaheuristics (the maximum number of iterations is determined based on the chosen MaxEval), and a tolerance of 1×10^-12 from the optimal solution is considered as the threshold value. According to this criterion, as soon as the best answer of an algorithm reaches a tolerance less than the predefined value, the algorithm stops, and the difference between the obtained solution and the global solution is considered zero; otherwise, the search process continues until the maximum NFE reaches 150,000. Also, the NFEs reported for each algorithm are counted until the algorithm meets one of the stopping criteria.
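A minimal sketch of this combined stopping test (budget plus tolerance) is shown below; the threshold values follow the settings stated above, and the function name is illustrative since Algorithm 1 itself is only summarized in the text.

def should_stop(best_value, global_optimum, count_eval,
                max_eval=150_000, tol=1e-12):
    # stop when the error falls below the tolerance or the NFE budget is spent
    return abs(best_value - global_optimum) < tol or count_eval >= max_eval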
The statistical results of 50 independent optimization runs for the Fixed-dimension, N-dimension, and CEC 2014 benchmark problems are presented in Tables 14, 15, and 16 in Appendix B, respectively. Also, the last row reported for each function shows the rank of the algorithms. The ranking is based on the value of the Means; besides, if the Means of several algorithms are the same in solving one problem, the ranking is based on the NFEs. The mean of the results represents the accuracy of the algorithms, and the NFEs is a criterion that determines their computational cost. Therefore, both of these criteria are necessary in the ranking process to determine which algorithm is capable of providing robust performance in dealing with optimization problems. Besides, in ties (a situation in which both the Means and NFEs are equal), average ranks are computed. According to these results, the SNS algorithm is competitive compared to the other methods, and in most cases the SNS achieves the first rank. Also, the number of times each algorithm obtains each of the ranks is counted for the fixed-dimension, N-dimension, and CEC 2014 problems and presented in Table 3 (ties are not counted). As can be seen, in dealing with the 120 fixed-dimension problems, the SNS method is placed in the first rank 54 times, and it is never ranked last. In solving the 60 N-dimension benchmark problems, the SNS gains the first rank 35 times, without appearing in the last two ranks. In dealing with the 30 CEC 2014 problems, the SNS obtains the first rank 11 times without appearing in the last three ranks.

As the overall rank of each algorithm, the average of the ranks is calculated and presented in Table 4. Based on this table, it can be understood that the SNS algorithm obtains the first rank in all three groups of problems. This rank indicates the superiority of the SNS algorithm over the other selected algorithms. Also, although the NFL theorem states that no single algorithm can solve all problems, the SNS algorithm has been able to solve more problems than the other algorithms.

4) NONPARAMETRIC STATISTICAL ANALYSIS
In this section, nonparametric statistical methods are used to compare the SNS algorithm with the other algorithms. Usually, these methods are employed to decide when one algorithm is considered better than another. Nonparametric statistical tests are separated into two groups: pairwise comparisons and multiple comparisons. A pairwise comparison is a comparison between two algorithms, while multiple comparisons compare more than two algorithms. In this paper, four well-known nonparametric tests, the Wilcoxon signed-rank test (pairwise comparison) and the Friedman, Friedman aligned ranks, and Quade tests (multiple comparisons), are conducted for this purpose [41].
Statistical hypotheses provide a framework for drawing inferences about the data and samples. For this reason, two hypotheses, the null hypothesis H0 and the alternative hypothesis H1, are defined. The null hypothesis, H0, states that there is no difference between the two algorithms, whereas the alternative hypothesis, H1, indicates a difference. To determine whether the null hypothesis should be rejected, a level of statistical significance (α) is defined. Also, in most cases, in addition to α, the p-value is reported, which is the probability of obtaining results at least as extreme as those observed under the assumption that H0 is true. If the p-value is less than α, then H0 is rejected, and the smaller the p-value, the stronger the evidence against the null hypothesis [41].
The Wilcoxon signed-rank test is a nonparametric statistical test used to compare two paired samples [42]. In optimization, the Wilcoxon signed-rank test aims to detect differences in the performance of two algorithms by ranking the differences between their average results on the solved problems. The results of the Wilcoxon signed-rank test for all the pairwise comparisons involving the SNS on the studied benchmark functions are presented in Table 5. In all experiments, the level of significance, α, is set to 0.05.
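For readers who wish to reproduce such a pairwise comparison, the hedged SciPy example below runs a Wilcoxon signed-rank test on two hypothetical vectors of mean results (the numbers are placeholders, not values from the paper).

from scipy.stats import wilcoxon

sns_means   = [3.1e-12, 1.2e-9, 3.4e-3, 0.12, 2.5e-6]   # illustrative data only
other_means = [4.8e-11, 4.7e-8, 5.1e-3, 0.35, 8.9e-6]

stat, p_value = wilcoxon(sns_means, other_means)
print(f"W = {stat:.3f}, p-value = {p_value:.4f}")   # reject H0 if p-value < alpha = 0.05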
In the Wilcoxon signed-rank test, if R+ is less than R-, the SNS performs better than the compared method. According to these results, R+ is less than R- in all cases except for CGO in solving the CEC 2014 problems. These results show that the SNS performed better than the other methods in nearly all cases; in addition, in solving the CEC 2014 problems compared to CGO, the difference between T and R+ is not very large. Also, the p-values show a significant improvement over CS for the N-dimension functions, TLBO for all types of problems, GWO for all types of problems, SOS for the N-dimension and CEC 2014 functions, CSA for the N-dimension and CEC 2014 functions, and WOA for all types of problems.
The result of the Friedman test for ranking the algorithms is shown in Table 6. The Friedman test is a nonparametric statistical test developed by Milton Friedman [43]. This method is used to compare several sets of data by determining their average ranks. According to the Friedman test, the SNS is placed in the first rank for all types of problems. In the fixed-dimension problems, the R statistic of the SNS is not so different from the results of CS and TLBO because only the mean result of the methods is used as a metric for comparison of the performance of the algorithms, and the NFEs are not considered. If the effect of the NFEs is considered, the result of this test changes to what is shown in Tables 3 and 4. In both conditions, the SNS is ranked first, and the result is the same for the SNS algorithm.
In the Friedman aligned ranks test, the average of each set of values is calculated and then subtracted from the results [44]. The ranks are based on the shifted values and are called aligned ranks. The results are presented in Table 7. According to the results of the Friedman aligned ranks test, in dealing with the fixed-dimension problems, the SNS gains the second rank, and the CS algorithm is placed in the first rank. In solving the N-dimension benchmarks, the SNS achieves the first rank, and the SOS algorithm is placed in the second rank. Also, in solving the CEC 2014 special session problems, the Friedman aligned ranks method ranks the SOS algorithm first and places the SNS algorithm second.
The Quade test is another nonparametric statistical method, introduced by Dana Quade in 1979 [45]. In this method, the effect of the weight of the rows is emphasized. The Quade test is an extension of the Wilcoxon signed-rank test and often performs more effectively than the Friedman test. The results of the Quade test are presented in Table 8. The Quade test shows that the SNS method earns the first rank for all types of problems compared to the other methods. Also, in solving the fixed-dimension problems, CS is placed in the second rank, while CGO holds the second place for the other problems.

B. COMPARING TO STATE-OF-THE-ART ALGORITHMS
In this subsection, the CEC 2017 special session problems are considered to compare the performance of the SNS algorithm with four other state-of-the-art algorithms, including the effective butterfly optimizer with covariance matrix adapted retreat (EBOwithCMAR) [46], ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood (LSHADE-cnEpSin) [47], multi-method based orthogonal experimental design (MM_OED) [48], and proactive particles in swarm optimization (PPSO) [49]. The list of the 30 mathematical functions is presented in Table 9, and the mathematical details of these functions are given by the CEC 2017 competition committee [39]. These mathematical functions consist of three unimodal and seven multimodal shifted and rotated functions, ten hybrid functions, and ten composite functions. These test functions are considered in four dimensions: 10, 30, 50, and 100.
The statistical results of the SNS and the other successful algorithms in solving the 10-, 30-, 50-, and 100-dimension problems are presented in Tables 17, 18, 19, and 20 in Appendix C, respectively. These results are based on 51 independent runs. A tolerance of 1×10^-8 from the optimal solution is considered as the threshold value, and the total number of function evaluations for each test problem is taken as 10000×D, where D is the problem dimension. The results confirm that the SNS method can provide very competitive results in solving these complex optimization problems.
The techniques selected for comparison in this step are advanced methods; for example, LSHADE-cnEpSin is one of the most advanced methods, with several additional tools. Its framework is based on the following algorithms:
• Self-adapting control parameters differential evolution (jDE) is an adaptive version of differential evolution (DE) in which the crossover rate (Cr) and mutation factor (F) are determined adaptively.
• The adaptive differential evolution with an optional external archive (JADE) can be considered as a new version of jDE with two modifications: the first concerns the generation of Cr and F, and the second uses a new formulation for the mutation.
• Success-history based adaptive differential evolution (SHADE) is an improved version of JADE in which Cr and F are adapted based on a historical memory. LSHADE is an enhanced version of SHADE that equips SHADE with the Linear Population Size Reduction (LPSR) strategy.
• LSHADE-cnEpSin develops a new version of LSHADE using ensemble sinusoidal differential covariance matrix adaptation with Euclidean neighborhood.
As is clear, the final developed method is a very complex and highly improved variant compared to a new, simple algorithm such as the SNS, which aims to reach good results while preserving simplicity of implementation.
Also, user-friendliness and simplicity are essential features of this new algorithm and are considered in its framework, although adding some features of the advanced algorithms to the new algorithm could improve its performance. As a result, the point worth mentioning is that the SNS may not outperform all of these methods, and the main aim of this comparison is to determine the level of the SNS despite its simplicity. This comparison determines the efficiency level of the SNS algorithm among advanced methods in solving complex problems, methods whose implementation complexity is increased because they benefit from special techniques in their structures. Fig. 9 presents the average rank of the advanced algorithms compared to the SNS algorithm for the CEC 2017 problems. As is clear, although the SNS is not the best algorithm, it is among the three best ones.
The CEC 2017 committee [39] proposed a simple and efficient procedure to study the computational time and complexity of algorithms in dealing with the CEC 2017 problems, as presented in Algorithm 2. According to this procedure, the complexity is reflected by calculating four times: T0, T1, T2, and the mean of T2. T0 is the computing time of the test program in lines 1 to 12 of Algorithm 2. The measured values are reported in Table 10. According to these results, the SNS algorithm performs competitively compared to the other metaheuristics.

IV. DISCUSSION AND ALGORITHM ANALYSIS
This section first explains how the proposed method employs exploration and exploitation in the search space of the problem, and then the mechanism of the decision moods is studied in terms of an analysis of interactive forces. Next, the convergence and global search capabilities of the SNS are analyzed. Finally, the computational cost and complexity of the SNS are investigated.

A. EXPLOITATION AND EXPLORATION ANALYSIS
Exploration reveals the ability of an algorithm in local optima avoidance to discover the more promising area(s) of the search space. Also, exploitation shows the local search ability in achieved regions for improving the quality of the obtained solutions. These capacities should be embedded in the operators, and the right balance between them increases the efficiency of the algorithm.
In the SNS algorithm, the new solution is created in the process of the Imitation, Conversation, Disputation, and Innovation moods. Each of these operators has its specific manner, which can lead to exploitation or exploration. The aspects of each operator are briefly described below:
• In the Imitation mood, users try to imitate other users, and the new solutions are generated based on the shock radius (R), the popularity radius (r), and random numbers. The placement of the new solutions determines the exploration and exploitation of this mood. In other words, if the generated solution is placed between the ith and jth solutions, it is considered as exploitation, and if it is placed outside of them, it approaches exploration. Also, as the algorithm progresses, its exploitative behavior becomes more evident due to the convergence of users and the slight changes in their positions. Therefore, this mood can benefit from both exploration and exploitation features depending on the location of the solution and the current iteration number.
• The Conversation mood models the discussion among users, and this operator changes the views of users about an issue (X_k) in a better direction using sign(f_i - f_j). Therefore, this mood exploits the search space around the kth user.
• In the Disputation mood, the cumulative effect of individuals is considered. In addition, AF is an advantageous coefficient that randomly expands the step size (M - AF × X_i) of the movements. To study exploration and exploitation, the algorithm is analyzed in three conditions: initial, middle, and final iterations. In the first case, exploration is the most probable scenario.
In the second one, if AF = 2, the algorithm explores the search space, and if AF = 1, exploitation is the dominant form of search. Also, at the final stage, due to the vicinity of the agents, M converges to X_i. In this situation, if AF = 1, exploitation is the most probable mode; if AF = 2, the algorithm explores the search space between the agents and the origin. Therefore, the Disputation mood provides both exploitation and exploration in different stages of the iterations.
• Innovation modifies the solutions using a trial mutation operator according to the new idea (n_new^d). Due to the high randomness of the new idea, the present solution (X_i) is transformed into a completely different point in the search space (X_i,new). Therefore, this mood works as an explorative operator and is a very effective approach for local optima avoidance.
The point worth mentioning is that finding the right balance between exploration and exploitation is a challenging task. In this algorithm, this balance is provided by the randomness in selecting the decision moods. Fig. 10 shows how the random selection of these operators leads to convergence to the global optimum in dealing with F121.

B. INTERACTIVE FORCES ANALYSIS
The effective forces between the agents of a swarm can be classified into two categories: aggregation and congregation [50]. In aggregation, nonsocial or external forces control the agents, and it has two modes: passive and active.
Passive aggregation is a passive grouping by involuntary processes, like the dense grouping of plankton in water, in which the forces of the water flow transport the plankton passively. Active aggregation is a grouping driven by attractive resources, such as food [51]. The congregation is a grouping caused by the social or internal forces of the swarm itself, and it can likewise be classified into passive and social congregation. The passive state is the congregation of individuals in which there is no display of social behavior. On the other hand, social congregations are usually seen in a group whose members are related; an active transfer of information is needed in social congregations. For instance, ants use pheromones or their antennae to transfer information about the location of resources [51].
According to the above definitions, Imitation is an active aggregation; in this mood, the external force is applied due to the fame of the randomly selected user. Also, the Conversation is a passive congregation mood that models the force of a randomly selected user and issue (a group of agents). The Disputation mood can be considered as a social congregation because of the effects of a group of users. The last mood, Innovation, is considered as passive aggregation since the new idea of a user is placed in a random location and the user has no authority in controlling it. Therefore, the operators used in the SNS algorithm contain all types of interactive forces. This feature contributes to the good performance of the proposed SNS.

C. CONVERGENCE BEHAVIOR ANALYSIS
One way to analyze the behavior of algorithms in solving problems is to use convergence curves. From the convergence plots, it is possible to understand how algorithms converge towards the optimal solution in a certain number of iterations. In this study, to survey the convergence ability of the SNS, 18 functions are selected from the fixed- and N-dimensional problems. The chosen benchmarks have different properties (listed in the third column of Tables 11 and 12). The first nine functions are unimodal, and the rest are multimodal.
The convergence curves of the different algorithms in solving the unimodal functions are plotted in Fig. 11. The convergence plots confirm that the SNS has a better performance compared to the other methods in solving F6, F75, F124, F142, F155, F157, and F167 (seven out of the nine problems). Also, in dealing with F44 and F169, the SNS performed as the second-best algorithm. Another point is that the curves of the SNS method have a steep slope, which means that the SNS has an appropriate convergence rate and can exploit the search space of the problems in a very convenient manner. Besides, the convergence curves of the metaheuristic algorithms in solving the multimodal problems are plotted in Fig. 12. For the multimodal functions, the SNS method converges to the global optimum without being trapped in local optima and has a very convenient rate in solving F56, F98, F119, F160, F170, F174, and F175 compared to the other methods (for F163 and F176, the SNS performed as the second best). This behavior indicates that the SNS algorithm manages the exploration and exploitation abilities very well.

D. GLOBAL SEARCH ANALYSIS
Search algorithms have three main stages during their process: global searching, the converting stage, and local searching [26]. Fig. 13 shows the idealized schema of this process by drawing the standard deviation of the objectives versus iterations. In the global search stage, the standard deviation of the objectives increases due to exploring the whole search space. After finding a desirable area of the search space, in the next stage, the converting stage, the search procedure gradually shifts from the global search to the local search. After converting, the algorithms search around the best solutions to find the global optimum.
Multimodal test functions are selected to validate the global search ability of the algorithms. These problems have multiple local optima, and the algorithm should escape from them to find the global solution. In solving these types of problems, the algorithm falls into a local optimum and tries to escape from it, and then the standard deviation increases. This process is an essential part of the global search procedure, and the algorithms cannot experience it in solving the unimodal test functions, since those functions have no local optima; in other words, the algorithms cannot show the global search phase (standard deviation increment) there. In solving unimodal problems, the algorithms only traverse the last two phases (the converting stage and local searching).
To analyze the globality of the proposed SNS, nine problems, including F45, F74, F107, F119, F121, F128, F138, F143, and F158, are considered. Fig. 14 shows the standard deviation graphs for these problems. These plots reveal that the algorithm reaches a peak (global search) and then falls rapidly (converting stage). At the final step, the slope of the diagrams decreases, and the standard deviations converge to a constant value (local search).
In some cases (F119, F143, and F158), the SNS repeats the global search process many times. Each time, the SNS explores the search space and finds a new region; the standard deviation then increases, and after that, the global search converts to the local search. Later, during the local search process, the explorative operators find a new region again. This process is repeated many times until, in the last repetition, the domain of the global optimum is found and exploited.

E. COMPUTATIONAL COST AND COMPLEXITY ANALYSIS
To review the complexity and analyze the basic operations of an algorithm, a complexity analysis is performed. In computational complexity theory, the Big O notation is used to indicate the relationship between the amount of data and the computational resources needed to solve a problem using an algorithm. This notation is usually used to check the time or memory required to solve a problem with a large number of inputs.
The complexity of the SNS is examined on two levels: the initialization level and the popularity level (main loop). In the first level of the SNS, a random population of solutions is first generated and then evaluated. The complexity of generating the random solutions is O(NP * D), where NP is the number of users and D is the dimension of the problem. Also, the complexity of the evaluation is O(NP) * O(F(x)), in which F(x) is the objective function. Besides, the popularity level is an iterative loop that is repeated MaxIter times, and in each iteration, a new solution is generated for each user as a new view and then evaluated. The computational complexity of this level is O(MaxIter * NP * D), and the computational complexity of the function evaluations during the iterations is O(MaxIter * NP) * O(F(x)).
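Summing these terms, the overall cost can be written as follows (a straightforward aggregation of the expressions above, assuming MaxIter ≥ 1 so that the main loop dominates the initialization):

O(NP * D) + O(MaxIter * NP * D) + O(NP + MaxIter * NP) * O(F(x))
    = O(MaxIter * NP * D) + O(MaxIter * NP) * O(F(x))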

V. CONCLUSION
The Social Network Search (SNS) is a new metaheuristic algorithm for solving global optimization problems. This algorithm introduces four novel optimization operators, namely Imitation, Conversation, Disputation, and Innovation. These operators (moods) model the real-world behaviors of users in social networks in expressing their opinions. In the present study, the SNS algorithm was employed for solving 120 fixed-dimension functions, 60 N-dimension functions, and 30 CEC 2014 functions. From the comparative study, the SNS showed its potential to handle various optimization problems, and its performance was much better than that of the other algorithms in terms of the selected performance metrics. Also, to form a valid judgement about the efficiency of the SNS algorithm, four nonparametric statistical analysis methods were conducted. The results show that the SNS algorithm ranks first in most cases. This is partly because there are no parameters to be fine-tuned in the SNS. To further evaluate the proposed algorithm, its ability was compared with advanced algorithms in solving the CEC 2017 problems, and the obtained results demonstrate that the SNS can achieve very competitive performance. In addition, the mechanisms of the decision moods were analyzed in terms of their search style in the space of the problem (exploration and exploitation). Then the type of forces that each of these moods creates among users was investigated. Also, the globality and convergence capabilities of the proposed SNS were examined and discussed. As further studies, the ability of this algorithm should be examined in dealing with other complex real-world optimization problems in different branches of science. Also, different editions can be developed to improve the performance of the SNS algorithm by devising novel moods of social network users or modifying the current ones.

APPENDIX A: DETAILS OF BENCHMARK FUNCTIONS
The details of the benchmark functions are presented in Tables 11, 12, and 13 for fixed-dimensional, n-dimensional, and CEC 2014 problems, respectively.