An Enhanced Firefly Algorithm Using Pattern Search for Solving Optimization Problems

The firefly algorithm (FA) is one of the most recently introduced stochastic, nature-inspired, meta-heuristic approaches for solving optimization problems. The conventional FA uses a randomization factor during the generation of the solution search space and the updating of firefly positions, which results in an imbalanced relationship between exploration and exploitation. This imbalance renders FA incapable of finding the most optimal values at the termination stage. In the proposed model, this issue has been resolved by incorporating pattern search (PS) at the termination stage of the standard FA. The optimized values obtained from FA are set as the initial starting points for the PS algorithm, which further optimizes them to obtain the most optimal values, or at least better values than those obtained by the conventional FA within its maximum number of iterations. The performance of the newly developed FA-PS model has been tested on eight minimization functions and six maximization functions by considering various performance evaluation parameters. The results obtained have been compared with those of other optimization algorithms, namely the genetic algorithm (GA), standard FA, artificial bee colony (ABC), ant colony optimization (ACO), differential evolution (DE), the bat algorithm (BA), grey wolf optimization (GWO), the self-adaptive step firefly algorithm (SASFA), and the FA-Cross algorithm, in terms of convergence rate and various numerical performance evaluation parameters. A significant improvement has been observed in solution quality by embedding PS in the standard FA at the termination stage. The reason behind this improvement is the better exploration and exploitation of the solution search space at this stage.


I. INTRODUCTION
Optimization is the process of finding the solution of a problem with the most cost-effective or highest achievable performance using the resources at hand, by minimizing the undesired factors and maximizing the desired ones. Optimization can be as simple a process as choosing a dinner at a restaurant that provides all the basic nutrients required by the human body at the least possible cost, or as complex as the investment of billions of dollars in space research. The solution found with the highest possible benefit and least possible cost is called an optimal solution. Finding the most optimal solution from a set of alternatives is considered one of the most complicated processes [1], [2]. In the modern, fast-growing information and communication era, the development of algorithms, techniques and approaches for solving various types of optimization problems is one of the most active and attractive areas in the research community.

(The associate editor coordinating the review of this manuscript and approving it for publication was Huiling Chen.)
The phenomenon behind this remarkable attraction is the steady increase in solution cost and decrease in the resources available for solving the complicated problems prevailing in academia and industry. Therefore, many artificial intelligence (AI) techniques [3], evolutionary approaches [4] and swarm intelligence (SI) algorithms [5] have been developed for solving various types of optimization problems. Based on inspirations taken from the behaviors, characteristics, inter-coordination and working mechanisms of different natural creations, various AI techniques have been developed targeting numerous areas according to the nature of the problem. The major AI techniques include expert systems (ES) [6], artificial neural networks (ANN) [7]-[9], fuzzy logic (FL) [10], artificial immune systems (AIS) [11], generalized regression neural networks (GRNN) [12], genetic algorithms (GA) [13], Henry gas solubility optimization (HGSO) [14], the slime mould algorithm (SMA) [15], the equilibrium optimizer (EO) [16] and genetic programming (GP) [17], which can be used for solving optimization problems or for assisting other optimization algorithms in solving them.
Moving forward, a subfield of AI known as evolutionary intelligence (EI) is another family of techniques, algorithms and procedures taking motivation from the biological evolutionary stages of living organisms, mainly human beings. Technically, these are population-based techniques for solving various types of complex optimization problems that work on a trial-and-error mechanism [18]. In the operational procedure of these algorithms, a solution search space is initially generated and then updated iteratively. During each iteration, the less desired solutions are removed from the solution search space and randomly generated small changes are incorporated to widen it [19]. The major algorithms lying under the umbrella of evolutionary computation are GA, GP, evolutionary programming, gene expression programming, evolution strategies, differential evolution, neuroevolution and learning classifier systems.
A widely applicable, famous and recently emerged subfield of AI known as swarm intelligence (SI) has drawn the attention of researchers over the last few decades, targeting problems of an optimization nature. SI takes inspiration from the collective behavior of social swarms of bees, ants, worms and termites, schools of fish and flocks of birds in their goal-achieving activities. A collective behavior is shown by the swarm of individuals for foraging, reproduction, living and the division of important tasks among the available individuals. Decision making is carried out in a decentralized manner by individuals, based on local information collected from the environment [20], [21]. Examples of swarm intelligence methods used for solving optimization problems include particle swarm optimization (PSO) [22], ant colony optimization (ACO), artificial bee colony optimization (ABC), the cuckoo search optimization algorithm [23], the bat algorithm [24], the krill herd bio-inspired optimization algorithm, clustering algorithms and the firefly algorithm (FA).
In finding the solution of optimization problems, the domain of the optimization function is called the solution search space. In other words, the collection of all considerable or feasible solutions is called the solution search space of the optimization problem [25]. Every solution of the problem in the solution search space is called a candidate solution. Like other characteristics of optimization problems, the solution search space depends on various parameters of the problem and on the algorithm being used for solving it. The different solutions of an optimization problem present in the solution search space are associated with the values of the independent variables and the corresponding values of the dependent variables [26].
When searching the solution search space for the optimal solution, two mechanisms, namely local search and global search, are used to cover all possible solutions of the problem. Local search is performed by introducing local or small changes to the current solution to check the local solutions of the problem, whereas for global search, the searching mechanism is widened to cover more diverse solutions [27]. Different optimization algorithms use different mechanisms for performing the local and global search processes. For example, the selection, mutation and crossover operators of GA are used for controlling its local and global search [28]. In ABC, the different mechanisms of employed bees, onlooker bees and scout bees are used for performing local and global search [29]. The local and global search of an optimization algorithm can be defined by its exploration and exploitation capabilities.
All SI approaches are inspired by the collective coordinated behavior of various types of swarms, e.g. ants, termites, bees, flocks of birds and schools of fish. The major role in establishing this strongly coordinated and well-organized system is played by the individuals of the swarm, and for achieving different types of complex and sophisticated goals, these individuals contribute to the decentralized system. In order to accomplish complex tasks, a strong intercommunication system is required among all the participating individuals. For example, the making of sophisticated nets by worms and termites and the systematic search for food by ants and bees are the result of their well-managed coordinated and collective behavior. All these swarms utilize two phenomena, namely exploration and exploitation, to make their coordinated system more powerful in achieving their desired goals. Exploration means collecting new information, whereas exploitation means using the existing information for communication among different individuals of the swarm to better manage their coordination. When solving optimization problems, exploration is the process of widening the solution search space to bring variation into the values of the optimization function (collecting new information) [30], whereas exploitation means focusing on the solutions found so far and checking all nearby solutions so that the search does not skip the most optimal solution present in the local solution search space (using the existing information) [31].
A trade-off is required between exploration and exploitation when finding the solution of optimization problems [32]. In swarm intelligence, the swarm of individuals shows a collective behavior based on a self-organized and decentralized coordination system for achieving different goals like reproduction, foraging, food search and other day-to-day activities. In this decentralized approach, local information is collected by the individuals from the environment, and this locally collected information contributes towards the overall decision-making process of the whole swarm. The trade-off between exploration and exploitation is maintained by all optimization algorithms for finding the most optimal solution of the problem, using different mechanisms which are explained in their corresponding sections.
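To make this trade-off concrete, the following minimal Python sketch (an illustration of the general principle only, not any algorithm from this paper; the sphere objective and all parameter values are assumptions chosen for the example) shows a random search whose step size decays over the iterations, so that early iterations explore the search space widely while later ones exploit the neighborhood of the best solution found:

```python
import random

def random_search_with_decay(f, bounds, dim, iters=500, step0=1.0, decay=0.99):
    """Random search whose step size decays each iteration: early iterations
    explore widely, later ones exploit around the best point found."""
    lo, hi = bounds
    best = [random.uniform(lo, hi) for _ in range(dim)]
    fbest = f(best)
    step = step0
    for _ in range(iters):
        # Sample a candidate around the current best within the current step
        cand = [min(hi, max(lo, b + random.uniform(-step, step))) for b in best]
        fc = f(cand)
        if fc < fbest:            # keep only improvements
            best, fbest = cand, fc
        step *= decay             # shrinking step: exploration -> exploitation
    return best

# Example: minimize the sphere function sum(x_i^2) over [-5, 5]^2
best = random_search_with_decay(lambda x: sum(v * v for v in x), (-5, 5), dim=2)
```

With a large initial step the search samples distant candidates (exploration); as the step shrinks, it refines the incumbent solution (exploitation). Tuning the decay rate shifts the balance between the two.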
The rest of the paper is organized as follows: Section II shows related work; Section III presents the proposed solution; Section IV shows experimental results; Section V presents results discussion while Section VI presents the conclusion and future work.

II. RELATED WORK
In order to solve optimization problems, different types of algorithms, methods, techniques and procedures have been proposed, each approach having its own advantages and disadvantages. Most of the optimization algorithms developed come under the umbrella of the subfields of AI known as evolutionary intelligence (EI) and swarm intelligence (SI) [33], [34]. The foremost and most widely applied EI algorithm is GA, which is mainly used for solving optimization problems. The major steps of GA for solving optimization problems include initial random solution generation, the selection process, the mutation operation and the crossover operation. All these operations mimic the biological evolutionary stages of living organisms. The main components of GA are genes and chromosomes. Genes represent the decision variables, whereas the chromosomes represent different solutions of the optimization problem. GA is an iterative procedure, and in each iteration the value of the optimization function is evaluated according to some pre-defined evaluation parameter [35]. GP is considered to be a developed form of the standard GA, but in GP the solutions of the optimization problem are represented by computer programs. The prominent characteristic of GP that distinguishes it from GA is its representation format. In GA, the solutions are represented linearly, whereas in GP the solutions are given a hierarchical representation. A tree is developed representing different solutions of the problem. Mathematical symbols (e.g. +, −, /, ×) represent the internal nodes of the tree, whereas the variables of the function represent the external nodes [36].
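The GA steps just described (random initial population, selection, crossover, mutation) can be sketched as follows for a real-valued minimization problem. This is an illustrative sketch only: the sphere objective, tournament selection, and all parameter values are assumptions for the example, not the configuration used in this paper.

```python
import random

def genetic_algorithm(fitness, bounds, dim, pop_size=30, generations=200,
                      mutation_rate=0.1):
    """Minimal real-valued GA (minimization): tournament selection,
    uniform crossover, and Gaussian mutation clipped to the bounds."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals
            a, b = random.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            # Uniform crossover: each gene comes from either parent
            child = [p1[i] if random.random() < 0.5 else p2[i]
                     for i in range(dim)]
            # Gaussian mutation with a small per-gene probability
            child = [min(hi, max(lo, g + random.gauss(0, 0.1)))
                     if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Example: minimize the sphere function over [-5, 5]^2
best = genetic_algorithm(lambda x: sum(v * v for v in x), (-5, 5), dim=2)
```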
Evolutionary programming is another important evolutionary algorithm paradigm, similar to GP in representation and working procedure. In evolutionary programming, the values of the numerical parameters can change, whereas the program representing the optimization function remains fixed [37]. The main operator that introduces variation into the solution search space in evolutionary programming is mutation. Every parent in evolutionary programming generates its own offspring, because every member of the population is considered part of some species rather than a member of the same species [38]. Similarly, gene expression programming is an evolutionary algorithm borrowing some characteristics and operators from both GA and GP for solving optimization problems. When manipulating the chromosomes, gene expression programming is more similar to GA than to GP. Like GP, the representation is tree-like, whereas for performing manipulations the crossover operator is taken from GA [39]. The primary basis of differential evolution is vector differences, and hence this technique is mainly suitable for solving optimization problems with a numerical representation. Similar to GA and evolution strategy approaches, differential evolution is also iterative in nature. Differential evolution comes under the category of metaheuristic algorithms, which use a larger solution search space of candidate solutions [40]. The main drawback associated with differential evolution is that it does not guarantee the most optimal solution of the problem being targeted. In differential evolution, a population made up of candidate solutions is maintained, new candidate solutions are created by considering the current solution search space, and the best solution is retained in each iteration [41].
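The vector-difference mechanism of differential evolution described above can be sketched as follows. This is a hedged illustration of the classic DE/rand/1/bin scheme, with an assumed sphere objective and arbitrary control parameters F (scale factor) and CR (crossover rate):

```python
import random

def differential_evolution(f, bounds, dim, pop_size=20, iters=200,
                           F=0.8, CR=0.9):
    """Minimal DE/rand/1/bin (minimization): a mutant vector is built from
    one base vector plus a scaled difference of two others, then binomially
    crossed with the parent; greedy selection keeps the better vector."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantees one mutant component
            trial = [min(hi, max(lo, a[k] + F * (b[k] - c[k])))
                     if (random.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            if f(trial) <= f(pop[i]):       # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

# Example: minimize the sphere function over [-5, 5]^3
best = differential_evolution(lambda x: sum(v * v for v in x), (-5, 5), dim=3)
```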
Likewise, a technique similar to genetic programming, known as neuroevolution, combines characteristics of GA and ANN. The structure description and connection weights of an ANN are represented by genomes, which are GA concepts. The major phenomenon of neuroevolution is that the neural network structure, rules and parameters are generated and controlled by an evolutionary algorithm [42]. The next evolutionary intelligence technique is the learning classifier system (LCS), an AI approach combining discovery and learning components [43]. For making various types of predictions, LCS seeks to identify context-dependent rules that collect, store and apply knowledge in a piecewise manner. The piecewise manner means breaking down the whole solution search space into simpler, smaller parts [44].
In [14], the authors presented Henry gas solubility optimization (HGSO), a novel physics-based algorithm that mimics the behavior described by Henry's law. This work is inspired by Henry's law of solubility, which states that, at a constant temperature, the amount of a gas that dissolves in a given volume and type of liquid is directly proportional to the partial pressure of that gas in equilibrium with that liquid. The experimental results obtained on 47 optimization problems from the CEC'17 test suite, together with three engineering design problems, revealed that HGSO targets a balance between the exploration and exploitation abilities of the search space, which avoids the problem of local optima.
Shimin Li et al. [15] proposed a new stochastic optimizer known as the slime mould algorithm (SMA), based on the oscillation mode of slime mould in nature. In their research work, they presented several novel features using a unique mathematical model based on adaptive weights to simulate the process of producing the positive and negative feedback of slime mould waves during propagation. The stimulation is controlled with the help of a bio-oscillator to generate the optimal path for connecting food, with excellent exploratory ability and exploitation propensity.
They conducted extensive comparative experiments of the proposed SMA against several metaheuristic approaches using a set of benchmark functions to verify its efficiency. Additionally, they used four classical engineering problems to validate the efficacy of the algorithm on strict optimization problems. The final results, presented in different tables and figures, showed that the proposed SMA is a competitive optimization technique on different search landscapes.
In [16], the authors presented a new optimization algorithm named the equilibrium optimizer (EO). EO is inspired by the control-volume mass balance models used to estimate both dynamic and equilibrium states in physics. In the proposed EO approach, each particle is considered a single solution, and the concentration of that particle refers to its position and acts as a search agent. The positions are randomly updated by the search agents with respect to the current best solutions, named equilibrium candidates. When the final best particle from the equilibrium candidates reaches the equilibrium state, it is considered the optimal result. A distinct ''generation rate'' term was shown to boost the exploration and exploitation ability of EO and to avoid local minima. The EO algorithm was tested on 58 unimodal, multimodal and composition benchmark functions, along with three engineering application problems. The authors compared the results of EO with those of many widely used optimization approaches, including the genetic algorithm (GA), particle swarm optimization (PSO), the grey wolf optimizer (GWO), the gravitational search algorithm (GSA), the salp swarm algorithm (SSA), CMA-ES, SHADE and LSHADE-SPACMA. They considered the average rank of the Friedman test over all 58 mathematical functions, which proved the efficiency and significance of EO.
As previously discussed, a few important SI algorithms for solving optimization problems include PSO, ACO, ABC, the bat algorithm, the cuckoo search algorithm and FA. PSO is one of the most common computational methods applied for solving optimization problems. In its basic operation, candidate solutions are used to create the initial population. In PSO, the candidate solutions are dubbed particles, which move around the solution search space with a specific velocity using some mathematical operations. The movement of each particle is influenced by the local best position it holds, and all the particles move in a coordinated system towards the global best solution [22].
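The particle movement just described, with each particle pulled toward its personal best and the swarm's global best, can be sketched minimally as follows. The sphere objective and the inertia/acceleration coefficients are illustrative assumptions for the example:

```python
import random

def pso(f, bounds, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch (minimization): each particle's velocity combines
    inertia, a pull toward its personal best, and a pull toward the swarm's
    global best; positions are clipped to the bounds."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]               # personal best positions
    gbest = min(pbest, key=f)               # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(X[i]) < f(gbest):
                    gbest = X[i][:]
    return gbest

# Example: minimize the sphere function over [-5, 5]^2
best = pso(lambda x: sum(v * v for v in x), (-5, 5), dim=2)
```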
An important and commonly used SI approach is ACO, in which computational problems are solved using a probabilistic approach. The major constituent of ACO is the artificial ant, a type of multi-agent taking inspiration from the behavior of natural ants [45]. The predominant paradigm used in ACO is the strong coordination and communication among real ants based on the quantity of pheromone. In order to assist each other in exploring the surrounding environment for food, natural ants direct and guide each other by laying down a material called pheromone. For solving optimization problems, the optimal solutions are located by artificial ants that move around a solution search space representing the possible solutions of the problem being targeted [46].
In a similar way, ABC is an SI algorithm inspired by the natural behavior of honey bees searching for their food sources. In ABC, the initial population is generated to represent the food sources. There are three types of bees, namely employed bees, onlooker bees and scout bees [47].
The assumption is that one employed bee represents one food source, which means the total number of food sources is equal to the number of employed bees in the colony. The major responsibility of the employed bees is to go to the food source and collect nectar. The onlooker bees check the amount of nectar brought by the employed bees, while the scout bees search for new food sources. For solving optimization problems, the nectar amount is associated with the fitness value of the optimization function [48].
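The three bee roles described above can be sketched as follows. This is an illustrative minimal ABC in Python; the sphere objective, the fitness transform 1/(1+f) (which assumes a non-negative objective), and the abandonment limit are assumptions made for the sketch:

```python
import random

def abc_optimize(f, bounds, dim, n_sources=10, iters=100, limit=20):
    """Minimal ABC sketch (minimization): employed bees refine their sources,
    onlookers pick sources with fitness-proportional probability, and scouts
    replace sources abandoned after `limit` failed improvement trials."""
    lo, hi = bounds
    sources = [[random.uniform(lo, hi) for _ in range(dim)]
               for _ in range(n_sources)]
    trials = [0] * n_sources

    def try_improve(i):
        # Perturb one dimension of source i toward/away from a random other source
        k = random.choice([j for j in range(n_sources) if j != i])
        d = random.randrange(dim)
        cand = sources[i][:]
        cand[d] += random.uniform(-1, 1) * (sources[i][d] - sources[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        if f(cand) < f(sources[i]):
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):              # employed bee phase
            try_improve(i)
        fits = [1.0 / (1.0 + f(s)) for s in sources]   # nectar ~ fitness
        for _ in range(n_sources):              # onlooker bee phase
            i = random.choices(range(n_sources), weights=fits)[0]
            try_improve(i)
        for i in range(n_sources):              # scout bee phase
            if trials[i] > limit:
                sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(sources, key=f)

# Example: minimize the sphere function over [-5, 5]^2
best = abc_optimize(lambda x: sum(v * v for v in x), (-5, 5), dim=2)
```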
Another newly developed SI algorithm named grey wolf optimization (GWO) was introduced by Mirjalili in 2014 [49]. The major source of inspiration leading to the development of this optimization technique was the hunting mechanism of grey wolves. Similar to many other optimization algorithms, GWO works in three stages, namely solution search space initialization, updating the solution search space, and termination of the algorithm. After the initialization of the solution search space, the algorithm runs in two stages over the total iterations. The first half of the iterations is used for exploration of the solution search space, whereas the second half is used for its exploitation. During exploration the prey is searched for, whereas during the exploitation stage the prey is attacked, which correlates with the successful termination of the algorithm by finding the optimal solution of the problem considered [50].
Similarly, the whale optimization algorithm (WOA) is another recently proposed optimization model, developed by Mirjalili in 2016 by taking inspiration from the hunting behavior of whales. The algorithm consists of three stages, namely searching for the prey, encircling the prey and attacking the prey. For solving optimization problems, the search process is associated with the solution search space of the problem. Updating the solution search space is associated with its exploration, which is represented by the searching phase of the algorithm. Similarly, the attacking phenomenon of the WOA technique is associated with the exploitation of the solution search space [51].
Another SI technique, developed by Mirjalili in 2015, is the ant lion optimizer (ALO). This model is inspired by the hunting process of antlions in nature, which has been outlined in five stages, namely the random walk of the ants, trap building, entrapment of the ants in the traps, attacking the prey, and rebuilding the traps. When implemented as a technique for solving optimization problems, the whole process is divided into three main stages, namely initialization of the solution search space, updating the solution search space, and the termination stage. The newly developed algorithm was tested on different benchmark optimization functions and some real-world problems and was found efficient in solving optimization problems [52].
In 2019, another optimization algorithm named Harris hawks optimization (HHO) was developed by Mirjalili and colleagues. The mathematical model was inspired by the cooperative chasing behavior of Harris hawks in nature. This mechanism is called the surprise pounce, in which many Harris hawks pounce on a prey from different directions to surprise it. The Harris hawks exhibit many chasing patterns based on the changing scenarios and escape patterns of the prey. The developed mathematical model was tested and validated on various benchmark optimization functions and some engineering problems (Heidari et al., 2019).
In a similar attempt, another SI mathematical model was developed in 2019 by Wang and his colleagues. This newly introduced SI technique is inspired by the migration behavior of monarch butterfly populations. The migration is usually from America and Canada towards Mexico and some other destinations, covering miles of distance. During their migration, there is a strong communication and coordination system among all the butterflies, which led to the development of the new nature-inspired algorithm called monarch butterfly optimization (MBO). In MBO, the residing areas of the butterflies are divided into two zones: one consists of North America and Canada (Land 1), and the other of Mexico (Land 2). In order to solve optimization problems, the positions of the butterflies and their offspring are updated, which is associated with updating the solution search space. The newly developed technique has been tested on different types of benchmark optimization functions and compared with many other standard optimization algorithms [53].
Similarly, an important SI optimization algorithm was developed by Wang in 2018 by taking inspiration from the reproduction process of earthworms in nature. The algorithm was named the earthworm optimization algorithm (EOA). There are two kinds of reproduction of earthworms in nature, called reproduction 1 and reproduction 2. The offspring of the earthworms are generated by reproduction 1 and reproduction 2 independently of each other. After their independent reproduction, the weighted sums of all the offspring are used to create the next generations. Reproduction 1 can generate only one offspring at a time, whereas reproduction 2 can generate many offspring at a time. The second kind of reproduction uses the crossover operators of standard GA and DE. To avoid the solutions being trapped in local optima, EOA uses a new type of mutation operator called Cauchy mutation (CM). The newly developed model was tested on several types of benchmark optimization functions [54].
Another swarm-based optimization algorithm, called the elephant herding optimization (EHO) algorithm, was developed by Wang in 2015, taking its inspiration from the herding mechanism of groups of elephants. Naturally, elephants belonging to different clans live together in a group led by a matriarch. In their living mechanism, a male elephant will generally leave the group when he is sufficiently grown and can survive independently. These two properties of their living can be mathematically modeled using two operators, called the updating and separating operators. In EHO, the elephants are updated by the updating operator followed by the separating operator. The developed technique was tested on some benchmark optimization functions, and the results were compared with a few standard, well-known optimization algorithms [55].
A similar bio-inspired algorithm, developed by Wang in 2018 and called the moth search (MS) algorithm, is inspired by some properties of moths. A dominant feature of moths is their movement towards or away from a source of light. Another characteristic of moths is that they follow Lévy flights. In MS, the light source is considered the best moth. The moths that are close to the best fly around their positions in the form of Lévy flights, whereas the moths that are far from the best try to fly toward it in one big step. These two features of the moths correspond to the exploration and exploitation capabilities of the algorithm when it is applied to optimization problems. The performance and efficiency of the newly developed technique have been tested on a few standard benchmark functions as well as a few real-world applications [56].
Cuckoo search is another swarm intelligence technique, inspired by the natural behavior of some cuckoo species that lay their eggs in the nests of birds of other species. This behavior makes the host birds resist them in a few ways. For instance, if an intruder egg is discovered by the host bird in its nest, it is either thrown out by the host bird, or the host bird leaves the nest and builds a new one. For solving optimization problems, the solutions are represented by the eggs in the nests [57].
Similarly, the bat algorithm is an SI algorithm mainly used for global optimization. This technique takes inspiration from the echolocation behavior of bats. When catching a prey, a bat uses three parameters, namely its velocity, its position and its loudness. These three properties change continuously as the bats approach their prey. The values associated with these parameters are used to calculate the fitness values of the optimization function when the bat algorithm is applied to optimization problems [58].
FA is a recently introduced stochastic, nature-inspired meta-heuristic algorithm that has attracted the attention of researchers since its inception for solving various types of optimization problems. FA is inspired by the natural phenomenon of light emission by fireflies, which provides their strong inter-coordination for achieving different types of goals. The major source of attraction among different fireflies is the light they emit: the higher the light emitted by a firefly, the higher its force of attraction for other fireflies, and vice versa. In order to solve optimization problems, Xin-She Yang associated the intensity of the light emitted by the fireflies with the objective function to be optimized [59].
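The attraction mechanism described above is the core of the standard FA: brightness corresponds to the objective value, and each firefly moves toward every brighter one with an attractiveness that decays with distance, plus a random step controlled by the randomization factor alpha. The following minimal Python sketch illustrates this; the sphere objective and the values of alpha, beta0 and gamma are illustrative assumptions (practical FA variants often also decay alpha over the iterations):

```python
import math
import random

def firefly_algorithm(f, bounds, dim, n=15, iters=100,
                      alpha=0.2, beta0=1.0, gamma=0.1):
    """Minimal FA sketch (minimization). A lower objective value means a
    brighter firefly; each firefly moves toward every brighter one with
    attractiveness beta0 * exp(-gamma * r^2), plus a random step of size
    alpha (the randomization factor)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        intensity = [f(x) for x in X]
        for i in range(n):
            for j in range(n):
                if intensity[j] < intensity[i]:   # j is brighter: i moves toward j
                    r2 = sum((X[i][d] - X[j][d]) ** 2 for d in range(dim))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        step = alpha * (random.random() - 0.5)
                        X[i][d] = min(hi, max(lo,
                                X[i][d] + beta * (X[j][d] - X[i][d]) + step))
                    intensity[i] = f(X[i])
    return min(X, key=f)

# Example: minimize the sphere function over [-5, 5]^2
best = firefly_algorithm(lambda x: sum(v * v for v in x), (-5, 5), dim=2)
```

Because the random step keeps the fireflies jiggling even after they cluster, the best solution at termination may still be improvable, which is the motivation for the PS refinement proposed in this paper.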
In order to provide an in-depth analysis and understanding of FA and its modified and hybrid versions, a few review articles have been written. For example, [60] presented a comprehensive research article covering modifications of FA and its hybridizations with other optimization algorithms. In the same year, Yang et al. published another review article highlighting the work related to FA done up to that date. In 2015, Fister again published a review article on FA highlighting its challenges and recent advances [61].
Tilahun and Ngnotchouye [62] reviewed FA and presented a comprehensive study of the developments and modifications introduced into it. In a similar fashion, [63] reviewed the dynamic parameter adaptation methods of FA. In [64], Tilahun and Ngnotchouye again presented a comprehensive review article highlighting the continuous versions of FA. Similarly, [65] reviewed the applications of FA and its different variants in image processing.
Palit et al. [66] proposed a binary firefly algorithm for the cryptanalysis of cipher text from plain text. In their research work, enhancements were made to almost all components of the firefly algorithm because of the new representation of the fireflies. The results of the proposed approach were compared with those of the genetic algorithm, which proved the efficiency of the proposed algorithm. The authors of [67] improved FA based on a Gaussian distribution, moving the fireflies towards the gbest in each iteration to improve the convergence speed. In their work, the step size, or randomization parameter, was kept fixed. The proposed algorithm was evaluated on five standard optimization functions. A new firefly algorithm based on the Lévy flights movement strategy was developed by Yang in [59]. In order to improve the convergence rate of the conventional firefly algorithm, Dos et al. [68] combined FA with chaotic maps. They used chaotic sequences to prevent FA from being trapped in local optima. The chaotic maps were used to tune the light absorption coefficient and the randomization parameter of the traditional firefly algorithm, which are used for position changing by the fireflies. For solving unconstrained optimization problems, Subutic et al. [69] proposed a parallelized FA. The authors tested the efficiency of the proposed approach on a few standard benchmark functions. The experimental results proved the significance of the parallelized approach in terms of execution time, which was much lower than that of the standard firefly algorithm.
In conclusion, FA is one of the most recent and well-known SI techniques applied for solving optimization problems, and different types of modifications of FA and hybridizations with other algorithms have been introduced, as highlighted in this section.

A. PROBLEM CONTEXTUALIZATION
The operational steps of all iterative optimization algorithms consist of three major stages, namely algorithm initialization, updating the solutions, and termination of the algorithm [70]. During the initialization stage, the initial solution search space is defined by random generation, keeping in consideration the dependent and independent variables of the optimization problem and the dependent values of the optimization function. During each iteration of the algorithm, the solution search space generated in the initialization stage is updated to consider more values worth contributing towards finding the most optimal solution [71]. In the termination stage, the algorithm is terminated under either of two conditions: first, the required optimal value of the optimization function is obtained; second, the maximum number of iterations is reached. If the most optimal value of the optimization function is obtained, the approach is said to be successful and no further processing is required. But if the maximum number of iterations is reached and the algorithm terminates without obtaining the most optimal value, then the algorithm has suffered from poor exploration [72] or poor exploitation [73] of the solution search space during the initialization or updating stage. The complete description of this context is shown in Figure 1. The exploration and exploitation of the solution search space can be improved at these different stages, but that is a complicated procedure, as the internal working mechanism of iterative algorithms is complex. In the proposed approach, the issue has been resolved by simply embedding PS at the termination stage of the standard FA; PS takes the solutions from the standard algorithm and improves the exploration and exploitation of the solution search space, further optimizing the values obtained.
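The role PS plays at the termination stage can be illustrated with a minimal coordinate (compass) pattern search, which polls points a fixed step away along each axis, moves to any improving point, and halves the step when no poll point improves. This sketch is an illustration of one common PS variant under stated assumptions, not the authors' exact implementation; the sphere objective and the starting point stand in for the solution handed over by FA:

```python
def pattern_search(f, x0, step=0.5, tol=1e-6, max_iters=10000):
    """Minimal compass/pattern search (minimization): poll +/- step along
    each coordinate axis; accept any improving point, otherwise halve the
    step. Used here to polish a solution handed over by another optimizer."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iters):
        if step <= tol:
            break
        improved = False
        for d in range(len(x)):
            for delta in (step, -step):
                trial = x[:]
                trial[d] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5   # contract the mesh when no poll point improves
    return x

# Example: polish a rough solution of the sphere function, as the proposed
# FA-PS model hands the output of FA to PS for refinement
refined = pattern_search(lambda v: sum(u * u for u in v), [1.3, -0.7])
```

Because the step only shrinks when no improving point exists, the final point lies within roughly half the tolerance of a local minimizer along each axis, which is exactly the local refinement the FA-PS model relies on.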

B. PROBLEM FORMULATION
FA is one of the recently developed SI algorithms used for solving various types of optimization problems. Like other optimization algorithms, when applied to problems of an optimization nature the FA consists of three major stages: the initialization stage, the firefly position changing stage and the termination stage. In the initialization stage, the solution search space is randomly generated, taking into consideration the variables used in the optimization function and the associated value of the optimization function. In the firefly position changing stage, the positions of the fireflies are updated using a factor known as the randomization factor, in order to discover more solutions to the targeted problem. In the termination stage, the algorithm terminates. In the initialization stage, the randomness of the initial solution space may create an imbalanced relationship between exploration and exploitation of the solution search space, leading to slower local and global convergence rates and degraded solution quality. The randomization factor in the firefly position changing stage determines the movements of the fireflies; if its value is not handled carefully, it may degrade the solution quality. During the termination stage, the optimized values obtained depend upon the relationship between exploration and exploitation established in the initialization and firefly position changing stages. If the algorithm terminates and the values obtained are the most optimal values, no further processing is needed. But if the algorithm terminates and the values obtained are not the most optimal values, then further processing is required to obtain the most optimal value. In the proposed model, this has been achieved by incorporating PS at the termination stage of standard FA. The problem formulation is shown in Figure 2.
In Figure 2, Maxitr denotes the maximum number of iterations for which the algorithm is run. The red color in the figure marks the point where the problem arises and at which the solution is provided.

C. LIMITATIONS OF FA
As mentioned in previous sections, the FA is one of the most recently introduced swarm intelligence techniques applied for solving optimization problems. When applied to an optimization problem, it works in three major stages, namely the initialization stage, the firefly position changing stage and the termination stage, and there are major drawbacks associated with each of these stages. The first limitation of standard FA is the random generation of the initial solution at the initialization stage. This randomness leads to an imbalanced relationship between exploration and exploitation of the solution search space, which ultimately degrades the solution quality. Secondly, in each iteration the solution search space is updated using a factor known as the randomization factor. If the value of this randomization factor is small, the search is highly exploitative and poorly explorative, which leads the solution to be trapped in local optima. Similarly, if the randomization factor is large, the search is highly explorative and poorly exploitative, skipping the most optimal solution even when it is present in the vicinity of the current solution. A third major drawback of standard FA is its termination without being able to reach the most optimal solution of the optimization problem.

III. PROPOSED SOLUTION
In the proposed model, the performance of standard FA has been improved by embedding the PS algorithm at the termination stage of FA when the solution obtained by the standard FA is not the most optimal one. The proposed model consists of three main stages. The initial solution is randomly generated as in the standard FA; the randomization factor in the firefly position changing stage is used for updating the solution search space; and PS is applied at the termination stage to improve the solution quality. The pseudo code of the proposed model is shown in Algorithm 1.

A. CONVENTIONAL FA
Xin-She Yang introduced the FA, inspired by the light emission capability of fireflies, which is their primary means of communication for achieving goals such as reproduction and food search [74]. In order to solve optimization problems, Yang associated the intensity of the light emitted by the fireflies with the objective function to be optimized. According to the physics of light, the intensity I(r) of the light emitted by a firefly, observed at distance r, follows the inverse square law given in Equation 1:

I(r) = I0 / r^2    (1)

where I0 represents the light intensity generated at the light source. If γ is the absorption coefficient of the medium, then the light intensity I at distance r is given by Equation 2:

I = I0 e^(-γr)    (2)

where r represents the distance between the source of light and the light observation point. This light intensity can be associated with the attractiveness between the fireflies in the FA, which is given by Equation 3:

β(r) = β0 e^(-γr^2)    (3)

where β0 represents the attractiveness at distance r = 0. The distance between two fireflies x_i and x_j, the Euclidean distance, is given by Equation 4:

r_ij = ||x_i - x_j|| = sqrt( Σ_k (x_i,k - x_j,k)^2 )    (4)

In each generation, the position of a firefly is updated according to Equation 5:

x_i(t+1) = x_i(t) + β0 e^(-γ r_ij^2) (x_j(t) - x_i(t)) + α ε_i    (5)

where α represents the randomization parameter and ε_i is a random vector drawn from a Gaussian distribution. The randomization parameter is used to control the exploration of the solution search space. The working mechanism of conventional FA is shown in Algorithm 2.
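The distance, attractiveness and position-update equations (Equations 3 to 5) translate directly into code. The following is an illustrative Python sketch of a single firefly move, not the authors' MATLAB implementation; the sphere objective and the parameter values used in the demonstration are assumptions.

```python
import math, random

def firefly_step(x_i, x_j, f, beta0=1.0, gamma=1.0, alpha=0.2):
    """One FA move: firefly i moves toward a brighter firefly j (Eqs. 3-5)."""
    if f(x_j) >= f(x_i):                 # j is not brighter (minimization): no move
        return x_i
    r = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_i, x_j)))       # Eq. 4
    beta = beta0 * math.exp(-gamma * r ** 2)                          # Eq. 3
    return [a + beta * (b - a) + alpha * random.gauss(0, 1)           # Eq. 5
            for a, b in zip(x_i, x_j)]

sphere = lambda x: sum(v * v for v in x)
# with alpha = 0 the move is a purely deterministic attraction toward x_j
moved = firefly_step([2.0, 2.0], [0.0, 0.0], sphere, gamma=0.1, alpha=0.0)
```

Setting alpha to zero isolates the attraction term; in the full algorithm the Gaussian term α ε_i supplies the randomization discussed in the limitations section.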

B. COMPUTATIONAL COMPLEXITY OF STANDARD FA
The low computational complexity of almost all heuristic and metaheuristic techniques makes them easy to implement, which in turn makes them applicable to very complex and difficult problem-solving procedures. Like other heuristic and metaheuristic algorithms, FA is considered one of the simplest and easiest approaches in terms of computational complexity. In order to calculate the time complexity of the standard FA, we need three quantities: the population size, the number of iterations and the cost of evaluating the objective function. Suppose the population size is represented by n and the number of iterations by i. Because every firefly is compared against every other firefly, the population requires two nested loops, and a single outer loop performs the iterations. The worst case computational complexity is therefore O(n^2 i). In most cases the value of n is very small (e.g. 20) compared to the value of i (say 500 or more than 1000), so the computational cost remains low due to the linear behavior in i. The major factor that increases the computational cost is the computation of the objective function; in all metaheuristic algorithms, this evaluation is the dominant contribution to the overall complexity. In the case of FA, if n is large, it is possible to replace one inner loop by ranking the fireflies' light intensities, which are associated with the objective function, using a sorting algorithm; in this case the computational complexity of the algorithm becomes O(n i log n) (Yang and He, 2013).
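The worst-case O(n^2 i) count can be made concrete by counting the comparisons performed by the nested loops. This is a small illustrative Python sketch (an assumption-free counting exercise, not the authors' code):

```python
def fa_objective_evaluations(n, iters):
    """Count worst-case pairwise comparisons: two nested loops over the
    n fireflies, repeated for each of the i iterations -> n * n * i."""
    count = 0
    for _ in range(iters):          # the i iterations
        for _ in range(n):          # firefly being moved
            for _ in range(n):      # firefly it is compared against
                count += 1
    return count

# n = 20 fireflies and i = 500 iterations give 20 * 20 * 500 comparisons
evals = fa_objective_evaluations(20, 500)
```

For the values quoted in the text (n = 20, i = 500), the quadratic factor n^2 = 400 is small next to i, which is why the cost grows essentially linearly in the iteration count.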

C. PATTERN SEARCH ALGORITHM (PS)
Pattern search is an optimization algorithm with a strong capability of solving various kinds of optimization problems where other standard optimization algorithms fail to find the most optimal solution [75]. Its easy implementation, conceptual simplicity and computational efficiency make it more applicable than many other optimization techniques. The basic operation of pattern search consists of a few technical steps. First, the mesh size (MS), the expansion factor (EF), the contraction factor (CF) and the maximum number of iterations (Maxitr) are specified. After specifying these parameters, the starting points of PS are either set manually or taken from some other algorithm; in our work, these points have been taken from the standard FA. Using these starting points, the mesh points and pattern vectors are created, and the objective function to be optimized is evaluated at the mesh points. If the values obtained are better than the previous values, the algorithm is moving in the right direction, so the mesh is expanded and the procedure proceeds. If the values obtained are not better than the previous values, the mesh is contracted and new points are set as starting points. This procedure continues until the termination condition is met or the maximum number of iterations is reached. The pattern search algorithm has been applied for solving various types of optimization problems. The working mechanism of PS is shown in Algorithm 3.
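The poll-expand-contract cycle described above can be sketched as a minimal coordinate (compass) pattern search. This is an illustrative Python sketch under simplifying assumptions (axis-aligned pattern vectors only, first-improvement polling, a sphere test function), not the MATLAB `patternsearch` routine the authors presumably used:

```python
def pattern_search(f, start, mesh=1.0, expand=2.0, contract=0.5,
                   tol=1e-6, max_iter=500):
    """Minimal compass pattern search: poll +/- mesh along each axis,
    expand the mesh on success (EF) and contract it on failure (CF)."""
    x, fx = list(start), f(start)
    for _ in range(max_iter):
        improved = False
        for d in range(len(x)):
            for step in (mesh, -mesh):
                cand = list(x)
                cand[d] += step                 # one mesh point
                fc = f(cand)
                if fc < fx:                     # better value: right direction
                    x, fx, improved = cand, fc, True
                    break
            if improved:
                break
        # expand on a successful poll, contract on an unsuccessful one
        mesh = mesh * expand if improved else mesh * contract
        if mesh < tol:                          # mesh too fine: terminate
            break
    return x, fx

best_x, best_f = pattern_search(lambda p: sum(v * v for v in p), [3.2, -1.7])
```

The contraction step is what gives PS its strong local refinement: once no poll direction improves, the mesh shrinks geometrically around the incumbent point.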

D. PATTERN SEARCH BASED FIREFLY MODEL (FA-PS)
In the basic operation of standard FA, the algorithm terminates in one of two cases: the required optimal value is obtained, or the maximum number of iterations is reached. If the algorithm terminates after obtaining the required optimal value, the approach has achieved the target goal and no further processing is required. If the algorithm terminates because the maximum number of iterations is reached and the value obtained is not the most optimal value, then the number of iterations can be increased. If even after increasing the number of iterations the value obtained is still not the most optimal one, some other technique can be applied to get the most optimal value, or at least a value better than the one obtained by standard FA within its maximum iterations. Accordingly, when the maximum number of generations of the firefly algorithm is reached and the solution obtained is not the most optimal one, the pattern search algorithm is introduced to further enhance the exploration and exploitation of the solution search space and obtain the most optimal solution, or at least a solution better than the one obtained so far. As described in the previous section, the easy implementation, conceptual simplicity and computational efficiency of pattern search make it well suited to this role. In the proposed approach, PS takes the value obtained by standard FA as its starting point and performs further processing to get the minimum or maximum value of the minimization or maximization functions, respectively. The working mechanism of the PS based FA is shown in Algorithm 4. The detailed data flow diagram of the proposed model is shown in Figure 3.
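The hand-off from FA to PS can be sketched end to end. This is a toy Python sketch of the FA-PS idea, not the authors' MATLAB implementation: the sphere objective, the iteration split, and all parameter values are illustrative assumptions, and the two stages are deliberately minimal.

```python
import math, random

random.seed(7)  # reproducibility of the stochastic FA phase

def sphere(x):
    return sum(v * v for v in x)

def firefly_phase(f, dim, bounds, n=15, iters=50, beta0=1.0, gamma=0.1, alpha=0.2):
    """Crude FA stage: returns the best firefly after a fixed iteration budget."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(pop[j]) < f(pop[i]):       # j is brighter: i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * random.gauss(0, 1)
                              for a, b in zip(pop[i], pop[j])]
    return min(pop, key=f)

def pattern_refine(f, start, mesh=0.5, expand=2.0, contract=0.5,
                   tol=1e-8, max_iter=1000):
    """PS stage: poll +/- mesh on each axis; expand on success, contract on failure."""
    x, fx = list(start), f(start)
    while mesh > tol and max_iter > 0:
        max_iter -= 1
        improved = False
        for d in range(len(x)):
            for step in (mesh, -mesh):
                cand = list(x)
                cand[d] += step
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        mesh = mesh * expand if improved else mesh * contract
    return x, fx

# FA-PS: the FA output becomes the PS starting point, as in the proposed model
fa_best = firefly_phase(sphere, dim=2, bounds=(-5, 5))
ps_best, ps_val = pattern_refine(sphere, fa_best)
```

Because PS only ever accepts improving polls, the refined value can never be worse than the FA output, which mirrors the paper's claim of "the most optimal value, or at least a better value".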

E. ADVANTAGES AND LIMITATIONS OF THE PROPOSED MODEL
The newly developed model is a hybrid of the standard FA and PS, and it is advantageous over several other standard SI techniques apart from the standard FA. The benefits of the proposed technique are threefold. Firstly, it is easy to implement: a simple hybridization of two conventional models leads to an efficient mathematical model, which distinguishes it from more sophisticated techniques that involve complicated mathematical operations. This ease stems from its straightforward operational steps. The output values obtained from the standard FA are given as inputs to the PS, which performs further processing and tries to optimize these values, thereby strengthening the standard FA in obtaining the most optimal value of the optimization problem under consideration. The starting points of the PS are the values obtained from the FA, which are further processed in order to enhance the strength of the optimization model. No complicated operations are involved in this procedure.
Secondly, the technique is result oriented: it obtains the most optimal solution, or at least a better solution than the one obtained by the standard FA, which makes the new model an efficient one. The performance of the proposed model is clearly evident, as outlined in the experimental sections of this document. Since the PS technique further processes the values obtained from the standard FA, the improvement in the efficiency of the model is logically, mathematically and experimentally evident. This distinguishes the proposed model from other modified or hybrid models, which suffer from ambiguity when physically implemented.
Lastly, almost all optimization models suffer from the drawback of failing to obtain the most optimal solution within their total number of iterations. The proposed model provides a strong and standard procedure for solving this issue, which can also be implemented in other optimization models by following the operational stages performed in this work.
Although the proposed model has been quite successful in achieving a better quality solution compared to the standard FA, there is a serious disadvantage associated with the new model. When the values obtained from the maximum number of iterations of standard FA are given as inputs to the PS for further processing, the complexity of the algorithm increases. This issue of the proposed model will need proper attention in the near future, together with a valid, simple and acceptable solution.
The major aim of the proposed FA-PS model is to improve the optimization capability of conventional FA in obtaining optimized values, not to reduce the complexity of the algorithm. The proposed model has been developed by adding functionalities of the PS algorithm to the standard FA. These two optimization algorithms have different working mechanisms for handling optimization functions, so embedding the concepts of one algorithm in another involves a few technical procedures. For example, developing the hybrid model of FA and PS involves mapping the fireflies of FA into the PS parameters and converting the PS attributes back into fireflies. During this process in the proposed approach, no operational step has been removed from the conventional FA and PS algorithms; rather, new operators have been added, and this addition of new operators increases the complexity of the hybrid model. Secondly, the validity of the proposed FA-PS model has been tested on eight minimization functions and six maximization functions; in order to generalize the experimental results further, it is necessary to perform the experimentation on various other types of optimization functions. Further, as explained in previous sections, two other drawbacks apart from the one targeted in this work are also associated with the standard FA. In order to have a complete and concrete representation of the FA working mechanism and its proper application for solving various types of complicated optimization problems, these issues must also be resolved. The proposed model is a problem oriented approach focusing on only one major problem of the FA, which is the major limitation of the proposed model.

IV. EXPERIMENTAL RESULTS

A. EXPERIMENTAL SETUP
This section describes the hardware and software resources used in our research activity. The experiments for this paper were conducted on an Intel(R) Core(TM) i5 CPU with a 2.7 GHz processor and 8 GB of random access memory. For quick implementation, the proposed model and the other comparative approaches were coded in MATLAB R2017a.

B. PARAMETERS TUNING
The parameters of all the considered optimization algorithms were tuned with different values in order to check the output results associated with each value. The technique adopted for tuning the parameters is the trial and error mechanism followed by other researchers working in this area, because there are no hard and fast rules for tuning the parameters of an optimization technique. The values and ranges of the tuning parameters of all the optimization algorithms presented in this work have been taken from the standard models specified by their authors. The maximum number of iterations of all the algorithms, namely FA, GA, ABC, ACO and the proposed model, was varied from 100 to 400 with an increment of 20. The final number of iterations for all the algorithms was set to 300, as all the algorithms gave excellent results for this value. The initial population size for FA and GA was in the range of 40 to 100, with the final population size set to 80. The colony size of ABC was taken in the range of 100 to 400 with an increment of 20, whereas the number of ants in ACO was varied from 50 to 150 with an increment of 10. The gamma, alpha and beta values for FA were varied in the ranges of 0.5 to 2 with an increment of 0.5, 1 to 4 with an increment of 0.5, and 0.1 to 0.5, respectively. In the case of GA, one point crossover with a crossover probability of 0.5 to 0.9 and a mutation rate of 0.1 to 0.3 was considered. For ABC, the numbers of employed bees and onlooker bees were each taken in the range of 50 to 200. In the case of ACO, the randomness factor, evaporation rate, pheromone control parameter and heuristic information parameter were taken in the ranges of 0.6 to 0.9, 0.1 to 0.5, 1 to 5 and 1 to 5, respectively.
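The trial-and-error sweep described above amounts to running the algorithm repeatedly for each candidate parameter value and keeping the value with the best average result. The following Python sketch illustrates that loop with a hypothetical stand-in optimizer (a random walk on x^2 whose step size plays the role of the randomization factor alpha); the stand-in, the alpha grid and the run counts are all assumptions for illustration only:

```python
import random

def run_once(alpha, seed):
    """Hypothetical stand-in for one optimizer run: a random-walk
    minimization of x^2 whose step size is the tuning parameter alpha."""
    rng = random.Random(seed)
    x, best = 4.0, 16.0
    for _ in range(200):
        cand = x + alpha * rng.gauss(0, 1)
        if cand * cand < x * x:         # accept only improving moves
            x = cand
        best = min(best, x * x)
    return best

# trial-and-error tuning: sweep alpha over a range, average over repeated runs
results = {}
for alpha in (0.5, 1.0, 1.5, 2.0):
    results[alpha] = sum(run_once(alpha, s) for s in range(20)) / 20
best_alpha = min(results, key=results.get)
```

Averaging over multiple seeded runs is what makes the comparison between parameter values fair for a stochastic algorithm.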

C. OPTIMIZATION FUNCTIONS
The performance of the proposed approach has been tested on a total of fourteen functions, comprising eight minimization functions and six maximization functions, outlined in Equation 6 to Equation 19.

D. CONVERGENCE ANALYSIS
The convergence rates of the fourteen optimization functions are shown graphically in this section. Figure 4 to Figure 11 show the convergence of the proposed model in comparison with ABC, ACO, GA and FA in minimizing the function values, while the convergence of the six maximization functions by the proposed model and the other standard optimization approaches is shown in Figure 12 to Figure 17. Figure 4 to Figure 11 show the convergence rates of all the considered minimization functions for FA, GA, ABC, ACO and the proposed FA-PS model. All the algorithms were run for 300 iterations. The figures clearly show that the proposed FA-PS surpasses the optimization algorithms FA, GA, ABC and ACO in reaching the minimum value for all the minimization functions shown. In the case of the proposed FA-PS model, the FA was run for 200 iterations, after which the PS was embedded and run for 100 iterations. The convergence behavior of the considered algorithms differs across the minimization functions, and no improvement can be observed after a certain iteration, which degrades the solution quality and causes the algorithms to fail to reach the most optimal solution (the minimum value in the case of minimization functions). The solution quality of the standard FA is worse than that of different algorithms on different minimization functions. After the introduction of PS at the termination stage, the solution quality improves, and the optimal values obtained by the developed FA-PS then outperform all the other considered algorithms. Figure 12 to Figure 17 show the convergence rates of the six maximization functions in obtaining the maximum values of the optimization functions.
Figure 12 to Figure 17 show the comparison of the developed FA-PS model with the other optimization algorithms on the maximization functions in terms of the convergence rate in reaching the maximum value. The figures clearly show that the proposed FA-PS is more proficient than all the other considered optimization techniques in reaching the maximum value for all the maximization functions presented. In the FA-PS model, the FA was run for 200 iterations, after which the PS was introduced and run for 100 iterations. Since there is no improvement in the solution quality of the standard FA at that point, PS was embedded to further improve the exploration and exploitation of the solution search space, which enhances the solution quality and results in better convergence of the optimization algorithm towards the maximum value. The introduction of PS after a fixed number of iterations improves the convergence rate of the FA, resulting in better solution quality compared to the other standard algorithms.

E. COMPARATIVE ANALYSIS
In this section, the performance of the proposed model is compared with other standard optimization algorithms, including GA, FA, ABC and ACO. The performance evaluation parameters are the best case solution, average case solution, worst case solution and standard deviation of the solutions over twenty runs of all the considered algorithms. The comparison has been carried out for both the minimization functions and the maximization functions considered for evaluation. As stated earlier, a total of eight minimization functions and six maximization functions have been used in this work for comparison. Table 1 shows the comparison of the FA-PS model with standard FA, GA, ABC and ACO in terms of the performance evaluation parameters considered in this work. A total of eight functions have been considered for all the algorithms.

1) COMPARISON FOR MINIMIZATION FUNCTIONS
In the conducted experiments, for the first three minimization functions all the comparative models converge smoothly until half of the iterations. The fast convergence of the proposed FA-PS can be observed after 200 iterations. For the last few complex minimization functions, the proposed model converges fast from iteration 100 straight through to iteration 300. Table 1 shows that the proposed FA-PS model outperforms all the other approaches, demonstrating the efficiency of the proposed technique.

2) SIGNIFICANCE OF RESULTS FOR MINIMIZATION FUNCTIONS
In order to show the significance of the results obtained for the proposed model, a t-test was conducted to reveal whether there is a significant contrast between the values of FA-PS and the standard optimization algorithms. The null hypothesis H_0 in this case is ''there is no significant difference between the values of the proposed model and the other optimization techniques with which the developed model has been compared''. The t-test results are shown in Table 4. The table shows that where p <= 0.05 (95% confidence level), the proposed model performs significantly better than the other considered approaches. For all the functions, FA-PS has better results; therefore, the null hypothesis is rejected.

3) COMPARISON FOR MAXIMIZATION FUNCTIONS
Table 3 shows the comparison of the proposed FA-PS model with other well-known optimization algorithms, namely standard FA, GA, ABC and ACO. The parameters for the comparison are the worst case solution, best case solution, average case solution and standard deviation. As with the minimization functions, the proposed model shows better convergence for the maximization functions as well. The reason for the smooth convergence of the proposed FA-PS model is that it combines the properties of both FA and PS in one model. Table 3 reveals that the FA-PS model outperforms the other standard optimization techniques. As shown in Table 3, a total of six maximization functions have been considered for the evaluation of the proposed model.

4) SIGNIFICANCE OF RESULTS FOR MAXIMIZATION FUNCTIONS
This section shows the significance of the results obtained by conducting a t-test for the maximization functions. The test has been conducted to show that a significant contrast exists between the results of the standard optimization techniques and the proposed model. The null hypothesis H_0 in this case is ''there is no significant difference between the values of the developed FA-PS and the other optimization techniques with which the proposed model has been compared''. The t-test results for the maximization functions, shown in Table 2, reveal that where p <= 0.05 (95% confidence level), the FA-PS performs significantly better than the other considered approaches. For all the functions, the proposed model has better results; therefore, the null hypothesis is rejected.
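As a worked illustration of the significance testing used in both comparison sections, the two-sample t statistic can be computed directly from two sets of run results. The paper does not specify which t-test variant was used, so the sketch below assumes Welch's unequal-variance form, and the two sample lists are invented illustrative run values, not the paper's data:

```python
import math

def welch_t(a, b):
    """Two-sample Welch t statistic and its approximate degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# hypothetical per-run minima: FA-PS vs. a baseline (illustrative numbers only)
fa_ps_runs = [0.01, 0.02, 0.015, 0.012, 0.018]
baseline_runs = [0.90, 0.95, 0.88, 0.93, 0.91]
t, df = welch_t(fa_ps_runs, baseline_runs)
```

A large |t| (here strongly negative, since the first sample's minima are far smaller) corresponds to a p-value well below 0.05, i.e. the null hypothesis of no difference is rejected.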

V. RESULTS DISCUSSION
The results obtained from the extensive experimentation are analyzed based on the convergence rates and the numerical comparisons outlined in previous sections. A total of 14 functions have been considered, of which 8 are minimization functions and 6 are standard maximization functions. For all the minimization and maximization functions, there are noticeable fluctuations in the convergence rates in the initial iterations of all the algorithms. These fluctuations can be seen for the first 150 iterations in the case of minimization functions F1, F2 and F3, whereas they are visible in the first 100 iterations for minimization functions F4, F6 and F7. In the case of F5 and F8, these changes can be observed until after nearly the 140th iteration.
As can be observed for all the minimization functions, there is no noticeable change in the convergence rate after the specified iterations for any of the optimization algorithms considered in our experimentation, except the standard FA, in which some fluctuations remain after these iterations; these fluctuations also settle into a smooth convergence rate after a few iterations. If the convergence rates of the techniques are keenly observed, it is evident that the values obtained by the standard FA are considerably worse than those of several other techniques for different minimization functions. The major reason behind this low performance of all the standard algorithms is the poor relationship between the exploration and exploitation capabilities over the solution search space. Like the other standard optimization techniques, FA also suffers from an imbalanced relationship between exploration and exploitation of the solution search space, which degrades the solution quality and ultimately prevents the algorithm from obtaining the most optimal solution. In order to overcome this drawback, the PS is introduced at the point where there is no further improvement in the convergence rate of the standard FA.
After integrating the PS at the termination of the FA, the convergence results improve substantially. This improvement results from strengthening the FA in obtaining the most optimal solution, or at least a better solution than the one obtained by the standard FA or the other well-known optimization algorithms, e.g. GA, ABC, ACO, the bat algorithm, DE, GWO, SASFA and FA-CROSS. As revealed by the convergence rates of the minimization functions, the proposed FA-PS model is considerably better than all the standard optimization algorithms considered in the experimentation, owing to the introduction of PS after the execution of the standard FA once no further improvement is observed. The integration of PS at the termination stage improves the exploration and exploitation capabilities over the solution search space, resulting in a better quality solution.
In the case of all the maximization functions, there is no improvement in the convergence rates for any of the considered optimization models after roughly the 150th iteration. The convergence of the different techniques differs at various levels, as shown in the figures. For some functions the FA behaves better than the other algorithms, whereas for a few functions the other algorithms perform better than the FA. The major phenomenon behind these differences is the differing nature of the optimization techniques in building a relationship between exploration and exploitation of the solution search space as they update their solutions to adapt to the situation.
As shown in the figures, there is no improvement in the convergence rates of any of the techniques after a fixed number of iterations. As with the minimization functions, the standard FA and the other conventional optimization algorithms have different capabilities in converging to the most optimal value (in the case of maximization functions, the largest obtainable value) and hence in obtaining a quality solution.
The differences in the solution qualities of standard FA, GA, ABC, ACO, the bat algorithm, DE, GWO, SASFA, FA-CROSS and the proposed FA-PS model are also quite evident in the numerical representations of the solutions provided by these techniques. The performance evaluation parameters of the algorithms are the best case solution, average case solution, worst case solution and the standard deviation of the solutions, as shown in Table 1 and Table 2. There are noticeable variations in the solutions provided by the different standard optimization algorithms for the different minimization and maximization functions. For minimization function F1, the standard FA outperforms the standard ABC and GA while its solution quality is slightly worse than that of ACO, whereas in the case of F2 the standard FA is better than ABC and ACO but worse than the standard GA. Similarly, for function F3, the standard FA outperforms all the other standard optimization algorithms. If all the minimization and maximization functions are keenly observed, similar differences can be found in all cases for all the considered algorithms; the reasons behind these differences have already been outlined in the preceding paragraphs. The worst case solution, average case solution, best case solution and standard deviations of the solutions for FA-PS show that this model outperforms all the other standard optimization algorithms. Similar results can be observed in comparison with BA, DE and the standard GWO algorithm.
If critically analyzed, all the standard optimization techniques follow almost the same flow when handling maximization and minimization functions. Consequently, if the optimization algorithms are used in their standard form for solving specific problems, they all show similar strength and pattern in resolving issues. A distinction only emerges when the standard algorithm is modified or hybridized with other algorithms; such modification or hybridization gives further strength to the algorithm and makes it more efficient. Accordingly, the performance of the standard FA has been substantially improved by introducing PS at its termination stage. The introduction of the PS at the termination stage leads to further processing beyond the FA, which results in a better quality solution and makes the model powerful in converging to the minimum or maximum value as required by the optimization functions considered. The fairness, concreteness, efficiency and accuracy of the results follow logically and mathematically from the overall analysis and clarifications presented in this regard.
The contributions of this research are significant and relate to swarm intelligence, specifically to optimization algorithms. Further, this research activity plays a prominent role in improving the performance of the optimization algorithm, which paves the way for its application in resolving real world optimization problems, considering the operations, model, mathematical working mechanism and logical execution of the developed technique. The contributions of this work can be viewed in several dimensions. The literature has been reviewed and the working mechanisms of many optimization approaches have been explored with special reference to the standard FA. The problems associated with FA have been identified and explored for clarification of the technique and proper in depth understanding of the drawbacks associated with this recently introduced optimization algorithm. Exploring the standard FA to identify unexplored problems and providing an acceptable solution is the major contribution of the work carried out in this activity. In this research, a total of three major issues related to the optimization algorithm were highlighted, two of which had not previously been identified explicitly in the literature. 1) Firstly, the initial solution search space in almost all optimization algorithms is generated randomly. This randomly generated initial solution degrades the quality of the solution to a considerable extent, and the standard FA also suffers from this limitation. 2) Secondly, a major drawback associated with standard FA is its failure to obtain the most optimal value, owing to the fact that after a fixed number of iterations no significant improvement can be observed in the solution quality. 3) In this work, the validity of the second problem has been demonstrated and the problem has been solved, as evidenced by the experimental results targeting it.
4) The major work performed in this activity is the introduction of a new hybrid model combining FA with PS. An easily implementable model has been developed through a simple hybridization of two conventional models, leading to an efficient mathematical model; this distinguishes it from other, more complicated approaches that involve intricate mathematical operations. Its simplicity lies in its straightforward operational steps, which have been adequately explained in the previous sections. The output values obtained from the standard FA are given as inputs to the PS, which performs further processing and attempts to optimize these values, strengthening the standard FA in reaching the most optimal value of the optimization problem under consideration. The starting points of the PS are the values obtained from the FA, which are processed further to enhance the strength and power of the optimization model. No complicated operations are involved in this procedure.
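The FA-to-PS hand-off described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the sphere test function, the compass-style (Hooke-Jeeves-like) pattern search, and all parameter values (`n`, `beta0`, `gamma`, `alpha`, `step`, `tol`) are assumptions chosen only for demonstration.

```python
import math
import random

def sphere(x):
    """Illustrative test function to minimize (global minimum 0 at the origin)."""
    return sum(v * v for v in x)

def firefly_optimize(f, dim=2, n=15, iters=100, lb=-5.0, ub=5.0,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=1):
    """Standard FA: each firefly moves toward every brighter firefly."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        fit = [f(x) for x in pop]
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:  # firefly j is brighter (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness
                    pop[i] = [min(ub, max(lb,
                                  xi + beta * (xj - xi)
                                  + alpha * (rng.random() - 0.5)))
                              for xi, xj in zip(pop[i], pop[j])]
                    fit[i] = f(pop[i])
        alpha *= 0.97  # gradually reduce the randomization factor
    return min(pop, key=f)

def pattern_search(f, start, step=0.5, tol=1e-8):
    """Compass pattern search: poll +/- step along each axis, halve step on failure."""
    x, fx = list(start), f(start)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

# Hand-off: the FA's best solution becomes the PS starting point.
fa_best = firefly_optimize(sphere)
ps_best, ps_val = pattern_search(sphere, fa_best)
assert ps_val <= sphere(fa_best)  # PS only ever accepts improvements
```

Because the pattern search starts from the FA's best solution and accepts only improving moves, the refined value can never be worse than the FA output, which mirrors the "at least better or equal" claim made for the proposed model.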
Furthermore, extensive experimentation has been performed, leading to a result-oriented technique that succeeds in obtaining the most optimal solution, or at least a better solution than that obtained by the standard FA, which makes the new model an efficient one. The performance of the developed technique is substantially better than that of other standard models, as outlined in the experimental setup section of this work. Since the PS technique in the model performs further processing on the values obtained from the standard FA, the improvement in the model's efficiency is logically, mathematically, and experimentally evident. This gives the proposed model a distinct role compared with other modified or hybrid models, which suffer from ambiguity when practically implemented.
Additionally, the comparative analysis of the proposed model with other standard optimization techniques further strengthens the evidence that the developed model outperforms many conventional optimization algorithms. Lastly, the identified problem reveals that almost all optimization models suffer from the drawback of failing to obtain the most optimal solution within their total number of iterations. The proposed model provides a strong and standard procedure for resolving this issue, which can also be implemented in other optimization models by following the operational stages performed in this work.

A. LIMITATIONS
First, the major aim of the proposed FA-PS model is to improve the optimization capability of the conventional FA in obtaining optimized values, not to reduce the complexity of the algorithm. The proposed model has been developed by adding the functionality of the PS algorithm to the standard FA. These two optimization algorithms have different working mechanisms for handling optimization functions, so embedding the concepts of one algorithm in another involves several technical procedures. For example, developing the hybrid FA-PS model involves mapping the fireflies of FA into the PS parameters and converting the PS attributes back into fireflies. During this process, no operational step has been removed from the conventional FA and PS algorithms; rather, new operators have been added, and this addition increases the complexity of the hybrid model. Second, the validity of the proposed FA-PS model has been tested on eight minimization functions and six maximization functions. To generalize the experimental results further, additional experimentation is necessary on other uni-modal and multi-modal functions, as well as on multi-objective real-world optimization problems with various affecting parameters.

VI. CONCLUSION AND FUTURE WORK
In this work, a serious drawback associated with optimization algorithms has been targeted, with special reference to the firefly algorithm. Almost all evolutionary and swarm intelligence algorithms terminate after reaching the maximum number of iterations without, in many cases, obtaining the most optimal value. This problem results from an imbalanced relationship between the exploration and exploitation of the solution search space. This imbalance can be addressed by introducing modifications at different operational stages of an optimization algorithm, but such a solution neither guarantees the maximum possible improvement in solution quality nor identifies where the problem occurs. This research work focuses on resolving the problem at the termination stage of the optimization technique. The firefly algorithm has been considered as the test case for evaluating the performance of the proposed model in solving optimization problems. The model has been tested on eight minimization functions and six maximization functions. All the results reveal that the proposed optimization solution converges quickly to the most optimal solution before reaching the maximum number of iterations. A t-test has been conducted on both the maximization and minimization functions to validate the significance of the proposed FA-PS output in comparison with state-of-the-art optimization algorithms. As this research has improved the convergence of the standard FA for optimization problems, the proposed solution will be extended in the near future to other optimization methods, keeping in consideration the results analyzed in this work.

RAO NAVEED BIN RAIS received the M.S. and Ph.D.
degrees in computer engineering with specialization in networks and distributed systems from the University of Nice, Sophia Antipolis, France, in 2007 and 2011, respectively. He is currently working as an Associate Professor with the Department of Electrical and Computer Engineering, College of Engineering and Information Technology, Ajman University, UAE. He has experience of more than 15 years in teaching, research, and industrial development. His research interests include network protocols and architectures, information-centric and software-defined networks, network virtualization, machine learning, and Internet naming and addressing issues.
MUHAMMAD AAMIR received the master's degree in computer science from the City University of Science and Information Technology, Pakistan, and the Ph.D. degree in information technology from Universiti Tun Hussein Onn Malaysia. He worked for two years at Xululabs LLC as a Data Scientist. He is currently working on research related to big data processing and data analysis. His fields of interest are data science, deep learning, and computer programming.
UMAIR MUNEER BUTT received the B.S. (CS) degree from GIFT University, Pakistan, in 2012, and the M.S. (CS) degree from the National University of Sciences and Technology (NUST), Pakistan, in 2016. He is currently pursuing the Ph.D. degree with the School of Computer Science, Universiti Sains Malaysia. He has more than three years of teaching and research experience in machine learning, data science, and image processing. He has served as a Research Associate for three years and worked on different real-world applications. His current research interests are data science, data mining, and machine learning.
MUBASHIR ALI received the B.S. degree in computer science from Allama Iqbal Open University, Islamabad, Pakistan, in 2011, and the M.S. degree in software engineering from Bahria University, Islamabad, Pakistan, in 2014. He is currently pursuing the Ph.D. degree with the School of Engineering and Applied Sciences, The University of Bergamo, Italy. He has also served as a Software Engineer for more than five years in research and development-based public sector organizations in Pakistan. His research interests include NLP, machine learning, data science, social media analysis, and software repository mining.
ADEEL AHMED (Graduate Student Member, IEEE) received the M.Phil. degree in computer science from Quaid-i-Azam University, Islamabad, Pakistan, in 2011, where he is currently pursuing the Ph.D. degree with the Department of Computer Science. He has a few years of software industry experience, and his research interests include social network analysis, recommender systems, machine learning, and swarm intelligence.
IMRAN ALI KHAN received the master's degree from Gomal University, D. I. Khan, Pakistan, and the Ph.D. degree from the Graduate University of the Chinese Academy of Sciences, China. He is currently working as an Associate Professor with the Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Pakistan. He has produced over 50 publications in journals of international repute and has presented papers at international conferences. His research areas include wired and wireless networks, and distributed systems.
OSMAN KHALID received the master's degree from the Center for Advanced Studies in Engineering and the Ph.D. degree from North Dakota State University, USA. He is currently an Assistant Professor with COMSATS University Islamabad, Abbottabad Campus. His research areas include recommender systems, network routing protocols, the Internet of Things, and fog computing.