Hybrid PSO-α BB Global Optimisation for C² Box-Constrained Multimodal NLPs

<inline-formula> <tex-math notation="LaTeX">$\alpha $ </tex-math></inline-formula>BB is an elegant deterministic branch and bound global optimisation framework that guarantees convergence to the global optimum with minimal parameter tuning. However, the method suffers from a slow convergence speed, calling for computational improvements in several areas. The current paper proposes hybridising the branch and bound process with particle swarm optimisation to improve its global convergence speed when solving twice differentiable (<inline-formula> <tex-math notation="LaTeX">$C^{2}$ </tex-math></inline-formula>) box-constrained multimodal functions. This hybridisation, complemented with interval analysis, leads to earlier discovery of the global optimum and quicker pruning of suboptimal regions of the problem space, thus improving global convergence. Also, when used as a heuristic search algorithm, the hybrid algorithm yields superior solution accuracy owing to the combined search capabilities of PSO and the branch and bound framework. Computational experiments conducted on the CEC 2017/2019 test sets and on n-dimensional classical test sets yield improved convergence speed in the complete search configuration and superior solution accuracy in the heuristic search configuration.


I. INTRODUCTION
Deterministic global optimisation techniques are rigorous and complete search algorithms [1] that aim to find an ε-accurate global optimum solution in finite time, unlike their stochastic counterparts (particle swarm optimisation [2], genetic algorithms [3], differential evolution [4], simulated annealing [5], ant colony optimisation [6] and other competing techniques [7]). These techniques generally proceed by lower and upper bounding of the solution space using branch and bound frameworks [8]. Among existing deterministic global optimisation approaches, αBB is an elegant branch and bound (BB) framework [9], [10] that guarantees convergence to the true optimum for twice differentiable problems. However, the framework generally suffers from a slow convergence speed owing to its often loose convex relaxation. Several research works in the literature [11]-[15] aim to improve bound tightness and thus boost the overall computational efficiency. In the same vein, interval analysis is also used as a zero-order convex relaxation that supplements α-convex relaxation and provides tighter lower bounds [16]. (The associate editor coordinating the review of this manuscript and approving it for publication was Sotirios Goudos.)
While several research works have been centred on bound tightness, comparatively little work has been performed in other areas of the procedure, including the performance of the upper bound solver. In a bid to further improve convergence speed, the current study looks at the exploration capability of the upper bound solver, which typically makes use of local search algorithms. The study proposes an efficient use of particle swarm optimisation (PSO) as an upper bound solver in the BB-procedure to allow early discovery of the true optimum, which could lead to early pruning of suboptimal regions and speed up global convergence, provided tight bounds supplemented by interval analysis are available. In addition, the hybrid algorithm can be used as a heuristic search algorithm combining the search capability of PSO with that of the branch and bound framework, to which a deterministic stopping criterion is added, thus yielding a solver that is beneficial to both frameworks, stochastic and deterministic. The main contributions of the paper are as follows:
1) A novel PSO-αBB hybrid algorithm is proposed that improves the convergence speed of classical αBB for C^2 multimodal bound-constrained problems.
2) A PSO via branch and bound framework heuristic search algorithm is proposed with improved solution accuracy.
3) Additional insights on the bound tightness performance of α-convex relaxation and classical interval analysis are provided and discussed.
The rest of this paper is organised as follows: Section 2 elaborates on the αBB search procedure, presenting the state-of-the-art configuration inclusive of all pruning rules. Section 3 describes the particle swarm optimisation algorithm, including components critical to this study. Section 4 presents the proposed hybrid algorithm. Section 5 describes the computational experiments used to assess the performance of the proposed algorithm, discusses the findings and proposes future research areas. Section 6 concludes the study.

II. DETERMINISTIC GLOBAL OPTIMISATION: αBB

A. CLASSICAL ALGORITHM
Deterministic global optimisation typically proceeds by an exhaustive partitioning of the solution space in which upper and lower bounding of the problem's inner regions anticipates the pruning of sub-optimal areas in the search for the ε-global optimum. This divide and conquer approach is performed within a branch and bound framework [8]. αBB is a branch and bound framework that offers an elegant lower bounding scheme, creating an underestimator by adding a function term to the initial problem with minimal inclusion of additional parameters (α), applicable to twice differentiable functions. The algorithm was developed by [17], [18].
A convex underestimator f_l of a nonlinear function f over the box [x^l, x^u] is obtained by adding a quadratic term to the original non-convex function in such a way that it overpowers any non-convexity in the original function:

    f_l(x) = f(x) + \sum_{i=1}^{N} \alpha_i (x_i^l - x_i)(x_i^u - x_i),    (1)

where x_i^l and x_i^u are the bounds on the dimensions of the solution space and N the dimension order of the problem. The formulation of the convex underestimator relies on the accurate estimation of the α vector, which requires that the Hessian matrix H_{f_l} be positive semi-definite:

    H_{f_l}(x) = H_f(x) + 2\Delta \succeq 0,    (2)

where \Delta = diag(\alpha_1, ..., \alpha_N) is a diagonal matrix whose diagonal elements are the elements of the α vector and H_f the Hessian matrix of the original problem. Theoretically, the elements of the α vector can be obtained by finding the smallest eigenvalues of the Hessian of f within the bounded region [18]:

    \alpha_i \geq \max\{0, -\tfrac{1}{2} \min_{x^l \leq x \leq x^u} \lambda_i(x)\},    (3)

where the \lambda_i(x)'s are the eigenvalues of H_f(x) in the given interval. One efficient way to solve equation (3) is by using interval analysis to compute an interval Hessian matrix [H_f] = ([h_{ij}^l, h_{ij}^u]) that encloses H_f(x) over the box:

    H_f(x) \in [H_f], \quad \forall x \in [x^l, x^u].    (4)

Among existing computation methods to find the α vector based on equation (4), the technique based on the scaled Gershgorin theorem [9], [10] is typically used, such that the minimum eigenvalue of an interval matrix [A] = ([a_{ij}^l, a_{ij}^u]) is bounded by

    \lambda_{\min}([A]) \geq \min_i \left( a_{ii}^l - \sum_{j \neq i} \max(|a_{ij}^l|, |a_{ij}^u|) \frac{d_j}{d_i} \right)    (5)

for any positive scaling vector d, and thus

    \alpha_i = \max\left\{0, -\tfrac{1}{2}\left( h_{ii}^l - \sum_{j \neq i} \max(|h_{ij}^l|, |h_{ij}^u|) \frac{d_j}{d_i} \right)\right\}.    (6)

To alleviate the conservative nature of α-convex relaxation, it is supplemented with interval analysis [16], in which elementary interval operations such as [x] + [y] = [x^l + y^l, x^u + y^u] and [x] - [y] = [x^l - y^u, x^u - y^l] propagate bounds through the expression of f. Further details on more complex functions (cos, sin, asin, acos, log, etc.) can be found in [19]. While interval analysis also leads to overestimation in some instances due to the dependency problem [20], our computational experiments suggest that it often leads to much tighter bounds than α-convex relaxation and thus can validly complement the lower bounding process. Convergence speed of the branch and bound process is accelerated by domain reduction techniques [21] and interval analysis based pruning rules [22], [23].
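As an illustration, the scaled Gershgorin computation of the α vector (equation (6) style bound) and the resulting underestimator can be sketched in a few lines of Python. This is a hypothetical NumPy sketch, not the paper's MATLAB/INTLAB implementation; the interval Hessian is entered by hand for a simple two-dimensional example, and `gershgorin_alpha` and `f_under` are illustrative names:

```python
import numpy as np

def gershgorin_alpha(H_lo, H_hi, d=None):
    """alpha_i = max(0, -0.5*(h_ii^l - sum_{j!=i} max(|h_ij^l|,|h_ij^u|) d_j/d_i))."""
    n = H_lo.shape[0]
    d = np.ones(n) if d is None else np.asarray(d, float)
    alpha = np.zeros(n)
    for i in range(n):
        radius = sum(max(abs(H_lo[i, j]), abs(H_hi[i, j])) * d[j] / d[i]
                     for j in range(n) if j != i)
        alpha[i] = max(0.0, -0.5 * (H_lo[i, i] - radius))
    return alpha

# Example: f(x) = x1^4 - 3 x1^2 + x2^2 on the box [-1, 1] x [-1, 1].
f = lambda x: x[0]**4 - 3*x[0]**2 + x[1]**2
xl, xu = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
# Interval Hessian over the box: h11 = 12 x1^2 - 6 in [-6, 6], h22 = 2, off-diagonals 0.
H_lo = np.array([[-6.0, 0.0], [0.0, 2.0]])
H_hi = np.array([[ 6.0, 0.0], [0.0, 2.0]])
alpha = gershgorin_alpha(H_lo, H_hi)

def f_under(x):
    """alpha-underestimator f_l(x) = f(x) + sum alpha_i (x_i^l - x_i)(x_i^u - x_i)."""
    return f(x) + np.sum(alpha * (xl - x) * (xu - x))
```

For this example the routine yields α = (3, 0), and the underestimator becomes x1^4 + x2^2 - 3, which is convex on the box and lies below f everywhere on it.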
While domain reduction techniques are used selectively depending on the problem being solved, interval analysis based pruning rules can routinely be used to supplement the classical fathoming mechanism (see section II-B). Algorithm 1 describes the pseudo-code of a typical αBB procedure.
The minimum and maximum iteration limits to complete the branch and bound search are functions of the problem dimension (n), the size of the solution space, the tightness of the underestimator (α), and the required tolerance level (ε) [18], exhibiting linearithmic (O(n log n)) and exponential (O(n^n)) time and space complexity in the best- and worst-case scenarios respectively.

Algorithm 1: A Typical αBB Procedure
    Initialise the node list with X_0 and set the best upper bound BUB = +∞;
    while the node list is not empty and BUB − min_j LB_j > ε do
        Select the node X_j with the smallest lower bound (best-first);
        Compute the interval lower bound f_j of f over X_j;
        if f_j > BUB then
            Fathom node X_j and move to next iteration;
        end
        if f is monotonous or non-convex over X_j then
            Reduce X_j and move to next iteration;
        end
        Solve for ub_j = local_solver(f(x)), x ∈ X_j, and update BUB = min(BUB, ub_j);
        Compute α and set f_l(x) = f(x) + CT(α, x);
        Solve for lb_j = local_solver(f_l(x)), x ∈ X_j;
        if lb_j > BUB then
            Fathom node X_j and move to next iteration;
        end
        Update node lower bound: LB_j = max(f_j, lb_j);
        Branch X_j and add the subnodes to the node list;
    end

B. PRUNING RULES

2) MONOTONICITY TEST
If f is strictly monotonous along a dimension over a given box V ⊂ X, the box V should be reduced to an edge piece on the monotonous dimension [23]; it does not contain the global optimum except possibly on that edge piece.
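To make the control flow of the branch and bound procedure concrete, the following is a deliberately simplified one-dimensional Python sketch that keeps only the best-first node selection, the bound test and branching. It uses a zero-order (interval) lower bound and a midpoint upper bound instead of the αBB underestimator and local solvers, so it illustrates the skeleton rather than the authors' implementation; `pow_bounds`, `f_lower` and `interval_bb` are illustrative names:

```python
import heapq

def pow_bounds(l, u, k):
    """Range of x**k (k even) over [l, u] via interval analysis."""
    vals = (l**k, u**k)
    lo = 0.0 if l <= 0.0 <= u else min(vals)
    return lo, max(vals)

def f(x):
    # f(x) = x^4 - 3 x^2, global minimum -2.25 at x = +/- sqrt(1.5)
    return x**4 - 3*x**2

def f_lower(l, u):
    """Zero-order interval lower bound of f over [l, u]."""
    q4_lo, _ = pow_bounds(l, u, 4)
    _, q2_hi = pow_bounds(l, u, 2)
    return q4_lo - 3*q2_hi

def interval_bb(l, u, eps=1e-6):
    BUB = f(0.5*(l + u))               # incumbent (best upper bound)
    heap = [(f_lower(l, u), l, u)]     # best-first on node lower bound
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > BUB - eps:             # bound test: remaining nodes cannot improve
            break
        m = 0.5*(a + b)
        BUB = min(BUB, f(m))           # upper bound from a sample point
        for (c, d) in ((a, m), (m, b)):   # branch into two subnodes
            nlb = f_lower(c, d)
            if nlb <= BUB - eps:       # fathom subnodes with lb > BUB - eps
                heapq.heappush(heap, (nlb, c, d))
    return BUB
```

Running `interval_bb(-2.0, 2.0)` converges to the global minimum -2.25 within the tolerance, mirroring the fathoming and best-first logic of Algorithm 1.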

3) NON-CONVEXITY TEST
''If f is concave over a given box V ⊂ X, the box V does not contain the global optimum except possibly where the edges of f in X touch the edges of V. Concavity can be tested by the sign of the diagonal elements of the interval Hessian matrix of f in V'': if the upper bound h_ii^u of a diagonal element is negative, f is concave along dimension i over V. The box V can then be pruned or reduced to its left or right edge piece on the concave dimension [23]. The following representation, extracted from [22], summarises the pruning rules in box-constrained global optimisation: I_1 relates to the monotonicity test, I_2 refers to the concavity test, and I_3 relates to the bound test.
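Assuming an interval gradient and the diagonal of an interval Hessian are available for a box, the monotonicity and concavity tests above reduce to simple sign checks. The helper names below are hypothetical, intended only to show the logic:

```python
def monotonicity_reduce(grad_lo, grad_hi, box):
    """If the interval gradient excludes 0 in dimension i (f strictly monotonous),
    the box interior cannot hold the minimiser: keep only the minimising edge."""
    new_box = []
    for (gl, gh), (l, u) in zip(zip(grad_lo, grad_hi), box):
        if gl > 0:
            new_box.append((l, l))   # f increasing: minimum on the left edge
        elif gh < 0:
            new_box.append((u, u))   # f decreasing: minimum on the right edge
        else:
            new_box.append((l, u))   # gradient interval contains 0: keep the box
    return new_box

def concavity_prunable(hess_diag_hi):
    """Concavity test: if the upper bound of some diagonal interval Hessian
    element is strictly negative, f is concave along that dimension and the
    box interior cannot contain the minimiser."""
    return any(h < 0 for h in hess_diag_hi)
```

For example, f(x) = -x^2 on [1, 2] has interval gradient [-4, -2] and interval Hessian diagonal [-2, -2], so the box is reduced to its right edge and flagged as prunable by the concavity test.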

III. PARTICLE SWARM OPTIMISATION
Particle swarm optimisation is a population-based optimisation algorithm inspired by the collaborative foraging of flocks of birds. PSO employs an interactive random search for the global optimum based on swarm intelligence. A set of randomly generated potential solutions is created, and each individual (particle) iteratively improves its position based on its own experience (cognitive learning) and on the experience of other individuals in the swarm (social learning). The motion of each particle in the search space is thus dictated by a collaborative stochastic search direction:

    v_j^{k+1} = w^k v_j^k + c_1 r_1^k (p_j^k - x_j^k) + c_2 r_2^k (g_{(j)}^k - x_j^k),
    x_j^{k+1} = x_j^k + v_j^{k+1},

where v_j^{k+1} is the search direction of a given particle for the next iteration, which is a function of its current velocity v_j^k, its best location thus far p_j^k (cognitive learning) and the best location g_{(j)}^k (social learning), taken either in the neighbourhood of the particle (g_j^k) or within the whole swarm (g^k). When g_{(j)}^k represents the best particle location in a given neighbourhood, the algorithm is referred to as local, and it is referred to as global when g_{(j)}^k represents the best candidate in the whole population [24]. The latter version is used in this study as our computational experiments suggested better time efficiency, in line with the focus of the study. The parameters c_1 and c_2 represent acceleration parameters for the cognitive and social learning components. r_1^k, r_2^k ∈ [0, 1] are uniformly distributed random numbers that simulate the stochastic behaviour of the swarm. Figure 2 illustrates geometrically how the motion of each particle within a swarm is dictated.

A. PARAMETER TUNING
Convergence speed and exploration are primarily dictated by the cognitive and social parameters c_1 and c_2 as well as by the inertial weight w^k. c_1 and c_2 decide the speed of each particle as well as the algorithm's bias towards either exploitation (fast convergence) or exploration. If c_1 = c_2 > 0, the algorithm is balanced and the particles are attracted towards the average of p_j^k and g^k. When c_1 > c_2, the algorithm gives preference to exploration over quick convergence, whereas when c_1 < c_2, the algorithm favours exploitation over exploration. Smaller values of c_1 and c_2 lead to smooth particle trajectories during the search, while larger values lead to abrupt motion with more acceleration [26]. The original publication proposes c_1 = c_2 = 2, and these settings are typically used in the literature [27]. The inertial parameter w^k defines how willing a given particle is to maintain its current direction. It also contributes to dictating the bias towards exploration or exploitation: the higher the inertial parameter, the more exploratory the search. In the original publication, the inertial parameter was set to 1 (w^k = w = 1). However, a more adaptive variation of w^k is accepted in recent literature [26], [28] that favours a high inertial weight value at the beginning of the search, which progressively drops over iterations:

    w^k = w_max - (w_max - w_min) k / K,    (16)

where K is the maximum iteration count. This methodology favours exploration at the beginning of the search and exploitation at the end, eventually narrowing the search down to the area containing the best fitness to explore it in detail. The values w_max = 0.9 and w_min = 0.4 are typically used in the literature. Apart from the aforementioned parameters, three additional parameters must be set. The population size of the algorithm also determines its exploration capability; a population size of 20-50 particles is routinely used in the literature and deemed satisfactory.
The velocity bounds of the particles are often specified as a range [-v_max, v_max] that limits the velocity of the particles. These velocity bounds are typically of the type [27], [29], [30]:

    v_{max,i} = (x_i^u - x_i^l) / I_i,

where I_i is the scaling factor of the maximum velocity in the i-th dimension selected by the user. I_i has been set to 2 in the current study, yielding a smooth swarm behaviour. Also, the position of the particle (x_j^{k+1}) is typically bounded not to exceed the problem dimension bounds.

B. STOPPING CRITERIA
Besides the use of a maximum iteration count, several heuristics have been proposed in the literature that can stop the algorithm early when the state of the swarm no longer leads to reasonable improvement [30], which saves computation time. Improvement-based stopping criteria terminate the search when the improvement of the objective function (i.e. f(g^k) or µ(f(p_j^k))) is not significant for a number of iterations. Movement-based stopping criteria monitor the movement of particles for a given number of iterations; if the particle positions do not vary appreciably over that window, the search is halted. A gbest-based improvement criterion is used in this work as it is computationally efficient and is a satisfactory heuristic stopping criterion consistent with the focus of the study. A detailed survey on the topic can be found in [30]. Algorithm 2 presents the typical pseudo-code of the PSO search mechanism.
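The PSO mechanism described in this section (global-best topology, linearly decreasing inertia weight, velocity clamping and a gbest-improvement stall stop) can be sketched as follows. This is an illustrative NumPy implementation under the parameter choices discussed above, not the code used in the experiments:

```python
import numpy as np

def pso(f, xl, xu, n_particles=30, max_iter=200, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, stall_iters=20, f_tol=1e-3, seed=0):
    """Global-best PSO sketch with adaptive inertia and a gbest stall stop."""
    rng = np.random.default_rng(seed)
    xl, xu = np.asarray(xl, float), np.asarray(xu, float)
    n = xl.size
    v_max = (xu - xl) / 2.0                       # velocity bound with I_i = 2
    x = rng.uniform(xl, xu, size=(n_particles, n))
    v = rng.uniform(-v_max, v_max, size=(n_particles, n))
    p, p_val = x.copy(), np.apply_along_axis(f, 1, x)
    g, g_val = p[np.argmin(p_val)].copy(), p_val.min()
    stall = 0
    for k in range(max_iter):
        w = w_max - (w_max - w_min) * k / max_iter    # linearly decreasing inertia
        r1 = rng.random((n_particles, n))
        r2 = rng.random((n_particles, n))
        v = w*v + c1*r1*(p - x) + c2*r2*(g - x)
        v = np.clip(v, -v_max, v_max)                 # velocity clamping
        x = np.clip(x + v, xl, xu)                    # keep particles in the box
        vals = np.apply_along_axis(f, 1, x)
        better = vals < p_val
        p[better], p_val[better] = x[better], vals[better]
        new_g_val = p_val.min()
        stall = stall + 1 if g_val - new_g_val < f_tol else 0
        if new_g_val < g_val:
            g_val, g = new_g_val, p[np.argmin(p_val)].copy()
        if stall >= stall_iters:                      # gbest-based heuristic stop
            break
    return g, g_val
```

On a simple 2-D sphere function the sketch converges to a near-zero fitness well before the iteration limit, typically exiting via the stall criterion.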

IV. HYBRIDISATION
In view of the current hybridisation, PSO is used as an intelligent scatter search that favours an early discovery of the global optimum owing to its exploration capabilities, which could lead to early pruning of substandard regions in the BB-procedure, fewer node explorations and thus quicker convergence. It is well established in the literature that a best-first search strategy (A* algorithm) in branch and bound procedures leads to fewer node explorations and fast convergence [8], [31]. The rationale of this idea is that a branch and bound procedure should first focus on regions estimated to contain the global optimum because, given tight bounds, sub-optimal regions will subsequently be pruned and not explored. This philosophy is often the rationale behind the common use of the best-first node selection strategy in BB-frameworks compared to other node selection approaches [8]. However, it does not suffice to estimate the region that most likely contains the global optimum; the use of a local search in such a region could still lead to additional branching if the solver is trapped in a local optimum (i.e. a poor upper bound) while the lower bounding utility is rather tight. A more explorative search for the true optimum increases the likelihood of exploring fewer nodes and eventually speeds up convergence of the BB-procedure. Moreover, the use of PSO as an upper bound solver in the BB-procedure increases the likelihood that, upon early termination of the branch and bound process due to a maximum iteration limit, a much-improved solution over the classical methodology can be obtained.

Algorithm 2: Typical PSO Procedure
    Set swarm size N, max_iter K and k = 0;
    Randomly generate initial population: x_j ∈ X_0;
    Randomly generate initial velocities: v_j ∈ V;
    Set p_j^k for every particle to x_j^k;
    Compute the swarm initial best point (g^k, BUB);
    while k < K and heuristic stop not reached do
        for j = 1:N do
            Update velocity v_j^{k+1} and position x_j^{k+1};
            Evaluate f(x_j^{k+1}) and update the particle best p_j^{k+1};
        end
        Update the swarm best (g^{k+1}, BUB) and set k = k + 1;
    end
From a PSO point of view, this hybridisation could guide the PSO solution search towards the true optimum owing to the αBB region partitioning scheme and node selection strategy, as well as provide a deterministic stopping criterion that informs the algorithm that the true optimum has been obtained, a feature that PSO alone cannot guarantee. Thus, the hybrid algorithm can also be used as a heuristic search algorithm that uses the combined search capabilities of PSO and αBB to obtain an improved solution. Several PSO-BB hybridisation approaches have been proposed in the literature, yielding improved optimisation performance. [32] proposes a PSO-BB hybridisation for solving integer separable concave programming problems, using PSO as an upper bound solver for discrete problems together with linear relaxation, which improved the convergence speed of the BB-process for several problems. [33] proposes a generic PSO-BB hybridisation for mixed discrete nonlinear programming using an SQP-based nonlinear branch and bound framework as alternating hybrids, obtaining better solution accuracy than both PSO and the SQP-based nonlinear branch and bound framework controlled by a maximum iteration count. In that methodology, the BB-procedure initialises an incumbent solution which PSO uses as gbest and explores further, passing it back to the BB-procedure once it finds a better solution, and the cycle repeats. The approach yielded better computational efficiency and solution accuracy than both PSO and the branch and bound framework, although it could not guarantee global optimality for non-convex problems [34] and thus could be outperformed by other competing methods. [35] proposes a BB-PSO hybridisation for box-constrained NLPs using a Lipschitzian, problem-specific lower bounding scheme in which a branch and bound framework leads PSO to global optimality.
Although that framework operates in similar settings to the current work, it did not focus on computational efficiency, nor did it elaborate a generic approach to convex relaxation that requires no knowledge of the problem space or offers tight bounds. The study focused on leading PSO to global optimality, with no emphasis on the computational efficiency of the mix, and used a much looser convex relaxation approach than the scheme used in this study. [36], [37] propose hybridisation of conformational simulated annealing (CSA) with αBB as alternating hybrids to solve a highly multimodal protein structure prediction problem, a box-constrained optimisation problem, resulting in a substantial reduction in time. In a philosophy similar to [33], the upper bound solution of an αBB iteration seeds CSA in its evolutionary strategy to aggregate much better candidate solutions, upon which the next αBB iteration takes place, and the cycle repeats. However, the computational cost of conformational simulated annealing [38] is not appropriate for the type of hybridisation used in this study, on top of the fact that a different methodology is used in the current work. Further comparative studies can be done on alternating and integrated hybrids. The current study extends the work of [35] and proceeds with the hybridisation of PSO with αBB, a problem-agnostic, generic branch and bound framework for twice differentiable problems with guaranteed ε-global optimality verification, in which PSO substitutes for the upper bound solver of classical αBB, inclusive of additional optimisation, in a bid to improve the computation time performance of αBB as a unit as well as to guarantee global convergence of PSO. Also, the current study relies on interval analysis as an improvement over α-convex relaxation in order to support pruning of sub-optimal regions and yield an early stop of αBB.
Finally, to alleviate the local convergence shortcoming of PSO that may fail to converge to an optimum even in its vicinity [39], the final gbest at the end of each PSO search is used as an initial value to a local solver (i.e. SQP) taking advantage of gradient information. Figure 3 describes the flowchart of the hybrid algorithm.
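A minimal sketch of this final polishing step, using SciPy's SLSQP as a stand-in for the MATLAB SQP solver used in the paper (`polish_gbest` is a hypothetical helper name, and the result is kept only if it improves on gbest):

```python
import numpy as np
from scipy.optimize import minimize

def polish_gbest(f, gbest, xl, xu):
    """Refine the final PSO gbest with a gradient-based local solver
    (SLSQP here, as a stand-in for the paper's SQP step), within the box."""
    res = minimize(f, gbest, method="SLSQP", bounds=list(zip(xl, xu)))
    if res.success and res.fun < f(gbest):
        return res.x, res.fun        # polished point improves on gbest
    return np.asarray(gbest, float), f(gbest)
```

For a gbest already in the basin of the optimum, the local solver typically closes the remaining gap in a handful of gradient steps, which PSO alone may take many iterations to do.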

V. COMPUTATIONAL EXPERIMENTS
To compare the performance of both αBB configurations, C^2 class functions from the CEC 2017/2019 test sets and n-dimensional classical functions from an extensive literature survey were used [40]-[42] (see Table 1). The computational experiments aimed to assess the convergence speed of αBB against the hybrid algorithm (αBB-PSO) when used as a complete search algorithm (i.e. no maximum iteration count), whereby test functions were set to dimensions 2 and 3 respectively. Also, the solution accuracy of the hybrid algorithm when used as a heuristic search (i.e. with a maximum iteration count) was compared against PSO for the same number of function evaluations using test functions at dimension 10. Table 2 presents a summary of the configuration of the computational study. αBB was configured with a best-first search strategy using node lower bounds and a global optimum tolerance level of ε = 10^-3. All pruning rules mentioned in section II-B were used. Branching was performed on the dimension with the longest size at every node. In the complete search configuration, PSO was set with 30 particles and a maximum iteration limit of 50 for dimension 2 and 100 for dimension 3. A maximum stall iteration count of 20 with a function improvement tolerance of 10^-3 was set for the heuristic stop. In the heuristic search configuration, PSO was set with a maximum iteration count of 10000 and no heuristic stop, compared against the hybrid algorithm with a maximum iteration count of 50 and a PSO maximum iteration count of 200, to yield the same total number of function evaluations (i.e. 10000 per particle). An adaptive inertial weight for the PSO procedure was used in all configurations according to equation 16. The cognitive and social parameters were set to c_1 = c_2 = 2 in line with the original publication and typical implementations [27]. The experiments were performed on a general-purpose computer: i3-core processor, 64-bit @ 2.0 GHz, 8 GB RAM.
Interval analysis was performed using the third-party MATLAB package INTLAB (version 11) designed by [43]. The MATLAB (version 9.6.0) SQP-based NLP solver was used in this study for all local searches. In the complete search configuration, performance profiles [44] measured in terms of median CPU time were used to compare the convergence speed of the hybrid algorithm against classical αBB, assessing their computational efficiency first in reaching the global optimum and second in completing the search across all test functions (see Figures 4 and 5). These results were supplemented by the median CPU time performance of each function presented in Tables 3 and 4. In addition, the average iteration cost of the BB-procedure in each configuration was recorded to assess the computational cost distribution of each sub-component within the branch and bound frameworks. In the heuristic configuration, the solution accuracy of PSO was compared with that of the hybrid algorithm for the same number of function evaluations using a performance profile in terms of a solution accuracy metric [45] and using the median final fitness values of each solver across all functions (Table 5). Finally, the bound tightness performances of both interval analysis and α-convex relaxation were recorded to assess the contribution of each lower bounding scheme to quickening convergence (see Table 6). Median scores were admitted unequal only if the difference was statistically significant as per the Mann-Whitney U test; a significance level of 0.05 was used for the hypothesis tests (H_0: µ_1 = µ_2). All performance metrics were obtained from an average of fifty optimisation runs.

A. DISCUSSION AND RECOMMENDATIONS 1) ON THE CONVERGENCE SPEED TO GLOBAL OPTIMUM REACH
The performance profile in Figure 4 compares the convergence speed of each solver in reaching the global optimum. The results in the figure show that the hybrid algorithm is much more efficient at finding the global optimum than the classical algorithm, with a win probability (τ = 1) of 85% and a convergence improvement of up to 36 fold. The results in Table 3 substantiate this claim, as most test functions reach the global optimum earlier in the hybrid algorithm, taking fewer branching iterations. These results are consistent with the rationale of the study of early global minimum reach through PSO usage and concur with the theoretical rationales that support the superiority of meta-heuristic searches over local search techniques in terms of search capabilities. Figure 5 presents the performance profile that compares the overall convergence speed of both αBB configurations. The results in the figure show that the hybrid algorithm has an overall better efficiency than the classical algorithm, with a win probability of 60% and an overall convergence speed improvement of up to 38 fold. In practical terms, the hybrid algorithm can quicken global convergence on a case-by-case basis by up to several orders of magnitude compared to the conventional algorithm. This improvement in convergence speed is correlated with a reduction in the number of branching iterations, consistent with the rationale of the study (see Table 4). The results in Table 4 show that the hybrid algorithm reduces the number of branching iterations for 46% of the test problems with statistical significance and thus is a more robust solver.
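The win probabilities quoted here are read off performance profiles at τ = 1. Assuming the standard Dolan-Moré construction for [44], a profile can be computed from a problems-by-solvers score matrix as follows (illustrative sketch; `performance_profile` is a hypothetical name):

```python
import numpy as np

def performance_profile(scores, taus):
    """scores: (n_problems, n_solvers) positive values (e.g. median CPU time).
    Returns rho[t, s]: fraction of problems on which solver s is within a
    factor taus[t] of the best solver on that problem (Dolan-More profile)."""
    scores = np.asarray(scores, float)
    ratios = scores / scores.min(axis=1, keepdims=True)   # per-problem ratios
    return np.array([[float(np.mean(ratios[:, s] <= tau))
                      for s in range(scores.shape[1])]
                     for tau in taus])
```

The row at τ = 1 gives each solver's win probability; as τ grows, the profile tends to 1 for every solver that eventually solves all problems.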

2) ON THE OVERALL CONVERGENCE SPEED
On the other hand, the hybrid algorithm did not reduce the overall execution time for several other profiles, yielding some increase in computational time. Additional calibration work should be performed to optimise the computational cost of PSO as an upper bound solver so that the overhead is negligible when no reduction in the number of branching iterations occurs. This calibration could further improve the performance profile of the hybrid algorithm. Finally, a gap can be observed between the performance profile of the hybrid algorithm to global optimum reach and its true overall performance profile. More investigations should be conducted on tighter lower bounding schemes that could further unleash the potential of the hybrid algorithm (Figure 4).

3) ON THE COMPUTATIONAL COST DISTRIBUTION OF BB-COMPONENTS
The results in Figure 6 present the average computational cost distribution of each BB-component across both configurations, recorded for test problems at dimensions 2 and 3. PSO as an upper bound solver is more expensive than the classical SQP-based local search of the αBB configuration, yielding additional computational overhead that is responsible for its improved search capabilities but is also susceptible to deteriorating computational efficiency. As mentioned in section V-A2, this computational overhead can be further minimised or eliminated by calibration of the PSO parameters (i.e. heuristic stop parameters, number of particles, maximum iteration count) or potentially by looking at the computational contribution of the SQP fine-tuning mechanism. The lower bound optimisation (Lb Solver) was the third-largest contributor to the BB-iteration cost. The computational cost of the node selection process and auxiliary algorithmic routines was negligible across all configurations.

4) ON THE SOLUTION ACCURACY OF THE αBB-PSO HEURISTIC CONFIGURATION
The performance profile in Figure 7 and the results in Table 5 assess the capabilities of the hybrid algorithm as a heuristic search (i.e. with a maximum iteration limit) against PSO in terms of solution accuracy. This experiment was performed on equal grounds, guaranteeing that both PSO and the hybrid performed an equal number of function evaluations (i.e. 10000 per particle), and the results of both solvers were subsequently fine-tuned by an SQP solver. It can be observed that the hybrid algorithm outperforms PSO in terms of solution accuracy, yielding a performance profile with a win probability of 100% and an improvement in solution accuracy in excess of 100 fold. In addition, the hybrid algorithm could guarantee a complete search in several instances (F_3, F_6, SF_38, SF_43, SF_44 and SF_154), thus ending the search with confidence of exhaustive exploration. The above results exemplify the benefits of combining the search capability of the branch and bound framework with that of PSO, in the same vein as [35]. On the other hand, PSO presented a better time efficiency than the hybrid algorithm. To further improve the computational efficiency of the hybrid algorithm, interval analysis could be used as the sole lower bounding scheme because, statistically, it yields much tighter bounds and is more computationally efficient than α-convex relaxation (Figure 6 and Table 6).

5) ON THE CONTRIBUTION OF INTERVAL ANALYSIS TO REGION PRUNING
The results in Table 6 show a comparison of bound tightness between interval analysis and α-convex relaxation over a series of fifty optimisation runs, counting the number of times interval analysis attained a tighter lower bound than α-convex relaxation and vice-versa. It also evaluated the gap between the two underestimators in each iteration during the BB-procedures. The results show that interval analysis was much tighter than α-convex relaxation on most lower bounding routines for most test profiles (except SF_144). In most test profiles, interval analysis totally outperformed α-convex relaxation in finding tight bounds. Also, in instances where α-convex relaxation was unable to find a finite lower bound (i.e. lb_i = -∞), interval analysis supplemented the shortcoming. Hence, interval analysis was able to anticipate tighter bounds much earlier than α-convex relaxation, which eventually led to early pruning and quickened the overall convergence of the hybrid algorithm.
However, it should also be noted that interval analysis does not always lead to tight bounds: in many instances it leads to overestimation caused by the dependency problem, which relates to the inner structure of the problem's algebraic expression [20]. This can also explain why the hybrid algorithm did not manage to quicken convergence for several other test profiles, albeit reaching the global optimum earlier in most cases (see Table 3). Further study should be performed towards the improvement of classical interval analysis or the implementation of other interval analysis approaches reported in the literature, such as Taylor model-based interval analysis [46], [47] or affine interval analysis [48]. Such an improvement in accuracy will not only improve lower bounding by interval analysis, but will also ameliorate the tightness of α-convex relaxation, which typically depends on interval analysis (i.e. the interval Hessian matrix). In addition, possible restructuring of the problem's algebraic expression [49], [50] can also contribute to improving the bound tightness of classical interval analysis and α-convex relaxation.
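The dependency problem and the benefit of restructuring an expression are easy to demonstrate with a toy interval arithmetic: for x ∈ [0, 1], the natural extension of x - x^2 evaluates x twice and yields [-1, 1], while the algebraically equivalent x(1 - x) yields [0, 1], much closer to the true range [0, 0.25]. A minimal sketch (interval helpers `i_sub`, `i_mul`, `i_sqr` are illustrative names):

```python
def i_sub(a, b):
    """[a] - [b] = [a_l - b_u, a_u - b_l]."""
    return (a[0] - b[1], a[1] - b[0])

def i_mul(a, b):
    """[a] * [b]: min and max over the four endpoint products."""
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def i_sqr(a):
    """[a]^2, tighter than i_mul(a, a) when the interval straddles zero."""
    lo = 0.0 if a[0] <= 0.0 <= a[1] else min(a[0]**2, a[1]**2)
    return (lo, max(a[0]**2, a[1]**2))

x = (0.0, 1.0)
naive = i_sub(x, i_sqr(x))                   # x - x^2: x appears twice
factored = i_mul(x, i_sub((1.0, 1.0), x))    # x * (1 - x): restructured form
```

Here `naive` evaluates to (-1.0, 1.0) while `factored` evaluates to (0.0, 1.0), illustrating why expression restructuring [49], [50] can tighten classical interval bounds.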

VI. CONCLUSION
The current study has proposed the hybridisation of particle swarm optimisation with αBB in a bid to improve the convergence speed of the branch and bound framework on the one hand, and to obtain a better heuristic solver on the other. It has shown an improvement in the convergence speed of the branch and bound framework on several test profiles owing to early solution discovery by PSO and to the availability of sharp lower bounds supplemented by interval analysis. Also, it has shown a drastic improvement in solution accuracy when PSO is used via the branch and bound framework, combining the search capabilities of both devices. Interval analysis has been decisive in the proposed algorithm, yielding much tighter bounds than α-convex relaxation at a better computational efficiency, hinting that it could be used as a substitute to further improve computational efficiency. More investigations should be performed on tighter bounding schemes in interval analysis, which would subsequently boost the convergence speed of the hybrid algorithm. Additional calibration work should be conducted to improve the computational cost of PSO as an upper bound solver in order to minimise computational overhead and ameliorate the efficiency of the hybrid algorithm.
YVES MATANGA received the B.Tech. degree in electrical engineering (specialization in electronics) from the Tshwane University of Technology, Pretoria, South Africa, in 2014, the M.Sc. degree in electrical and electronic systems from ESIEE, France, and the M.Tech. degree in electrical engineering from the Tshwane University of Technology, in 2018. He is currently pursuing the Ph.D. degree in electrical and electronic engineering with the University of Johannesburg, South Africa, working on convergence improvement of deterministic and globally convergent optimization algorithms with applications to control systems. He has previously published in Elsevier and the IEEE Africon Conference. His research interests include dynamic systems and control, optimization, artificial intelligence, and signal and image processing.
YANXIA SUN (Member, IEEE) received the joint D.Tech. degree in electrical engineering from the Tshwane University of Technology, South Africa, and the Ph.D. degree in computer science from University Paris-EST, France, in 2012. She is currently working as a Professor with the Department of Electrical and Electronic Engineering Science, University of Johannesburg, South Africa. She has more than 15 years of teaching and research experience. She has lectured five courses in the universities. She has supervised or co-supervised six postgraduate projects to completion. She has published 110 articles, including 35 ISI master indexed journal articles. She is the investigator or co-investigator for six research projects. Her research interests include renewable energy, evolutionary optimization, neural networks, nonlinear dynamics, and control systems. She is a member of the South African Young Academy of Science (SAYAS).
ZENGHUI WANG (Member, IEEE) received the B.Eng. degree in automation from the Naval Aviation Engineering Academy, China, in 2002, and the Ph.D. degree in control theory and control engineering from Nankai University, China, in 2007. He is currently a Professor with the Department of Electrical and Mining Engineering, University of South Africa (UNISA), South Africa. His research interests include industry 4.0, control theory and control engineering, engineering optimization, image/video processing, artificial intelligence, and chaos.