A Bi-Objective Knowledge Transfer Framework for Evolutionary Many-Task Optimization

Many-task optimization problem (MaTOP) is a kind of challenging multitask optimization problem with more than three tasks. Two significant issues in solving MaTOPs are measuring intertask similarity and transferring knowledge among similar tasks. However, most existing algorithms only use a single similarity measurement, which cannot accurately measure the intertask similarity because the intertask similarity is a concept with multiple different aspects. To address this limitation, this article proposes a bi-objective knowledge transfer (BoKT) framework, which aims first to accurately measure different types of intertask similarity using two different measurements and second to effectively transfer knowledge with different types of similarity via specific strategies. To achieve the first goal, a bi-objective measurement is designed to measure intertask similarity from two different aspects, including shape similarity and domain similarity. To achieve the second goal, a similarity-based adaptive knowledge transfer strategy is designed to choose the suitable knowledge transfer strategy based on the type of intertask similarity. We compare the BoKT framework-based algorithms with several state-of-the-art algorithms on two challenging many-task optimization test suites with 16 instances and on real-world MaTOPs with up to 500 tasks. The experimental results show that the proposed algorithms generally outperform the compared algorithms.

Yi Jiang, Student Member, IEEE, Zhi-Hui Zhan, Senior Member, IEEE, Kay Chen Tan, Fellow, IEEE, and Jun Zhang, Fellow, IEEE

Index Terms—Bi-objective, evolutionary computation, evolutionary many-task optimization (EMaTO), evolutionary multitask optimization (EMTO), knowledge transfer.
In recent years, research into applying EAs to solve optimization problems that contain multiple similar or related tasks has become a popular topic [8]. Herein, a task also denotes an optimization problem. These are multitask optimization problems (MTOPs), which can be solved more efficiently by transferring common knowledge among similar or related tasks [9], [10], [11]. The evolutionary multitask optimization (EMTO) research paradigm has attracted considerable attention [12], [13], [14], [15], [16]. These works have shown that sharing common knowledge may considerably enhance the efficiency of EAs in solving MTOPs. Therefore, many new EMTO algorithms with knowledge transfer strategies [17], [18], [19], [20] have recently been proposed, showing promising results on MTOPs with two or three tasks.
However, when faced with many-task optimization problems (MaTOPs), which require simultaneously dealing with more than three tasks, existing EMTO algorithms still perform poorly and yield unsatisfactory results [21]. Knowledge transfer is more uncertain and challenging in evolutionary many-task optimization (EMaTO) [22]. This is mainly because only a few tasks are similar to the current task, while the others are either nonsimilar or unrelated to the current task [22]. For this reason, several EMaTO algorithms have been proposed to effectively transfer knowledge across multiple tasks. For example, Liaw and Ting [23] designed an evolution of biocoenosis through symbiosis (EBS) framework to gather and share knowledge from all of the tasks, regardless of the similarity among tasks. However, since it is difficult to extract common knowledge from two unrelated tasks, transferring knowledge between the current task and an unrelated task may be ineffective or even harmful for solving the current task in an MaTOP [21], [22]. An essential issue in EMaTO is how to measure the similarity of these many tasks so that the current task can be assisted by useful knowledge from similar tasks. Therefore, intertask similarity measurements have been adopted in several algorithms to select similar tasks for better knowledge transfer. These methods have shown promise in solving MaTOPs because knowledge transfer between similar tasks is more efficient. For example, Chen et al. [21] used the Kullback-Leibler divergence (KLD) as the similarity measurement and proposed an archive-based evolutionary framework to transfer knowledge among similar tasks. Liang et al.
[22] utilized the maximum mean discrepancy to measure intertask similarity and proposed an EMaTO framework based on multisource knowledge transfer (EMaTO-MKT). Although transferring knowledge from similar tasks considerably enhances the efficiency of knowledge
transfer, most of the existing EMaTO algorithms only use a single measurement to measure intertask similarity, to the best of our knowledge. However, there are multiple different types of intertask similarity, such as shape similarity and domain similarity (discussed in Section II-C), which cannot be accurately measured by only a single measurement. Measuring intertask similarity from different aspects (measurements) may be more accurate and can increase knowledge transfer efficiency when tackling MaTOPs.
To this end, a bi-objective knowledge transfer (BoKT) framework is proposed in this article for the first time to address the aforementioned research gap in EMaTO. Two goals are achieved in the BoKT framework: one is to accurately measure intertask similarity to find all the similar tasks in different aspects, while the other is to effectively transfer knowledge among tasks via a suitable knowledge transfer strategy. Herein, the first goal deals with the issue of "which knowledge can be transferred" and the second goal deals with the issue of "how to transfer effective knowledge." To achieve the first goal, a bi-objective measurement (BoM) is developed to accurately measure the intertask similarity of each pair of tasks via two different measurements. In BoM, the first objective (i.e., measurement) is shape similarity, which refers to the similarity between the landscape shapes of the two tasks. The second objective is domain similarity, which refers to the similarity between the global optimal domains of the two tasks. Then, the BoKT framework selects the BoM-based nondominated tasks (i.e., those tasks with either good shape similarity or good domain similarity) to build a similar-task pool (STP) for each task. To achieve the second goal, a similarity-based adaptive knowledge transfer (SAKT) strategy is designed to select the most suitable knowledge transfer strategy based on the type of intertask similarity between the assisted task and the current task.
Therefore, BoKT contributes to two significant issues (i.e., similarity measurement and knowledge transfer) in EMaTO. The specific contributions of this article are summarized as follows.
1) BoM is proposed to accurately measure intertask similarity via two measurements: a) shape similarity and b) domain similarity. The shape similarity helps BoKT select tasks with similar landscape shapes, while the domain similarity helps BoKT select tasks with similar global optimal domains.

2) For each task, BoKT creates an STP containing similar tasks with nondominated BoM values. Since the STP contains all the tasks similar to the current task, it is more effective and efficient to transfer knowledge between the current task and an assisted task selected from the STP.

3) The BoKT framework uses the SAKT strategy to select the most suitable knowledge transfer strategy based on the type of intertask similarity for efficiently solving different kinds of tasks. The SAKT strategy combines three knowledge transfer strategies: the intratask strategy, the shape knowledge transfer strategy, and the domain knowledge transfer strategy. The intratask strategy is mainly adopted for tasks that are not similar to the other tasks. The shape knowledge transfer strategy and the domain knowledge transfer strategy are adopted to efficiently transfer knowledge across shape-similar tasks and domain-similar tasks, respectively.

4) The BoKT framework is combined with two well-known EAs: a) genetic algorithm (GA) [24], [25] and b) differential evolution (DE) [26], [27], [28], to propose BoKT-GA and BoKT-DE, respectively. First, we compare the performance of BoKT-GA and BoKT-DE against some state-of-the-art EMaTO algorithms on the CEC19 and WCCI20 MaTOP benchmark test suites to verify their effectiveness and efficiency. Second, to further show the effectiveness of BoKT-GA and BoKT-DE, we conduct experiments on a real-world MaTOP application, the planar kinematic arm control problem [29], with up to 500 tasks. The results on both the benchmark problems and the real-world application indicate that BoKT-GA and BoKT-DE outperform the compared state-of-the-art EMaTO algorithms.

The remainder of this article is organized as follows. First, in Section II, we introduce the related works on EMTO and EMaTO and the motivation of this study. Then, the detailed process of BoKT and two BoKT-based EAs (i.e., BoKT-GA and BoKT-DE) are described in Section III. Afterward, the experimental studies of BoKT are given in Section IV. Finally, we draw the conclusion in Section V.

A. EMTO and EMaTO
Taking inspiration from the parallel processing ability of the human brain, EMTO [8] aims at simultaneously solving multiple optimization tasks in a single algorithm execution. Since intertask relatedness widely exists in real-world optimization tasks, sharing common knowledge among tasks can benefit search performance. An MTOP is an optimization problem with numerous tasks that are mutually either related or unrelated. An MaTOP is a special type of MTOP with more than three tasks. Because an MaTOP has a larger number of tasks than an MTOP, addressing the MaTOP poses more difficulties than solving the MTOP.
Specifically, for an MTOP with K tasks (K is larger than three in an MaTOP), let T_i denote the ith task, where the search space is denoted as R_i and the objective function is denoted as f_i. The goal of the minimization MTOP is to find an optimal solution x_i* for each task T_i that satisfies

x_i* = argmin_{x ∈ R_i} f_i(x), i = 1, 2, ..., K. (1)

In MTOP, since the search spaces of different tasks are usually different, for the convenience of intertask knowledge transfer, the decision variables of different tasks are first mapped into a unified space by

y_i = (x_i − L_i) / (U_i − L_i) (2)

where L_i and U_i are the lower bound and upper bound of the ith task, respectively, and y_i is the variable after mapping x_i to [0, 1]^D. In function evaluation, the variable y_i in the unified space [0, 1]^D is decoded back to the decision space as

x_i = L_i + y_i × (U_i − L_i). (3)
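The mapping into the unified space and the decoding back can be sketched in a few lines of Python (a minimal illustration with our own function names, not code from the article):

```python
def encode(x, L, U):
    """Map a solution x from its task-specific bounds [L, U] into the unified space [0, 1]^D."""
    return [(xi - lo) / (hi - lo) for xi, lo, hi in zip(x, L, U)]

def decode(y, L, U):
    """Map a unified-space point y back into the task's decision space for evaluation."""
    return [lo + yi * (hi - lo) for yi, lo, hi in zip(y, L, U)]

# A 2-D task with bounds [-5, 5] x [0, 10]:
y = encode([0.0, 5.0], [-5.0, 0.0], [5.0, 10.0])   # -> [0.5, 0.5]
x = decode(y, [-5.0, 0.0], [5.0, 10.0])            # -> [0.0, 5.0]
```

Decoding is the exact inverse of encoding, so solutions can move freely between the unified space (where transfer happens) and each task's own space (where evaluation happens).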

B. Related Works
Many EA variants have recently been proposed to address MTOPs. By sharing common knowledge across tasks, the majority of existing EMTO algorithms generally perform better than traditional EAs on single tasks. Gupta et al. [16] proposed the multifactorial EA (MFEA), which was the first efficient EMTO algorithm for concurrently solving multiple tasks. Each individual in MFEA is assigned a skill factor, allowing individuals to be assigned to different tasks. Crossover between two parents belonging to different tasks is used to realize intertask knowledge transfer. Compared with traditional EAs, the performance of MFEA is more promising.
Following the success of MFEA, various EMTO algorithms have been proposed to tackle multiple tasks simultaneously. These algorithms can be classified into two categories: EMTO algorithms with a single population and EMTO algorithms with multiple populations.
The EMTO algorithms with a single population put all the individuals corresponding to different tasks into one population and perform knowledge transfer among these individuals. For example, based on MFEA, Bali et al. [20] presented MFEA-II, which improved knowledge transfer efficiency by estimating the optimal parameters. Feng et al. [30] were the first to combine the MFEA framework with particle swarm optimization and DE, resulting in new promising EMTO algorithms. Zhou et al. [31] noticed that different crossover operators have different properties in knowledge transfer and proposed an adaptive knowledge transfer MFEA. Different from the EMTO algorithms with a single fixed crossover operator, the adaptive knowledge transfer MFEA is more effective because it adaptively selects the fittest crossover operator from several candidate operators. Gong et al. [32] introduced an EMTO algorithm with a dynamic resource allocating strategy. Tang et al. [33] presented a multifactorial DE with an aligned subspace continuity transfer strategy (ASCMFDE), which mapped the original decision space to an aligned subspace and performed knowledge transfer in the aligned subspace.
The EMTO algorithms with multiple populations have also performed admirably. In these algorithms, each population corresponds to a task and knowledge transfer is performed among populations. Feng et al. [19] proposed a novel idea that used an autoencoder to simulate the mapping relationship between tasks and explicitly transfer intertask knowledge among multiple populations, showing outstanding performance. Zhou et al. [34] found that nonlinear mapping can more accurately reflect the relationship among tasks than linear mapping and proposed a novel and effective kernelized autoencoding strategy to achieve nonlinear mapping. Li et al. [35] proposed a metaknowledge transfer method to share metaknowledge. Metaknowledge is a kind of "knowledge of knowledge" that can be transferred more generally among different populations for different tasks. Wu et al. [36] proposed an orthogonal strategy to transfer knowledge and solved different tasks using multiple populations.
Although many EMTO algorithms have had significant success in dealing with MTOPs with no more than three tasks, they are generally inefficient in tackling MaTOPs. Recently, some algorithms have been proposed to solve MaTOPs. For example, Liaw and Ting [23] proposed the EBS framework to adaptively leverage knowledge in MaTOPs. The solutions of all the tasks are collected and transferred among the tasks based on an adaptively controlled probability. However, the transferred knowledge comes from both related and unrelated tasks, and thus the knowledge transfer can be negative. Chen et al. [21] introduced an archive-based EMaTO framework, which maintained a population and a corresponding archive for each task. The intertask similarity is calculated using KLD. Liang et al. [22] proposed the EMaTO-MKT algorithm, which utilized the maximum mean discrepancy to measure intertask similarity and a local estimation of distribution-based knowledge transfer strategy to accomplish positive knowledge transfer. Shang et al. [37] utilized the explicit knowledge transfer strategy and created an explicit EMaTO algorithm. Thanh et al. [38] introduced a many-task multiarmed bandit EA, which tried to select the most suitable assisted task based on the reward feedback of each action. These EMaTO algorithms with intertask similarity measurement can achieve positive and effective knowledge transfer between similar tasks. However, these existing algorithms only adopt a single similarity measurement, which may not be able to accurately measure the intertask similarity.

C. Motivation
How to accurately measure the intertask similarity in the many-task environment is still an open problem. The intertask similarity is a concept with multiple aspects that cannot be accurately measured via only a single metric. In Fig. 1, we illustrate four figures, each of which shows the curves of two tasks. In Fig. 1(a), it is obvious that the two curves of task T_1 and task T_2 have similar shapes, while in Fig. 1(b), the
domains of the two global optima (assume these two tasks are minimization problems) are similar. Based on this observation, we can conclude that the tasks in Fig. 1(a) and (b) are similar in different aspects. We define these kinds of intertask similarity according to shapes and global optimal domains as follows.
Category 1: Shape similarity, which means that the two tasks have similar landscape shapes.
Category 2: Domain similarity, which means that the two tasks have similar global optimal domains.
Note that, unless otherwise specified in this article, we use "the domain of the task" as shorthand for "the global optimal domain of the task." Based on the two kinds of similarities, two further kinds of intertask similarity can be derived, which are shown as follows.
Category 3: Shape and domain similarities, which means that the two tasks both have similar landscape shapes and similar global optimal domains.
Category 4: No shape or domain similarity, which means that the two tasks have neither similar landscape shapes nor similar global optimal domains.
A single similarity metric cannot accurately measure intertask similarity in the four categories above. This is because a single similarity metric can only identify one type of intertask similarity, and the other type of intertask similarity may be miscategorized as nonsimilarity. For example, if a similarity measurement that can only measure whether two tasks have similar landscape shapes is adopted, only the tasks in Category 1 or Category 3 will be viewed as similar tasks, but the tasks in Category 2 will be viewed as nonsimilar tasks. Therefore, it is critical to design multiple measurements to accurately measure intertask similarity in different aspects.
Furthermore, the most suitable knowledge transfer strategies for solving tasks with different types of similarity are different. For example, the most suitable knowledge transfer strategy for tasks with domain similarity is transferring knowledge about the global optimal domains from the assisted task to the current task. To transfer knowledge effectively, specific knowledge transfer strategies for tasks with different types of similarity must be designed. To accurately measure the intertask similarity and transfer effective knowledge, this article proposes the BoKT framework, which is detailed in the next section.

A. General Framework of BoKT
Fig. 2 depicts the overall framework of BoKT, whose pseudocode is described in Algorithm 1 for clarity. To solve an MaTOP with K tasks, in the beginning, the initialization process (Algorithm 2) is used to generate a population for each task and an archive ArF of size K × K, which contains the domain similarity among the tasks.
The evolutionary process of the BoKT framework mainly includes two components: measuring intertask similarity via BoM and knowledge transfer via the SAKT strategy. First, to measure the intertask similarity via BoM, the archive update process (Algorithm 3) is carried out to recalculate the domain similarity stored in ArF. In the process of generating

STPs (Algorithm 4), the BoM between each pair of tasks is first calculated based on the shape similarity and the domain similarity stored in ArF. Then, BoKT generates an STP for each task, which contains similar tasks with nondominated BoM values. Second, to effectively transfer knowledge among the tasks, the SAKT strategy (Algorithm 5) is then carried out to select a suitable knowledge transfer strategy based on the similarity type to generate offspring. Finally, the population of the next generation is selected by environmental selection, and the parameter λ_i is adaptively adjusted.

(Algorithms 1-3: pseudocode listings of the overall BoKT loop, the initialization process, and the ArF update process.)

B. Measuring Intertask Similarity

1) Evaluate Intertask Similarity by BoM: Since the intertask similarity has different types, it should be accurately measured by multiple measurements from different aspects. If we use a single similarity measurement, the tasks with other similarity types will be viewed as nonsimilar tasks and the intertask similarity cannot be accurately calculated. To accurately measure the four categories of intertask similarity discussed in Section II-C (mainly involving the shape similarity and the domain similarity), this article proposes the BoM, which measures the intertask similarity via the shape similarity and the domain similarity. BoM is an intuitive yet effective idea that treats intertask similarity as a bi-objective function with the shape and domain similarities as objectives.

(Algorithms 4 and 5: pseudocode of the STP generation process and the SAKT strategy.)
Shape similarity refers to how similar the landscape shapes of a pair of tasks are. However, it is difficult to directly measure the shape similarity, since obtaining knowledge of the entire landscape is difficult. This is because each task in an MaTOP is a black-box optimization problem, and estimating the shape of the landscape can require a large number of function evaluations. As a consequence, an acceptable method to estimate the shape similarity indirectly is required.
Since the population tends to converge to the global optimum or local optima, population distribution information can reflect the landscape shape of the task. Taking this into account, the population distribution is used to indirectly represent the shape similarity. In existing EMaTO studies [21], KLD is used to calculate the similarity between two distributions. When two distributions are similar and close, the corresponding KLD value is small. In other words, if the means and standard deviations of the two distributions are similar, the KLD will be small. However, it is inaccurate to directly adopt the KLD of the two original distributions to reflect the shape similarity. Recalling Category 1 illustrated in Fig. 1(a), since the two tasks have similar landscape shapes, the covariances of the two population distributions can be similar. But since the domains of the global optima of the two tasks are far apart, the centers (i.e., the means) of the two distributions can be different. Here, although the two landscapes have similar shapes, the value of KLD can be large. In such a case, the KLD between the unshifted distributions can incorrectly reflect the shape similarity.
To avoid the side effect of the bias between the means of the two distributions, before calculating the KLD to represent the shape similarity, the bias between the means of the two distributions is cleared up via shifting. Specifically, to calculate the shape similarity between the current task T_i and the assisted task T_a, every individual x_a in T_a is shifted to x'_a as

x'_a = x_a − μ_a + μ_i (5)

where μ_a and μ_i stand for the means of the population distributions of T_a and T_i, respectively. After shifting, the shape similarity of T_a and T_i is calculated as

ShaSim(T_i, T_a) = (1/2)[tr(C_i^{-1} C'_a) − D + ln(det(C_i)/det(C'_a))] (6)

where C_i and C'_a stand for the covariance matrices of P_i and P'_a, respectively. P'_a is the shifted population of T_a, and P_i is the population of T_i. tr(·) and det(·) calculate the trace and the determinant of a given matrix. Note that a small value of ShaSim(T_i, T_a) indicates that the two tasks T_i and T_a are similar.
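Assuming Gaussian population models, the shift-then-compare computation can be sketched with NumPy as below; the equal-mean Gaussian KL divergence used here is the standard closed form, but treat the function as our reconstruction of the idea, not the article's implementation:

```python
import numpy as np

def sha_sim(pop_i, pop_a):
    """Shape similarity between tasks T_i and T_a (smaller = more similar).
    The assisted population is shifted so both distributions share the same
    mean; the Gaussian KLD then compares only covariance (i.e., shape)."""
    pop_i = np.asarray(pop_i, dtype=float)
    pop_a = np.asarray(pop_a, dtype=float)
    d = pop_i.shape[1]
    # Shift x_a -> x_a - mu_a + mu_i so the mean bias cannot inflate the KLD.
    shifted = pop_a - pop_a.mean(axis=0) + pop_i.mean(axis=0)
    c_i = np.cov(pop_i, rowvar=False)       # covariance of the current population
    c_a = np.cov(shifted, rowvar=False)     # covariance of the shifted population
    inv_ci = np.linalg.inv(c_i)
    return 0.5 * (np.trace(inv_ci @ c_a) - d
                  + np.log(np.linalg.det(c_i) / np.linalg.det(c_a)))
```

Because the shift aligns the means, two populations that differ only by a translation of the search space yield a similarity value of (numerically) zero, which is exactly the Category 1 behavior the text motivates.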
The second objective of BoM is domain similarity, which calculates the similarity between the global optimal domains of a pair of tasks. Before describing how to calculate the intertask domain similarity, the two tasks with similar domains of the global optimum illustrated in Fig. 1(b) are discussed. Task T_1 is a simple task without local optima, while task T_2 is more difficult and has many locally optimal solutions. If individuals with optimal fitness values in T_1 also perform well in T_2, this indicates that T_1 and T_2 have similar domains of the global optimum (i.e., these two domains are close). Therefore, an intuitive way to estimate the domain similarity between these two tasks is to evaluate the best solution of one task using the fitness function of the other task and use the resulting fitness value as the domain similarity.
Specifically, to calculate the domain similarity of T_2 to T_1, the best individual x_2 of T_2 is evaluated by the fitness function f_1 of T_1. Note that the calculation of domain similarity is an asymmetric operation. That is, the domain similarity of T_2 to T_1 can be represented by the value f_1(x_2), while the domain similarity of T_1 to T_2 can be represented by f_2(x_1). Without loss of generality, we define the domain similarity of the assisted task T_a to the current task T_i as

DomSim(T_i, T_a) = f_i(x*_a) (7)

where f_i(·) is the fitness function of T_i, and x*_a stands for the best individual of population P_a of task T_a. Note that for a minimization MaTOP, a smaller value of DomSim(T_i, T_a) indicates that the domain of T_a is more similar to that of T_i.
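A minimal sketch of this cross-evaluation, with our own interface (the assisted population is a plain list of solutions and each task is just a fitness function):

```python
def dom_sim(f_i, pop_a, f_a):
    """Domain similarity of assisted task T_a to current task T_i, following (7):
    T_a's best individual (under its own objective f_a) is re-evaluated by f_i.
    Smaller values mean the global optimal domains are closer."""
    best_a = min(pop_a, key=f_a)   # x*_a: best individual of P_a on its own task
    return f_i(best_a)

# Toy 1-D example: two sphere-like tasks whose optima sit at 0 and at 3.
f1 = lambda x: x * x
f2 = lambda x: (x - 3.0) ** 2
pop2 = [2.9, 5.0, -1.0]            # population of T2; its best under f2 is 2.9
# dom_sim(f1, pop2, f2) re-evaluates 2.9 under f1, giving about 8.41:
# a large value, because the two optima are far apart.
```

The asymmetry noted in the text is visible here: swapping the roles of the two tasks re-evaluates a different best individual under a different fitness function, so the two directions generally give different values.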
As the best individual in each population changes dynamically, the domain similarity between two tasks also changes and needs to be recalculated. If the domain similarity of every pair of tasks were recalculated, extra K × (K − 1) function evaluations would be consumed in every generation. However, in an MaTOP, the number of tasks K is relatively large, so consuming extra K × (K − 1) function evaluations in each generation to calculate the domain similarity is too costly. Therefore, to both keep the domain similarity accurate and reduce the cost of function evaluations, an archive ArF of size K × K is maintained to store the domain similarity, and only partial records of domain similarity in ArF are recalculated in every generation.
The initialization process of ArF is described in lines 8-10 of Algorithm 2. At the beginning of the evolutionary process, the domain similarity between each pair of tasks is calculated and stored in ArF.
The update process of ArF is described in Algorithm 3. In every generation of the evolutionary process, instead of calculating the domain similarity between each pair of tasks, only the domain similarity between the current task and a randomly selected task is recalculated (lines 2-4). If the domain similarity between the current task and the randomly selected task is smaller than the corresponding record in ArF, the record will be updated (lines 5-7). This way, the cost of function evaluations is lowered to K per generation, which is smaller than K × (K − 1). Specifically, let P_i and P_r stand for the populations of the current task T_i and the randomly selected task T_r, respectively. If the fitness of the best individual x*_r of P_r is superior to that of the best individual x*_i of P_i under f_i, which means that the domain of P_r is better than that of P_i, then the worst individual in P_i will be replaced by x*_r to transfer domain knowledge from P_r to P_i, as shown in lines 9-11.
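Under the same smaller-is-better convention, one generation's archive update for a single task can be sketched as follows (our simplified interfaces: tasks[j] is f_j and populations hold raw solutions; this is a reading of Algorithm 3's idea, not the article's code):

```python
import random

def update_arf(i, arf, tasks, populations, rng=random.randrange):
    """Recompute ONE randomly chosen entry of ArF row i, costing a single extra
    evaluation instead of K-1, and inject domain knowledge when it helps."""
    k = len(tasks)
    r = rng(k)
    while r == i:                                # pick a task other than T_i
        r = rng(k)
    f_i = tasks[i]
    best_r = min(populations[r], key=tasks[r])   # x*_r: best of P_r on its own task
    d = f_i(best_r)                              # DomSim(T_i, T_r)
    if d < arf[i][r]:                            # keep the better (smaller) record
        arf[i][r] = d
    # If T_r's best beats T_i's best under f_i, it replaces T_i's worst individual.
    if d < min(f_i(x) for x in populations[i]):
        worst = max(range(len(populations[i])), key=lambda j: f_i(populations[i][j]))
        populations[i][worst] = best_r
    return r, d
```

Looping this over all K tasks gives the K-evaluations-per-generation cost the text describes, in place of the full K × (K − 1) recomputation.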
2) Generate STPs: Knowledge transfer between two tasks with a high degree of similarity is more effective than knowledge transfer between tasks with a low degree of similarity. With this in mind, for each task, BoKT aims to find all the similar tasks and record them in an STP. That is, there are K STPs for an MaTOP with K tasks. This way, the intertask similarity of each pair of tasks can be represented by two objective values via BoM. Specifically, the BoM of tasks T_i and T_j can be denoted as (shaSim_i,j, domSim_i,j).
Since the BoM contains two objective values rather than a single value, inspired by the nondominated selection in multiobjective optimization [39], [40], [41], [42], similar tasks are selected based on the dominance relationship of the BoMs. For each task, an STP containing the indexes of similar tasks with nondominated BoM is generated, as shown in Algorithm 4. To generate the STP_i for task T_i, if a task T_j satisfies that the BoM between T_i and T_j is not dominated by any BoM between T_i and the other tasks, the task T_j is said to be a BoM-based nondominated task to T_i, and index j is added into STP_i. A BoM_1 [denoted as (shaSim_1, domSim_1)] is said to dominate BoM_2 [denoted as (shaSim_2, domSim_2)] if the two conditions shaSim_1 ≤ shaSim_2 and domSim_1 ≤ domSim_2 are both satisfied and at least one of the two conditions shaSim_1 < shaSim_2 and domSim_1 < domSim_2 is satisfied. This way, the BoM-based nondominated tasks, either with similar global optimal domains or with similar landscape shapes to the current task, are stored in the STP. Besides, after all STPs are generated, shaSim_i,j and domSim_i,j between the current task T_i and the other tasks are sorted in ascending order. The shaRank_i,j and domRank_i,j, which are the ranks of shaSim_i,j and domSim_i,j, are obtained and utilized in the SAKT strategy.
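The dominance test and the STP construction can be written directly from this definition (a sketch with our own names; sha and dom are the K × K similarity matrices, smaller is better):

```python
def dominates(b1, b2):
    """BoM dominance: b = (shaSim, domSim), both objectives smaller-is-better."""
    return (b1[0] <= b2[0] and b1[1] <= b2[1]) and (b1[0] < b2[0] or b1[1] < b2[1])

def build_stp(i, sha, dom):
    """Indexes of tasks whose BoM against T_i is nondominated (core of Algorithm 4)."""
    k = len(sha)
    others = [j for j in range(k) if j != i]
    boms = {j: (sha[i][j], dom[i][j]) for j in others}
    return [j for j in others
            if not any(dominates(boms[m], boms[j]) for m in others if m != j)]
```

Note how the Pareto filter realizes the "either good shape similarity or good domain similarity" criterion: a task that is best on one objective stays in the pool even if it ranks poorly on the other, while a task beaten on both is excluded.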

C. Knowledge Transfer via SAKT Strategy
The types of intertask similarity between different tasks are usually different, as explained in Section II-C. Category 4 denotes a low level of similarity or nonsimilarity between the two tasks, whereas Categories 1-3 denote a high level of intertask similarity. The most suitable knowledge transfer strategies for different types of tasks are usually different. For example, if the similarity between the current task and the other tasks falls into Category 4, meaning that the task is unrelated to the other tasks, then transferring knowledge from the other unrelated tasks is ineffective or even causes side effects. Conversely, in Categories 1-3, transferring knowledge from an assisted task selected from the STP can help solve the current task more effectively.
Moreover, for tasks with different types of similarity (i.e., Categories 1 and 2), specific intertask knowledge transfer strategies should be designed. For example, transferring shape-based knowledge is effective in Category 1, while transferring global optimal domain-based knowledge is effective in Category 2.
Considering these points, the SAKT strategy is designed to execute the most suitable knowledge transfer strategy for solving different types of tasks. In the SAKT strategy, the intratask strategy, shape knowledge transfer strategy, and domain knowledge transfer strategy are adaptively selected according to the similarity type between the current task and the assisted task. The pseudocode of the SAKT strategy is shown in Algorithm 5.
1) Intratask Strategy: Since transferring knowledge among nonsimilar tasks (i.e., Category 4) is ineffective, we want to increase the probability of executing the intratask strategy on a task if this task is nonsimilar to the other tasks. In the BoKT framework, we assign each task T_i a parameter λ_i, which controls the probability of selecting the intratask strategy [23]. Specifically, for each individual, if a random number generated in [0, 1] is smaller than λ_i, the intratask strategy is adopted to generate the offspring; otherwise, one of the two intertask knowledge transfer strategies (i.e., the shape knowledge transfer strategy and the domain knowledge transfer strategy) is adopted to transfer knowledge and generate the offspring. During the evolution process, the value of λ_i is dynamically and adaptively adjusted. For task T_i, if the offspring generated by the intratask strategy are better than those generated by the intertask knowledge transfer strategies, λ_i should be enlarged to spend more resources on the intratask strategy; otherwise, λ_i should be reduced to spend more on the intertask strategies. Specifically, λ_i for task T_i is adaptively adjusted based on the success rates of the two kinds of strategies. Here, σ stands for a user-defined parameter, namely, the decay rate; N_r and N_e stand for the numbers of offspring generated by the intratask strategy and by the intertask knowledge transfer strategies (including the shape knowledge transfer strategy and the domain knowledge transfer strategy), respectively; and NS_r and NS_e stand for the numbers of successful offspring generated by the intratask strategy and the intertask strategies, respectively. A successful offspring is defined as an offspring that is selected to enter the next generation via environmental selection. Note that λ_i for each task is different, and the initial value of each λ_i is set as 0.5.
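The exact update equation for λ_i is given in the original paper; the following is only a plausible Python sketch of the behavior described above, assuming λ_i is exponentially smoothed toward the relative success rate of the intratask strategy with decay rate σ:

```python
def update_lambda(lmbda, NS_r, N_r, NS_e, N_e, sigma=0.9):
    """Hedged sketch of the adaptive update of the intratask
    probability lambda_i.  Assumption: lambda is pulled toward the
    normalized success rate of the intratask strategy, damped by the
    decay rate sigma, which reproduces the described behavior (lambda
    grows when intratask offspring succeed proportionally more often).
    """
    rate_r = NS_r / N_r if N_r > 0 else 0.0   # intratask success rate
    rate_e = NS_e / N_e if N_e > 0 else 0.0   # intertask success rate
    total = rate_r + rate_e
    # with no successes at all, keep lambda where it is (target = 0.5
    # only matters when lambda already equals 0.5 at initialization)
    target = rate_r / total if total > 0 else lmbda
    return sigma * lmbda + (1.0 - sigma) * target
```

Under this sketch a large σ (e.g., the paper's recommended 0.9) makes λ_i change slowly, so one lucky generation cannot flip the resource allocation.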

2) Shape/Domain Knowledge Transfer Strategy:
To efficiently transfer knowledge between similar tasks, the shape knowledge transfer strategy and the domain knowledge transfer strategy are designed to transfer knowledge among tasks with shape similarity (i.e., Categories 1 and 3) and among tasks with domain similarity (i.e., Categories 2 and 3), respectively. These two strategies are adaptively selected according to the type of similarity between the current task and the assisted task. If shaRank_{i,a} is smaller than domRank_{i,a}, which means the shape similarity between the two tasks is more evident, the shape knowledge transfer strategy is selected. Otherwise, the domain knowledge transfer strategy is executed on P_a and P_i. Note that in the SAKT strategy, since the scales of the shape similarity and the domain similarity in the BoM are different, the comparison (i.e., line 8 in Algorithm 5) is conducted between the rank of shape similarity shaRank_{i,a} and the rank of domain similarity domRank_{i,a}, rather than directly between the values of shape similarity and domain similarity. The detailed implementations of the intratask strategy, shape knowledge transfer strategy, and domain knowledge transfer strategy for BoKT-GA and BoKT-DE are described later.
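The branching logic of the SAKT strategy can be summarized in a small Python sketch; the function and return-value names are illustrative, not from the paper:

```python
import random

def sakt_choose(lmbda_i, sha_rank, dom_rank):
    """Select the offspring-generation strategy for one individual,
    following the SAKT branching described above.

    lmbda_i: probability of the intratask strategy for task i.
    sha_rank / dom_rank: ranks of the assisted task's shape/domain
    similarity to the current task (ranks, not raw values, because the
    two similarity measures live on different scales).
    """
    if random.random() < lmbda_i:
        return "intratask"
    if sha_rank <= dom_rank:       # shape similarity more evident
        return "shape_transfer"
    return "domain_transfer"
```

Comparing ranks instead of raw similarity values is the key design choice: it makes the branch scale-free, so neither similarity measure systematically wins just because its numeric range is smaller.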

D. BoKT-GA and BoKT-DE
The BoKT framework is an effective and efficient framework for MaTOPs that can be easily integrated with different EAs. The BoKT framework is merged with two canonical EAs, namely, GA and DE, to build BoKT-GA and BoKT-DE. Herein, the intratask strategy, shape knowledge transfer strategy, and domain knowledge transfer strategy of BoKT-GA and those of BoKT-DE are described. The pseudocodes of the SAKT strategy in BoKT-GA and BoKT-DE are presented in Algorithms 6 and 7, respectively. The core loop body of Algorithm 6 is:
4: If rand() < λ_i then
5:     Randomly select two parents p_1 and p_2 from P_i;
6:     Generate OP_j via SBX and PM in (11) and (12);
7: Else:
8:     If shaRank_{i,a} ≤ domRank_{i,a} then
9:         Shift P_a to the center of P_i via (4) as P'_a;
10:        Calculate mean_i and std_i of P_i ∪ P'_a via (14) and (15);
11:        Generate OP_j via (16);
12:    Else:
13:        Randomly select a parent p_1 from P_i;
14:        Randomly select a parent p_2 from P_a;
15:        Generate OP_j via SBX and PM in (11) and (12);
1) SAKT Strategy in BoKT-GA: In BoKT-GA, first, simulated binary crossover (SBX) [43], [44] and polynomial mutation (PM) [45] are adopted to generate offspring in the intratask strategy. Without loss of generality, the jth offspring OP_j is generated via SBX in (11) and PM in (12), where v_d and u_d are two numbers randomly sampled from [0, 1]; η_c, p_m, and η_m are user-defined parameters; and p_1 and p_2 are two parental individuals randomly selected from the population P_i. In (12), with a randomly generated number r_d from [0, 1], the value of β_d is calculated as in the standard SBX operator, i.e., β_d = (2r_d)^{1/(η_c+1)} if r_d ≤ 0.5, and β_d = (1/(2(1−r_d)))^{1/(η_c+1)} otherwise.

Second, the shape knowledge transfer strategy is designed to transfer useful knowledge among the tasks with shape similarity. As discussed in Section III-B, the population distributions of two tasks with similar landscape shapes can also be similar. In addition, since the values of shape similarity among tasks are calculated via the KLD, populations with similar distributions can have relatively small values of shape similarity. Therefore, transferring population distribution knowledge can be helpful. The idea of the shape knowledge transfer strategy is to effectively transfer distribution-based knowledge among tasks with similar landscape shapes. The shape knowledge transfer strategy first estimates the merged distribution of the tasks with similar landscapes. This way, the merged distribution contains the distribution knowledge of the tasks with similar landscapes. Then, the shape knowledge transfer strategy generates offspring based on the merged distribution knowledge and thus transfers distribution-based knowledge.

The core loop body of Algorithm 7 is:
4: If rand() < λ_i then
5:     Randomly select three parents x_1, x_2, and x_3 from P_i;
6:     Generate OP_j via DE/rand/1 in (17) and (18);
7: Else:
8:     If shaRank_{i,a} ≤ domRank_{i,a} then
9:         Shift P_a to the center of P_i via (4) as P'_a;
10:        Calculate mean_i and std_i of P_i ∪ P'_a via (14) and (15);
11:        Generate OP_j via (16);
12:    Else:
13:        Randomly select three parents from P_i ∪ P'_a;
14:        Generate OP_j via DE/rand-to-best/1 in (19) and (18);
15:    End If
17:    Add the offspring OP_j into Q;
18: End If
End
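As a concrete illustration of the intratask operators in BoKT-GA, the following Python sketch implements SBX and PM in their standard textbook forms. The exact roles of the random numbers v_d and u_d in (11) and (12) are assumptions here; the sketch uses the canonical formulations with the same parameters η_c, η_m, and p_m:

```python
import random

def sbx_pm_offspring(p1, p2, eta_c=2.0, eta_m=5.0, p_m=None,
                     lo=0.0, hi=1.0):
    """One offspring via standard SBX followed by polynomial mutation.

    p1, p2: parent vectors; eta_c/eta_m: distribution indexes for
    SBX/PM; p_m: per-dimension mutation probability (defaults to 1/D,
    as in the paper's parameter settings); lo/hi: variable bounds.
    """
    D = len(p1)
    if p_m is None:
        p_m = 1.0 / D
    child = []
    for d in range(D):
        # SBX: spread factor beta from a uniform random number r
        r = random.random()
        if r <= 0.5:
            beta = (2.0 * r) ** (1.0 / (eta_c + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - r))) ** (1.0 / (eta_c + 1.0))
        x = 0.5 * ((1.0 + beta) * p1[d] + (1.0 - beta) * p2[d])
        # PM: perturb each dimension with probability p_m
        if random.random() < p_m:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            x += delta * (hi - lo)
        child.append(min(hi, max(lo, x)))   # clamp to bounds
    return child
```

With η_c = 2.0 the SBX spread is wide (exploratory), matching the setting used in the experiments.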
Specifically, as shown in line 1 of Algorithm 6, an assisted task T_a is randomly chosen from STP_i, and its corresponding population P_a is shifted to the center of the population P_i of the current task T_i via (4), denoted as P'_a, as shown in line 9 of Algorithm 6. The mean mean_i and the standard deviation std_i of the union set P_i ∪ P'_a are then calculated via (14) and (15). According to mean_i and std_i, the offspring OP_j is sampled via (16) from the normal distribution with mean mean_i and standard deviation std_i. Third, the domain knowledge transfer strategy in BoKT-GA is used to transfer the knowledge of potentially promising domains among tasks with domain similarity. As described above for the domain similarity, if the domains of the current task and the assisted task are similar, the domain of the assisted task may be a potentially promising domain for the current task. Therefore, the individuals of the assisted task can contain the knowledge of potentially promising domains, and the crossover between individuals from the assisted task and the current task can generate good offspring with a high probability based on this domain knowledge. To generate an offspring OP_j, first, two individuals p_1 and p_2 are randomly selected as the parents from the populations P_i and P_a, respectively. Then, the SBX and PM in (11) and (12) are executed to generate the offspring OP_j according to p_1 and p_2.
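A minimal Python sketch of the shape knowledge transfer steps just described: shift P_a to the center of P_i, estimate the mean and standard deviation of the union, and sample offspring from a normal distribution. Treating the dimensions as independent is an assumption made here for illustration:

```python
import random

def shift_to_center(P_a, P_i):
    """Translate population P_a so that its center coincides with the
    center of P_i (the role of (4) in the text), giving P'_a."""
    D = len(P_i[0])
    c_i = [sum(x[d] for x in P_i) / len(P_i) for d in range(D)]
    c_a = [sum(x[d] for x in P_a) / len(P_a) for d in range(D)]
    return [[x[d] - c_a[d] + c_i[d] for d in range(D)] for x in P_a]

def shape_transfer_offspring(P_i, P_a_shifted, n_offspring):
    """Estimate per-dimension mean/std of P_i union P'_a and sample
    offspring from the corresponding normal distribution."""
    union = P_i + P_a_shifted
    D, n = len(union[0]), len(union)
    mean = [sum(x[d] for x in union) / n for d in range(D)]
    std = [(sum((x[d] - mean[d]) ** 2 for x in union) / n) ** 0.5
           for d in range(D)]
    return [[random.gauss(mean[d], std[d]) for d in range(D)]
            for _ in range(n_offspring)]
```

Shifting before merging matters: it lets the current task borrow the *shape* of the assisted population's spread without dragging the search toward the assisted task's (possibly distant) optimum location.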
2) SAKT Strategy in BoKT-DE: In BoKT-DE, first, the DE/rand/1 mutation and crossover operations in (17) and (18) are adopted as the intratask strategy, where x_1, x_2, and x_3 are three mutually exclusive individuals randomly selected from P_i; F and CR are user-defined parameters; and d_rand is the index of a randomly selected dimension. Second, for the shape knowledge transfer strategy, (16) is also adopted. Besides, the DE/rand-to-best/1 mutation strategy in (19) is performed in the domain knowledge transfer strategy of BoKT-DE, where x_best is the best individual in P_i, and x_1, x_2, and x_3 are three individuals randomly selected from P_i ∪ P'_a.
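The DE operators named above can be sketched as follows, using their canonical definitions. The binomial-crossover details are assumptions where the extracted text omits the body of (18):

```python
import random

def de_rand_1(x1, x2, x3, target, F=0.5, CR=0.7):
    """DE/rand/1 mutation plus binomial crossover (canonical forms),
    as used by the intratask strategy in BoKT-DE.  d_rand guarantees
    at least one gene is taken from the mutant vector."""
    D = len(target)
    d_rand = random.randrange(D)
    trial = []
    for d in range(D):
        if d == d_rand or random.random() < CR:
            trial.append(x1[d] + F * (x2[d] - x3[d]))   # mutant gene
        else:
            trial.append(target[d])                     # target gene
    return trial

def de_rand_to_best_1(x1, x2, x3, x_best, F=0.5):
    """DE/rand-to-best/1 mutation, as used by the domain knowledge
    transfer strategy; here the parents may be drawn from the union
    of the current and the shifted assisted populations."""
    return [x1[d] + F * (x_best[d] - x1[d]) + F * (x2[d] - x3[d])
            for d in range(len(x1))]
```

Pulling the mutant toward x_best while selecting parents from both populations is what injects the assisted task's promising-domain knowledge into the current task's search.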

E. Computational Complexity Analysis
The BoKT framework can be divided into three main components, i.e., measuring intertask similarity, generating STPs, and the SAKT strategy. To analyze the computational complexity of each component and of the whole BoKT framework, we denote the number of tasks as K, the population size corresponding to each task as NP, and the number of dimensions of each task as D. First, measuring the intertask similarity includes measuring the shape similarity and the domain similarity. The computational complexity of computing the shape similarity is O(D^3 + D × NP), and the computational complexity of computing the domain similarity is O(1). Therefore, the time complexity of measuring the intertask similarity of the K tasks in every generation is O(K^2 × D^3 + K^2 × D × NP). Second, the process of generating STPs includes finding the BoM-based nondominated tasks, whose computational complexity is O(K^2). Third, the computational complexity of the SAKT strategy is O(K × NP × D). Therefore, based on the above analysis of the main components, in every generation, the computational complexity of the BoKT framework for a MaTOP with K tasks is dominated by the similarity measurement, i.e., O(K^2 × D^3 + K^2 × D × NP).

IV. EXPERIMENTAL STUDIES
This section presents the experimental studies on the BoKT framework. First, to evaluate the effectiveness and efficiency of the proposed BoKT framework, we conduct comparisons between BoKT-GA, BoKT-DE, and state-of-the-art EMaTO algorithms on the CEC19 MaTOP test suite. Second, we also conduct the comparison on the more challenging WCCI20 MaTOP test suite. Third, experiments are designed to analyze the effects of the shape similarity and the domain similarity in the SAKT strategy. Fourth, to study the influence of the parameter settings, we test the performance of BoKT-GA and BoKT-DE under different parameter settings. Fifth, to evaluate the performance of BoKT-GA and BoKT-DE in solving real-world MaTOPs, we conduct comparisons on planar kinematic arm control problems with multiple tasks [29], [46].

A. Benchmark Functions and Performance Metrics
The experiments are carried out on the single-objective MaTOPs from the CEC2019 competition on EMaTO [47] and those from the WCCI2020 competition on EMaTO [48]. For short, the two test suites are referred to as CEC19 and WCCI20, respectively. Six MaTOPs, each with 50 tasks, are included in the CEC19 benchmark. The tasks in a MaTOP from the CEC19 benchmark are homogeneous because they are built by shifting and rotating a single-objective basic function. The WCCI20 benchmark includes 10 MaTOPs, and each of them also includes 50 tasks. The basic functions of CEC19 and WCCI20 are shown in Table I. The tasks of each MaTOP in CEC19 are based on the same basic function, while the tasks of each MaTOP in WCCI20 are based on different basic functions. Therefore, solving the MaTOPs in WCCI20 is more challenging than solving those in CEC19.
Two metrics are adopted to evaluate the algorithms' performance. The first metric is the average fitness value (AFV). Since each task in CEC19 and WCCI20 is a minimization problem, a smaller AFV indicates that the corresponding algorithm achieves better performance. Suppose the fitness values of the best solutions for the K tasks are denoted as f_1, f_2, ..., f_K; then AFV = (1/K) Σ_{i=1}^{K} f_i. The second metric is the mean standard score (MSS) [49], [50]. Since the scales of the fitness values of different tasks are different, the value of AFV can be dominated by relatively large fitness values. To reduce the influence of the fitness scale, the MSS standardizes the fitness values before averaging, i.e., MSS = (1/K) Σ_{i=1}^{K} (f_i − μ_i)/σ_i, where μ_i and σ_i are the mean and standard deviation of the fitness values obtained by all the algorithms on the ith task over all the runs, respectively. Similar to AFV, a smaller MSS indicates that the algorithm achieves better performance. Besides these two metrics, we adopt Wilcoxon's rank-sum test [51] at α = 0.05 to statistically analyze the results. The notations "+/≈/−" denote that the results obtained by BoKT-GA or BoKT-DE are "significantly superior/equivalent/inferior" to those obtained by the compared algorithms.
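A small Python sketch of the two metrics; the (f, μ, σ) triple layout used for MSS is an assumption for illustration:

```python
def afv(best_fitness):
    """Average fitness value over the K tasks' best solutions."""
    return sum(best_fitness) / len(best_fitness)

def mss(scores_per_task):
    """Mean standard score.

    scores_per_task: for each task, a triple (f, mu, sigma), where f is
    the best fitness of the evaluated algorithm on that task, and
    mu/sigma are the mean and standard deviation over all algorithms
    and runs on the same task.  Standardizing first keeps tasks with
    large fitness scales from dominating the average.
    """
    z = [(f - mu) / sigma for f, mu, sigma in scores_per_task]
    return sum(z) / len(z)
```

For both metrics, smaller is better, since every benchmark task is a minimization problem.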

B. Compared Algorithms
The compared algorithms include six state-of-the-art algorithms, which are MFEA [16], ASCMFDE [33], EBS-GA, EBS-DE [23], MaTDE [21], and EMaTO-MKT [22]. In MFEA and ASCMFDE, the population size is set as 5000. In BoKT-GA, BoKT-DE, EBS-GA, EBS-DE, MaTDE, and EMaTO-MKT, the population size is set as 100. In the DE-based algorithms (i.e., BoKT-DE, ASCMFDE, EBS-DE, and MaTDE), the amplification factor F and the crossover rate CR are set as 0.5 and 0.7, respectively. In the GA-based algorithms (i.e., BoKT-GA, MFEA, EBS-GA, and EMaTO-MKT), the parameter η_c for SBX is set as 2.0, and the parameters η_m and p_m for PM are set as 5.0 and 1/D, respectively. In MFEA, rmp is set as 0.3. In BoKT-GA and BoKT-DE, σ is set as 0.9. The parameters that are not mentioned follow the optimal settings reported in their original papers. The maximum number of function evaluations is set as 5×10^6. Each algorithm is executed 20 times to obtain the experimental results.

C. Comparison With GA-Based Algorithms
We compare the proposed BoKT-GA against three GA-based algorithms, which are MFEA, EBS-GA, and EMaTO-MKT. Table II exhibits the results of AFV, MSS, and the number of best tasks (i.e., the tasks on which the algorithm obtains the best AFV) obtained by BoKT-GA, MFEA, EBS-GA, and EMaTO-MKT. In addition, the convergence curves of log(AFV) on P1, P2, and P4 in CEC19 and on P1, P4, and P7 in WCCI20 are shown in Figs. 3 and 4 to observe the convergence behavior of BoKT-GA, MFEA, EBS-GA, and EMaTO-MKT.
As shown in Table II, the results of BoKT-GA for both AFV and MSS are significantly superior to those of the compared algorithms on most problems. Notably, for AFV, BoKT-GA outperforms MFEA, EBS-GA, and EMaTO-MKT on 15, 14, and 14 of the total 16 problems, respectively, while for MSS, BoKT-GA outperforms them in 15, 15, and 14 cases, respectively. On CEC19, BoKT-GA achieves generally better performance. Considering the results of MSS, BoKT-GA achieves the best performance on CEC19-P1 to CEC19-P4. On most of the MaTOPs in WCCI20, BoKT-GA achieves the best performance among all the compared algorithms.
We can find that the compared algorithms, which use only a single similarity measurement or none at all, obtain generally worse results on CEC19 and WCCI20 than BoKT-GA. This is because the BoM can measure the intertask similarity more accurately than a single similarity measurement. Thus, BoKT-GA achieves promising performance on MaTOPs. The results of the number of best tasks in Table II also support this point. BoKT-GA obtains the best AFV on more than 30 tasks in most problems, and the average number of best tasks of BoKT-GA is 27.94, which is superior to that obtained by the compared algorithms. Besides, from the curves in Figs. 3 and 4, we can find that BoKT-GA achieves a generally faster convergence speed than the compared algorithms.

D. Comparison With DE-Based Algorithms
To verify the performance of BoKT-DE, we compare it with the state-of-the-art DE-based algorithms, i.e., ASCMFDE, EBS-DE, and MaTDE. The results of AFV, MSS, and the number of best results of BoKT-DE, ASCMFDE, EBS-DE, and MaTDE are shown in Table III. Besides, the convergence curves of log(AFV) on P1, P2, and P4 in CEC19 and on P1, P4, and P7 in WCCI20 are illustrated in Figs. 5 and 6.
Based on the results for AFV and MSS, we can find that the performance of BoKT-DE is generally superior to that of the compared DE-based algorithms on both CEC19 and WCCI20. When the results of Wilcoxon's rank-sum test are taken into account, BoKT-DE significantly outperforms the compared algorithms for AFV in more than eight cases, while the experimental results for MSS reveal that BoKT-DE significantly outperforms ASCMFDE, EBS-DE, and MaTDE on 13, 11, and 14 MaTOPs, respectively. Furthermore, BoKT-DE surpasses the compared algorithms in terms of the number of best tasks. BoKT-DE achieves the best results on an average of 38 tasks in the majority of problems. On some MaTOPs, BoKT-DE even performs better than the competitors on all 50 tasks.
BoKT-DE also obtains good results in terms of convergence speed, as can be seen from the convergence curves. On CEC19-P2, CEC19-P4, WCCI20-P1, and WCCI20-P7, the convergence speed advantage of BoKT-DE is more apparent. From the experimental analysis, it can be concluded that BoKT-DE outperforms the compared algorithms.

E. Component Effects
To analyze whether measuring intertask similarity using two different measurements is more accurate and how the shape knowledge transfer strategy and the domain knowledge transfer strategy complement each other, we compare the BoKT framework (with the BoM and both the shape and domain knowledge transfer strategies) with two BoKT variants, each using a single similarity measurement and a single knowledge transfer strategy. These two variants are the BoKT framework with only the shape similarity and the shape knowledge transfer strategy, and the BoKT framework with only the domain similarity and the domain knowledge transfer strategy. In these variants, each STP only includes one task, which is the most similar task to the current task according to the single similarity measurement. The knowledge is transferred between the current task and the only task in the STP via the single knowledge transfer strategy.
The experimental results for the AFV obtained by BoKT-GA, BoKT-DE, and the four variants with a single similarity measurement and a single knowledge transfer strategy are shown in Table IV. BoKT-GA and BoKT-DE have generally better performance than the other four variants. For example, in 11 cases, the original BoKT-GA and BoKT-DE outperform the BoKT variants with the shape similarity and shape knowledge transfer strategy, whereas in nine cases, these variants outperform the original BoKT-GA and BoKT-DE. Similarly, on 20 problems, the original BoKT-GA and BoKT-DE obtain better results than the BoKT variants with the domain similarity and domain knowledge transfer strategy, while on seven problems, these variants outperform the original BoKT.
Furthermore, separate analyses of the experimental results on CEC19 and WCCI20 reveal that BoKT-GA with the shape similarity and shape knowledge transfer strategy outperforms BoKT-GA on three CEC19 problems, and BoKT-DE with the domain similarity and domain knowledge transfer strategy outperforms BoKT-DE on five WCCI20 problems. These two situations are mainly due to two reasons. First, the tasks in each MaTOP in CEC19 are homogeneous, which means that their landscape shapes are similar. In such a case, the shape knowledge transfer strategy is more effective. Second, the tasks in WCCI20-P5 to WCCI20-P8 and WCCI20-P10 are heterogeneous and contain several local optima, implying that their landscape shapes are not similar and the individuals are distributed differently. It is easy for the population to become trapped in the local optima. In this case, the domain knowledge transfer strategy can help the population jump out of the local optima and obtain promising results. Overall, it can be concluded that measuring intertask similarity using the BoM is more accurate than using a single measurement.

F. Influence of Parameter Settings
In the BoKT framework, there is an additional parameter called the decay rate, which is denoted by σ and must be predefined. This section analyzes the influence of σ in the BoKT framework and aims to determine the optimal value of σ. Five variants of the BoKT framework with different σ are designed, where σ is set as 0.1, 0.3, 0.5, 0.7, and 0.9, respectively. The experimental results for AFV obtained by the BoKT-GA and BoKT-DE variants with different settings of σ are shown in Tables V and VI, respectively. Also, the mean rank obtained by each BoKT-GA or BoKT-DE variant over all 16 MaTOPs is calculated. According to the experimental results, both BoKT-GA and BoKT-DE achieve the best performance when σ = 0.9. Specifically, the mean ranks of BoKT-GA with σ = 0.9 and BoKT-DE with σ = 0.9 are 2.50 and 2.69, respectively, which are the best among the compared variants. In addition, the curves for AFV obtained by BoKT-GA with different σ on the Rastrigin function are illustrated in Fig. 7, since the Rastrigin function is a challenging function that contains multiple local optima. The MaTOPs containing the Rastrigin function, which are CEC19-P3, WCCI20-P3, WCCI20-P5, WCCI20-P7, WCCI20-P8, WCCI20-P9, and WCCI20-P10, are chosen for exhibition. As shown in Fig. 7, as the value of σ increases from 0.1 to 0.9, the AFV generally decreases. BoKT-GA with σ = 0.9 produces the best AFV results. Therefore, we can conclude that 0.9 is the best value for σ.

G. Experiments on Real-World Application
To evaluate the performance of BoKT-GA and BoKT-DE in solving real-world MaTOPs, we conduct comparison experiments among BoKT-GA, BoKT-DE, and the state-of-the-art EMaTO algorithms on planar kinematic arm control problems with 50, 100, 200, and 500 tasks [29], [46]. Fig. 8(a) illustrates a task in the planar kinematic arm control problem. Here, the goal of each task is to optimize the angle of each joint (denoted as α_1, α_2, ..., α_d) to minimize the distance between the tip of the arm (denoted as P_D) and the target (denoted as T). d denotes the number of dimensions, which equals the number of joints and the number of links. Thus, each task in the planar kinematic arm control problem is a minimization problem whose decision variables are α_1, α_2, ..., α_d and whose objective is the Euclidean distance between the tip position P_D and the target T. Here, L_t is the total length of the links in the tth task, and α_t^max is the sum of the maximum angles of all the joints (i.e., the length of each link is L_t/d and the maximum angle of each joint is α_t^max/d). By taking different values of L_t and α_t^max, several different tasks can be obtained. The tasks with different L_t and α_t^max are created by centroidal Voronoi tessellation [46]. The number of dimensions d of each task is set as 20, the value range of each angle is [0, 1], and the position of the target T is set as [0.5, 0.5]. More detailed properties of the planar kinematic arm control problem can be found in [29] and [46]. In the experiment, since the adaptive EMTO (AEMTO) algorithm [29] is promising in solving planar kinematic arm control problems with many tasks, we include AEMTO in the compared algorithms to better show the effectiveness of the BoKT framework. The population size corresponding to each task is set as 20, and the number of generations is set as 100 for each algorithm. The curves of the AFV on the 500-task case obtained by BoKT-GA, BoKT-DE, and the compared EMaTO algorithms are illustrated in Fig. 8(b) and (c). For a clear comparison, the MSS values of the final solutions obtained by the GA-based algorithms and the DE-based algorithms on the planar kinematic arm control problems are listed in Tables VII and VIII. As can be observed from the figures and tables, first, the convergence speeds of BoKT-GA and BoKT-DE are generally better than those of the compared algorithms. Second, the accuracy of the final solutions obtained by BoKT-GA and BoKT-DE is also superior to that obtained by the compared EMaTO algorithms. These results are mainly because the BoKT framework can accurately measure intertask similarity and effectively transfer knowledge among similar tasks. Therefore, based on the experimental results, we can conclude that BoKT-GA and BoKT-DE are effective and efficient in solving real-world MaTOPs.
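A hedged Python sketch of one planar kinematic arm task: d equal links of length L_t/d, cumulative joint rotations bounded by α_t^max/d per joint, and fitness equal to the tip-to-target distance. The mapping from [0, 1] decision variables to signed joint angles is an assumption made here for illustration; the benchmark papers [29], [46] define the exact encoding:

```python
import math

def arm_fitness(angles, L_total, alpha_max, target=(0.5, 0.5)):
    """Distance from the planar arm's tip to the target.

    angles: decision variables in [0, 1], one per joint (d = len(angles));
    L_total: total length of the links (each link is L_total/d);
    alpha_max: sum of maximum joint angles (each joint gets alpha_max/d).
    Assumption: each variable a is mapped to a signed rotation
    (a - 0.5) * alpha_max/d about the previous link's direction.
    """
    d = len(angles)
    link, max_joint = L_total / d, alpha_max / d
    x = y = heading = 0.0
    for a in angles:
        heading += (a - 0.5) * max_joint   # cumulative joint rotation
        x += link * math.cos(heading)      # forward kinematics step
        y += link * math.sin(heading)
    return math.hypot(x - target[0], y - target[1])
```

Varying L_total and alpha_max yields the family of related tasks; nearby parameter values produce arms with similar reachable regions, which is exactly the kind of intertask similarity the BoKT framework is meant to exploit.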

V. CONCLUSION
This article proposed a BoKT framework for solving the MaTOP efficiently.In the BoKT framework, BoM was designed to accurately measure the intertask similarity using two different measurements, which are shape similarity and domain similarity.The BoM-based nondominated tasks were identified as similar tasks for each task and were stored in the corresponding STP.The SAKT strategy was proposed to select the most suitable knowledge transfer strategy based on the types of intertask similarity.The intratask strategy, shape knowledge transfer strategy, and domain knowledge transfer strategy were chosen for solving nonsimilar tasks, tasks with shape similarity, and tasks with domain similarity, respectively.

Fig. 1 .
Fig. 1. Illustration of four kinds of intertask similarity. (a) Shape similarity: two tasks with similar landscape shapes. (b) Domain similarity: two tasks with similar global optimal domains. (c) Similar in both shape and domain. (d) Neither shape nor domain similarity: two tasks with neither similar landscape shapes nor similar global optimal domains.

Fig. 8 .
Fig. 8. (a) Illustration of a task with three joints in the planar kinematic arm control problem. (b) Curves of AFV on the planar kinematic arm control problem with 500 tasks obtained by BoKT-GA and the compared GA-based algorithms. (c) Curves of AFV on the planar kinematic arm control problem with 500 tasks obtained by BoKT-DE and the compared DE-based algorithms.
Archive update: Input: ArF, the archive; P_1, P_2, ..., P_K, the populations of tasks T_1, T_2, ..., T_K. Output: ArF, the updated archive. Begin 1: For i = 1:K 2: Find the best individual x*_i of population P_i; 3: ...
Algorithm 4 (STP generation): Input: P_1, P_2, ..., P_K, the populations of tasks T_1, T_2, ..., T_K. Output: STP_1, STP_2, ..., STP_K, the STPs of tasks T_1, T_2, ..., T_K; shaRank, the list containing the ranks of the shape similarity; domRank, the list containing the ranks of the domain similarity. Begin 1: For i = 1:K 2: ...
Algorithm 6 (SAKT strategy in BoKT-GA): Input: STP_i, the STP of the current task T_i; P_1, P_2, ..., P_K, the populations of tasks T_1, T_2, ..., T_K; NP, the size of each population. Output: Q, the set of offspring. Begin 1: Randomly select a task from STP_i as T_a; 2: Q = {}; 3: For j = 1:NP 4: ...
Algorithm 7 (SAKT strategy in BoKT-DE): Input: STP_i, the STP of the current task T_i; P_1, P_2, ..., P_K, the populations of tasks T_1, T_2, ..., T_K; NP, the size of each population. Output: Q, the set of offspring. Begin 1: Randomly select a task from STP_i as T_a; 2: Q = {}; 3: For j = 1:NP 4: ...

TABLE I: BASIC FUNCTIONS OF EACH MATOP IN CEC19 AND WCCI20

TABLE II: EXPERIMENTAL RESULTS OF AFV, MSS, AND NUMBER OF BEST RESULTS (#B) OBTAINED BY BOKT-GA, MFEA, EBS-GA, AND EMATO-MKT ON CEC19 AND WCCI20

TABLE IV: EXPERIMENTAL RESULTS OF AFV OBTAINED BY BOKT-GA, BOKT-DE, AND THEIR VARIANTS WITH SINGLE SIMILARITY MEASUREMENT AND KNOWLEDGE TRANSFER STRATEGY ON CEC19 AND WCCI20

TABLE V: EXPERIMENTAL RESULTS OF AFV OBTAINED BY BOKT-GA VARIANTS WITH DIFFERENT σ ON CEC19 AND WCCI20

TABLE VI: EXPERIMENTAL RESULTS OF AFV OBTAINED BY BOKT-DE VARIANTS WITH DIFFERENT σ ON CEC19 AND WCCI20

TABLE VII: EXPERIMENTAL RESULTS OF MSS OBTAINED BY BOKT-GA AND THE COMPARED GA-BASED ALGORITHMS ON PLANAR KINEMATIC ARM CONTROL PROBLEMS
TABLE VIII: EXPERIMENTAL RESULTS OF MSS OBTAINED BY BOKT-DE AND THE COMPARED DE-BASED ALGORITHMS ON PLANAR KINEMATIC ARM CONTROL PROBLEMS