Optimum Nozzle Design for a Viscous Liquid by Using Multi-Objective Search Approaches

This work deals with the problem of nozzle design addressed through a multi-objective optimization strategy, where the governing equations of fluid dynamics model the phenomena. The liquid flow rate and the nozzle length are considered as the design criteria. Two Differential Evolution variants are proposed to obtain a set of design configurations that present numerous and different trade-offs between the design criteria. The first variant is based on the Hypervolume performance metric (MODE-HVR) and the second one includes the $\epsilon$-dominance concept (MODE-HV$\epsilon$R). A comparative study is performed with other optimizers from well-established search approaches for multi-objective optimization, such as algorithms based on Pareto dominance (NSGA-II and SPEA2), decomposition (MOEA/D), performance metrics (SMS-EMOA), and hybridization (NSGA-III). Based on the Spacing and Hypervolume indicators of the obtained Pareto fronts, the proposed optimization algorithms can provide more design solutions, promoting reconfigurability in the nozzle design. Hence, the multi-objective design strategy allows the designer to have a wide range of solutions and to choose the most suitable one for a particular application, compared with a traditional design where both design criteria are combined into a single aggregate function.


I. INTRODUCTION
The operation of a factory must be economical, designed with the minimum capital cost, and closely related to the desired production level and quality standards [1]. As in other industries, small and medium-sized companies in the food processing industry see certain orders rejected due to the slowness of their processes [2]. Filling machines are part of the packaging line in a beverage industry plant. Among other aspects, the chosen type of filler depends on the demands of the specific package or beverage product [1]. Also, the filling point is essential in a beverage packaging process, in the sense that usually three bottle-filling and capping machines work together on the same production line [1]. Processing time can be improved by optimizing the fluid flow rate; therefore, economic and processing benefits can be obtained by this kind of company.
(The associate editor coordinating the review of this manuscript and approving it for publication was Lei Wang.)

Hence, the demands of a beverage product can be related to its flow rate and its transportation. A similar situation occurs in the manufacturing industry of parts based on composite materials, where the reduction of the entire industrial process cost is strongly related to the minimization of the mold filling time [3].
Manufacturing and functional aspects are important in the nozzle design for pouring liquids. Manufacturing is benefited with a reasonable dimension of design elements, easy to get components in the market, and few parts. The maximum possible flow rate in the nozzle is essential to minimize the industrial process time and cost. Another important consideration is the kind of flow (laminar or turbulent) to be transported into the nozzle. So, more than one criterion is required in the nozzle design, and then, a multi-objective optimization problem (MOP) arises.
In a traditional way of obtaining a design solution, the MOP is converted into a mono-objective problem through a weighted sum strategy [4]. In [5] and [6], the weighted sum strategy is used. The former is related to the scantling design of a ship and the latter with a centrifugal pump. Several executions of the mono-objective algorithm with different weight combinations are required to find the highest number of solutions in the optimization problem. In this way, the designer can select the most suitable one for the particular application. Fifty solutions are found in [5], and depending on the solver (optimization algorithm) in [6], a range from 4817 to 9553 solutions are obtained. However, in this manner of tackling the MOP, there are some drawbacks [7], such as a uniform distribution of weights does not produce a uniform distribution of solutions in the Pareto front (PF) [8]. Hence, due to the above nonlinear relationship, the selection of suitable weights to find the desired trade-off is a challenging task, even a specific trade-off may not be reached. On the other hand, different possible design configurations (reconfigurability) are meaningful for the designer. So, multi-objective optimization methods [7] are useful to find a more suitable Pareto front approximation in one single optimization run.
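The nonlinear relationship between weights and Pareto-front points can be seen in a small numerical sketch. The bi-objective problem below is hypothetical (not the nozzle model): on a concave (nonconvex) front, a uniform sweep of weights in the weighted sum recovers only the two endpoints of the front, even though every point of the curve is Pareto optimal.

```python
import numpy as np

# Hypothetical bi-objective problem with a concave (nonconvex) Pareto front:
# minimize f1(x) = x and f2(x) = 1 - x**2 over x in [0, 1].
# Every x in [0, 1] is Pareto optimal, yet the weighted sum below only
# ever recovers the two endpoints of the front.
xs = np.linspace(0.0, 1.0, 1001)
f1, f2 = xs, 1.0 - xs**2

found = set()
for w in np.linspace(0.0, 1.0, 21):           # 21 uniformly spread weights
    idx = np.argmin(w * f1 + (1.0 - w) * f2)  # best x for this weight combo
    found.add(round(xs[idx], 3))

print(found)  # only the endpoints {0.0, 1.0} are reachable
```

This is the drawback noted in [8]: a uniform distribution of weights does not yield a uniform distribution of Pareto solutions, and some trade-offs cannot be reached at all.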
Research studies about multi-objective optimization concerning applications of fluid dynamics are available. Study areas include heat transfer, power generation, combustion engines, and transportation.

Some examples of the heat transfer area in fluid dynamics are registered in Table 1. The number of design variables ranges from two to seven. Also, only one work presents three objective functions, and the rest employ two objectives. The Nondominated Sorting Genetic Algorithm II (NSGA-II), or a variant of it, is used by half of the investigations. All of them, except one paper, use a single algorithm to find the solution to their MOP. In [6], the Optics Inspired Optimization algorithm (OIO) is employed through the weighted sum strategy with different weights. This mono-objective strategy is compared to the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm. About 11.5 times more nondominated solutions are reached with the OIO algorithm than with MOPSO. However, with the weighted sum approach, not all solutions can be found in certain Pareto fronts, such as nonconvex ones [8]. Also, the OIO requires ten thousand executions to find all solutions, compared with the single execution needed by the MOPSO algorithm. Nonetheless, other multi-objective search approaches are not tested to guarantee an efficient search in the objective function space.

Research belonging to the power generation area in fluid dynamics is shown in Table 2. The number of design variables used in that classification ranges from two to fifty-eight, and the number of objective functions from two to four. NSGA-II is used by half of this research too. All of these works use only one algorithm to solve their stated MOP.

Investigations on the combustion engines area in fluid dynamics are presented in Table 3. Three to seven design variables and two to three objective functions are used.
All research employs the NSGA-II to find the solution to their MOP.
Examples of the transportation area in fluid dynamics are included in Table 4. One paper uses two design variables and the other uses eight. Both works employ two objective functions. One of them uses NSGA-II, and both utilize only one algorithm for solving the corresponding MOP. Taking into consideration the investigations reported in Tables 1-4, different highlights are presented in Fig. 1. The most frequently used number of design variables is two, tied with four, each representing 18% of the total research. Regarding objective functions, the most common number is two, with 73%. The most widely used optimization algorithm is NSGA-II or a variant of it, found in 59% of the works.
As can be seen from Tables 1-4, NSGA-II is the most popular optimization algorithm in the research areas of heat transfer, power generation, combustion engines, and transportation, all of them related to fluid dynamics. Besides, in all cases, only one multi-objective optimization algorithm is employed to solve the MOP. This implies some drawbacks. The ''No Free Lunch'' theorem [30] establishes that no algorithm performs better than all other algorithms on all problems. For this reason, the convergence and distribution of the Pareto front can be strongly influenced by the optimizer. So, a wide variety of multi-objective problem solvers has been developed in the last decade. Particularly, evolutionary algorithms are considered problem solvers that find feasible solutions under challenging domains [7].
Among multi-objective evolutionary algorithms, different search approaches for approximating the Pareto front (PF) can be found. In [31] and [32], different search approaches are compared in a defense-related application and the four-bar mechanism speed regulation problem, respectively. They provide some insights about the importance of the search approach to improve the Pareto front. In the research of Tables 1-4 related to fluid dynamics, the effect of different search approaches on the solutions in the PF is not explored, which impacts the design's reconfigurability.
This paper presents a nozzle design for pouring a viscous liquid with a series of bores at the exit. In these outlet bores, the optimum parameters of diameter, length, and temperature are sought through the solution of a MOP subject to the governing equations of fluid dynamics. Also, multi-objective optimization algorithms are used to find suitable solutions to the above MOP. In this way, different search approaches based on Pareto dominance, performance metrics, decomposition, and hybridization are studied in the multi-objective design of a nozzle for the beverage industry. As a means to improve the reconfigurability of the design, a couple of evolutionary multi-objective optimization algorithms are proposed and compared with other optimizers through the Spacing (SP) and Hypervolume (HV) performance metrics. So, different designs with different trade-offs are obtained, benefiting the decision-making process of the designer. Thereby, the contributions of this work are:
i) The design proposal of a nozzle for a viscous liquid as a multi-objective optimization problem (MOP), where the flow rate and the bore length are the two conflicting design objectives. The solution of the design approach finds configurations that a traditional design (mono-objective strategy based on the weighted sum approach) cannot obtain.
ii) The proposal of two multi-objective variants of the Differential Evolution algorithm, which promote distribution and convergence of solutions in the PF. This helps to achieve more reconfigurability of the nozzle design, and hence, the decision-making process of the design for a specific application can be improved.
iii) The study of different search approaches in the multi-objective nozzle design. This study statistically reveals the importance of using different optimizers to find more design reconfigurability.
The rest of this paper is structured as follows. In Section II, the nozzle is modeled through fluid dynamics equations, and the design is established as a MOP. In Section III, seven different multi-objective optimization algorithms are described. Five of them are taken from the literature, and the last two are the proposed algorithms. In Section IV, the results of the algorithms in the MOP are compared based on statistical analysis. The practical implications of the design, and its validation are shown. Finally, conclusions are presented in Section V.

II. OPTIMUM NOZZLE DESIGN
Pouring a viscous liquid as fast as possible is desirable. The volumetric fluid flow rate (flow rate) is used as a measure of that goal. Another goal is to achieve simple nozzle manufacturing of the pouring system. The nozzle's length is indirectly used here as a measure of this second goal. The aforementioned two goals, i.e., maximizing the flow rate and minimizing the nozzle's length with a fully developed flow, are conflicting. In optimization, conflicting goals imply that achieving the optimum for one objective requires some compromise on the other objective [33]. For this reason, the nozzle design is addressed as a MOP and presented in the next subsections.
Obtaining an appropriate behavior of the liquid flowing through the final part of the pouring system is required. The pressure-driven flow and the type of flow in a long circular tube [34] are considered in the nozzle design. The latter consideration implies small values of the diameter D compared to the length L. So, several bores are included in the nozzle outlet. Nevertheless, the larger the number of bores, the more difficult the manufacturing. Taking into account a maximum of twenty optimally distributed identical bores, nineteen bores provide the best relation of the pouring area regarding the outlet diameter [35], i.e., nineteen bores can cover as much area as possible within the nozzle outlet diameter.
The layout of the proposed nozzle in an ''open position'' is shown in Fig. 2, where T is the temperature of both the fluid and the nozzle's body, D is the diameter of each cylindrical outlet bore, and L is the length of those bores. The liquid can be pumped through the horizontal tube at the left side of the nozzle; then the liquid turns down and flows through the cylindrical bores located at the bottom. The nozzle must be taken to a ''closed position'' to cease the flow of the fluid. In that state, the plungers fill the bores, and thus the liquid flow stops. As the nozzle is composed of identical bores, only one of them is considered in the MOP, assuming that the same fluid flow conditions hold in the others.

A. DESIGN VARIABLES
Three parameters describe the fluid flow across one of the bores. Those are chosen as the design variables and represented in Fig. 2. They are the diameter D and the length L of the bore, as well as the temperature T of both the liquid and the nozzle. Design variables are grouped in the design vector x, and they are displayed in (1).

B. OBJECTIVE FUNCTIONS
Two objective functions are proposed and detailed next.
The first objective function $J_1$ is the flow rate through the bore. The following assumptions are made to find the flow rate: i) the velocity field is steady, ii) the flow is axisymmetric, iii) the fluid is Newtonian, with constant density, iv) body forces are negligible [34], v) the flow is fully developed, and vi) the fluid does not slip at the bore wall. Honey is selected as the viscous liquid to be poured. The following equations are considered to calculate the flow rate through the bore: a) the mass balance equation, b) the momentum balance equation, c) the strain rate for a Newtonian fluid, d) the honey constitutive equation, e) the flow rate obtained through the mass balance and momentum balance equations, and f) the energy balance. In the following, those equations are presented.

a: MASS BALANCE
The mass balance is represented in (2), where the cylindrical coordinates r, θ, z are shown on the left side of Fig. 3. In this case, the fluid's velocities $v_i$ in the cylindrical coordinate directions $i \in \{r, \theta, z\}$ are assumed to be zero except the velocity in the z direction.
The momentum balance equation for the z direction is considered in (3) [34], where ρ is the fluid's density, t is the time, p is the pressure, and $\tau_{ij}$ is the extra stress associated with the fluid deformation in the $j \in \{r, \theta, z\}$ direction, acting on an area normal to the i direction.
With the above assumptions, (3) leads to the simplification denoted in (4), where $\Delta p = p_2 - p_1$. The pressure $p_1$ is located at the cylinder inlet, and $p_2$ is the pressure at the cylinder outlet, as shown on the left side of Fig. 3. Then, (4) is valid for any viscous fluid in steady, fully developed conditions in an axisymmetric tube flow [34].
To get a useful expression for $\tau_{rz}$ in (4), it is necessary to find the strain rate $\dot{\gamma}$ of the fluid, given in (5), where the rate-of-deformation tensor components $D_{ij}$ in the j direction acting on an area normal to the i direction are those in (6).
Due to the initial assumptions, the only nonzero components in (5) are $D_{zr} = D_{rz}$. In this way, the strain rate is simplified as in (7).
The viscosity function $\eta$ of the honey is described by (9), where w is the water content percentage. This model is suitable over the temperature interval [10, 30], as expressed in (10).
By integrating (11), (12) is obtained, where R is the maximum radius of the circular bore.
The total flow rate Q (13), passing through a bore is given by the integral of the velocity (v z in (12)) over the cross section of the bore [34].
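For a Newtonian fluid, integrating the velocity profile over the cross section yields the classical Hagen-Poiseuille result, which (13) reduces to under the stated assumptions. The sketch below is a minimal illustration of that relation only: the constant viscosity `eta` stands in for the honey viscosity function of (9), whose exact temperature/water-content form is not reproduced here, and the numerical values are illustrative, not the paper's design values.

```python
import math

def flow_rate(delta_p, radius, length, eta):
    """Hagen-Poiseuille flow rate for a Newtonian fluid in a circular
    bore: Q = pi * R**4 * dp / (8 * eta * L). The constant eta is a
    placeholder for the temperature-dependent honey viscosity of (9)."""
    return math.pi * radius**4 * delta_p / (8.0 * eta * length)

# Illustrative numbers only (assumed, not taken from the paper):
Q = flow_rate(delta_p=2.0e4, radius=2.0e-3, length=0.05, eta=10.0)
```

Substituting the temperature-dependent viscosity of (9) for `eta` gives the dependence of Q on the design variables D, L, and T used in (14).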
In the presented nozzle design problem, it is assumed that the fluid and the nozzle are at the same temperature before the fluid begins its travel through the nozzle. Moreover, no heat is added to or subtracted from the fluid at the nozzle. Nevertheless, at this point, the temperature-dependent viscosity and viscous dissipation remain as factors that could impact the energy balance.
In a viscosity function such as that in (9), the coefficient of the variable T is important to the extent that it is related to the temperature change that would cause a significant change in the viscosity [34]. For the honey viscosity function in (9), this temperature value is 11.764 K. Taking the above into account, simulations in specialized computational fluid dynamics (CFD) software are carried out, considering the conditions at the beginning of this subsection as well as the temperature-dependent viscosity and the viscous dissipation. They indicate that the temperature variations are small compared to the aforementioned variation of 11.764 K. For those reasons, the energy balance equation is neglected here.
Then, substituting (9) in (13) and using the design variables in (1), the first objective function $J_1$, which is related to the flow rate in the nozzle design, is obtained and displayed in (14).
The second objective function is related to a small nozzle length. The variable L determines the length of the plungers depicted in Fig. 2, contributing to the dimensions of the nozzle. So, the second objective function of the optimization problem is the bore length as can be observed in (15).
C. CONSTRAINTS
Three constraints are used in the optimization problem. They are stated below.

1) MAXIMUM REYNOLDS NUMBER
The Reynolds number Re is used as a criterion to distinguish between laminar and turbulent flows. It is the ratio of two forces on a fluid element: the inertia force and the viscous force. The Reynolds number for the viscous flow through a tube [37] is given by $Re = \rho V D / \eta$, where $V = Q/A$ is the mean fluid velocity and $A = \pi r^2$ is the cross-sectional area of the tube. It is desirable to keep the fluid flow laminar at the nozzle's bores; hence, the constraint $g_1$ (16) is included to provide this kind of flow. With this constraint, the Reynolds number is intended to be less than 2100 [37].

2) ENTRANCE LENGTH
When the liquid enters a bore of the nozzle, its velocity profile is modified. As can be seen on the right of Fig. 3, the shape of the velocity profile in the tube depends on the entrance region length. The fully developed velocity profile is reached at the entrance length $l_e$ [37], i.e., the velocity profile remains constant beyond the entrance length $l_e$. For laminar flow, the Reynolds number Re and the entrance length $l_e$ are related as $l_e = 0.06\,Re\,D$ [37]. Then, the constraint $g_2$ (17) is used to ensure that the bore length L is larger than or equal to the entrance length $l_e$.

3) MINIMUM REQUIRED FLOW RATE
A minimum flow rate equal to $3.7 \times 10^{-4}$ m$^3$/s is required.
As the nozzle has nineteen bores, the minimum flow rate is divided by the number of bores. With this in mind, the constraint $g_3$ in (18) ensures that the flow rate Q meets the required minimum.
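The three constraints above can be evaluated together, as in the minimal sketch below. It follows (16)-(18) with the convention that a value $g \le 0$ means the constraint is satisfied; the honey density `RHO` is an assumed illustrative value, not taken from the paper.

```python
import math

RHO = 1420.0          # assumed honey density [kg/m^3]; illustrative only
RE_MAX = 2100.0       # laminar-flow limit [37]
Q_MIN_TOTAL = 3.7e-4  # required total flow rate [m^3/s]
N_BORES = 19

def constraints(Q, D, L, eta):
    """Constraint values following (16)-(18); g <= 0 means satisfied."""
    A = math.pi * (D / 2.0) ** 2        # bore cross-sectional area
    V = Q / A                           # mean fluid velocity
    Re = RHO * V * D / eta              # Reynolds number
    g1 = Re - RE_MAX                    # laminar flow (16)
    g2 = 0.06 * Re * D - L              # fully developed flow, l_e <= L (17)
    g3 = Q_MIN_TOTAL / N_BORES - Q      # minimum flow rate per bore (18)
    return g1, g2, g3
```

Any candidate design vector can then be declared feasible only when all three values are nonpositive, in addition to the variable bounds of (19).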

4) BOUNDS ON THE DESIGN VARIABLES
An interval for each design variable defines the search domain. The maximum and minimum values of such variables are displayed in Table 5. Hence, bounds on the design variables are stated as constraints, as shown in (19). In the case of the diameter D, those bounds are selected based on the total outlet diameter of the nozzle needed to hold all nineteen bores; this total outlet diameter lies in the interval [7.2, 40] mm. In the case of the temperature T, the interval is stated for the validity of the honey viscosity function [36].
The limit values of the bore length L maintain convenient dimensions for manufacturing.

D. STATEMENT OF THE MULTI-OBJECTIVE OPTIMIZATION PROBLEM
The multi-objective optimization problem (MOP) of the nozzle design aims to find the optimal design vector $x^* = [x_1^*, x_2^*, x_3^*]$ that maximizes the flow rate $J_1$ (14) and minimizes the bore length $J_2$ (15), subject to the laminar flow of the fluid (16), the fully developed fluid flow (17), and the minimum flow rate to be accomplished in each bore (18). The MOP is formally stated in (20).

III. MULTI-OBJECTIVE OPTIMIZERS
The MOP depicted in (20) is addressed with seven multi-objective optimizers. The progress towards the PF in NSGA-II and SPEA2 is based on Pareto dominance. In the case of SMS-EMOA, solutions are selected through a performance metric. MOEA/D employs a decomposition approach to divide the multi-objective problem into several scalar subproblems. A hybrid approach involving dominance, decomposition, and performance metrics is used in NSGA-III. The search approach of the two proposals, MODE-HVR and MODE-HV$\epsilon$R, falls into the metric-based classification. The first one employs the Hypervolume (HV) metric. The second one uses, in addition to the HV metric, the $\epsilon$-dominance concept as a technique to induce solution spreading.

A. BASIC MULTI-OBJECTIVE OPTIMIZATION CONCEPTS
Before presenting the multi-objective optimizers, some useful definitions [7], [8], [38], [39], [40] are presented next for their proper understanding. Only minimization problems are taken into account; in the case of maximizing an objective function, its negative is minimized [41].
Definition 1 (Multi-Objective Optimization Problem): Find a design vector $x = [x_1, \ldots, x_n]^T$, an n-dimensional design variable vector from the space $X$ (the decision space), that minimizes $J(x) \in Y$. It is noted that $g_i(x) \le 0$ and $h_j(x) = 0$ represent constraints that must be fulfilled while minimizing $J(x)$, and $X$ contains all possible $x$ that satisfy them. $Y$ is named the objective space.
Definition 4 (Pareto Optimal Set): For a given MOP, the Pareto optimal set is defined as $P^* = \{x \in X \mid \nexists\, x' \in X : J(x') \preceq J(x)\}$.
Definition 5 (Pareto Front): For a given MOP and $P^*$, the Pareto front PF (also known as the true Pareto front) is defined as $PF = \{J(x) \mid x \in P^*\}$. The final Pareto front obtained with the usage of evolutionary multi-objective algorithms is called the approximated Pareto front $PF_A$ or known Pareto front.
Definition 7 (Pareto Epsilon Optimal Set): For a given MOP, the Pareto epsilon optimal set $P_\epsilon^*$ is the set of feasible solutions that are not $\epsilon$-dominated by any other feasible solution.
Definition 8 (Pareto Epsilon Front): For a given MOP and $P_\epsilon^*$, the Pareto epsilon front is defined as $PF_\epsilon = \{J(x) \mid x \in P_\epsilon^*\}$.

B. NSGA-II
The Nondominated Sorting Genetic Algorithm II (NSGA-II) [42] is an evolutionary multi-objective optimization algorithm that uses a nondominated sorting (NDS) approach.
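The dominance relations from the definitions above can be sketched directly in code. The $\epsilon$-dominance variant below uses the additive form, which is one common formulation (an assumption here, since the paper's exact definition is not reproduced); both predicates assume minimization.

```python
def dominates(a, b):
    """Pareto dominance for minimization: objective vector a is no worse
    than b in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def eps_dominates(a, b, eps):
    """Additive epsilon-dominance (assumed form): shifting a by eps makes
    it no worse than b in every objective. A larger eps makes more points
    mutually 'equivalent', which induces spreading in an archive."""
    return all(x - eps <= y for x, y in zip(a, b))
```

These predicates underlie the ranking used by NSGA-II's sorting and the archive spreading of MODE-HV$\epsilon$R.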
The NDS is a procedure for sorting a population into different nondomination levels. Several Pareto fronts are identified from the population according to the constrained-domination level of the solutions. Using constrained-domination ensures that any feasible solution has a better nondomination rank than any infeasible solution. Among feasible solutions with different constrained-nondomination ranks, the best-ranked solution is preferred; otherwise, the solution in a less crowded region (regarding the crowded-comparison operator) is chosen.
The main loop of NSGA-II is as follows. Initially, a random population is created. Parent individuals are selected based on the constrained-nondomination rank obtained by the NDS with the binary tournament selection. Then, these are used to generate the offspring population employing the two following operators: simulated binary crossover (SBX) with a crossover probability p c and the polynomial mutation (PM) with a mutation probability p m , and distribution indexes η c and η m , respectively.
The population of the next generation is obtained by selecting the same number of individuals as the initial population from the best-ranked solutions of the merged current-offspring population regarding NDS.
The process continues by creating a new offspring population derived from the last parent population, mixing the parents and offspring, applying the NDS to the whole solution set, and so on. An approximation to the true PF is found in the last population.
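The sorting step at the core of this loop can be sketched as follows. This is a minimal version of the (unconstrained) fast nondominated sort; the constrained-domination extension described above would only change the `dominates` predicate.

```python
def fast_nondominated_sort(objs):
    """Minimal sketch of NSGA-II's nondominated sorting (NDS):
    returns a list of fronts, each a list of indices into objs."""
    n = len(objs)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    S = [[] for _ in range(n)]   # solutions each i dominates
    cnt = [0] * n                # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                cnt[i] += 1
        if cnt[i] == 0:
            fronts[0].append(i)
    f = 0
    while fronts[f]:
        nxt = []
        for i in fronts[f]:
            for j in S[i]:
                cnt[j] -= 1
                if cnt[j] == 0:
                    nxt.append(j)
        f += 1
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front
```

For instance, the objective vectors (1, 2) and (2, 1) form the first front, (2, 2) the second, and (3, 3) the third.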

C. SPEA2
SPEA2 [43] stands for the improved version of the Strength Pareto Evolutionary Algorithm (SPEA) [44]. SPEA2 permits good convergence and diversity in the approximated Pareto Front (PF A ) by incorporating an improved fitness assignment scheme, a density estimation technique, and a different archive truncation method.
First of all, the algorithm needs a population size, an external archive size $N_{ext}$, and the maximum number of generations. Next, it generates an initial population and creates an empty external archive. Fitness values are calculated both in the population of the current generation and in the external archive. The fitness assignment uses the number of individuals dominated by each individual as well as the number of individuals that dominate it, with the incorporation of a density estimation technique (an adaptation of the k-th nearest neighbor method [45] with k = 14) to guide the search process.
After that, nondominated individuals of both the current population and the external archive are copied to the external archive of the next generation. The size of the external archive must always be kept fixed through generations by using an archive update operation, which also contains a truncation method that avoids boundary solutions being removed. The truncation method completes the external archive with the best dominated individuals from both the current population and the external archive whenever the number of nondominated individuals is smaller than the archive size. On the contrary, whenever it is larger, the individual with the minimum distance to its neighbor individual is removed.
At this point, if the current number of generations equals the maximum number of generations, the process stops. If not, a binary tournament selection with replacement is performed on the external archive of the next generation.
Then, the recombination operator (e.g., SBX with $p_c$ and $\eta_c$) and the mutation operator (e.g., PM considering $p_m$ and $\eta_m$) are applied to the resulting set of individuals to make up the population of the next generation.
The process continues repeating the steps mentioned above after the empty external archive creation.
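The fitness assignment described above can be sketched as follows. This is a compact rendition of SPEA2's strength + raw fitness + density scheme; the paper fixes k = 14 for the nearest-neighbor density term, while a smaller default is used here only so that the sketch works on tiny populations.

```python
import math

def spea2_fitness(objs, k=2):
    """Sketch of SPEA2 fitness: strength, raw fitness, and a k-th nearest
    neighbor density term (minimization; lower fitness is better)."""
    dom = lambda a, b: (all(x <= y for x, y in zip(a, b))
                        and any(x < y for x, y in zip(a, b)))
    n = len(objs)
    # Strength S(i): how many individuals i dominates.
    strength = [sum(dom(objs[i], objs[j]) for j in range(n)) for i in range(n)]
    # Raw fitness R(i): sum of strengths of the individuals dominating i.
    raw = [sum(strength[j] for j in range(n) if dom(objs[j], objs[i]))
           for i in range(n)]
    fitness = []
    for i in range(n):
        dists = sorted(math.dist(objs[i], objs[j]) for j in range(n) if j != i)
        sigma_k = dists[min(k, len(dists)) - 1]   # k-th nearest distance
        fitness.append(raw[i] + 1.0 / (sigma_k + 2.0))  # density D(i)
    return fitness  # nondominated individuals end up with fitness < 1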

D. SMS-EMOA
The S-metric Selection Evolutionary Multi-objective Optimization Algorithm (SMS-EMOA) [46] employs the HV measure, or S-metric, as an indicator of the solutions' convergence to the true PF and of their distribution along it.
The algorithm is based on both the NDS (as in NSGA-II) and the HV, which is used to eliminate one individual so as to maintain the same population size through generations.
The procedure of the algorithm is explained next: a population with µ individuals is randomly generated. Then, only one offspring individual is created through the operators SBX (with $p_c$ and $\eta_c$) and PM (with $p_m$ and $\eta_m$). After that, the new individual is merged with the current population, and NDS is applied to identify the worst-ranked PF.
Next, the individual with the poorest exclusive contribution to the S-metric in this PF is eliminated. The sequence is repeated: generating a new offspring through SBX and PM, applying NDS, and again discarding the individual with the poorest exclusive contribution to the S-metric in the worst-ranked front. In this way, an approximation to the true PF is found in the last population.
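For two objectives, the exclusive S-metric contribution of each point of a nondominated front reduces to the area of a rectangle, as in the sketch below (minimization; `ref` is a reference point dominated by all front points). Higher-dimensional hypervolume requires more involved algorithms.

```python
def hv_contributions(front, ref):
    """Exclusive hypervolume (S-metric) contribution of each point of a
    2-objective nondominated front under minimization."""
    pts = sorted(front)   # ascending in f1; nondominance => descending in f2
    contrib = {}
    for i, (f1, f2) in enumerate(pts):
        right = pts[i + 1][0] if i + 1 < len(pts) else ref[0]  # next f1 or ref
        upper = pts[i - 1][1] if i > 0 else ref[1]             # prev f2 or ref
        contrib[(f1, f2)] = (right - f1) * (upper - f2)
    return contrib
```

SMS-EMOA would then discard the point with the smallest contribution; the same quantity, denoted $C_{HV}$, is what MODE-HVR later uses to rank its archive.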

E. MOEA/D
MOEA/D is the acronym for the Multi-Objective Evolutionary Algorithm Based on Decomposition. As can be noticed from its name, it decomposes a MOP into several scalar optimization subproblems [47], [48]. The algorithm obtains a solution for all subproblems at the same time in a single run.
A series of aggregation functions, each containing a vector of evenly spread weights over all the objective functions, is formulated to decompose the MOP. For this purpose, any decomposition strategy can be used. Solving each aggregation function is considered a single optimization subproblem. The solutions of two subproblems are usually very similar when their weight vectors are close in the Euclidean sense. A neighborhood of weight vectors is defined by several weight vectors W with the shortest Euclidean distances between them. A neighborhood of a subproblem is determined by the subproblems whose weight vectors are in the same weight neighborhood.
An initial population is randomly generated. A variation of MOEA/D is used here: MOEA/D-DE [48]. In MOEA/D-DE, the population is evolved through a Differential Evolution (DE) [49] operator, and mutation is done using PM (with $p_m$ and $\eta_m$). The crossover between individuals in the weight neighborhood of each individual is worked out through the DE/rand/1/bin variant with a crossover probability $C_r$ [32].
Better generated individuals substitute one or more individuals in the neighborhood; in this way, the best individuals are selected according to the aggregation functions. Neighboring solutions are updated according to the new, better solutions.
The above procedure is repeated through the given generations, and the approximated Pareto front (PF A ) is obtained from the last population.
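One common choice of aggregation function is the Tchebycheff scalarization, sketched below. It is shown as an illustration of the decomposition idea; whether MOEA/D-DE is run with this or another scalarization here is a configuration detail, and `z_star` denotes the ideal point.

```python
def tchebycheff(f, w, z_star):
    """Tchebycheff scalarization: each weight vector w turns the MOP into
    one scalar subproblem min max_i w_i * |f_i - z*_i|."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

# Each weight vector defines one subproblem; subproblems whose weight
# vectors are close in the Euclidean sense form a neighborhood and
# exchange solutions during the replacement step.
```

Minimizing this value for a fixed weight vector pushes the corresponding solution toward one region of the Pareto front, so solving all subproblems covers the front.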
F. NSGA-III
NSGA-III [50], [51] is the acronym of the Nondominated Sorting Genetic Algorithm III. It is a successor of NSGA-II, and in the same manner, it employs nondominated sorting (NDS).
Initially, a population of N parents is generated and evaluated. That population is sorted through the NDS and is evolved through a desired number of generations. In each generation, offspring individuals are created through the operators SBX (with p c and η c ) and PM (with p m and η m ), then they are evaluated. A new set is created as the union of current and offspring populations and is sorted by NDS.
The worst-ranked Pareto front (PF) in this sorting is removed. If the size of the remaining solution set is still greater than N , then the next worst PF must be removed. The above is performed until the population size is equal to or smaller than N . If the size is equal to N , the remaining solution set is used as the new parent population. Otherwise, in the case in which the size is smaller than N , choosing individuals from the last removed PF in some manner is required. For this, solutions in the remaining combined population are normalized and then associated with H predefined reference points regarding closeness.
Next, solutions that belong to the last removed PF are chosen considering their closeness to the less crowded reference point regions and added to the combined population to complete the N individuals [32]. Finally, the population of N individuals is used as the new parent population. The above process goes on, and an approximation to the true PF is found in the last obtained population.
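The association step mentioned above can be sketched as follows: after normalization, each objective vector is assigned to the reference point whose reference line (through the origin) is nearest in perpendicular distance. This is a minimal sketch of that single step, not of NSGA-III's full niching procedure.

```python
import math

def associate(f_norm, ref_points):
    """Assign a normalized objective vector to the index of the closest
    reference line, measured by perpendicular distance."""
    def perp_dist(f, r):
        norm_r = math.sqrt(sum(ri * ri for ri in r))
        t = sum(fi * ri for fi, ri in zip(f, r)) / norm_r  # projection length
        proj = [t * ri / norm_r for ri in r]               # foot of projection
        return math.sqrt(sum((fi - pi) ** 2 for fi, pi in zip(f, proj)))
    return min(range(len(ref_points)),
               key=lambda i: perp_dist(f_norm, ref_points[i]))
```

Counting how many survivors are associated with each reference point is what identifies the less crowded regions from which the remaining individuals are picked.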

G. PROPOSED MODE-HVR
The ''Multi-objective Differential Evolution Algorithm with Hypervolume based Reconfigurability'' (MODE-HVR) is proposed in this paper, and it is based on the Differential Evolution variant DE/rand/1/bin. MODE-HVR comprises two phases, each of which evolves the population for a number of generations. The first phase comprises a percentage $n_s$ of the total number of generations; the rest of the generations are associated with the second phase. In the first phase, the individuals of the population are randomly mutated. This promotes the exploration of the search space in the initial generations. Subsequently, the exploitation of solutions in potentially good regions of the generated $PF_A$ is the objective of the second phase of MODE-HVR. This is achieved through the information shared in the mutation operator: elite individuals, obtained from the percentage $b_m$ of individuals in the $PF_A$ with the largest contributing hypervolume $C_{HV}$ [46], are required for the mutation process. This provides a metric-guided search. In addition, the $PF_A$ is updated at each generation and stored in an external archive. Through this combined search, based on two different ways of combining the individuals in the mutation process, the algorithm focuses on both the diversity and the convergence of solutions in the approximated Pareto front [52].
The pseudocode of the MODE-HVR is given in Algorithm 1 and described next. The external archive X ext is set to an empty population. A random initial population, called the parent population X g=0, with N individuals is generated. In the initial population, the random bound constraint-handling method [53] is used to keep the design variables of the individuals within the corresponding boundaries, i.e., variables outside the boundaries are replaced by random values within them. Through the generations g, the external archive is updated with only the feasible nondominated solutions of the population X g and the previous version of the external archive. The ''continuously updated'' approach [8] is employed for finding the nondominated solutions among the feasible individuals of the population X g and the external archive.
As said before, the individuals mutate in two different ways depending on the generation number, producing the mutant individuals V. In the first generations, given by the percentage n s of the generations (g < n s % of G Max), the mutation is done by using (25) as in the original DE/rand/1/bin mutation process, where F is the scale factor randomly generated in the interval [0, 1] and the design vectors x r1, x r2, x r3 are randomly taken from the parent population X g. The indexes satisfy r 1 ≠ r 2 ≠ r 3 ≠ i, i ∈ {1, . . . , N }. In the second phase (the rest of the generations, i.e., when g ≥ n s % of G Max), the mutation process is also given by (25), but this time the individuals are randomly taken from the percentage b m of solutions with the largest contributing hypervolume C HV in the external archive X ext. If there are not enough members in the external archive to carry out the mutation in the second phase, the individuals are randomly taken from the parent population X g, i.e., the mutation of the individuals is the same as in the first phase.

Algorithm 1 MODE-HVR
1: X ext ← ∅.
2: g ← 0.
3: Generate a population X g of N random individuals.
4: while g ≤ G Max do
5:   X ext ← feasible nondominated solutions of X g ∪ X ext.
6:   if g < n s % of G Max then
7:     A ← X g.
8:   else
9:     Sort X ext in ascending order according to J 1.
10:    for all x i ∈ X ext do
11:      Calculate C HV (x i sort) with (26).
12:      Eliminate x i sort if C HV (x i sort) ≈ 0.
13:    end for
14:    Sort X ext in descending order according to C HV (x i sort).
15:    A ← first b m % of X ext.
16:  end if
17:  for all 1 ≤ i ≤ N do
18:    Randomly choose r 1, r 2, r 3, with (r 1 ≠ r 2 ≠ r 3) ≠ i, from the population A.
19:    Calculate the i-th mutant individual v i of V with (25).
20:    Apply the random bound constraint-handling method to v i.
21:    Generate the i-th offspring individual u i of U with (27).
22:  end for
23:  Select individuals between X g and U (based on Deb's feasibility rules) to set X g+1.
24:  g ← g + 1.
25: end while

Once the mutant individuals are generated, the random bound constraint-handling method [53] is used to keep the design variables of the mutant individuals within the corresponding boundaries. The contributing hypervolume for the i-th individual in the external archive is computed as in (26), where the archive must be sorted in ascending order with respect to the value of the first objective function J 1. The superscript ''sort'' indicates the i-th individual of this sorted population. The contributing hypervolume of the extreme solutions in the external archive is set to the largest value. Also, individuals with a contributing hypervolume of approximately zero (≈ 1E−15) are eliminated.
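For a bi-objective case like the nozzle problem, the contributing hypervolume of a sorted front can be sketched as follows (an illustrative Python sketch under the assumption that both objectives are minimized and the front is mutually nondominated; extreme points are assigned the largest value, as described above, and `contributing_hv` is our name):

```python
import numpy as np

def contributing_hv(front):
    """Exclusive hypervolume contribution of each point of a 2-D front.
    Assumes both objectives are minimized and `front` contains only
    mutually nondominated points."""
    F = np.asarray(front, dtype=float)
    order = np.argsort(F[:, 0])        # ascending in J1 (descending in J2)
    S = F[order]
    n = len(S)
    c = np.empty(n)
    for i in range(1, n - 1):
        # Area dominated exclusively by point i, bounded by its neighbours
        c[i] = (S[i + 1, 0] - S[i, 0]) * (S[i - 1, 1] - S[i, 1])
    big = c[1:n - 1].max() if n > 2 else 1.0
    c[0] = c[-1] = big + 1.0           # extremes receive the largest value
    out = np.empty(n)
    out[order] = c                     # restore the original ordering
    return out
```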
Once the population is mutated, the binomial crossover [54] represented in (27) is used to generate the child population U. The fittest individuals between the parent and the child populations are kept for the next generation in X g+1. Evaluating the fitness of individuals in constrained optimization problems requires a technique for handling constraints [55].
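The DE/rand/1 mutation of (25), the binomial crossover of (27), and the random bound constraint-handling method can be sketched as follows (illustrative Python; `mutate_rand_1` and `binomial_crossover` are our names, and CR is an assumed crossover-rate parameter):

```python
import numpy as np

rng = np.random.default_rng(7)

def mutate_rand_1(A, i, lo, hi):
    """DE/rand/1 mutation: three mutually distinct indices, all different
    from i, drawn from population A; scale factor F ~ U(0, 1).
    Out-of-bound variables are replaced by random in-range values."""
    idx = [j for j in range(len(A)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    F = rng.random()
    v = A[r1] + F * (A[r2] - A[r3])
    bad = (v < lo) | (v > hi)
    v[bad] = lo[bad] + rng.random(bad.sum()) * (hi[bad] - lo[bad])
    return v

def binomial_crossover(x, v, CR=0.9):
    """Binomial crossover: inherit v_j with probability CR, with one
    randomly chosen index forced from v so the child differs from x."""
    D = len(x)
    jrand = rng.integers(D)
    mask = rng.random(D) < CR
    mask[jrand] = True
    return np.where(mask, v, x)
```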
One of the most popular and effective constraint-handling techniques is Deb's feasibility rules [42], which are implemented here to select the fittest individuals that will form the population of the next generation X g+1. This is similar to the ''constrained tournament selection operator'' [8]; the selected constraint-handling technique states the following: 1) If both solutions are feasible, the nondominated one is preferred. If those solutions do not dominate each other, either of them is chosen with the same probability. 2) A feasible solution is preferred over an infeasible solution. 3) Between two infeasible solutions, the one with the smaller constraint violation is preferred.
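The three rules can be sketched as a pairwise selection routine (a minimal Python sketch, assuming minimization and a scalar total constraint violation `cv`; names are ours):

```python
import numpy as np

def dominates(f1, f2):
    """Pareto dominance for minimization."""
    return np.all(f1 <= f2) and np.any(f1 < f2)

def deb_select(f1, cv1, f2, cv2, rng=np.random.default_rng(0)):
    """Deb's feasibility rules: return 0 to keep solution 1, 1 to keep
    solution 2. cv is the total constraint violation (0 means feasible)."""
    if cv1 == 0 and cv2 == 0:             # rule 1: both feasible
        if dominates(f1, f2):
            return 0
        if dominates(f2, f1):
            return 1
        return int(rng.random() < 0.5)    # incomparable: equal probability
    if cv1 == 0:                          # rule 2: feasible beats infeasible
        return 0
    if cv2 == 0:
        return 1
    return 0 if cv1 < cv2 else 1          # rule 3: smaller violation wins
```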
These processes are repeated until a maximum generation number G Max is reached. An approximation to the true PF is found in the last obtained external archive. Different from the algorithm in [56], in MODE-HVR, the design vectors for the second mutation process are chosen based on the contributing hypervolume C HV of their corresponding solutions in the external archive X ext . The above promotes the convergence of the obtained solutions to the true PF and also the diversity of solutions. On the other hand, unlike the HV-MODE in [32], which is convenient for online optimization problems (where the convergence time is a crucial factor, and a reduced number of generations is required for the dynamic environment), the MODE-HVR is focused on offline optimization. In the case of the proposed MODE-HVR, the reinforcement of the diversity and convergence depends on the cumulative nondominated solutions created through generations, while in HV-MODE, they rely on the fulfillment of a percentage of nondominated solutions in the external archive (it does not rely on the generation accumulation).

H. PROPOSED MODE-HVεR
The proposed ''Multi-objective Differential Evolution Algorithm with Hypervolume and ε-dominance-based Reconfigurability'' (MODE-HVεR) is based on the MODE-HVR optimizer presented above. Instead of using only the concept of dominance at each generation (i.e., to update the external archive with the feasible solutions in X g), the ε-dominance is incorporated in line 5 of Algorithm 1 to maintain a Pareto ε-optimal set P * ε in the external archive (see Definition 7).
In MODE-HVεR, the ε-dominance-based archiving strategy for maintaining a Pareto ε-optimal set [40] is used. The strategy consists of including in the external archive only those solutions which are non-ε-dominated by the other solutions. With this strategy, the algorithm is capable of keeping a well-defined number of nondominated solutions in its external archive. A box (hyper-rectangle) size ε i in each objective direction is employed. The term J i,Max represents the maximum value of the i-th objective function, selected by the designer's experience on the problem at hand. In this strategy, only one solution per hyper-rectangle is kept. As a consequence, a defined spread of the approximated Pareto front is attained. The ε-dominance procedure for feasible design vectors x, which helps maintain a Pareto ε-optimal set, is detailed in Algorithm 2 [40], where, for shortness, the hyper-rectangle is named ''box''.

Algorithm 2 ε-dominance (x 1, x 2)
1: for all i ∈ {1, . . . , m} do
2:   box x [i] ← floor(J i (x)/ε i) for x ∈ {x 1, x 2}.
3: end for
4: if box x1 [i] = box x2 [i] ∀i ∈ {1, . . . , m} then
5:   if J (x 1) ≺ J (x 2) then
6:     return true
7:   else
8:     if !(J (x 2) ≺ J (x 1)) then
9:       if (the Euclidean distance between J (x 1) and box x1 [i] ∀i ∈ {1, . . . , m}) < (the Euclidean distance between J (x 2) and box x2 [i] ∀i ∈ {1, . . . , m}) then
10:        return true
11:      else
12:        return false
13:      end if
14:    end if
15:  end if
16:  return false
17: else
18:  if box x1 ≺ box x2 then
19:    return true
20:  else
21:    return false
22:  end if
23: end if
So, besides the attention to convergence and diversity provided by the metric-guided search through the Hypervolume, the proposed MODE-HVεR focuses even more on the spread of the solutions in the approximated Pareto front. This is because the implementation of the ε-dominance concept promotes the spread of solutions in the PF A [40].
Taking into account the strategy described above, the MODE-HVεR algorithm is also given in Algorithm 1; only line 5 of the algorithm must be replaced with ''X ext ← feasible non-ε-dominated solutions of X g ∪ X ext''.
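The ε-dominance archiving strategy can be sketched as follows (an illustrative Python sketch assuming minimization and a per-objective box size ε i; in the same-box case the solution closer to the lower box corner is kept, as in Algorithm 2; `eps_accept` and `box_index` are our names):

```python
import numpy as np

def box_index(f, eps):
    """Hyper-rectangle (box) index of objective vector f (minimization)."""
    return np.floor(np.asarray(f) / eps).astype(int)

def eps_accept(archive, f, eps):
    """Laumanns-style epsilon-dominance archiving: keep at most one
    solution per box; return the updated archive and an acceptance flag."""
    b = box_index(f, eps)
    keep = []
    for g in archive:
        bg = box_index(g, eps)
        if np.all(bg <= b) and np.any(bg < b):   # archived box eps-dominates f
            return archive, False
        if np.all(b <= bg) and np.any(b < bg):   # f's box eps-dominates g
            continue                              # drop g
        if np.array_equal(b, bg):                 # same box: keep closer one
            corner = b * eps
            if np.linalg.norm(g - corner) <= np.linalg.norm(f - corner):
                return archive, False
            continue                              # f replaces g
        keep.append(g)
    keep.append(np.asarray(f, dtype=float))
    return keep, True
```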

IV. RESULTS
In this section, the performance of the algorithms presented above, applied to the multi-objective optimization problem of the nozzle design, is statistically analyzed. Also, the results obtained with the proposed multi-objective design are compared with the solutions achieved by a traditional design; the best of the proposed algorithms is used in this comparison. The traditional design is taken to be based on a scalarization method, where an aggregate objective function is used in a single-objective optimization problem.

A. DESIGN OF ALGORITHM TESTS
Parameters related to the MOP for the nozzle optimum design problem are defined as follows: w = 29 is chosen as the greatest percentage of water in honey. A common industrial pneumatic pressure, p = 600 × 10^3 Pa, is chosen. The honey's density [57] is defined as ρ = 1.3785 × 10^3 kg/m^3. The MOP is solved through seven algorithms from four different solution search approaches. The parameters used for the algorithms are described next: 100 individuals in the population and 2000 generations, except for SMS-EMOA, which requires 200000 generations in order to fulfill the same number of objective function evaluations. With these parameters, the same number of objective function evaluations is performed in all tested algorithms to allow fair comparisons. Specific parameters for every algorithm can be seen in Table 6, where rand(0,1) means a uniformly distributed random number between 0 and 1. Parameters were taken from the original versions of the algorithms, as stated in the corresponding papers or reports of their authors. In the case of the proposals MODE-HVR and MODE-HVεR, the parameters n s and b m were systematically varied from lower to upper values through an empirical procedure. It is suggested to set both parameters to the same small percentage (around 25%) to generate a more uniform Pareto front with better trade-offs.
Thirty independent executions are performed for each algorithm. All algorithms are programmed in MATLAB® on a personal computer with a Windows platform, a 4 GHz processor, and 16.00 GB of RAM.

B. DISCUSSION OF THE ALGORITHM PERFORMANCE
A set of independent executions is needed for each algorithm to gather information for statistical analysis [52]. Since each execution of an algorithm produces a specific approximated Pareto front (PF A), a metric to measure the quality of all of them is required. The ''Free Leftovers'' theorem states that there is no single metric to fully evaluate the quality of a Pareto front (PF) and, consequently, the quality of the algorithm which generates that PF [30].
Two representative performance metrics are the Spacing (SP) and the Hypervolume (HV) [52]. The SP metric indicates the distribution of solutions, which is related to a constant spacing between them along a PF A. The smaller the SP metric, the better the distribution of solutions. On the other hand, the HV metric measures the volume dominated by the obtained solution set with respect to a reference point. The larger the HV metric, the better the convergence and diversity of solutions. Convergence measures the degree of proximity between the solutions in a PF A and those in the true PF [52]. Diversity indicates the distribution and spread of solutions in the PF A. Spread implies how well the solutions in a PF A reach the extreme points of the PF.
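Both metrics can be computed for a bi-objective front as follows (illustrative Python, assuming minimization of both objectives; Schott's Spacing with Manhattan nearest-neighbour distances, and the dominated area for the 2-D Hypervolume; function names are ours):

```python
import numpy as np

def spacing(front):
    """Schott's Spacing: standard deviation of the Manhattan distances
    to the nearest neighbour in objective space; smaller is better."""
    F = np.asarray(front, dtype=float)
    D = np.abs(F[:, None, :] - F[None, :, :]).sum(axis=2)
    np.fill_diagonal(D, np.inf)
    d = D.min(axis=1)                       # nearest-neighbour distances
    return np.sqrt(((d - d.mean()) ** 2).mean())

def hypervolume_2d(front, ref):
    """Area dominated by a 2-D front w.r.t. the reference point `ref`
    (both objectives minimized); larger is better."""
    pts = np.asarray(front, dtype=float)
    S = pts[np.argsort(pts[:, 0])]          # ascending in J1
    hv, prev_j2 = 0.0, float(ref[1])
    for j1, j2 in S:
        if j2 < prev_j2:                    # only nondominated steps add area
            hv += (ref[0] - j1) * (prev_j2 - j2)
            prev_j2 = j2
    return hv
```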
By measuring the quality of all the obtained PF A in this way, the two goals of multi-objective optimization are considered: a) finding a set of solutions as close as possible to the true Pareto front and b) finding a set of solutions as diverse as possible [43].

1) DESCRIPTIVE STATISTICS
In Tables 7 and 8, descriptive statistics of the independent executions for the SP and the HV metrics are reported, respectively. The better values in each column are written in italics. The best algorithm regarding the mean value (''mean'' column) is written in boldface. The standard deviation values are displayed in the ''σ'' column. Also, the fourth and fifth columns show the best and the worst behavior obtained within all executions of each algorithm, respectively.
The summary of the descriptive statistics is the following: i) The best behaviors regarding the mean values of the SP metric are obtained by the MODE-HVR and MODE-HVεR algorithms, in that order. For the HV metric, the best mean values are obtained by MODE-HVR, tied with the MODE-HVεR, followed by the SMS-EMOA, SPEA2, and NSGA-II. ii) The MODE-HVR provides the smallest standard deviation in the SP metric, followed by the SMS-EMOA and the MODE-HVεR. Regarding the HV metric, the lowest standard deviations are obtained by NSGA-II and SMS-EMOA, in that order. The standard deviation indicates that those algorithms behave similarly across executions. iii) The best performance among all executions is given by MODE-HVR with 0.4832 × 10^−4 for the SP metric and by NSGA-III with 0.5255 for the HV metric. iv) The worst behavior regarding both metrics is exhibited by NSGA-III.

2) INFERENTIAL NONPARAMETRIC STATISTICS
Descriptive statistics only summarize the characteristics of a given data set. Inferential statistics must be carried out to establish the confidence in the algorithm performance.
Considering that the seven algorithms have a stochastic basis, which frequently violates the independence, normality, and homoscedasticity assumptions, nonparametric statistical tests are used to determine whether an algorithm outperforms the others [58]. First, the multiple-comparison Friedman test is carried out. This test detects whether at least two samples possess different median values. For this, two hypotheses are considered, the null and the alternative. The alternative hypothesis establishes that the medians of the data set are different, while the null hypothesis indicates that the medians of the data set are the same.
The rejection of the null hypothesis (i.e., the acceptance of the alternative hypothesis) is based on the p-value of the data set and the significance level α = 0.05. The smaller the p-value with respect to α, the stronger the evidence against the null hypothesis. Tables 9 and 10 present the Friedman test results for the SP and HV metrics. The obtained p-values are on the order of 1 × 10^−35 and 1 × 10^−32 for the SP and the HV metrics, respectively. These values strongly indicate the existence of significant differences among the performance of the seven compared algorithms.
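The Friedman test on per-execution metric values can be reproduced with standard tools; for instance (illustrative Python with synthetic data, not the paper's measurements):

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
# Illustrative data only: 30 runs of an SP-like metric for three
# hypothetical algorithms with slightly shifted medians.
runs = 30
algo_a = rng.normal(1.0, 0.05, runs)
algo_b = rng.normal(1.2, 0.05, runs)
algo_c = rng.normal(1.4, 0.05, runs)

# Each argument is one "treatment" (algorithm) measured on the same
# blocks (the 30 executions); the test ranks values within each block.
stat, p = friedmanchisquare(algo_a, algo_b, algo_c)
alpha = 0.05
if p < alpha:
    print(f"p = {p:.3g} < {alpha}: reject H0, medians differ")
else:
    print(f"p = {p:.3g}: no evidence of different medians")
```

When the null hypothesis is rejected, post hoc procedures with adjusted p-values (as in Tables 11 and 12) identify which pairs differ.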
Once a significant difference in the data set is found, the Holm, Shaffer, and Bergmann-Hommel post hoc tests are performed. In these tests, the p-value is adjusted by taking into account the accumulated error in the data set of the compared algorithms.
The adjusted p-values (APVs) are shown in Tables 11 and 12 for the SP and HV metrics, respectively. The winner of each pairwise comparison is established through the z-value column. For the SP metric, a negative/positive z-value denotes that the first/second algorithm is better than the other, provided the p-value is less than the significance level α. For the HV metric, the signs are interchanged: a positive/negative z-value denotes that the first/second algorithm is better. The z-value is obtained by using the ranks of Tables 9 and 10 [58].
Italics in Tables 11 and 12 indicate that the unadjusted p-value or the APVs are less than α = 0.05. Also, the winner of a specific pairwise comparison is written in boldface whenever all the post hoc procedures satisfy the significance level.
The victory-count summary of the compared algorithms is presented in Table 13 for the SP metric and in Table 14 for the HV metric. The overall winner is written in boldface. Based on the nonparametric inferential statistical tests given above, the following findings are obtained: • The MODE-HVR outperforms the other algorithms, with fifteen victories for the SP metric and also fifteen victories for the HV metric. The second-best algorithm is the MODE-HVεR, since it presents twelve victories for the SP metric and fifteen for the HV metric. The SMS-EMOA and the SPEA2 are tied for the third-best behavior when summing their victories for both the SP and the HV metrics. As said above, the SMS-EMOA is based on the use of metrics and the SPEA2 on Pareto dominance. The best performance of the proposed algorithms is attributed to the metric-based search approach. This approach guides the search towards promising regions of the Pareto front based on the contributing hypervolume, in such a way that elite individuals are mutated and recombined. Hence, more reconfigurability in the nozzle design is obtained, i.e., the Pareto front presents a uniform distribution with a suitable convergence and diversity of design solutions.
• The worst behaviors are those of NSGA-III and MOEA/D for both the SP and the HV metrics. This is attributed to the user influence on the predefined targeted search present in these two algorithms [50]. A set of reference points and a set of weight vectors are required in NSGA-III and MOEA/D, respectively. Optimal points are found corresponding to each of the targeted search directions or regions in the Pareto front (PF). The search is not performed over the entire search space, which decreases the computational cost while maintaining diversity among the solutions. The number of reference points in NSGA-III and the number of neighbors of each weight vector in MOEA/D are prerogatives of the algorithm's user. This strategy is useful when the number of objective functions is large, because the number of nondominated solutions increases exponentially with the number of objectives and their handling becomes computationally expensive. In the case of the MOP for the nozzle design, this strategy does not improve the SP and HV metrics.

3) FILTERED PARETO FRONTS
Filtered Pareto fronts, built from the PFs of each of the thirty independent executions of the seven algorithms, are shown in Fig. 4. The SP and HV metric values for the filtered PFs are displayed in Table 15. The best values of both metrics are highlighted in boldface. The following is observed: the best behavior related to the SP metric is shown by the SMS-EMOA algorithm, followed by the MODE-HVεR. According to the HV metric, the best behavior is achieved by the MODE-HVεR algorithm, followed by the SMS-EMOA and the MODE-HVR. These facts confirm the competitiveness of the proposed algorithms.
On the other hand, the filtered PF involves 3760 nondominated solutions for the MODE-HVR, 3057 for the MODE-HVεR, 2152 for the SMS-EMOA, 1568 for the SPEA2, 1563 for the NSGA-II, 780 for the MOEA/D, and 659 for the NSGA-III. As can be seen, the largest numbers of nondominated solutions are obtained by MODE-HVR, MODE-HVεR, and SMS-EMOA, in that order. On the opposite side, the smallest numbers of nondominated solutions are achieved by NSGA-III and MOEA/D. This has some implications. The HV metric judges how well the solutions in a PF A reach the extreme points of the true PF, and algorithms that find more solutions tend to cover the extreme points better. Besides, the HV metric measures the volume dominated by the obtained solution set, and algorithms that obtain more solutions accomplish a larger dominated volume. As a consequence, the better distribution, spread, and convergence of the MODE-HVR and MODE-HVεR algorithms are attributed to their larger number of found nondominated solutions.
It is important to note that the number of found solutions greatly influences the design of the nozzle by allowing the designer to choose among a wide variety of configurations. The dependency between the design space and the objective space can be explored here. In this case, the algorithm which provides the highest number of nondominated solutions in the comparative analysis of the filtered Pareto front, the MODE-HVR, is selected. In Fig. 5(a)-(b), the Pareto optimal set (in the design space) and the Pareto front (in the objective space) of the MODE-HVR are displayed. The Pareto optimal set is sorted in ascending order with respect to the second design variable x 2 (temperature) and split into three equal parts. Vectors with higher temperature values are displayed in green, middle values in blue, and lower ones in red. Besides, a close-up of Fig. 5(b) is shown in Fig. 5(c). It is observed that the Pareto optimal set, thanks to the sorting, is uniformly distributed in the design space (see Fig. 5(a)), whilst the solutions in the Pareto front present a nonlinear distribution (see Fig. 5(c)). Hence, the nozzle design problem presents a nonlinear relationship between the Pareto front and the Pareto optimal set, which reveals the importance of the study of different search approaches in the multi-objective optimizers presented in this paper.
The last discussion point in this subsection is related to testing different instances of pressure and density in the problem. Clearly, different values of pressure (p) and liquid density (ρ) will result in different Pareto fronts. Under such changes in the optimization problem for the nozzle design, the performance of the best algorithms with respect to a given metric is not guaranteed across instances [30]. Nevertheless, we empirically observed that large sets of design configurations are obtained by using the proposed algorithms with different instances of the problem (with different pressure and liquid density-viscosity values). This indicates that the proposals provide appropriate design solutions with a uniform Pareto front, promoting the reconfigurability (synergy) in the nozzle design. Hence, the proposed algorithms can be used as promising alternatives to optimize the nozzle design when different instances are required. It is important to remark that the proposed design of the nozzle includes nineteen bores, so the total flow rate is 6.916 × 10^−3 m^3/s, which is 18.69 times larger than the minimum required flow rate (3.7 × 10^−4 m^3/s).
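The flow-rate margin quoted above is a simple ratio, which can be checked directly:

```python
# Quick arithmetic check of the total-flow-rate claim: nineteen bores,
# with the per-bore flow rate implied by the reported total.
bores = 19
total_q = 6.916e-3        # m^3/s, reported total flow rate
required_q = 3.7e-4       # m^3/s, minimum required flow rate
ratio = total_q / required_q
print(f"per-bore flow: {total_q / bores:.3e} m^3/s, margin: {ratio:.2f}x")
```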
Having determined the appropriate design results for the application, at the designer's discretion, a comparative study is also made. This comparison confronts the proposed multi-objective design for the nozzle with a traditional design to show the advantages of the former. The traditional design transforms the MOP into a mono-objective optimization problem through a scalarization method (the weighted sum approach). The traditional design considers an aggregate function of J 1 and J 2, normalized by J 1,Max and J 2,Max, the same constraints given in (16), (17), and (18), and the same design variable bounds described in (19). The coefficients w 1 and w 2 are the weights assigned to the corresponding objective function terms. The larger w 1 or w 2 is selected, the more privileged the flow rate J 1 or the bore length J 2 is, respectively. The maximum values found for J 1 and J 2 are J 1,Max = 7.176 × 10^−4 m^3/s and J 2,Max = 1 × 10^−1 m, respectively. These values are determined by maximizing each objective function separately.
One thousand independent combinations of randomly distributed weights such that w 1 + w 2 = 1 are used in the weighted sum approach. A nonlinear programming method, Sequential Quadratic Programming (SQP) [59], is applied to the resulting optimization problems with random initial conditions.
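The weighted-sum procedure can be sketched on a toy problem (illustrative Python using SciPy's SLSQP in place of the SQP implementation of [59]; the nozzle objectives and constraints are not reproduced, so `j1` and `j2` are stand-ins):

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex bi-objective problem standing in for the nozzle's J1, J2.
def j1(x): return (x[0] - 1.0) ** 2 + x[1] ** 2
def j2(x): return x[0] ** 2 + (x[1] - 1.0) ** 2

rng = np.random.default_rng(3)
solutions = []
for _ in range(50):
    w1 = rng.random()
    w2 = 1.0 - w1                         # weights sum to one
    agg = lambda x: w1 * j1(x) + w2 * j2(x)
    x0 = rng.uniform(-1.0, 2.0, size=2)   # random initial condition
    res = minimize(agg, x0, method="SLSQP", bounds=[(-2, 2), (-2, 2)])
    solutions.append(res.x)
```

On a convex front, each weight pair reaches a distinct Pareto point (here the segment x 1 + x 2 = 1); on nonconvex fronts, whole regions are unreachable by any weight combination, which helps explain why so few trade-offs were found for the nozzle problem.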
Based on the results, only three design trade-offs in the PF are obtained by SQP; they can be observed in Fig. 4(e) as filled triangles. This number of design trade-offs is very small compared with the 3760 trade-offs in the multi-objective design obtained by the MODE-HVR, indicating that diversity is promoted better by the latter algorithm.
Two of the three solutions correspond to the extreme points of the PF. These solutions are also obtained when only one design objective function is considered (i.e., when one weight, w 1 or w 2, is set to zero), and hence the synergy between the flow rate and the bore length is not fulfilled.
The third solution obtained by SQP is found in the neighborhood of the PF's center, [J 1 (3,SQP), J 2 (3,SQP)] = [3.028 × 10^−4 m^3/s, 4.22 × 10^−2 m]. For this solution, a Reynolds number of Re = 1.38 × 10^2 is obtained. Also, this trade-off is near the point [J 1 (*,HVR), J 2 (*,HVR)] corresponding to the selected design variable vector x (*,HVR) obtained by the proposed best algorithm, MODE-HVR. This indicates that the convergence of the SQP is very similar to that of the selected MODE-HVR solution.
A high flow rate J 1 is beneficial because the product pouring time of the nozzle is reduced. On the other hand, a small bore length J 2 can positively impact the assembly and manufacturing costs. The main drawback of the traditional design with a weighted sum strategy is thus the lack of a wide variety of configurations (only three solutions are found) in the obtained design.
The above confirms that a uniform distribution of weights does not imply a uniform distribution of the solutions in the PF [7], and also indicates that only 0.079% of the Pareto solutions of the MODE-HVR are found. As a consequence, the main advantage of the proposed multi-objective design for the nozzle is that a broad set of configurations with different trade-offs can be obtained, from which the designer can make a more suitable decision for a particular application. Hence, the proposal favors a better decision-making process.

D. OPTIMAL DESIGN VALIDATION
In this section, the selected optimum solution is validated by using the Ansys Fluent® Computational Fluid Dynamics (CFD) software. A fluid with the characteristics of honey is made to flow through a tube in the simulation. The diameter and the length of the bore correspond to x (*,HVR). The conditions described in Table 16 are considered. Fig. 6 shows a visual representation of the obtained results, and different findings are observed: • It is confirmed that the flow rate corresponding to the obtained design matches the flow rate computed by the CFD software with an agreement of around 92%. This shows the confidence of the results obtained by the proposal.
• The visual representation of the fluid velocities shown in Fig. 6 implies that the streamlines (lines that start at the inlet and end at the outlet) are almost parallel and straight at the outlet. A flow with concentrically increasing velocities can be seen, i.e., velocities are low near the wall of the bore compared with those at the center of the bore. Hence, the fulfillment of the constraint g 1, which is related to the Reynolds number and to the laminar flow, is guaranteed. The computer-aided design of a possible implementation of the proposed nozzle design, which considers the obtained results, is presented in Fig. 7.

V. CONCLUSIONS
In this paper, the nozzle design for a viscous fluid is established as a multi-objective optimization problem (MOP) where the flow rate and the nozzle length are simultaneously considered.
The study of different search strategies in the multi-objective optimizers to solve the MOP is developed with approaches based on Pareto dominance (NSGA-II and SPEA2), decomposition (MOEA/D), metrics (SMS-EMOA), and hybridization (NSGA-III). Two optimizers based on the Hypervolume performance metric are proposed (MODE-HVR and MODE-HVεR), one of which also includes the ε-dominance concept (MODE-HVεR).
A comparative statistical analysis among the different multi-objective optimizers is performed. This analysis indicates that the proposed algorithms (MODE-HVR and MODE-HVεR) can improve the Spacing and the Hypervolume indicators of the obtained Pareto front. Consequently, more solutions with better convergence and diversity (distribution and spread) are obtained, and the reconfigurability (synergy) in the design is promoted by these proposals.
That favorable behavior is attributed to the explorative and exploitative features included in the proposed algorithms. The explorative feature is based on the storage of solutions corresponding to different regions of the Pareto front in the first phase of generations. The exploitative feature uses elitism to guide the search towards promising regions of the Pareto front in the second phase of generations, i.e., towards design solutions of the Pareto front with a suitable Hypervolume metric. Moreover, the use of the ε-dominance concept in MODE-HVεR positively influences the number of found nondominated solutions (i.e., more solutions are obtained), and the size of the hyper-rectangles contributes to the convergence to the true Pareto front. An adequate selection of the size of the hyper-rectangles can provide a performance similar to that of the other proposal, MODE-HVR.
From a general point of view, the multi-objective search approach based on metrics provides the most competitive algorithm performance in the nozzle design.
On the other hand, in the traditional design, where both design criteria are considered as an aggregate function (scalarization method), only three different design solutions are found even when selecting a broad set of different weights. Compared with the traditional design, the proposed multi-objective design strategy for the nozzle can obtain more solutions (from 659 to 2302 solutions, depending on the optimizer). These different design trade-offs represent more design reconfigurations obtained by the best proposal, allowing the designer to have a wide range of solutions and to choose the most suitable one for a particular application.

ACKNOWLEDGMENT
Víctor Darío Cuervo Pinto acknowledges the paid leave authorized by COTEBAL for his Ph.D. studies at CIDETEC, IPN.