Digital Twin-Based Optimization for Ultraprecision Motion Systems With Backlash and Friction

A digital twin-based optimization procedure is presented for an ultraprecision motion system with a flexible shaft connecting the motor to the (elastic) load, which is subject to both backlash and friction. The main contributions of the study are the design of the digital twin and its implementation, assuming a two-mass drive system. The procedure includes the virtual representation of the mechanical and electrical components, the nonlinearities (backlash and friction), and the corresponding control system. A procedure for digital twin-based optimization is also presented, in which the maximum absolute position error is minimized while maintaining accuracy with no significant increase in the control effort. The optimal settings for the controller parameters and for the backlash peak amplitude, the backlash peak time, and the hysteresis amplitude are then determined, in order to guarantee an appropriate dynamic response in the presence of backlash and friction. The surface quality of certain manufactured components, such as hip and knee implants, depends on the smoothness and accuracy of the real trajectory produced in the cutting process, which is strongly influenced by the maximum position error. Simulation and experimental studies are presented using a real platform and two reference trajectories for trajectory control, together with a comparison of four digital twin-based optimization methods. The simulation study and the real-time experiments demonstrate the suitability of the digital twin-based optimization procedure and lay the foundations for the implementation of the proposed method at an industrial level.


I. INTRODUCTION
Nowadays, Industrial Cyber-Physical Systems (ICPS) lead to new production concepts that call for seamlessly integrated simulation models and different abstraction levels for increasing competitiveness [1]. In this context, the Digital Twin (DT) approach has emerged as a key concept for modeling, simulation, and optimization of ICPS. Indeed, the main rationale behind a DT is its capability to integrate multiphysics and multiscale systems and their heterogeneity. It does so by making use of the best available representations (physical and virtual models) for perfect emulation and mirroring of the operating conditions of the corresponding real systems [2], [3]. Any hardware or software prototype that can be used to emulate real performance and thereby real-time behavior
can therefore be considered a DT, and not necessarily a unique one. Some simulation aspects of DTs, taking into account optimized operations and failure predictions, were analyzed in [4]. Important model properties such as model scalability, interoperability, expansibility, and fidelity were analyzed in [5] through a reference model for the DT in design and production engineering. The study pointed to the main differences between a conceptual model and the corresponding virtual representation for the DT. The similarities, differences, and complementarities between big data and DT, and the extent to which both can be integrated to promote smart manufacturing and Industry 4.0, were thoroughly reviewed in [6]. Recent studies have focused on ways to produce and to use big data in ICPS throughout the product lifecycle, on the basis of a method for product design, manufacturing, and service driven by the DT procedure [7]. The case study in [8] connected the simulation tool to the factory database, in order to demonstrate one possible solution that moves towards a semantic web and a linked-data approach for factory systems.
New research working towards shop-floor DT systems has been reported in the literature, as well as key components ranging from physical to DT data [9]. However, three open issues still limit the practical implementation of DT: the bottleneck of communicating physical and virtual spaces to support interaction in real time; the physical space and its variability, uncertainty, complexity, and ambiguity, which complicate highly accurate and high-fidelity mirroring of the physical system; and the different time scales of discrete, virtual, and continuous physical spaces. A DT-based model for the individualized design of a hollow glass production line, combining custom design theory, basic synchronization technology, and a hierarchical multi-objective optimization algorithm, was proposed in [10]. The potential applications of DTs in design, system integration, diagnostics, prediction, and advanced services were likewise analyzed in [11]. A joint optimization model was proposed for coordinating a micro-punching system and a staggered process using DTs in [12]. Co-simulation during runtime of DTs, aiming at ''Plug-and-Simulate'' behavior, is a challenging and as yet unresolved issue [13]. Communications and the required cybersecurity technologies for developing DTs are key topics that go beyond the scope of this paper [14]. In a recent review of state-of-the-art DT industrial applications, both the quick growth and the demand for suitable and efficient DT implementations in response to the main industrial technical challenges were corroborated in [15].
The integration of manufacturing data and sensory data into DTs of virtual systems to improve their accountability and capabilities for cyber-physical manufacturing is a key factor with very promising solutions to improve the accuracy of machine tools and their capabilities [16]. Prognostics and health management in the lifecycle monitoring of a product using DTs was explored in [17], to improve both the accuracy and the efficiency of complex equipment functioning in harsh environments.
Likewise, promising results have recently been reported for the evaluation of process plans with dynamic changes of machining conditions and DT-related uncertainties [18]. Along the same lines, on-going research into modelling DTs and their application framework for machine tools equipped with Computerized Numerical Control (CNC) that use unified modeling language and mapping strategies was reported in [19] with very promising initial results. Similarly, the main components of a DT for a machine tool including the finite element models of the structure, the model of the cutting process, and the model of the transmission chains and the control systems was reported in [20].
The main limitations of the above-mentioned approaches are their weak focus on control system performance and the unclear DT-based optimization procedures [21]. The CNC of a machine tool center is to date still the cornerstone of the manufacturing process, the quality and the efficiency of which rely on the efficient performance of the cascade control system. Cascade control is configured as two nested loops, the basis of which is that the fast dynamics of the internal loop will allow a more rapid attenuation of any disturbance and will minimize its possible effects, before it affects the primary output, which is the variable of interest that is controlled. In CNC machine tools, this variable is the position signal that generates a trajectory that must be followed during cutting. Large manufacturers of control systems such as Siemens, Heidenhain, and Fagor continue to provide a cascade P-PI solution for their machine tools, due to its robustness, low cost, and relatively simple tuning rules [22]. Many model-based control strategies have been explored, such as model predictive control [23], and robust control [24], but with very limited impact in real industrial setups.
However, tuning all the required parameters using frequency analysis and experimental decoupled rules is a slow, cumbersome, and inefficient procedure. The cross-correlation of parameters and the cross influence of the control parameters and feedforward components in the presence of hard nonlinearities, such as friction and backlash, limit the optimal setting of control and compensation parameters. In this paper, a DT-based optimization procedure for whole motion systems in the presence of backlash and friction is presented. The systems are assumed to be two-mass drive systems with a flexible shaft connecting the motor and the (elastic) load. This topic has received great attention from the scientific community, because the identification of the mechanical parameters of two-mass drive systems is not straightforward [25]. The key point, which to the best of the authors' knowledge has not previously been addressed, is the design and implementation of the DT for the whole system, including the virtual representation of all mechanical and electrical components including the load, the main nonlinearities (backlash and friction), and the corresponding control system. Moreover, the application of a DT that improves the control system behavior on the basis of optimization, as in this case study, is another approach with many applications, among them machine tools. The main optimization objective is to minimize the maximum position error while maintaining accuracy and with no significant increase in the control effort. Novel aspects of this work also include simulation and experimental studies on a real platform using different trajectories for tracking control and the comparison of four methods for DT-based optimization in real experiments.
The paper is organized as follows. Following this introductory section, the DT implementation and validation will be presented. The third section will describe the DT-based optimization of the ultraprecision motion system, which will include the experimental validation. Finally, the concluding remarks will be summarized in the fourth section.

II. DIGITAL TWIN IMPLEMENTATION AND VALIDATION
A. PHYSICAL SYSTEM DESCRIPTION
Our study is focused on producing a DT for a real ultraprecision motion system that is widely used in machine tool centers with CNC. The platform, shown in Fig. 1, consists of a spindle-screw system for the longitudinal movement of a carriage. The spindle-screw system and the carriage are mounted on a platform that can be rotated with respect to the base. This industrial platform also has a rigid system for locking each position, in addition to a shock absorber that prevents the impact of the pivoted assembly against the bench in case of unrestrained movement. The bench was arranged in a horizontal configuration, in order to carry out the tests.
The main parameters of the system are listed in Table 1.

B. DIGITAL TWIN DESCRIPTION AND IMPLEMENTATION
A DT was implemented, in order to analyze the behavior of the system. It is composed of two main components: the electromechanical model of the system and the model of the P-PI cascade controller (Fig. 2). The representation of the mechanical part is inspired by a two-mass system with a spring, in order to represent three clearly differentiated elements: the motor, the shaft, and the load. The parameters of this model are: the axis torsional stiffness, K; the damping, B; the motor moment of inertia, J_M; the load moment of inertia, J_L; the electromechanical torque applied by the motor, M_M; the load torque, M_L; and the shaft torque, M_S. In the proposed model, the angular velocities of the motor mass, ω_M, and the load, ω_L, and the shaft torque, M_S, will be used as the state variables [26].
If the antiresonance and resonance frequencies, ω_01 and ω_02, are defined as:

ω_01 = sqrt(K / J_L),   ω_02 = sqrt(K (J_M + J_L) / (J_M J_L))   (1)

and the damping coefficients, D_1 and D_2, are:

D_1 = B / (2 sqrt(K J_L)),   D_2 = (B / 2) sqrt((J_M + J_L) / (K J_M J_L))   (2)

then the transfer function from the motor torque to the motor speed will be:

ω_M(s) / M_M(s) = [1 / ((J_M + J_L) s)] · (s²/ω_01² + 2D_1 s/ω_01 + 1) / (s²/ω_02² + 2D_2 s/ω_02 + 1)   (3)

By using the previously defined values of ω_02, D_1, and D_2, the second transfer function, from the motor torque to the load speed, becomes:

ω_L(s) / M_M(s) = [1 / ((J_M + J_L) s)] · (2D_1 s/ω_01 + 1) / (s²/ω_02² + 2D_2 s/ω_02 + 1)   (4)

In addition to the mechanical model of the motor-load assembly, an electric model that relates a control signal (voltage or electric current) to the torque developed by the motor is required. In practice, the dynamics of the electrical part are much faster than those of the mechanical part and can therefore be neglected, which reduces the electric model of the motor to a constant gain. This is all the more convenient because identifying the moment of inertia, the viscous friction coefficient, and the load torque, and setting their values, is very challenging. The DT is an alternative method for representing the two-mass drive system [25].
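As an illustration, the antiresonance/resonance and damping definitions above can be evaluated numerically. The following sketch assumes the standard two-mass drive relations; the parameter values used below are illustrative, not those of the real platform:

```python
import math

def two_mass_frequencies(J_M, J_L, K, B):
    """Antiresonance/resonance frequencies and damping coefficients of a
    two-mass drive with shaft stiffness K and damping B (standard model)."""
    w01 = math.sqrt(K / J_L)                         # antiresonance (load side)
    w02 = math.sqrt(K * (J_M + J_L) / (J_M * J_L))   # resonance (coupled system)
    D1 = B / (2.0 * math.sqrt(K * J_L))
    D2 = (B / 2.0) * math.sqrt((J_M + J_L) / (K * J_M * J_L))
    return w01, w02, D1, D2

# Illustrative parameters: equal inertias, stiff shaft, light damping.
w01, w02, D1, D2 = two_mass_frequencies(J_M=1.0, J_L=1.0, K=100.0, B=0.2)
```

Note that the resonance ω_02 always lies above the antiresonance ω_01, since ω_02/ω_01 = sqrt(1 + J_L/J_M).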
It is also necessary to represent friction, backlash, and noise through computationally efficient models. Friction is a phenomenon inherent in any electromechanical system that impairs its functional operation. The most basic and the most widely used friction model in industry is the Coulomb model, where the friction force, F, is constant with a value F_C and depends on the direction of the velocity. By adding a small viscous friction component, F_V, that depends on the relative velocity between the surfaces, v, the conventional model can be expressed as shown in (5):

F = F_C sgn(v) + F_V v   (5)

A hysteresis block is also added to solve the problem of discontinuity at the zero crossing. This friction model, with experimental results close to real friction behavior, is very simple and effective at a computational level. Some research effort has sought to build models that are a combination of both approaches [27]. The conventional model only takes mechanical hysteresis into account by means of a dead-band zone centered on the offset equilibrium point.
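A minimal sketch of the Coulomb-plus-viscous friction model follows, with the zero-crossing discontinuity handled by a simple dead band of width f_H. The dead-band treatment is an approximation of the hysteresis block described above, and the numeric values in the example are illustrative:

```python
def friction_torque(v, F_C, F_V, f_H):
    """Coulomb-plus-viscous friction: F = F_C*sgn(v) + F_V*v, with the
    Coulomb term suppressed inside a dead band |v| < f_H so that the
    model is continuous through the zero crossing (simplified hysteresis)."""
    if abs(v) < f_H:
        return F_V * v            # inside the dead band: viscous term only
    sign = 1.0 if v > 0 else -1.0
    return F_C * sign + F_V * v
```

A full hysteresis block would also remember the previous direction of motion; the dead band alone already removes the discontinuity that destabilizes fixed-step simulation.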
Finally, the influence of unmodeled dynamics on the plant is represented by a disturbance in the form of noise in the load position signal. In this work, a Fourier-series decomposition of the acquired signal makes it possible to identify the main harmonics of the real signals acquired from a machine tool.
The P-PI control structure is defined in a cascade with feedforward components (speed and acceleration), and the set of ''plant + nonlinearities'' is modeled and represented using (3) and (4). A friction model based on (5) with hysteresis and the well-known dead zone model were used, respectively, for the definition of the friction nonlinearity and for the definition of the backlash.
Leadscrew backlash compensation and friction compensation are included in the DT diagram shown in Fig. 2. The anticipative component creates either a positive or a negative discrete pulse, depending on the change of displacement in one direction or the other. The reversal peak backlash compensation is therefore performed by increasing the motor speed (backlash peak amplitude, PP_2) for a time period (backlash peak time, PP_3), so that the exponential compensation of the backlash due to the movement reversal peak will be:

u_c(t) = PP_2 e^(−t/PP_3)   (6)

This additional command pulse is used to recover the possible spindle backlash in the motion reversals. Every time the motion of the axis is inverted, the CNC applies the set point corresponding to the movement plus the additional set point indicated by the above parameters.
Another important parameter is the hysteresis amplitude, f_H, which solves the zero-crossing discontinuity problem in the Coulomb-plus-viscous friction model. It also controls when to start the exponential compensation (6) due to the movement inversion peak, after an inversion of the direction of movement has been detected. In this way, the exponential compensation is not triggered every time an inversion command is received.
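The reversal-peak compensation logic can be sketched as follows, assuming an exponential pulse of amplitude PP_2 and time constant PP_3 triggered when the commanded velocity changes sign beyond the hysteresis threshold f_H. The function and its state dictionary are a hypothetical illustration, not the CNC implementation:

```python
import math

def backlash_compensation(v_cmd, t, state, PP2, PP3, f_H):
    """One step of reversal-peak compensation. When the commanded velocity
    changes sign (beyond the hysteresis threshold f_H), an exponential
    pulse of amplitude PP2 and time constant PP3 is added to the command.
    `state` carries the previous direction and the last reversal time."""
    d = 0
    if abs(v_cmd) >= f_H:                     # ignore commands inside hysteresis
        d = 1 if v_cmd > 0 else -1
    if d != 0 and state["dir"] != 0 and d != state["dir"]:
        state["t_rev"] = t                    # direction reversal: restart pulse
    if d != 0:
        state["dir"] = d
    if state["t_rev"] is None:
        return 0.0
    return state["dir"] * PP2 * math.exp(-(t - state["t_rev"]) / PP3)
```

The hysteresis threshold prevents small command inversions (e.g. sensor noise around standstill) from retriggering the pulse, which matches the role described for f_H above.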
Therefore, in this work, six tuning parameters that strongly influence the dynamic behavior (transient response and accuracy) of the motion system are considered in the DT of the whole system:

v = [K_p^pos, K_p^vel, K_i^vel, PP_2, PP_3, f_H]   (7)

where K_p^pos is the proportional gain of the outer loop (position controller); K_p^vel and K_i^vel represent the proportional and integral gains, respectively, of the inner loop (speed controller); PP_2 is the backlash peak amplitude and PP_3 is the backlash peak time, which are the compensators for the backlash; and f_H is the compensator for the friction hysteresis.

C. DIGITAL TWIN VALIDATION
Machine motion corresponding to a test trajectory or reference position (see Fig. 3) was simulated and compared with actual measured values, for DT validation and implementation.
The values of the adjusted parameters were obtained from a standard method proposed in the literature and applied in industry, called the Fine Tune (FT) method [28]. The FT method is a proprietary tool that can be loaded directly on the open CNC or on a personal computer. This auto-tuning method serves to perform fine servo-performance tuning, one axis at a time or all axes automatically, by means of a combination of experimental studies and frequency response diagrams. Within the graphics mode, Bode frequency response diagrams can be displayed that allow the user to interpret the dynamic behavior of each axis and make decisions for later readjustments of the axis control loop. Within the operating mode, the user takes measurements and develops a more detailed display of the various diagrams. The results create an information bar that displays the data regarding the cutoff frequency, the gain margin, and the phase margin of the Bode diagram before and after the auto-tuning. It thereby produces a helpful visual improvement chart for final decision-making by the user. This iterative process is very costly, and a conservative estimate of the average parameter tuning time would be 5400 s, even though it could take days, depending on the machine tool. Although further analysis of this method is beyond the scope of this paper, additional details are available in [29].
The corresponding values were obtained using the FT method, by combining experimental data processing and frequency analysis. The position values corresponding to the reference were simulated using the proposed DT. After that, real values were experimentally obtained on the test platform. The error was computed as the difference between the simulated and actual values. After the initial transient, the error exhibited steady-state behavior (see Fig. 4), with low values. A remarkable aspect was the relationship between the error and the speed, with higher error values at higher speeds. Peaks also occurred at speed changes. This behavior was caused by the dynamic characteristics of the DT, which include friction and backlash.
The maximum absolute error of the simulation study was 12.58 µm; the mean absolute error, 1.04 µm; and the root mean squared error, 1.59 µm. It can be concluded that the position error was remarkably low, within the intervals usually considered for motion systems used in CNC machine tool centers. Therefore, using the reference shown in Fig. 3 (reference 1), the simulation of the DT depicted in Fig. 4 reflected the behavior of the whole system very well.

III. DIGITAL TWIN-BASED OPTIMIZATION OF THE ULTRAPRECISION MOTION SYSTEM
A. PROBLEM DEFINITION
There are several figures of merit or cost functions, widely used in industry in general, which are applied both at the design stage and for the evaluation of control systems. In particular, the maximum absolute error (i.e., the absolute value of the maximum path tracking error) during the reversal of the axes, E_pk, was selected:

E_pk = max_{t ∈ [t_0, t_F]} |e(t)|   (8)

Considering that the P-PI cascade control loop controllers are defined by control laws, which relate their input variable (the error) to the control action, and that v is the vector of parameters, the control action can be expressed in the time domain in the following way:

u(t) = f(e(t), v)   (9)

thereby defining an interval [t_0, t_F] that corresponds to both the temporal and the dynamic responses of the control system to changes in the disturbance, Y_Z, or the reference trajectory, r(t).
From the temporal behavior of the error within the defined interval, it is possible to define a cost function that evaluates the dynamic behavior of the control system by means of a figure of merit:

J(v) = E_pk = max_{t ∈ [t_0, t_F]} |e(t)|   (10)

where E_pk is defined as the peak in the error that occurs when the direction of the path or trajectory changes. E_pk is influenced by the nonlinearities that deteriorate the transient response and the accuracy, which can ultimately affect the quality of the manufactured components. Once the cost function is chosen, it is therefore a matter of minimizing the maximum peak in the position error:

v* = arg min_v J(v)   (11)
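The peak-error cost function can be evaluated directly from a sampled error trace; a minimal sketch:

```python
def peak_error(e, t, t0, tF):
    """Maximum absolute tracking error over the interval [t0, tF],
    computed from sampled error values `e` at sample times `t`."""
    return max(abs(ei) for ei, ti in zip(e, t) if t0 <= ti <= tF)
```

In the DT-based procedure, `e` would be the difference between the reference trajectory and the position simulated by the DT for a given parameter vector v.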

B. DIGITAL TWIN-BASED OPTIMIZATION HEURISTICS
Nowadays, the previously described industrial procedure, the Fine Tune (FT) method, is applied to perform the optimization process. However, this well-established tuning method has two main drawbacks: it is both costly in terms of the time required for tuning and inefficient in terms of the optimality of the solution obtained. Three gradient-free procedures were therefore selected: Simulated Annealing (SA), Genetic Algorithms (GA), and the Cross-Entropy method (CE). These heuristics require none of the strict mathematical prerequisites that are mandatory for analytic and numeric optimization methods [30]. Genetic algorithms are one of the most popular gradient-free heuristics for solving optimization problems. As with other evolutionary algorithms, they are inspired by the evolution of biological species. Nevertheless, the main distinctive characteristic of GA is the encoding of each individual solution into a string of information (called a chromosome) [31]. This codification makes the algorithm problem-independent, so GAs can be considered robust in nature.
Several approaches have been proposed for each operator. Selection can be carried out, among others, by using roulette, tournament, or ranking schemes. The main approaches to crossover are single-point, multi-point, uniform, half-uniform, partially matched, and heuristic-based. Finally, uniform, non-uniform, Gaussian, and supervised mutation are the most widely used mutation methods [32]. Some concepts, such as elitism, which guarantees the survival of the better solutions in the next population, have also been incorporated to enhance the performance of GA [33].
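A minimal real-coded GA with tournament selection, uniform crossover, Gaussian mutation, and elitism can be sketched as follows. In the DT-based procedure, `cost` would be the peak-error index computed from the DT simulation; here it is any callable, and all hyperparameter values are illustrative:

```python
import random

def ga_minimize(cost, bounds, pop_size=20, gens=40, p_mut=0.2, elite=2, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    uniform crossover, bounded Gaussian mutation, and elitism."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        nxt = [p[:] for p in pop[:elite]]                 # elitism
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=cost)         # tournament selection
            b = min(rng.sample(pop, 3), key=cost)
            child = [ai if rng.random() < 0.5 else bi     # uniform crossover
                     for ai, bi in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):         # Gaussian mutation
                if rng.random() < p_mut:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```

Usage on a toy quadratic cost: `ga_minimize(lambda v: sum(x * x for x in v), [(-5.0, 5.0)] * 2)` drives the candidate towards the origin.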
The cross-entropy (CE) method is a population-based heuristic that solves optimization problems by transforming them into associated stochastic problems involving very low probabilities, using a variance minimization technique [34], [35]. The foundation of CE is the construction of a random sequence of solutions that converges probabilistically towards an optimal or a near-optimal solution [36]. The CE algorithm [37] starts by initializing the mean and variance of the distribution used to generate the working population. This initialization has a stochastic component. The algorithm then enters a loop until the preset stopping conditions are reached, either by reaching the maximum number of iterations or through convergence of the solutions. The mean and the variance are updated in each iteration from the so-called elite population, composed of the most suitable individuals of the working population.
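The loop described above can be sketched as follows for a continuous parameter vector; the Gaussian sampling distribution and the hyperparameter values are illustrative assumptions:

```python
import random

def ce_minimize(cost, mu, sigma, n=50, elite_frac=0.2, iters=30, seed=0):
    """Minimal cross-entropy sketch: sample a Gaussian population, keep the
    elite fraction, refit the mean and standard deviation, and repeat until
    the iteration budget is exhausted or the distribution collapses."""
    rng = random.Random(seed)
    mu, sigma = list(mu), list(sigma)
    n_elite = max(2, int(elite_frac * n))
    for _ in range(iters):
        pop = [[rng.gauss(m, s) for m, s in zip(mu, sigma)] for _ in range(n)]
        elite = sorted(pop, key=cost)[:n_elite]           # best candidates
        mu = [sum(x[i] for x in elite) / n_elite for i in range(len(mu))]
        sigma = [(sum((x[i] - mu[i]) ** 2 for x in elite) / n_elite) ** 0.5
                 for i in range(len(mu))]
        if max(sigma) < 1e-6:                             # variance collapsed
            break
    return mu
```

The returned mean is the near-optimal solution; the shrinking standard deviation is the convergence signal mentioned above.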
Simulated annealing (SA) is another well-known gradient-free optimization heuristic, based on metallurgical cooling processes [38]. Slow cooling in the metallurgical annealing process aims to obtain a global minimal energy state in a metal, giving it a stable structural state and avoiding the metastable states with higher energy. In a similar way, the simulated annealing method targets the global optimum of a mathematical function, avoiding the local optima [39]. SA works with a single solution point, which is initially randomly selected. An algorithmic parameter, the temperature, is also initialized, which determines the probability of moving from one state to another. The optimization process takes place in a cycle, which ends when certain conditions are achieved: usually, when the temperature parameter reaches some prescribed value or after a maximum number of iterations.
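A minimal SA sketch with geometric cooling and Metropolis acceptance follows; the step size, initial temperature, and cooling rate are illustrative assumptions:

```python
import math
import random

def sa_minimize(cost, x0, step=0.5, T0=1.0, cooling=0.95, iters=500, seed=0):
    """Minimal simulated annealing: random neighbour moves are accepted
    either when they improve the cost or with the Metropolis probability
    exp(-ΔJ/T); the temperature T is cooled geometrically each iteration."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    best, fbest = x[:], fx
    T = T0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc                  # accept the move
            if fx < fbest:
                best, fbest = x[:], fx        # track the best solution seen
        T = max(T * cooling, 1e-9)            # geometric cooling with a floor
    return best
```

Early on, the high temperature lets the search escape local optima; as T decreases, the acceptance rule becomes effectively greedy.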
Indeed, many swarm intelligence algorithms have also demonstrated very good results when solving optimization problems, including parameter tuning problems [40]-[42]. However, the assimilation of these techniques in industrial informatics is not expanding as might be expected, basically due to the number of parameters and the lack of precise procedures for setting them. New methods such as PSO based on quantum mechanics (QPSO) do not require velocity vectors to move the particles, and the number of adjustable parameters is smaller than in standard PSO [43]. QPSO has demonstrated high potential for setting the parameters of optimization methods [44]. However, a comparative study between QPSO and CE, which have similar complexity and convergence speed, is beyond the scope of this paper. New hybrid meta-heuristic methods based on cross-entropy and swarm intelligence are under exploration [45].
DT-based optimization for the ultraprecision motion system with backlash and friction is focused on the optimal settings for the three P-PI controller parameters and the compensating parameters for backlash and friction. The procedure combines an optimization method (i.e., GA, CE, or SA) with the DT that was developed (see Fig. 5). Regardless of which heuristic is used, the whole procedure is the same. Firstly, a random population (i.e., a set of solutions) is created and then evaluated. After this evaluation, the end conditions of the corresponding method are checked and, if any of them are fulfilled, the optimization process ends and an optimal (or near-optimal) solution is computed and updated in the real system. Otherwise, a new population is created from the current one.
The optimization heuristic is linked to the DT through the population evaluation. Initially, the DT is configured by specifying the proper real technical information and a real trajectory is defined. For each solution in the population, a simulated trajectory is then obtained by the DT and the performance index (10) is computed, based on the difference between both real and simulated trajectories.
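The link between the heuristic and the DT can be sketched as a population-evaluation step. Here `simulate_dt` stands in for the digital twin simulation and is a hypothetical interface, not the actual implementation:

```python
def evaluate_population(population, simulate_dt, reference):
    """Score each candidate parameter vector v by simulating the DT against
    the real reference trajectory and computing the peak absolute error
    between the reference and the simulated trajectory (Fig. 5 loop)."""
    scores = []
    for v in population:
        simulated = simulate_dt(v, reference)       # DT-simulated trajectory
        errors = [r - s for r, s in zip(reference, simulated)]
        scores.append(max(abs(e) for e in errors))  # peak-error index
    return scores
```

Any of the heuristics sketched previously can consume these scores as its cost values, which is what makes the procedure heuristic-agnostic.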

C. OPTIMIZATION FOR THE TESTING REFERENCE
Equations (1)-(6) and the procedure depicted in Fig. 5 were initially implemented on a desktop computer with an Intel Core i7-4790 CPU (3.6 GHz, 64 bits, 8 GB RAM), for the sake of fast implementation. The prototyping of the whole digital-twin procedure was implemented in MATLAB/Simulink R2018, while the updating of the parameters (12) obtained with DT-based optimization, shown in Table 3, was automatically performed in the Fagor 8070 CNC.
The DT-based optimization procedure previously described was applied using GA, CE, and SA; the setting parameters are shown in Table 2. The control and compensating parameters obtained by the four selected methods are shown in Table 3. Simulations with the DT enabled the computation of the position errors and control signals for each optimization method (see Fig. 6 and Fig. 7). A detailed analysis of a velocity-change point, when the reference position shown in Fig. 3 is applied, corresponding to the interval (4.8...5.0 s), is shown in Fig. 7.
The sudden increment in velocity led to a higher position error. However, the three optimization methods produced lower maximum position error values than those given by the FT method. Nevertheless, there was no noticeable increment in the control effort, even when the velocity values increased. Two performance indices were calculated for the position error, in order to obtain a more reliable comparison: the maximum absolute error (i.e., the absolute value of the maximum path tracking error) that occurs when the direction of the path or trajectory abruptly changes, E_pk, and the Integral Time Absolute Error (ITAE). Moreover, the Integral of the Absolute Control Signal (IAU) was also computed to quantify the control effort. Finally, the execution time was also determined for each optimization heuristic, in order to evaluate the computational cost of applying those techniques. Analyzing the values that were obtained (see Table 4), a remarkable improvement could be noted both for E_pk and for ITAE when the optimization heuristics were applied. Interestingly, the IAU obtained for the four methods showed similar values in relation to the real physical process and the constraints on the available motor current and power. It also corroborated the very good resemblance between the simulated control signals and the actual current signal of the motor.
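The three indices can be approximated from sampled signals with a simple rectangle rule; a minimal sketch, assuming the standard definitions E_pk = max|e|, ITAE = ∫ t·|e| dt, and IAU = ∫ |u| dt:

```python
def performance_indices(t, e, u):
    """Discrete approximations of the three comparison indices:
    peak absolute error, ITAE, and IAU, using a rectangle rule with
    the local sample spacing t[i+1] - t[i]."""
    e_pk = max(abs(x) for x in e)
    itae = sum(ti * abs(ei) * (t[i + 1] - t[i])
               for i, (ti, ei) in enumerate(zip(t[:-1], e[:-1])))
    iau = sum(abs(ui) * (t[i + 1] - t[i]) for i, ui in enumerate(u[:-1]))
    return e_pk, itae, iau
```

With uniformly sampled data the rectangle rule is adequate for ranking controller settings; a trapezoidal rule could be substituted for slightly better accuracy.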
Finally, by comparing the execution times it was concluded that the cross-entropy method required much shorter times than the alternatives for obtaining the corresponding optimal values.

D. EXPERIMENTAL VALIDATION
The real position error and the control signal were analyzed on the basis of experimental results obtained from the real system, in order to validate the procedure for DT-based optimization built upon the simulation results.
The four position error graphs using the reference position shown in Fig. 3 (reference 1) are shown below in Fig. 8. The maximum absolute position errors were lower than 16 µm in every case. It should be noted that the higher error values occur at the higher velocities.
The behavior of the control signals, both from the DT simulation and from the real experiments, revealed no relevant differences between the four methods (FT, GA, CE, SA) under consideration in this study and, for that reason, they are not represented. For the sake of clarity, Fig. 9 only shows close-up graph sections of the position errors and control signals in the interval (4.7...5.4 s). Two points may be remarked: firstly, similar improvement trends, but with different position error values, for the proposed methods; and, secondly, a high similarity between the control signals, from which only a slight increase in the control effort can be inferred.
Improvements in the maximum absolute errors of 28%, 26%, and 20% for GA, CE, and SA, respectively, with respect to the FT method are clearly demonstrated when the performance indices are compared (Table 4). Considering the integral time absolute error instead, those improvements were 21%, 23%, and 19%, respectively. In contrast, the parameters yielded by the optimization heuristics caused no increment of over 3% in the control effort represented by the IAU.
Finally, let us consider a new, very demanding reference trajectory in terms of amplitude and changes in the reference velocity, as shown in Fig. 10. The real-time results of applying the same optimized setting parameters shown in Table 3 using this new reference (reference 2) and the DT are depicted in Fig. 11. This figure represents a close-up graph section of the position errors and control signals in the interval (0.5...1.2 s). Table 5 shows the corresponding performance indices, in order to visualize the real impact of the proposed method on the reduction of the maximum absolute position error. It corroborates the generalization capability of the digital twin-based optimization procedure, regardless of the shape of the reference that is used.
The swing test using this new demanding reference corroborated the benefits of applying the DT-based optimization.
The improvement in the main performance indices depicted in Table 5 is evident and quite remarkable: up to 50.9% in maximum error reduction for SA and up to 24.8% in ITAE for CE, with no significant increase in the control effort. Overall, CE slightly outperformed SA, FT, and GA, with improvements of 50% in E_pk and 24.8% in ITAE, while increasing the control effort (IAU) by only 0.3%.

IV. CONCLUSION
In this paper, a digital twin for modelling the behavior of ultraprecision motion systems with backlash and friction has been presented. The digital twin emulates the whole motion system including friction and backlash and the control system. It is applied in the proposed procedure for optimal setting of the parameters of the emulated two-mass drive system.
In the case study, the parameters of a P-PI cascade control system and the compensation gains were adjusted using the FT method. Three gradient-free optimization strategies were considered in the DT-based procedure. Initially, the results were compared by considering the simulated data for a reference, showing the high precision of the DT.
The effectiveness of the proposed digital twin-based optimization method was also evaluated in real-time trajectory control experiments using a real platform with an open CNC capable of interacting with the DT. The improvements in accuracy, in terms of the maximum position error and the integral time absolute error, were significant. This remarkable improvement was achieved with only a slight increase in the control effort, quantified by the integral of the absolute control signal. It should be noted that the cross-entropy method required a remarkably shorter time than the other optimization approaches for almost similar outcomes. Further studies will be conducted to analyze the influence of other optimization methods, as well as the hybridization of gradient-free methods such as quantum-based particle swarm optimization and cross-entropy.

ACKNOWLEDGMENT
This work has been completed within the framework of projects DPI2017-86915-C3-1-R ''Cognitive inspiration navigation for autonomous driving'' and the European Project Grant 826417 ''Power2Power: The next-generation silicon-based power solutions in mobility, industry and grid for sustainable decarbonisation in the next decade''.
RODOLFO HABER GUERRA received the Ph.D. degree in industrial engineering from the Universidad Politécnica de Madrid, Spain, in 1999. He is currently the Vice Director of the Center of Automation and Robotics-CAR, UPM-CSIC. In particular, he is now involved in three European projects related to cyber-physical systems: Power2Power, PRYSTINE, and IPAE: Industry 4.0 for production and aeronautics. His current research interests include intelligent systems, modelling, control and optimization methods, artificial cognitive systems, and cyber-physical systems. He has authored three books, 20 book chapters, more than 60 articles in indexed journals, and dozens of conference papers. He is a member of IFAC's TC 3. RAMÓN QUIZA received the Ph.D. degree in manufacturing engineering from the Universidad de Matanzas, Cuba, in 2005. He is currently the Director of the Study Center for Advanced and Sustainable Manufacturing, University of Matanzas, and also a Titular Member of the Cuban Academy of Sciences. He has published several papers and book contributions in the fields of applied artificial intelligence and the modeling and optimization of manufacturing processes. He has participated in the development of several software products. He carried out postdoctoral and research stays at several universities and research centers (in Germany, Spain, Suriname, and Venezuela), performing research or academic tasks. He is a member of the Editorial Board of three scientific journals. His current research interests include optimization, artificial neural networks, artificial intelligence, and manufacturing processes. where he is currently a Researcher working on several research projects of the European Commission, the Spanish National Plan, and four private contracts with companies. He is currently a Postdoctoral Researcher with the Center for Automation and Robotics, Madrid, and the Technical Coordinator, on the part of the CSIC, of the European ECSEL project Power2Power.
His research interests include artificial intelligence, modeling and simulation of embedded systems, sensor networks, the Internet of Things, intelligent transport systems, and the monitoring and supervision of physical processes. He has published several papers on these topics. He is a member of the reviewer board of a few international journals.