Digital Twin Application in Ground Simulating Space Debris System With Laser-Driven Flyer Technology

In recent years, with the continuous development of spacecraft, the demand for experimental and analytical simulation of space debris impact has been increasing, and space debris simulation technologies such as the laser-driven flyer (LDF) are becoming increasingly strategic. In traditional LDF experiments, however, the experimental environment is difficult to control and the number of experiments is constrained by cost, so only a small amount of valid data is available, which makes sample damage difficult to evaluate. This paper therefore proposes a two-phase LDF digital twinning method together with its experimental equipment. During the experimental phase, it provides a standardized experimental environment, while its equipment supports automated, remote experiments and virtual training. In the evaluation phase, neural networks enrich the small amount of experimental data into high-throughput data containing physical damage parameters, and finite element simulation yields the microscopic damage processes and related data. Both methods have been validated by additional experiments: using the fragmentation-zone diameter as the predicted quantity, the average error rate is 8.13% for the neural network and 3.1% for the simulation, both well below the 10% error threshold defined from analysis of the experimental data. In addition, this paper proposes a full-cycle experimental path that is more compact, saves experimental costs, and shortens the experimental cycle compared with the traditional path.

the cost, and facilitating the operation, which opens up a great opportunity for development.
In the traditional experimental phase, LDF experiments need to be repeated several times because the experimental environment is loosely controlled. In this process, much time is spent repeatedly opening and closing the vacuum chamber and calibrating and setting parameters. In the traditional evaluation phase, only a small amount of experimental data is obtained because of the high cost.
For the analysis of LDF high-speed impact experiments, Geng [7] and Wang et al. [8] used finite element methods to virtualize the impact effects and obtained results that match the real experiments, bringing high-speed LDF impact experiments into virtual space. However, there is still a gap in virtualizing the full cycle of high-speed LDF impact experiments.
In recent years, digital twin (DT) technology has become a popular method for linking real and virtual spaces [9]. It can make the control of traditional equipment automated, remote, and visualized [10], support virtual operator training [11], help process and test a narrow range of samples [12], and use existing samples to train machine learning models for fast prediction [13].
During the experimental and evaluation process of LDF high-speed impact, a standard and convenient experimental environment needs to be provided in the experimental phase, and the tester must follow the experimental operation specifications and procedures. Adequate experimental results and the corresponding microscopic damage processes need to be obtained in the evaluation phase.
DT is an approach that can address multiple scales and scenarios [14], so the twinning here is likewise divided into two phases. Machine learning methods are usually needed to drive the DT [15]; however, LDF impact experiments produce very little experimental data, and such small samples may lead to overfitting when machine learning is used to simulate and predict parameters. Moreover, the parameters in these data are concentrated in a small range, so the data lack the confidence interval and applicability range needed to be valid. To solve this problem, supplementing the experimental data with synthetic data generated from physical models has become a recognized way to build a DT from a small amount of data [16].
In this paper, two interrelated digital twin phases covering the LDF's whole life cycle are introduced to improve the reliability and practicability of micro-debris ground testing for space. The DT established in the experimental stage provides a standardized experimental environment; the DT established in the evaluation stage takes a small amount of experimental data and, through the virtual model, obtains high-throughput macroscopic physical damage parameters and microscopic damage-mechanism simulation data. In the high-speed impact study of quartz glass, the data obtained include the fracture diameter and the microscopic damage process simulated by the finite element method. The paper also focuses on the method and process of using synthetic data to address the small-sample problem and the concentration of parameter ranges in space debris impact experiments during the evaluation phase. At the end of the paper, the full-cycle path for utilizing this LDF digital twin experimental equipment is given. This study provides an application of DT for reliability engineering evaluation in space environments.

II. DIGITAL TWIN IN THE EXPERIMENTAL PHASE

A. DIGITAL TWIN FRAMEWORK
The DT in the experimental phase serves three purposes:
1) To provide a standardized experimental environment, which is very important for space environment reliability evaluation experiments.
2) To provide an experimental platform with automation, visualization, and easy operation.
3) To provide a virtualized training platform.
The architecture of the DT solution is based on the five-dimensional model proposed by Tao [17]. The architecture consists of five main components: real space, virtual space, service system, twin data, and connections between real and virtual components. In this study, the real physical entities are located in the real space at the bottom left of Fig. 1. In the virtual space at the bottom right of Fig. 1 lie the virtual models, which are formed by mapping the real physical entities and can also be considered a visual representation of the real physical data processed by the principal computer.
As shown in Fig. 1, in this cycle, the commands given by the principal computer are controlled by the Programmable Logic Controller (PLC) to drive the experimental equipment in real space, for example, lasers, slide tables, etc. At the same time, the sensors used to detect the experimental status will transmit real-time data to the principal computer, which will not only visualize this data in the virtual space but also compare the data with the experimental criteria to decide whether to proceed with further experiments.
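The decision step described above, in which the principal computer compares sensor data against the experimental criteria before proceeding, can be sketched as follows. This is a minimal illustrative sketch: the sensor names, threshold values, and function names are assumptions, not the system's actual code.

```python
# Illustrative sketch of the service-system decision step: the principal
# computer checks incoming sensor readings against the experimental criteria
# before allowing the next shot. Names and thresholds are assumptions.

CRITERIA = {
    "vacuum_pa":   lambda v: v is not None and v <= 1e-3,      # chamber vacuum reached
    "temp_c":      lambda t: t is not None and 18.0 <= t <= 25.0,
    "laser_ready": lambda s: s is True,
}

def ready_to_fire(sensor_frame):
    """Return (ok, failed_checks) for one frame of sensor data."""
    failed = [name for name, check in CRITERIA.items()
              if not check(sensor_frame.get(name))]
    return (len(failed) == 0, failed)

ok, failed = ready_to_fire({"vacuum_pa": 8e-4, "temp_c": 21.3, "laser_ready": True})
```

In practice each criterion would be read from the experimental standard rather than hard-coded, but the structure of the check-then-proceed loop is the same.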
The operator's commands are given in the service system. The service system is a platform for optimal resource allocation and integration through the synchronization of the elements between the virtual and the real. It can be driven by interaction with the principal computer to achieve automated control, online status monitoring of each instrument, and virtual space visualization. Besides, connecting the service system to the Internet allows remote operation and monitoring.

B. CONSTRUCTION OF PHYSICAL ENTITIES
The physical entities in the LDF digital twin are the physical experimental devices of the LDF. As shown in Fig. 2, in addition to the six core components of laser, focusing mirror, target, sample, precision positioning stage, and vacuum chamber, there are also auxiliary components such as the mechanical pump, the electronically controlled valve, and the sensor that monitors the inside of the vacuum chamber. During an experiment, multiple single-point impact experiments on specimens can be performed without opening the vacuum chamber, which constitutes the automated control of the LDF. In addition, many sensors are installed in the LDF physical devices. With their help, the detected data can be obtained in real time, which is the key to realizing the data twin.

C. CONSTRUCTION OF VIRTUAL MODELS

1) BUILDING A PHYSICS MODEL
The key to virtual model construction is how to realistically and accurately map the physical model in real space to the virtual space. Modeling is done strictly from the drawing data, using computer-aided design (CAD) software such as SolidWorks or UG for equal-scale modeling. To enhance the visual experience, the built model is imported into 3D Studio Max for rendering. The final model file is generated in fbx format and imported into Unity 3D. Additional C# scripts are then written to implement the logical actions of the laser-driven flyer hypervelocity impact process and to set up API interfaces that call the data stored in the MSSQL database and the cloud database. The steps are as follows:
(1) Modeling: Carry out equal-scale modeling in SolidWorks, UG, or other 3D modeling software, according to the actual equipment model and experimental body design data. Complete the assembly of components and store them in wrl or stl file format.
(2) Mapping: Process photos of equipment appearance with PS, GIMP, and other image processing software.
(3) Rendering: Import the model file in wrl or stl format into 3ds Max, apply surface mapping to the model with the processed photos, and perform overall rendering so that the model's appearance matches the physical device and the visual experience is improved. Finally, convert the processed model to fbx format and export it.
(4) Scene building: Import fbx format model files into Unity 3D software environment. Reproduce the information in the virtual scene regarding the physical world equipment layout state, adjust the scene lighting, add sound effects, and use the sky box and shader.
(5) Model physical property setting: Set up the corresponding model parent-child relationship in Unity according to the working condition and motion law of each device in the actual system, add joints, rigid bodies, collision bodies, and other components to the key structure of the device, and give the twin body physical properties.
(6) Interface creating: Write C# language scripts and apply relevant Unity components for UI interface design. According to the movement pattern of the actual production experiment activities of the equipment, prepare for the joint process coordination simulation of the equipment.
(7) Data sharing: Design a generic virtual-reality data interface. Using the Visual Studio 2017 development platform, acquire virtual system operation commands from data acquisition cards through the use of their dynamic link library functions. Then, take the collected manipulation instructions and store them in the MSSQL database and cloud database with SQL. Finally, call the database control command data through the API interface in Unity with ADO.NET from .NET Framework.
It not only shows the status of the experiment platform in real time but also contributes to the construction of the virtual experiment system.

2) BUILDING MATHEMATICAL MODELS
Maintaining a standard and accurate experimental environment is the key to obtaining stable flyer velocity and impact results. In this paper, the stability of experimental environment parameters such as vacuum degree and temperature is realized through the service system's control decisions (see Supplementary files for the relevant codes). The fluctuation of laser energy is another important factor affecting the velocity stability of the flyer. Therefore, a relevant DT model is established to further improve the accuracy of flyer speed control. As shown in Fig. 3, a logarithmic function can be fitted to the relationship between laser energy and flyer velocity. The laser energy for the next experiment is taken as the energy of this experiment plus the energy difference that needs to be corrected. The mathematical model controlling the energy of the next laser shot is obtained by combining equations (1) and (2).
In (2) and (3), E_{i+1} is the corrected laser energy for the next experiment, v_t is the target velocity, v_i is the velocity measured in this experiment by the LDF digital twin system, and E_i is the laser energy of this experiment. The correction stops when the difference between the target velocity and the true flyer velocity is less than 0.01 km/s, at which point the experimental velocity is considered equal to the target velocity.
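Under the assumption of a logarithmic energy-velocity fit v = a ln E + b as in Fig. 3, the correction loop described by equations (1)-(3) can be sketched as follows. The fit coefficients, units, and function names are illustrative assumptions, not the paper's fitted values.

```python
import math

# Sketch of the energy-correction loop, assuming a logarithmic fit
# v = A_FIT*ln(E) + B_FIT between laser energy and flyer velocity (Fig. 3).
# The coefficients below are illustrative, not the paper's fitted values.
A_FIT, B_FIT = 1.2, -3.0     # assumed fit: v [km/s] = A_FIT*ln(E [mJ]) + B_FIT

def measured_velocity(energy_mj):
    """Stand-in for the velocity the twin system would report (here: the fit)."""
    return A_FIT * math.log(energy_mj) + B_FIT

def next_energy(e_i, v_i, v_target):
    """E_{i+1} = E_i + correction, from inverting the fitted energy-velocity law."""
    e_required = math.exp((v_target - B_FIT) / A_FIT)  # energy predicted for v_target
    e_now = math.exp((v_i - B_FIT) / A_FIT)            # energy the fit assigns to v_i
    return e_i + (e_required - e_now)

def correct_until_converged(e0, v_target, tol=0.01, max_iter=20):
    """Iterate until the measured velocity is within 0.01 km/s of the target."""
    e, v = e0, measured_velocity(e0)
    for _ in range(max_iter):
        if abs(v_target - v) < tol:
            break
        e = next_energy(e, v, v_target)
        v = measured_velocity(e)
    return e, v
```

With a real laser the measured velocity would come from the PVDF velocimetry rather than the fit itself, so several correction iterations may be needed before the 0.01 km/s stopping criterion is met.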

D. ASSOCIATION BETWEEN PHYSICAL ENTITIES AND VIRTUAL MODELS
The core of the connection between physical and virtual components is the use of sensors for data acquisition. The set of sensor acquisition parameters currently available for this device is shown in Table 1. Some of the sensors are fixed in the whole LDF equipment, such as vacuum gauges, temperature sensors, and position sensors. The real-time data transmitted by them are fed back to the principal computer for real-time display and data processing. Another part of the sensors, such as laser energy meters, are used for the calibrations of equipment parameters and are not fixed instruments in the main body.
The devices with serial communication functions are connected to the self-developed edge-side devices in a point-to-point fashion. The edge-side device converts serial signals such as RS485 and RS232 according to their respective protocols and then connects to the router via fiber-optic devices for cloud upload. The edge computing mode and point-to-point independent communication lines largely ensure the stability of physical communication and reduce the corresponding latency. For example, the laser driver requires RS232 serial conversion to connect to the principal computer.
With the sensed data, the service system can give the corresponding control commands. Its commands can act on electronically controlled gas valves, coolers, etc. to ensure that the environment of the experiment is constant and standard.

E. VIRTUALIZATION TRAINING
Virtual training is conducted before the experiment because the LDF space debris simulation equipment uses high-energy lasers for hazardous experiments, and operating this equipment requires experience. Traditionally, a training manual is used for self-directed learning, followed by several practical operations led by experienced experimentalists.
As shown in Fig. 4, the DT provides the trainee with an interactive and immersive training experience. The system uses the Unity virtual engine with real physical fields and experiment-driven logic to score operators based on their virtual operational actions. It realizes zero-loss, repeatable, full-procedure experimental training, significantly improving training efficiency and saving training costs.

III. DIGITAL TWIN IN THE EVALUATION PHASE
The purpose of digital twinning in the evaluation phase is to enrich a small amount of experimental data. By twinning the damage data and the micromechanisms, high throughput simulation results are obtained to maximize the usefulness of experimental results.

A. CONSTRUCTION OF PHYSICAL ENTITIES

1) EQUIPMENT AND PRINCIPLE
The schematic of the LDF process is shown in Fig. 5. The main structure of the LDF device contains six parts: laser, focusing mirror, target, sample, precision positioning stage, and vacuum chamber [2], [18]. A Nimma-series Nd:YAG pulsed solid-state laser (produced by Beamtech Optronics Co., Ltd.) is used, with a maximum output energy of 900 mJ, an output wavelength of 1064 nm, a pulse width of 7-8 ns, a repetition rate of 1-10 Hz, a flat-top beam profile, and a collimated spot diameter of about 9 mm. The laser beam is guided into the vacuum chamber, which maintains a vacuum of 1 × 10⁻³ Pa. The spot size can be changed by adjusting the distance between the focusing lens and the target. The focused beam passes through the glass substrate of the target and irradiates the surface of the aluminum foil film. The irradiated portion of the film surface instantly evaporates, vaporizes, and ionizes, forming a high-temperature, high-pressure plasma. The pressure gradient between the plasma pressure on one side and the vacuum (zero pressure) on the other accelerates the exfoliated layer, which causes the aluminum layer to fly off and hit the sample at a hypervelocity of up to about 6 km/s.
There are various ways to connect metal films to glass substrates including magnetron sputtering, deposition, or bonding, but the connection must allow for the highest possible stress and the ''fracture'' stress should be a function of the yield strength of the metal rather than the interfacial bond strength [19]. The method used in this experiment is silicone oil bonding. As shown in the enlarged view of the sample on the left side of Fig. 5, the sample formed a ''petal-like'' impact crater due to the impact of the flyer, where the crater diameter of the fragmentation zone is the distance between the two farthest points on the surface of the quartz glass crater.
For the measurement of flyer velocities, the three main existing methods are optical interferometric velocimetry [20], photomultiplier tube and piezoelectric sensor velocimetry [21], [22], and high-speed photographic velocimetry [23]. A simple speed-measurement method based on piezoelectric sensing [24] is adopted, as shown in Fig. 5. The speed of the flyer is calculated from the flight distance and the corresponding flight time. The perpendicular distance between the flyer target and the impacted sample is the flight distance. The flight time is obtained by using polyvinylidene fluoride (PVDF) piezoelectric films to sense the shock wave on the target and the sample, respectively. This approach has been used for this LDF experiment.
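The time-of-flight velocity calculation described above reduces to dividing the flight distance by the interval between the two PVDF trigger signals; a minimal sketch with illustrative numbers:

```python
# Time-of-flight velocity from the PVDF signals described above: flight
# distance divided by the interval between the target and sample triggers.
# The numbers are illustrative; mm/us is numerically equal to km/s.

def flyer_velocity(flight_distance_mm, t_target_us, t_sample_us):
    """Velocity in km/s from distance (mm) and PVDF trigger times (us)."""
    dt = t_sample_us - t_target_us
    if dt <= 0:
        raise ValueError("sample signal must arrive after target signal")
    return flight_distance_mm / dt

v = flyer_velocity(3.0, 0.0, 0.6)   # 3 mm gap crossed in 0.6 us -> 5.0 km/s
```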

2) MATERIALS
LDF equipment can perform a variety of experiments and analyses of materials used in spacecraft, such as AlN/Si2 laminated composites, aluminum alloys, polyimide films, quartz glass, and so on. Among them, quartz glass is a kind of special glass with excellent properties, such as good thermal stability, permeability and electrical insulation, and strong corrosion resistance, which is widely used in spacecraft optical equipment. Therefore, quartz glass is commonly used as a sample to study the impact damage effect of micro fragments on optical glass.
To investigate the performance of LDF devices applied to DT, this experiment uses a common experimental combination of a 16 µm aluminum flyer to impact a 1.5 mm thick quartz glass.

B. CONSTRUCTION OF VIRTUAL MODELS

1) MACRO DATA TWIN
First, following the experimental method in Section III-A-1 and using the material prepared in Section III-A-2, multiple sets of impact experiments with uniformly distributed independent variables were conducted. The experimental parameters were obtained from the LDF digital twin device, and the diameter of the fragmentation zone D_c was measured by microscopy after the experiments and recorded in the data table, as shown in Table 2.
There were two problems with simulating the experimental results from these data: 1) the amount of experimental data was too small, limited by experiment volume and cost, so machine learning might not fit the statistics well and might overfit severely; 2) the parameter range of the experiments determines the effective range of prediction and simulation. If the experimental parameters are concentrated, machine learning becomes very limited, because a model can give accurate predictions only within the parameter range of its training set. For example, from Table 2 it would be difficult to predict results for velocities greater than 4.13 km/s or less than 2.24 km/s. These problems can be effectively addressed with synthetic data: synthetic data are generated from virtual simulation models or other validated physical models, combined with real experimental data to form a dataset, and finally used to train machine learning models [25].
Therefore, a theoretical equation with physical significance needs to be derived and a synthesis generated based on this model. These data will form a dataset together with the actual quantities in Table 2, and from this dataset, machine learning will be performed. The specific physical model derivation process is shown below.
The dimensionless parametric impact equation for a spherical projectile under high-speed dynamic conditions [26] relates D_c/d_p to the impact conditions, where D_c is the diameter of the fragmentation zone, d_p is the diameter of the spherical projectile, ρ_p and ρ_t are the densities of the projectile and target, and Y_p and Y_t are the strengths of the projectile and target. The characteristic velocity √(Y_t/ρ_t) of the target is used to nondimensionalize the projectile velocity, giving the dimensionless velocity. The dimensionless impact damage law for laser-driven flyers is fitted by adding a projectile shape factor to the original dimensionless parameters. According to the theory of high-speed impact dynamics, D_c/d_p obeys an equation in which A is a material parameter only. To obtain the multi-characteristic parametric impact equation for flake particles, the impact shape factor g must first be determined. Substituting the material property parameters selected in Table 3 into the equation and performing a linear fit to the experimental data gives g = 1.8, as shown in Fig. 6. Equation (9) was obtained after rearrangement, and the data generated according to (9) are shown in Table 4.
The data from the experiments and the synthetic data obtained using the equation are collated together to obtain a dataset. This dataset can be used for trend fitting using neural networks to make predictions of the data.
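A minimal sketch of assembling such a dataset, with a placeholder function standing in for the fitted damage law (equation (9) itself is not reproduced here) and illustrative measured rows rather than the values of Table 2:

```python
import random

# Sketch of the synthetic-data step: rows generated from a physical model
# (a placeholder function here; the fitted equation (9) is not reproduced)
# are merged with the measured rows into one training set. All values and
# the model form are illustrative assumptions.

def placeholder_model(v_kms, d_um):
    """Stand-in for the fitted damage law D_c(v, d); arbitrary illustrative form."""
    return 0.5 * d_um * v_kms ** 1.5 / 1000.0   # D_c in mm

measured = [            # (v [km/s], flyer diameter [um], measured D_c [mm])
    (2.24, 200, 0.95), (3.10, 200, 1.45), (4.13, 200, 2.10),
]

def synthesize(n, v_range=(1.5, 6.0), d_um=200, seed=0):
    """Generate n model-based rows spanning a wider velocity range."""
    rng = random.Random(seed)
    return [(v, d_um, placeholder_model(v, d_um))
            for v in (rng.uniform(*v_range) for _ in range(n))]

dataset = measured + synthesize(1300 - len(measured))  # ~1300 samples total
```

Because the synthetic rows span a wider velocity range than the measurements, the trained model's applicable range is extended beyond the measured 2.24-4.13 km/s window.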
In this machine learning process, the obtained dataset is used to train the model in the Neural Net Fitting toolbox of Matlab, with the velocity and diameter data matrix of the flyer as input and the data matrix of D_c as output, using the Levenberg-Marquardt (L-M) algorithm and a 60% training, 20% validation, 20% testing split. Since the number of hidden units should not exceed twice the number of units in the input layer [27], one hidden layer with 10 neurons is used. The neural network structure is shown in Fig. 7.
The L-M algorithm is a modified form of the Gauss-Newton method and is much faster than gradient descent [28]. It transforms the vector w_k, consisting of the weights and thresholds at iteration k, into the vector w_{k+1} of new weights and thresholds; that is, w_{k+1} = w_k + Δw. The weight increment Δw is calculated as Δw = (J^T J + μI)^(-1) J^T e, where I is the identity matrix, μ is the damping factor, e is the error vector, and J(w) is the Jacobian matrix of the errors with respect to the weights. When the real experimental data are combined with the synthetic data generated by (9), a dataset of about 1300 samples is obtained. After training on this dataset, as shown in Fig. 8, the best validation performance is 5.5283 at epoch 217.
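A pure-Python sketch of the Levenberg-Marquardt update applied to a simple two-parameter model f(x) = a ln x + b; the data, starting point, and damping factor are illustrative (the paper's network is trained in Matlab's toolbox, not with this code):

```python
import math

# Pure-Python sketch of the Levenberg-Marquardt update
# dw = (J^T J + mu*I)^{-1} J^T e for a two-parameter model f(x) = a*ln(x) + b.
# Data, starting point, and damping factor mu are illustrative.

def lm_fit(xs, ys, a=1.0, b=0.0, mu=1e-3, iters=50):
    for _ in range(iters):
        J = [(math.log(x), 1.0) for x in xs]                     # rows [df/da, df/db]
        e = [y - (a * math.log(x) + b) for x, y in zip(xs, ys)]  # residuals
        # Normal-equation terms: J^T J (2x2, with damping added) and J^T e
        jtj = [[sum(r[i] * r[j] for r in J) for j in range(2)] for i in range(2)]
        jte = [sum(r[i] * ei for r, ei in zip(J, e)) for i in range(2)]
        jtj[0][0] += mu
        jtj[1][1] += mu
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        da = (jte[0] * jtj[1][1] - jte[1] * jtj[0][1]) / det     # Cramer's rule
        db = (jte[1] * jtj[0][0] - jte[0] * jtj[1][0]) / det
        a, b = a + da, b + db
    return a, b

xs = [100.0, 200.0, 400.0, 800.0]
ys = [1.2 * math.log(x) - 3.0 for x in xs]   # exact data from a=1.2, b=-3.0
a_hat, b_hat = lm_fit(xs, ys)
```

For a real neural network the same update is applied to the full weight vector, with J assembled by backpropagation; here the model is linear in its parameters, so the iteration converges quickly to the exact least-squares solution.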

2) MICROMECHANICAL TWINS
Using the LS-DYNA software, the physical mechanism is twinned through smoothed particle hydrodynamics (SPH) simulation, a method widely used in simulation experiments of space debris [29].
For the numerical simulations, AL 6061-T6 was chosen as the aluminum flyer, the equation of state was the Shock equation, and the strength model was the Steinberg-Guinan model. For the simulation of quartz glass, the adopted intrinsic model is the new quartz glass material model (12), as shown at the bottom of the page, and the quartz glass strength (13), as shown at the bottom of the page, used in [30] and [31].
In (12), P_crush and µ_crush are the pressure and volumetric strain at the Hugoniot elastic limit; P_lock is the pressure corresponding to the termination point of the phase transformation; µ_lock = 1.02 is the parameter obtained from the P_lock calculation; µ̄ = (µ − µ_lock)/(1 + µ_lock); and K1, K2, K3 are polynomial coefficients. Equation (13) is an orthogonal dimensionless equation whose parameters are derived from the embedded corresponding material model.

C. ASSOCIATION BETWEEN PHYSICAL ENTITIES AND VIRTUAL MODELS
The diameter of the fragmentation zone, a key parameter, is used to verify whether the prediction of the virtual model is accurate. Based on the previous experimental results, the relative standard deviations (RSD) of experiments with the same parameters are derived, as shown in Table 5. An experimental deviation of about 10.14% may occur in the results due to laser energy fluctuation and small perturbations. Therefore, results with deviations within 10% are considered credible.
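The relative standard deviation used here can be computed from repeated same-parameter shots as follows; the crater-diameter values below are illustrative, not the paper's data:

```python
import statistics

# How the relative standard deviation (RSD) in Table 5 is computed from
# repeated same-parameter shots; the crater diameters below are illustrative.

def rsd_percent(values):
    """RSD = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.fmean(values) * 100.0

craters_mm = [1.02, 0.93, 1.10, 0.98]   # repeated D_c measurements (made up)
rsd = rsd_percent(craters_mm)
```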
To verify the accuracy of the model trained by machine learning, a set of data not involved in the physical equation construction or model training is prepared. Because of the stochastic nature of machine learning, the average error rate predicted on this set is taken as the model's error rate. As shown in Table 6, the average error rate of the machine-learning model is 8.13%, much smaller than the 10% threshold defined above and even smaller than the error rate predicted by the empirical equation (9). Therefore, data prediction by machine learning is shown to be correct and equally applicable to the case of small samples and a small range of experimental parameter values. In future work, the prediction error rate will be further reduced as experimental samples increase and the structure and parameters of the neural network are optimized.
To verify the validity of the finite element simulation model, a set of experiments is used for validation. The parameters of this experimental set are aluminum flyers of 200 µm diameter and 16 µm thickness, impacting quartz glass at a speed of 5.25 km/s. As shown in Table 7, taking the diameter of the fragmentation zone as the comparison standard, the error rate of the simulation is 3.1%, which is also below the defined 10% error threshold. Therefore, the finite element simulation is numerically reasonable. As shown in Fig. 9, a large impact crater formed on the surface when the flyer contacted the sample during (a); in (b) and (c) the sample was gradually damaged radially and along the impact direction by the surface wave and rapid crack expansion; during (d) the cracks on the sample surface continued to expand radially around the impact point, the depth of the impact crater and fragmentation zone kept increasing, and the back of the quartz glass sample developed laminar cracking, caused by the tensile wave reflected from the free surface at the back of the target plate. The simulation is consistent with the real impact and damage process.
The simulation state of quartz glass impact is shown in Fig. 10. Observing the impact pressure and temperature distribution, it can be seen that during the contact between the flyer and the quartz glass target, a transient high temperature and high-pressure state is generated at the front end of the target plate. At this impact velocity, the molten state distribution of quartz glass appears in the impact zone (green part), while the state of other unimpacted quartz glass is always solid (blue part).
The impact craters formed on the quartz glass are observed with the Keyence VHX microscope, which can image them directly because it combines a metallographic microscope and a macroscopic camera stage into one optical imaging system. The experimentally obtained damage morphology is shown in Fig. 11. Comparing the simulation results in Fig. 9 with the real scan in Fig. 11 shows that the damage state of the quartz glass is basically the same, which demonstrates the correctness of the microscopic simulation.
By synthesizing the data through the theoretical equation and processing the results with machine learning, which is a statistically significant method, the virtual data can become closer to the real experimental data results. The simulation method has real physical significance, which allows the tester to see the impact process of the flyers and the microscopic damage mechanism of the material more intuitively, which is important for the calibration of the experiment and the analysis of the material damage. Therefore, this virtual model can effectively build up a DT architecture with macroscopic numerical quantities and microscopic mechanism simulation, as shown in Fig. 12. Thus, it makes a small amount of experimental data rich and maximizes its value.

IV. MICRO-DEBRIS GROUND SIMULATION TEST CYCLE
The American Institute of Aeronautics and Astronautics [32] defines the digital twin as a virtual space that is continuously updated with real physical data over the full life cycle and used to realize value in decision-making. When the DT is applied to product manufacturing, its value lies in increased production efficiency throughout the life cycle; when the DT is applied to LDF experiments and analysis, it increases efficiency through easier experiments and access to high-throughput data.
The whole life cycle of LDF experiments and analysis can be divided into two parts: (1) experimental phase: simulated space debris impact experiments on the sample; (2) evaluation phase: analysis of the damage mechanism for the sample.  The process is shown in Fig. 13. These two phases are related to each other. The impact samples generated in the experimental phase will be analyzed in the evaluation phase, and the results of the analysis will also contribute to the selection of parameters in the subsequent experimental phase. For example, the diameter of the fragmentation zone mentioned above can be determined by the speed and diameter of the flyer.
In the experimental phase, the traditional process requires the tester to be trained in the experimental operation before the experiment and then to set the experimental parameters of all relevant devices during the experiment, such as lasers, vacuum pumps, and precision positioning platforms. After each single-point experiment, the vacuum chamber must be opened to re-adjust the sample position and set the instrument parameters.
However, using the DT approach, all parameters of the whole experimental program can be set uniformly in the twin system after the virtual training is completed.
Afterward, multiple impact experiments are performed using the automated LDF equipment without opening and closing the vacuum chamber, and the twin system keeps the experimental environment constant and standard. After the best experimental samples are obtained, the evaluation phase proceeds.
In the evaluation phase, the traditional evaluation process has a long time cycle, mainly because multiple material analysis tests have to be performed, as shown in Table 8. The subsequent macroscopic and microscopic damage analysis, numerical simulation, and damage characterization are all based on the results of material testing, which leads to waiting and stagnation in the evaluation.
More importantly, the number of experimental results depends on the number of experiments performed. Using the DT approach instead, a small number of experimental samples can yield high-throughput experimental damage data through machine learning, which also helps with parameter setting in the subsequent experimental phase. Meanwhile, the finite element simulation data allow the tester to understand the damage process of the flyer impact more intuitively. After the critical testing and analysis, the test results are compared with the virtual simulation results for validity, and the corresponding experimental report is obtained. In conventional flyer impact experiments, the experimental and evaluation phases are relatively independent; to obtain a damage model of the sample, impact samples at different velocities are required, so a huge number of experiments is needed to obtain a reliable damage law. Once the DT model is established, the tester can obtain high-throughput experimental simulation data from the evaluation-phase DT and then verify it at only a small number of selected impact velocities.

V. CONCLUSION
In this paper, by improving and refining traditional LDF experimental equipment and methods, a DT is proposed for LDF simulated space-debris impact experiments, dividing them into two phases: experiment and evaluation.
During the experimental phase, the established DT provides a standardized experimental environment and allows for dynamic testing. The automated equipment can perform multiple consecutive experiments remotely, and the DT also provides testers with online virtual experiments, which shortens training time and reduces training cost.
In the evaluation phase, the established DT can take a small amount of experimental data through the virtual model and obtain high-throughput macroscopic physical damage parameters and microscopic damage mechanism simulation data. In this paper, LDF experiments on quartz glass are used as a case study, with the diameter of the fragmentation zone as the target parameter, to verify the DT's ability to generate macroscopic physical damage data. Its ability to capture microscopic damage is demonstrated by comparing the microscopic images of the validation group with the finite element simulation results.
In simulating and predicting the diameter of the fragmentation zone of quartz glass, the experimental data are enriched by deriving theoretical equations and generating synthetic data. This resolves the problem that the small sample size and narrow range of experimental parameter values in LDF experiments would otherwise prevent machine learning from being applied for prediction.
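The enrichment step can be sketched as below. This is a hedged illustration, not the paper's derived equation: the square-root energy-diameter relation, the energy values, and the 3% noise level are all assumed for demonstration, and only the pattern (theoretical curve plus experiment-matched scatter) mirrors the described method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical theoretical relation (illustrative only, not the paper's
# derived equation): fragmentation-zone diameter vs. laser energy (mJ).
def theoretical_diameter(energy_mj):
    return 0.3 * np.sqrt(energy_mj)

# A handful of "measured" points, as in a small LDF campaign.
E_exp = np.array([10.0, 20.0, 40.0, 80.0])
d_exp = theoretical_diameter(E_exp) * (1 + rng.normal(0, 0.03, E_exp.size))

# Enrich: densely sample energies across the experimental range and
# perturb the theoretical curve with noise matched to the experimental
# scatter, yielding a synthetic training set for the learning model.
E_syn = rng.uniform(E_exp.min(), E_exp.max(), 500)
d_syn = theoretical_diameter(E_syn) * (1 + rng.normal(0, 0.03, 500))

print(f"{E_syn.size} synthetic samples generated from {E_exp.size} experiments")
```

Keeping the synthetic energies inside the measured range avoids extrapolating the theoretical relation beyond where it was checked against experiment.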
Finally, based on this DT method and experimental experience, a set of standardized experimental procedures for simulating the full-life test cycle of space micro-debris is proposed. It makes the experimental process more standardized, scientific, and compact, yielding high-throughput, accurate data at a small experimental cost.
With the application of digital twin technology in the space field, ground simulation of the space environment will not only become more standardized, but will also see new development in spatio-temporal evolution modeling of the space environment, high-throughput evaluation, equivalent accelerated test methods, and material evaluation databases. It can be expected that this will drive the currently rather independent research on space-environment ground simulation technology and on materials, devices, and systems to gradually merge into an organic whole.

ZHUANGZHI MAO is currently pursuing the master's degree with the Hebei University of Technology. His research interests include the application of digital twin technology and the study of nonlinear ultrasonic detection.
YUKUN LI is currently the General Manager of Tianjin Huaxuan Technology Development Company Ltd., and a member of Tianjin Hydraulic Engineering Society. He has been working in the field of air purification and sewage treatment for 39 years, since 1983. He has been devoted to the research and development of intelligent control of HVAC, and the digital transformation of purifying workshops. He has promoted and applied the technology of water ecological treatment through microorganisms, mainly for domestic sewage treatment, industrial sewage treatment, and river regulation.
TINGTING ZHANG received the Graduate degree from the School of Materials Science and Engineering, Tianjin University, in 2018. Her research interests include the study of space micro-debris impact effects and ground simulation techniques.
HAORUI LIU received the Graduate degree from the School of Materials Science and Engineering, Tianjin University, in 2014. His research interests include the study of space micro-debris impact effects and ground simulation techniques.
NING HU is currently the Vice President of the Hebei University of Technology, the Dean of the School of Mechanical Engineering, the Director of the National Engineering Technology Research Center for Technology Innovation Methods and Implementation Tools, and the Academic Leader of the State Key Laboratory of Reliability and Intelligence of Electrical Equipment jointly established by the Ministry and the Province. He is also a National Overseas High-Level Talent, a recipient of the National Outstanding Youth Fund (B) and the "special allowance of the State Council," the Director of the Chinese Society of Mechanics, an Executive Director of the Chinese Society of Composites, the Director of the Japanese Society of Composites, and the Director of the Hebei Mechanical Society. In the past six years, he has been listed in Elsevier's list of China's highly cited scholars (Mechanics of Materials) and in the list of the top 100,000 scientists in the world published by Stanford University. His research interests include intelligent monitoring and testing of high-end equipment materials and structures; design, preparation, and evaluation of advanced functional composite materials; design, analysis, and evaluation technology for key structures of high-end equipment; development of clean energy materials and devices; extreme environment simulation and micro/nano manufacturing; additive manufacturing; and smart manufacturing.
ZEQING YANG is currently a Young Top Talent in Hebei and a Candidate of the third level of the Hebei 333 Talent Project. Her research interests include digital integrated measurement and control and digital twin operation and maintenance monitoring, online inspection and error compensation of CNC equipment, and visual inspection and pattern recognition.