Carbon Emission Reduction of Tunnel Construction Machinery System Based on Self-Organizing Map-Global Particle Swarm Optimization With Multiple Weight Varying Models

The impact of greenhouse gas emissions from construction activities on the environment is becoming increasingly evident. It is imperative to adopt corresponding technical and management measures to restrain carbon emissions from the construction industry, especially the construction of infrastructure such as transportation tunnels. Based on Life Cycle Assessment theory and the carbon emission factor method, we established a model that quantifies the carbon emissions of the mechanized construction system of tunnels built by the drill-and-blast method. Once the various constraint conditions are incorporated, this model exhibits considerable non-convexity, and finding its optima becomes an NP-hard problem. We therefore propose a self-organizing map-global particle swarm optimization algorithm, which incorporates multiple weight varying strategies and self-organizing map networks for adaptive adjustment of particle trajectories, to improve its ability to search for optima. The feasibility and advantages of the proposed algorithm were verified through a series of experiments. Finally, the algorithm was combined with the quantitative model and applied to optimize the construction machinery unit combination for the F4 section of the Wushaoling Tunnel, with good results: the optimized configuration reduced the cycle duration from 660 minutes to 432 minutes, while the total carbon-equivalent emission was reduced by 88.4% compared with the original machinery system configuration. This work offers reference and inspiration for other research on carbon emission reduction in the construction of tunnels and other infrastructure.

Engineering construction emits large amounts of greenhouse gases: in 2009, production activities in the construction sector contributed up to 23% (5.7 billion tons) of global carbon emissions [1], and tunnel construction is no exception. By quantifying the carbon emissions of the tunnel construction process through the Life Cycle Assessment (LCA) method, researchers have found significant differences in the contributions of different parts of the construction process to total GHG (greenhouse gas) emissions. For example, Li et al. [2], using the carbon coefficient method, found that 93% of the equivalent carbon emissions per meter of shield tunnel construction comes from building materials. Xu et al. [3] studied tunnels constructed by the drill-and-blast method and simulated tunnel carbon emissions hierarchically according to the excavation process, ultimately finding that construction in weak surrounding rock requires higher levels of rock support, so the lining has higher carbon emission levels. Most studies agree that, within the framework of LCA theory, the carbon emissions of tunnels are concentrated in the upstream of the supply chain, especially the manufacture of concrete and steel, and that the impact of operating construction machinery is relatively small. However, this does not mean that the absolute share of the construction process in international and domestic construction activity is low. A previous study [4] pointed out that the twin-bore tunnels of the Channel Tunnel Rail Link (CTRL) in the United Kingdom (UK), only 7.5 km long in total, accounted for 2.1% of all emissions associated with the UK construction industry in 1999.
For such a highly mechanized project, even if the percentage of greenhouse gas emissions from the fossil fuel and electricity consumption of the construction machinery system is small (compared with upstream emissions), the converted absolute emission is still considerable, since its contribution ranks immediately after the upstream emissions. Meanwhile, progress in carbon emission monitoring technology, represented by carbon-monitoring remote sensing satellites, has made it easier to monitor regional and short-term carbon emission behavior. These objective realities place considerable demands on the control of overall GHG emissions from mechanical systems at tunnel construction sites.
Given this reality and need, this paper focuses on the configuration optimization of machinery systems in tunnel construction. For this combinatorial optimization problem, the Pareto solution of the model must be searched in the feasible domain space with the help of certain search strategies. Heuristic algorithms are commonly used to search for the optimal feasible solution of combinatorial optimization problems; the main heuristic algorithms widely applied to this problem are: Genetic Algorithm (GA) [25], Simulated Annealing Algorithm (SAA) [26], Ant Colony Algorithm (ACA) [27], Artificial Neural Network (ANN) [28] and Particle Swarm Optimization (PSO) [29][30][31][32][33]. In practical applications, global particle swarm optimization (GPSO) is preferred for searching for the ideal solution of combinatorial optimizations represented by economic dispatch (ED) problems. However, multiple factors make this method prone to falling into local minima, or to slow convergence, in the actual solution process.
The integration of machine learning techniques with particle swarm algorithms is also increasingly common in scientific research and engineering practice; here, a machine learning algorithm called the self-organizing map (SOM) network [29,34,35] is the focus. The SOM is an unsupervised artificial neural network based on a competitive learning strategy, which generates a low-dimensional discrete map by learning data in a high-dimensional input space [36].
In this paper, a GPSO algorithm that uses self-organizing map networks to adjust particle motion trajectories is proposed, named Self-Organizing Map-Global Particle Swarm Optimization (SOMGPSO), to overcome the defects of traditional GPSOs. The SOM network generates a visualization of the node-particle structure by unsupervised clustering of the global optimal particle values after a certain number of iterations. The network's nodes automatically identify and correspond to particles with different degrees of aggregation and generate weights based on the particles assigned to each node. When the SOM network reaches a certain number of iterations, this weight is theoretically close (with minimal variance) to the particles falling in the same node, or even exactly equal to them, and it is used as the projection origin. When a node becomes anomalous because a large number of particles fall into it, new particles equal in number to the old ones are projected back into the original feasible domain space with reference to the uniform design schedule, replacing the particles in the anomalous node. Finally, the reorganized particle population is put into a new weight varying model for further iteration, which stops adaptively according to multiple termination criteria.
The SOMGPSO is compared horizontally with the linear decreasing inertia weight global particle swarm algorithm (LDIWGPSO) and the global particle swarm algorithm with adaptive inertia weight (AWGPSO), and it is found that the proposed SOMGPSO has advantages in terms of robustness, stability and accuracy. Finally, the coding and decoding technical measures are integrated with the carbon emission calculation model of the tunnel construction mechanization system to optimize the mechanical system of the F4 section of the Wushaoling Mountain Tunnel excavation by the bench method.

A. PROBLEM FORMULATION
It is more straightforward and easier to optimize the configurations of construction machinery system to maximize the reduction of greenhouse gas emissions than to improve the performance of a single type of machinery in the system. Through the carbon emission factor method, the emission type, emission process and volume of each machine unit in the system can be clarified and quantified to generate the linear sum of equivalent carbon emission of each machine unit. On this basis, the quantifiable engineering requirement such as duration is introduced into the multi-objective combinatorial optimization model.
In this paper, tunnels constructed by the drill-and-blast method, the mainstream construction technology today [37], are selected as the research object. The drill-and-blast method is a construction process whose engineering purpose is to maintain the stability of the excavation surface and the excavation process; advancing in circular processes is one of its features. According to recent research [38][39][40], a single cycle of conventional drill-and-blast tunnel boring mainly includes: drilling and excavation, controlled blasting, initial support construction, and secondary lining construction.
In accordance with the idea of energy flow path tracing and previous tunnel construction engineering and modeling practice [41], the tunnel construction machinery system can be divided into four subsystems: the drilling and blasting system, the anchor-spraying support system, the excavation and transportation system, and the energy supply and drive system. The energy flow relationships among the four subsystems during construction are shown in Fig.1; it should be noted that this energy flow diagram does not account for the power and fuel consumed by self-powered mechanical equipment in systems other than the excavation and transportation system.

1) OBJECTIVE FUNCTION
The modeling is guided by the existing tunnel construction machinery scheduling configuration model, and the numbers of all possible machinery and equipment components are denoted as an n-dimensional vector X = (x_1, x_2, ⋯, x_n). In the mechanized construction of tunnels, if no heavily oil-consuming large machinery (such as a TBM) is used, the main types of energy required are electricity, diesel and gasoline, which can be converted into carbon dioxide equivalent values for direct assessment of carbon emission levels, i.e., carbon emission calculation using the carbon emission factor method. The total time consumed and the total carbon emission of the construction process can then be regarded as functions of X, denoted T(X) and C(X). The total carbon emission has the following expression:
C(X) = Σ_i t_i · e_i · x_i    (1)
where t_i is the direct working time of the i-th machinery group at its specified workplace in the current process, and e_i is the CO2-equivalent value converted from the i-th kind of machinery's total unit emissions, with the expression:
e_i = f_d · D_i + f_g · G_i + f_e · E_i    (2)
where f_d, f_g and f_e are the carbon emission factors of diesel, gasoline and electricity, respectively, while D_i, G_i and E_i are the total amounts of diesel, gasoline and electricity consumed per unit shift of the current machinery. The (normalized) optimization model of the tunnel construction machinery system configuration considers both the duration and the carbon footprint and can be written as:
min F(X) = w_c · C(X)/C_0 + w_t · T(X)/T_0    (3)
where w_c and w_t are the preference weights for emission and duration, respectively, while C_0 and T_0 are the original configuration's emission and duration. It should be noted that a carbon emission factor refers to the unit carbon emission of an energy source over its production, transportation and full energy conversion.
Considering the regional and time-domain nature of carbon emission factor, in this paper the Chinese life-cycle base database (CLCD) and China Emission Accounts and Datasets (CEAD) are chosen to calculate the carbon emission factor of three energy sources in an integrated manner.
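The factor-based accounting described above reduces to a few weighted sums. A minimal sketch in Python, with illustrative placeholder emission factors (not official CLCD/CEAD values) and hypothetical machinery data:

```python
# Illustrative (NOT official CLCD/CEAD) emission factors: kg CO2-eq per
# litre of diesel, litre of gasoline, and kWh of electricity.
F_DIESEL, F_GASOLINE, F_ELECTRICITY = 3.1, 2.9, 0.58

def unit_emission(diesel, gasoline, electricity):
    """CO2-equivalent of one machine shift from its energy consumption."""
    return (F_DIESEL * diesel + F_GASOLINE * gasoline
            + F_ELECTRICITY * electricity)

def total_emission(counts, work_times, unit_emissions):
    """Linear sum over machinery groups: C(X) = sum_i t_i * e_i * x_i."""
    return sum(x * t * e for x, t, e in zip(counts, work_times, unit_emissions))

def normalized_objective(counts, work_times, unit_emissions, duration,
                         c0, t0, w_c=0.5, w_t=0.5):
    """Weighted emission/duration objective, each term normalized by the
    original configuration's values C0 and T0."""
    c = total_emission(counts, work_times, unit_emissions)
    return w_c * c / c0 + w_t * duration / t0
```

With real factor values taken from CLCD/CEAD, the same structure applies unchanged; only the constants differ.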

2) OBJECTIVE CONSTRAINTS
Inflexible constraints: In tunnel boring, a workplace cannot accommodate an unlimited number of machines of a given type during a certain period of time; thus there is an inflexible constraint expressed as follows:
x_ij ≤ N_ij    (4)
where N_ij is the upper limit of accommodation of the j-th workplace for the i-th machinery. Flexible constraints: For dynamic mechanical units, the capacity of the workplace where they operate has no clear boundary; instead, the working time is extended by a penalty factor named interference time. For different workplaces, there are flexible accommodation thresholds for each mechanical unit, and the interference effect only appears when the number of such machines exceeds the threshold.
This threshold is defined as n*_ij, and the disturbance duration can then be expressed as:
ΔT_ij = β_ij · max(x_ij − n*_ij, 0)    (5)
where ΔT_ij is the extra duration that should be added in the objective function, representing the disturbance effect for the i-th machinery in the j-th workplace, and β_ij is the disturbance factor for the i-th machinery in the j-th workplace. Hierarchical constraints: The number of mechanical units that can be accommodated in each workplace is constrained by its own dimensional factors, while the number of mechanical units available on site also has an upper limit. If the mechanical scheduling of the tunnel construction process is considered as dynamic time-phased scheduling, this scale constraint has a hierarchical effect:
Σ_j x_ij ≤ X_i^max    (6)
i.e., for one type of mechanical unit, the total number distributed over all workplaces at the same time cannot exceed the maximum scheduling size of that type. Balance constraints: Supply and demand balance constraints are present throughout the entire process of tunnel mechanized construction, strictly speaking, from the stream balance of various types of energy to the supply of water, compressed air and even lubricants. Here, the supply-demand balance of energy and compressed air is written in the specific form of the model according to the energy stream framework shown in Fig.1. For one type of energy k, the supply must cover the total demand:
Σ_i q_ik · x_i ≤ S_k    (7)
where q_ik is the demand of the i-th machinery for energy type k and S_k is the available supply. For compressed air (assuming the energy driving the compression comes from energy type k), the air demand must likewise not exceed the compressors' capacity, with the compressors' own energy demand counted on the demand side of (7):
Σ_i a_i · x_i ≤ A_k    (8)
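The threshold-triggered interference effect can be sketched as follows; the linear growth of the penalty beyond the threshold is an assumption for illustration, since the exact form of the disturbance expression is not fully legible in this copy:

```python
def disturbance_time(n_machines, threshold, beta):
    """Flexible-constraint interference: extra time accrues only when the
    number of machines in a workplace exceeds its soft threshold.
    The linear-in-excess form is an illustrative assumption."""
    excess = n_machines - threshold
    return beta * excess if excess > 0 else 0.0
```

The hierarchical and balance constraints are then plain linear inequalities over the same count variables and can be checked the same way.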

3) ENCODING AND DECODING DESIGN
When applied to combinatorial optimization problems, particle swarm optimization algorithms need appropriate encoding and decoding mechanisms fitted to the realistic problem [33,42,43]. Analyzing the characteristics of drill-and-blast tunnel construction, it can be concluded that there is a distinct, one-to-one subjective and objective correspondence between the various machine units and the workplaces. A workplace-machine unit particle encoding and decoding method applicable to this model is therefore proposed, by analogy with the machine-PPR coding technique used in the parallel machine scheduling (PMS) model in the field of scheduling problems, combined with several practical cases [42]. Define a two-dimensional particle as follows: the first dimension, taking natural numbers 1, 2, ⋯, m, indexes the k-th machinery unit of the current machinery type; the second dimension holds the position p_k of the particle; and the length of the particle is the total number of dispatchable machineries of this type. The generation of a configuration scheme is the decoding of the above code: the specific operation is rounding p_k downward, so that ⌊p_k⌋ is the workplace assigned to the current mechanical unit, or no workplace is assigned (i.e., 0). The configuration scheme of the current kind of machinery is generated by assigning workplaces to all its mechanical units in this way; by splitting the whole work process into several configuration problems, the remaining mechanical configuration schemes are generated independently in parallel and aggregated, resulting in the overall mechanical configuration scheme. Since this particle coding and configuration scheme generation method is oriented to the workplaces, the representation is called workplace-based particle coding in this paper.
It is worth noting that this coding approach focuses on the workplace assignment of each scheduled machine unit, which incurs high computational expenditure, but it is very helpful for subsequent accurate flow tracking of machine units and keeps the computational model usable when the working capacities of different machine units differ significantly. The configuration of the energy supply and drive system is calculated differently: after the particles describing the configuration sizes of the other subsystems are generated and decoded into a configuration scheme, the minimum size of the corresponding energy supply or drive machinery is calculated directly from the required drive energy and the necessary redundancy, rounding upward to obtain the minimum number of machines. The cycle duration is determined by recalculating the main process duration of each workplace under the optimized configuration and adding the remaining main process durations. To minimize the probability of generating particles for infeasible solutions, a randomly generated particle position that crosses the specified domain boundary should be randomly re-valued within the domain instead of being clipped to the upper or lower limit [43].
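The floor-based decoding, the re-sampling of boundary-crossing positions, and the upward-rounded supply sizing described above can be sketched as follows (function names and the redundancy factor are illustrative):

```python
import math
import random

def decode_particle(positions, n_workplaces):
    """Workplace-based decoding: floor of each machine unit's position gives
    its assigned workplace index; 0 means the unit is left unassigned."""
    return [min(math.floor(p), n_workplaces) for p in positions]

def random_position(n_workplaces, rng=random):
    """Positions are drawn over [0, n_workplaces + 1); a boundary-crossing
    position is re-drawn inside the domain rather than clipped, per the text."""
    return rng.uniform(0.0, n_workplaces + 1.0)

def min_supply_units(total_demand, unit_capacity, redundancy=1.2):
    """Energy supply/drive sizing: minimum unit count from required drive
    energy plus necessary redundancy, rounded upward (the redundancy value
    is an illustrative assumption)."""
    return math.ceil(total_demand * redundancy / unit_capacity)
```

Decoding each machinery type independently and concatenating the results yields the overall configuration scheme.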

1) GPSO PROCESS
The core mechanism of the GPSO algorithm is effective and benign information exchange among particles distributed in the feasible domain space: a particle swarm is randomly distributed in the feasible domain space, and each particle in the population is guided by its own best position pbest_i and the best position of the whole population, gbest. The spatial position of each particle is constantly modified until the fitness values of all particles converge, and effective control of the information exchange rate and particle motion intensity of the swarm is achieved with few user parameters. Consider a swarm of M particles searching for an optimum solution in a D-dimensional search space, D ≥ 1. The following shows the process of GPSO: Algorithm Global Particle Swarm Optimization

Input: Necessary hyperparameters.
Output: gbest, Min f(gbest).
Initialization:
1 Randomly initialize the velocity vector v_i(0) and position vector x_i(0) of each particle i (i = 1, 2, ⋯, M).
2 Calculate the objective function f(x_i(0)) from the position vector x_i(0) of each particle i.
3 Initialize the personal best position vectors: pbest_i(0) = x_i(0).
4 Search for the global best position vector gbest(0) among the pbest_i(0).
Update: Renew the iteration, t = 1, 2, ⋯
5 In iteration t, the velocity vectors of the particles are updated as follows:
v_i(t) = w·v_i(t−1) + c1·r1·(pbest_i(t−1) − x_i(t−1)) + c2·r2·(gbest(t−1) − x_i(t−1))
and the position vectors are updated as follows:
x_i(t) = x_i(t−1) + v_i(t)
6 For each particle i, the renewed objective function f(x_i(t)) is calculated from the position vector x_i(t).
7 pbest_i(t) and gbest(t) are updated as follows:
pbest_i(t) = arg min{f(pbest_i(t−1)), f(x_i(t))},  gbest(t) = arg min_i f(pbest_i(t))
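The update rules above amount to the standard global-best PSO loop. A self-contained sketch (hyperparameter values are illustrative):

```python
import random

def gpso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
         lo=-5.0, hi=5.0, seed=0):
    """Minimal global-best PSO: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    then x <- x + v; personal and global bests are refreshed each step."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pval[i]:          # personal best improved
                pval[i], pbest[i] = val, X[i][:]
                if val < gval:          # global best improved
                    gval, gbest = val, X[i][:]
    return gbest, gval
```

On a smooth unimodal function such as the sphere function this converges quickly; the defects discussed next appear on multi-peaked functions.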

2) INHERENT DEFECT OF GPSO
The GPSO algorithm has significant advantages: few parameters to set, a fast convergence rate in the early stages, and no dependence on gradient information. But it also loses some information during the solution process, converges relatively slowly in the later stages, and easily falls into local optima; these drawbacks are more pronounced in discrete-space computational tasks, such as combinatorial optimization and, further, economic scheduling optimization [30]. The particular particle motion principle and iteration mechanism are the root causes of this unfavorable performance of GPSO on complex functions (e.g., multi-peaked functions). Each moving particle has two tendencies, toward its own optimal position and toward the collective optimal position, and the linear superposition of the two velocity components brings the risk of particles being pulled into local minima or of particle trajectory oscillations [32]. In high-dimensional space, when the search enters its final stage, trajectory oscillation eventually manifests as a chaotic particle state in which the majority of particles cannot converge through the benign two-track system, making the search inefficient or even invalid. It is therefore necessary to propose an improved GPSO process addressing these main drawbacks.

C. SOMGPSO ALGORITHM
Here, the details and full process of the proposed SOMGPSO algorithm are provided, together with an explanation of how adverse convergence of the particle swarm is recognized and the swarm reorganized. Before that explanation, it is necessary to give a brief description of the knowledge and techniques required to constitute the SOMGPSO process.

1) SELF-ORGANIZING MAP NETWORK
The self-organizing map (SOM) network was first proposed by Kohonen [36]. It can automatically discover the similarity between input data and place similar inputs close to each other on the network, or even on the same neuron. It is called a self-organizing map because the topology of the neurons is recombined while they are activated by the input data, and this recombination is fully reflected when high-dimensional input data activate neuron nodes on a discrete low-dimensional structure (shown in Fig.3), which provides a convenient way to observe and describe the dispersion of continuous high-dimensional spatial data. If the input space is D-dimensional (i.e., there are D input units), the input pattern can be written as x = {x_d : d = 1, ⋯, D}, and the connection weights between input unit d and neuron j of the computational layer can be written as {w_jd : j = 1, ⋯, N; d = 1, ⋯, D}, where N is the total number of neurons. The squared Euclidean distance between the input vector and the weight vector of each neuron is evaluated as follows:
d_j(x) = Σ_d (x_d − w_jd)²
The neuron whose weight vector is closest to the input vector (i.e., most similar to it) is declared the winner-take-all neuron of that time step.
Next, the neighboring neurons are adjusted through a quantified topological neighborhood, defined as follows:
T_{j,I(x)} = exp(−S²_{j,I(x)} / (2σ²))
where S_{j,I(x)} is the lateral distance on the neuron grid between neuron j and the winning neuron I(x), and σ is a variable that decays with time; a common time dependence is exponential decay.
With the help of the topological neighborhood, the weights of each neuron are adaptively adjusted:
Δw_jd = η(t) · T_{j,I(x)} · (x_d − w_jd)
where the learning rate η(t) likewise decays exponentially with time. The effect of each weight update is to move the weight vectors of the winning neuron and its neighbors toward the input vector x. Over this iterative process, a topologically ordered network results.
If the parameters (σ_0, τ_σ, η_0, τ_η) are chosen correctly, the SOM algorithm can start from a completely unordered initial state and gradually order the activation pattern representation obtained from the input space.
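The competitive and cooperative steps above — winner selection by squared Euclidean distance, a Gaussian topological neighborhood, and decaying radius and learning rate — can be sketched as a minimal SOM trainer (parameter values and decay constants are illustrative):

```python
import math
import random

def train_som(data, rows, cols, iters=100, sigma0=1.5, eta0=0.5, seed=0):
    """Minimal SOM: for each step pick an input, find the winning neuron by
    squared Euclidean distance, and pull the winner and its grid neighbors
    toward the input, with exponentially decaying sigma (radius) and eta
    (learning rate). Returns the list of neuron weight vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    W = [[rng.random() for _ in range(dim)] for _ in range(rows * cols)]
    grid = [(i // cols, i % cols) for i in range(rows * cols)]
    tau = iters / math.log(sigma0) if sigma0 > 1 else iters
    for t in range(iters):
        x = data[rng.randrange(len(data))]
        # winner: neuron whose weight vector is closest to the input
        win = min(range(len(W)),
                  key=lambda j: sum((x[d] - W[j][d]) ** 2 for d in range(dim)))
        sigma = sigma0 * math.exp(-t / tau)
        eta = eta0 * math.exp(-t / iters)
        for j, (r, c) in enumerate(grid):
            d2 = (r - grid[win][0]) ** 2 + (c - grid[win][1]) ** 2
            h = math.exp(-d2 / (2 * sigma * sigma))  # topological neighborhood
            for d in range(dim):
                W[j][d] += eta * h * (x[d] - W[j][d])
    return W
```

In SOMGPSO the "data" are the recorded particle trajectories, and the trained node weights and hit counts feed the recognition step described later.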

2) UNIFORM DESIGN
Uniform design, also known as the uniform design test method, was proposed by Fang [44], initially to solve the practical problem that the trial number of orthogonal tests increases substantially with the numbers of factors and levels; it can also generate more dispersed combinations in the feasible domain space for the same numbers of factors and levels.
The practical application is realized by consulting a uniform design schedule, and there are mainly two methods for generating one: the good lattice point method and the integer congruent power method. The former is used in this paper, and its specific operation rules are as follows:
1. Determine the trial number n, which is the initial population size in the initial population generation problem, find all integers h which are coprime with n in the integer domain [1, n), and compose the set {h_1, h_2, ⋯, h_m}.
2. Determine the entries of each column of the uniform design schedule row by row through the congruence calculation
u_ij = i · h_j (mod n), with residue 0 replaced by n.
The generated uniform design schedule can be noted as U_n(n^s), where n is the number of levels and s the number of factors.
3. It should be noted that, according to the Euler function in number theory, the maximum number of factors under a certain number of trials (or initial population particles) is a fixed value, which means that particles of finite size cannot be completely uniformly distributed in a high-dimensional solution space. Compared with orthogonal design, however, uniform design achieves a more efficient distribution, and the maximum number of factors allowed under a given number of trials can be determined by the Euler function φ(n), i.e., the highest dimension of the solution vector space over which the points can be uniformly scattered:
φ(n) = n · (1 − 1/p_1)(1 − 1/p_2)⋯(1 − 1/p_k)    (16)
for n a prime power or a product of powers of the distinct primes p_1, ⋯, p_k less than n, and
φ(n) = n − 1    (17)
for n a prime number.
4. If the maximum number of factors of the uniform design schedule which can be generated for a specified trial number is less than the number of target dimensions, a compliant uniform design schedule can be generated by intercepting columns of a higher-dimensional uniform design schedule; to distinguish it from a conventionally generated uniform design schedule, it is noted as U*_n(n^s).
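The good lattice point construction and the Euler-function bound on the number of columns can be sketched directly from the rules above:

```python
from math import gcd

def euler_phi(n):
    """Euler's totient: the maximum number of factors (columns) a good
    lattice point table with n trials can provide."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def good_lattice_table(n):
    """Good-lattice-point uniform design table: one column per h coprime
    with n, with entries u_ij = i*h_j (mod n) mapped to levels 1..n
    (the `or n` idiom maps residue 0 to level n)."""
    hs = [h for h in range(1, n) if gcd(h, n) == 1]
    table = [[(i * h) % n or n for h in hs] for i in range(1, n + 1)]
    return table, hs
```

Each column is a permutation of the levels 1..n, which is what makes the design points spread evenly over the grid.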

3) SOMGPSO PROCESS
As shown in the explanation of the SOM network, this method can reduce the dimensionality of the original data while fully preserving the topological relationships and distances between the original data. This makes it possible to analyze and visualize the closeness between the long particles of the population and the distribution pattern of all particles, and to determine whether the particle population aggregates as the number of iterations increases, so that a timely adjustment strategy can be adopted. In this paper, the SOMGPSO algorithm is proposed to improve the performance of the GPSO algorithm; its purpose is to find the global minimum of a fitness function over a D-dimensional feasible domain. The SOMGPSO algorithm can be decomposed into three main phases: the linear decreasing inertia weight (LDIW) phase, the SOM recognition and reorganization phase, and the adaptive inertia weight and acceleration coefficients phase. First, the D-dimensional particle swarm of population size M iterates a certain number of times under the LDIW particle swarm process, and the particle trajectories are marked and recorded in iteration order. The trajectories are then imported into a SOM neural network with specific parameters, which clusters them automatically and generates a low-dimensional (2-dimensional) pattern; based on the differences in the excitation of the neuron nodes, it is determined whether abnormal aggregation of particle trajectories occurred in the previous phase. Using the generated pattern weights as the basis, combined with a uniform design schedule of the same size as the abnormal particle group, the particle trajectories are reorganized to produce the adjusted particle swarm, which is submitted to the adaptive particle swarm process to complete the remaining iterations. The steps of the SOMGPSO algorithm are given below; assume a D-dimensional objective function f is to be optimized: Algorithm SOMGPSO: LDIW Phase

Input: Necessary hyperparameters.
Output: gbest, Min f(gbest), trajectory samples (the raw data for the SOM).
Initialization:
1 Randomly initialize the velocity and position vectors v_i(0), x_i(0) of each particle i (i = 1, 2, ⋯, M).
2 Calculate the objective function f(x_i(0)) for each particle i.
3 Initialize the personal best positions: pbest_i(0) = x_i(0).
4 Search for the global best position vector gbest(0).
Update:
5-8 For iteration t = 1, 2, ⋯, k, update the velocity and position vectors as in the standard GPSO with a linearly decreasing inertia weight, and record the trajectory of every particle.
SOM recognition and reorganization:
9 Put the trajectory samples into the SOM network with preset parameters.
10 Output the sample hit frequency, the weight, and the neighborhood distance of each neuron node. From these outputs, nodes corresponding to abnormal particle aggregation can be identified by determining whether the sample hits reach a preset threshold.
11 Once a node is determined to be an anomaly, all particles corresponding to that node (assume m_a in all) are replaced by new particles projected from a uniform design schedule of matching size around the abnormal node's weight vector; the new swarm is an m_a-row matrix.
12 The reconstituted trajectory samples are combined with the remaining samples, and samples are randomly selected from the reorganized set, according to the preset population size, as the initial population P for the next round of the GPSO process, while retaining the final global best particle of the previous phase.
FIGURE 4. Correspondence between particles and nodes: the red node represents an anomalous node gathering more particles; its surrounding (dark blue) nodes tend to correspond to certain particles as well, and both the node weights and the particles have high closeness.
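The recognition step (a hit-share threshold over the SOM nodes) and the reorganization step (projecting a uniform design column back into the feasible domain) might be sketched as follows; the centering and scaling of the projection are assumptions, since the exact replacement formula is not legible in this copy:

```python
def find_abnormal_nodes(hits, n_samples, ratio=0.1):
    """Flag SOM nodes whose share of sample hits exceeds the threshold
    ratio (0.1 is the value suggested in the text for the grid size used
    there)."""
    return [j for j, h in enumerate(hits) if h / n_samples > ratio]

def reproject_particles(lo, hi, design_column):
    """Replace the particles gathered in one abnormal node: map one column
    of a uniform design table (levels 1..q) back into the feasible interval
    [lo, hi]. The mid-level linear mapping used here is an assumption."""
    q = max(design_column)
    return [lo + (u - 0.5) / q * (hi - lo) for u in design_column]
```

The reprojected particles replace the aggregated ones, spreading them evenly over the interval instead of leaving them clustered at one point.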

Algorithm SOMGPSO: Adaptive Phase
Input: Adjusted population P, necessary hyperparameters.
Output: gbest, Min f(gbest).
Initialization:
13 Randomly initialize the velocity vectors v_i(1) of each particle i (i = 1, 2, ⋯, M).
14 Calculate the objective function f(x_i(1)) from the position vector x_i(1) of each particle i.
15 Initialize the personal best position vectors pbest_i(1) as in step 3.
16 Same as step 4.
Update: Renew the iteration, t = 2, ⋯, T
17 In iteration t, the velocity vectors of the particles are updated as in step 5, while w, c1 and c2 follow the adaptive varying rule: one update formula applies if pbest_i has been modified in this iteration, and another otherwise.
18 Same as step 6.
19 Same as step 7.

Stop
where T is the total number of iterations, set by the user or by an automatic stop mechanism such as a tolerance value. The flow of the SOMGPSO algorithm is illustrated in Fig.5.
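Since the exact varying formulas of the adaptive phase are not legible in this copy, the following sketch uses a generic success-based rule of the kind the text describes (step size and bounds are illustrative):

```python
def adaptive_inertia(w, improved, w_step=0.01, w_min=0.4, w_max=0.9):
    """Illustrative success-based inertia adaptation: shrink w toward
    exploitation when the particle's personal best improved this
    iteration, otherwise grow it toward exploration; c1 and c2 could be
    co-adjusted in the same way."""
    if improved:
        return max(w_min, w - w_step)
    return min(w_max, w + w_step)
```

Plugging such a rule into step 17, in place of the fixed w of the standard GPSO loop, gives each particle its own exploration/exploitation balance.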
The number of SOM neuron nodes in the competing layer determines the granularity and size of the model, which significantly influences its accuracy and generalization ability. To obtain the finest practical granularity for the input samples and to construct a square output layer convenient for observation and analysis, the number of competing-layer nodes is taken on the order of √M, where M is the number of input samples. The threshold for judging whether a node is abnormal should also be defined and customized for each situation: around the main excited node, a certain number of accompanying nodes are activated through neighborhood regulation, as shown in Fig.4, and these accompanying nodes tend to gather a considerable quantity of particles whose weights are very close to that of the central excited node. Once the size of the SOM network and the input sample population are fixed, the judgement threshold can be defined as the ratio of the two; according to observation, 0.1 can be chosen as the threshold when the competing layer obeys the size rule above.
It is worth noting that the mechanism of the adaptive phase in SOMGPSO differs from that of the integrated weight in [45], which is also named Adaptive Particle Swarm Optimization. To distinguish them, the latter is called Adaptive Weight Global Particle Swarm Optimization (AWGPSO) below and is chosen as one of the control groups in some trial projects.
It is easy to see that the additional adjustment mechanism and the multiple weight varying modes are the features distinguishing the proposed GPSO from other improved GPSOs. Two inertia weight change mechanisms are chosen to eliminate, as far as possible, the "mode memory" of the previous motion mode, which aims at preventing particle trajectories from overlapping the original trajectories and at avoiding repeated searches or falling back into local minima. Looking further at the specific mechanism of the SOM network in the process: after being activated by the inputs, each node adjusts its own position with reference to the spatial positions of the particles, so its topological distribution contains information about the particle distribution once the final iteration is reached. It is thus feasible to analyze the aggregation form of the particles via SOM neighbor weight distance figures.
As shown in Fig. 6, the nodes are divided into several parts (at least three); nodes within the same part gather similar particles, while particles on nodes separated by long weight distances differ substantially. The weight distance figures therefore give a direct picture of the distribution of the particle swarm.
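The neighbor weight distance figure discussed above can be computed directly from the trained weight grid. The following minimal sketch (function name hypothetical) averages each node's Euclidean distance to its 4-connected neighbors, a simple U-matrix; high values mark the boundaries between groups of similar particles:

```python
import numpy as np

def neighbor_weight_distances(weights):
    """Mean Euclidean distance from each node's weight vector to its
    4-connected grid neighbours (the quantity shown in SOM
    neighbour-weight-distance figures).

    weights: array of shape (rows, cols, dim).
    Returns an array of shape (rows, cols); high values indicate
    cluster boundaries between groups of similar particles."""
    rows, cols, _ = weights.shape
    umat = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            dists = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dists.append(np.linalg.norm(weights[r, c] - weights[rr, cc]))
            umat[r, c] = np.mean(dists)
    return umat
```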

FIGURE 7. Connection pattern of the input in each dimension (D = 30).
In addition, the excitation effect of the particles on the neuron nodes differs significantly across dimensions. Taking D = 30 as an example, there are 30 types of competing-layer inputs, and correspondingly 30 sets of weight scores. The connection pattern of each input is observed through its weight plane. If the connection patterns of two inputs are very similar (such as Input3 & Input5 and Input1 & Input6 shown in Fig. 7), and especially if multiple connection patterns are highly similar and obey the same distribution, the possibility of undesirable aggregation of the particle population is greatly increased. The weight plane figure thus also offers a way to analyze and evaluate the form of particle aggregation.
It is worth noting that the incorporation of encoding and decoding measures differentiates the pure SOMGPSO flow from the one used in the carbon-reduction application: the encoding and decoding of mechanical configurations cut the original flow and insert themselves into the intermediate steps of SOMGPSO. The design of the computational flow of the proposed SOMGPSO algorithm is an iteratively validated process. One major and one relatively isolated piece of work in this design are introduced below, and the improvement of SOMGPSO over traditional improved GPSO algorithms on computational tasks of different dimensionality is initially demonstrated. A full analysis and comparison are presented in the benchmark function trial section of the next chapter.
All SOMGPSO programs for the trials in this paper were implemented in MATLAB R2021a on a personal computer with the following specifications: AMD Ryzen 7 4800H with Radeon Graphics (2.90 GHz), 16 GB RAM, and 64-bit Windows 10.

4) NECESSITY OF RETAINING THE GLOBAL BEST PARTICLE OF LDIW PHASE
Regarding the retention of the global best particle of the LDIW phase, first note that, according to the neighborhood weight update mechanism of the SOM network, the weights of the neighboring nodes around an anomalous neural node differ very little, and direct observation shows that once the iteration number reaches a certain value, the node weights lie very close to those of the trajectory particles. If the trajectory particles aggregate in the middle and latter stages, the global best particle has enough "trace particles" mapping to it. If they aggregate early, or never aggregate, the probability that such trace particles exist drops sharply; the global best particle may even become isolated from the other particles for lack of trace particles, and risks being completely erased in the reorganization operation. In order to carry forward the results of the LDIW phase, the global best particle of that phase must be kept exclusively when the modified populations are regenerated. Notably, this does not conflict with the goal of preventing the final GPSO result from falling into a local optimum.
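The protective retention described above amounts to elitism during population reorganization. A minimal sketch under assumed details (uniform redrawing of the non-elite particles; slot 0 reserved for the retained best; minimization task):

```python
import numpy as np

rng = np.random.default_rng(0)

def reorganize_population(population, fitness, lower, upper):
    """Regenerate the swarm inside the feasible box while protectively
    retaining the LDIW-phase global best particle (elitism).

    Illustrative assumption: all particles except the global best are
    redrawn uniformly in [lower, upper]; slot 0 of the new population
    keeps the retained best particle."""
    best_idx = int(np.argmin(fitness))   # best particle of a minimization task
    best = population[best_idx].copy()
    n, dim = population.shape
    new_pop = rng.uniform(lower, upper, size=(n, dim))
    new_pop[0] = best                    # protected retention of the LDIW best
    return new_pop
```

Dropping the `new_pop[0] = best` line reproduces the extreme control group used below, in which the LDIW-phase results are completely erased.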
To fully expose the effect that complete erasure of particles has on the convergence behavior of the full process and on the final computational results, an extreme control approach is taken for the control group: no recognition or reorganization operations are performed, and the corrected populations distributed over the feasible domain are generated directly by a uniform design schedule.
Four test functions are selected for the control test with D = 30, and each test function is run 20 times independently under the two modes. Fig. 9 compares results on the Rastrigin benchmark function: the control group (blue line) required more iterations and also showed deteriorated convergence. The unusual rise of the convergence curve in the middle indicates that the premium fitness value found in the LDIW stage is completely discarded, and the fitness never returns to that level in subsequent iterations.
Twenty independent control trials are conducted on each of the four benchmark functions in both modes, resulting in the following statistical results listed in Table 2.
WFV (Worst Fitness Value), BFV (Best Fitness Value) and MFV (Mean Fitness Value) denote the worst, best and average fitness values among the 20 independent tests, respectively, and MFEV (Minimum Function Error Value) is the absolute difference between the theoretical minimum of the benchmark function and the MFV:

MFEV = |F_min − MFV|

WI (Worst Iteration), BI (Best Iteration) and MI (Mean Iteration) represent the largest, smallest and mean iteration counts among the 20 independent trials. From the two tables, the adverse effects of erasing the LDIW-phase global best particles on the final results can be perceived comprehensively. First, the average performance level degrades: in the calculation of the Griewank benchmark function, the MFVs of the two modes differ by an order of magnitude, and significant MFV differences also appear for the remaining three benchmark functions. In addition, stability deteriorates, an effect confirmed by the difference in MSE.
The effect of erasure on the iteration counts is not as pronounced as that on the fitness values, but its adverse effect on the stability of the iteration number is observable across several benchmark functions, indicating that retaining the LDIW-phase global best particles is indispensable for maintaining the robustness of the algorithm throughout the whole process.
From the perspective of maintaining the coherence of the search process, the LDIW phase's global best particles can be regarded as the control point linking the two phases of the swarm search, before and after the reorganization operation. If the valid search information of the previous phase is completely eliminated, the two phases become mutually isolated swarm search processes rather than one continuous optimization. The control experiments and process analysis above show that the protective retention of the LDIW phase's global best particles is an extremely important and indispensable part of the SOMGPSO algorithm.

5) SELECTION OF THE SOM ITERATION NUMBER
How closely the weight of an activated node matches the particle vectors mapped onto it is directly linked to the iteration number of the SOM network. If the iteration number is too small, the weights can hardly converge to the true particle vector values; if it is too large, the computational overhead increases significantly. The SOM iteration number is a predefined parameter that is completely independent of all other operations and parameters in the whole process; it only affects the clustering analysis and adjustment efficiency of the SOM network, so its optimal value must be evaluated and determined. The mean Euclidean distance between node weights and particle vectors is calculated over the activated nodes under different iteration numbers, and its mean value d̄ is recorded as the basis for evaluating the optimal iteration number:

d̄ = (1/M) Σ_{j=1}^{M} (1/N_j) Σ_{i=1}^{N_j} √( Σ_k (w_{j,k} − x_{j,i,k})² )    (20)

where w_{j,k} is the k-th component of the weight vector of activated node j; x_{j,i,k} is the k-th component of the i-th particle vector on activated node j; N_j is the total number of particles on activated node j; and M is the total number of activated nodes. The Schwefel function (D = 10, 20, 30) is selected as the benchmark function for the experiment, and iteration numbers between 10 and 140, at intervals of 10, are examined. Each value is tested five times independently in each dimension, and d̄ is recorded along with its MSE; the results are listed in Table 4. The computation time of the SOM network, with its corresponding MSE, is recorded in the same way and listed in Table 5. From these two tables, it is easy to see that the accuracy and stability with which the SOM network weights fit the particle vectors within the matched nodes improve markedly at first as the iteration number grows.
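The evaluation quantity of Eq. (20) can be sketched directly (data layout and function name are illustrative assumptions): for each activated node, average the Euclidean distances from its weight vector to the particle vectors mapped onto it, then average over the nodes.

```python
import numpy as np

def mean_node_particle_distance(node_weights, particles_per_node):
    """Mean Euclidean distance d-bar between activated-node weights and
    the particle vectors mapped onto them, per Eq. (20): the average over
    nodes of the average node-to-particle distance.

    node_weights:       list of M weight vectors (one per activated node).
    particles_per_node: list of M arrays, each of shape (N_j, D)."""
    per_node = []
    for w, parts in zip(node_weights, particles_per_node):
        d = np.linalg.norm(np.asarray(parts) - np.asarray(w), axis=1)
        per_node.append(d.mean())
    return float(np.mean(per_node))
```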
However, this improvement becomes insignificant beyond a certain iteration number, and performance can even degrade due to overfitting, while the computation time grows in an approximately linear trend, which is reflected in all three dimensionalities. Considering the results together, iteration = 100 is sufficient to achieve the required fitting accuracy in a short time and with high stability for computational tasks with dimensionality up to 30.

6) COMPARISON OF PERFORMANCE IMPROVEMENT IN DIFFERENT DIMENSIONS
To gain a preliminary understanding of the advantages of the proposed SOMGPSO over traditional improved GPSOs, three non-convex functions are first selected for benchmark trials with dimensions of 10, 20 and 30, and each algorithm is tested 20 times independently at each fixed dimension. The iteration counts of the whole process obey the following rules. For SOMGPSO, the fixed iteration number of the LDIW phase is taken as Max_iter/2.5, following the guidelines and recommendations above, and the termination of the AW phase is controlled jointly by the fitness value tolerance and the maximum iteration number; the fitness value tolerance is taken as 10^-6 (10^-3 for Rosenbrock), and the maximum iteration number is 20 (for the AW phase only). For the control groups, the iteration number of AWGPSO and LDIWGPSO is matched to T, the total iteration count of the SOMGPSO process, and their performance results are reported in Table 6. Overall, the search performance of SOMGPSO is much higher than that of the two improved GPSO algorithms, AWGPSO and LDIWGPSO. When D = 10, the advantage of SOMGPSO over LDIWGPSO is not significant, and in individual tests the optimum found by SOMGPSO is even much larger than that found by LDIWGPSO. With increasing dimensionality, however, this performance difference is significantly magnified, and the difference in MFV is often an order of magnitude.
Observing further how stability changes with dimensionality: although all three algorithms show elevated MSE in high-dimensional tasks, SOMGPSO presents the smallest increment, especially in the optimization of the Rosenbrock function, where the MSE of AWGPSO and LDIWGPSO always remains at a high level, reflecting large fluctuations in their search results. The MSE of the proposed SOMGPSO stays within 10.0000 in all cases except MSE = 19.3976 at D = 20, and thus exhibits relatively high stability. In summary, the proposed SOMGPSO shows greater viability and superiority as the dimensionality of the computational model increases.

D. APPLICATION OF SOMGPSO TO BENCHMARK FUNCTIONS
The carbon emission objective function of the tunnel construction machinery system becomes non-convex under the various constraints and possesses multiple local optima. To provide a comprehensive and fair comparison demonstrating the validity and usability of the proposed SOMGPSO, ten benchmark functions were selected from the CEC benchmark function set. A brief description of these benchmark functions is given here, and the performance of SOMGPSO is investigated more comprehensively against AWGPSO and LDIWGPSO.

1) PERFORMANCE MEASURE
To verify the advantages of the proposed algorithm in terms of computational accuracy, stability and convergence characteristics, the following performance metrics are considered, some of which have been introduced previously (WFV, BFV, MFV and MFEV):
1. Number of function evaluations (NFE): chosen to measure the computational complexity of the algorithm, it is the number of times the fitness function is evaluated in a single run:
NFE = N × T    (21)
where N is the particle swarm population and T is the full-process iteration count of the algorithm.
2. Success rate (SR): used to measure the stability of the algorithm, it is the proportion of independent runs in which the function error value falls below the "threshold error":
SR = (number of successful runs / total runs) × 100%    (22)
3. Reliability rate (RR): the average of SR over all benchmark functions for one algorithm.
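The two computed metrics above, Eqs. (21) and (22), can be sketched in a few lines (function names are illustrative):

```python
def nfe(population_size, iterations):
    """Number of function evaluations, Eq. (21): one fitness evaluation
    per particle per iteration of the full process."""
    return population_size * iterations

def success_rate(errors, threshold):
    """Success rate, Eq. (22): share of independent runs whose final
    function error value falls below the threshold error, in percent."""
    hits = sum(1 for e in errors if e < threshold)
    return 100.0 * hits / len(errors)
```

For example, a swarm of 50 particles run for 100 iterations performs 5000 function evaluations; a batch of runs with final errors [1e-7, 1e-2, 1e-8, 2.0] against a threshold of 1e-6 has SR = 50%.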

2) BENCHMARK FUNCTIONS
In this study, the 10 benchmark functions listed in Table 7 are used for the performance comparison of single-objective global optimization algorithms; all are minimization tasks. Except for f1, f2 and f3, all benchmark functions are non-convex, the typical representatives being the Rosenbrock function (f4, f5), Griewank function (f6), Ackley function (f7), Levy function (f8), Rastrigin function (f9) and Schwefel function (f10), also known as multimodal functions. The "threshold error" value of each function is also listed in the table; the solution of a function is judged successful when the algorithm reaches a value smaller than the threshold error, in other words, when the algorithm passes the trial. The stopping iterations also obey the rules above, while the LDIW-phase iteration of SOMGPSO follows the Max_iter/2 rule to ensure sufficient particle evaluations.

4) COMPARISON IN TERMS OF FITNESS VALUES
From Table 8, it can be seen that for the AWGPSO and LDIWGPSO algorithms, BFV and MFV differ considerably from the theoretical minima of the ten benchmark functions, whereas for the proposed SOMGPSO the final results fit the theoretical minima well. For AWGPSO and LDIWGPSO, most MFEV values fail to fall below the threshold error of the benchmark functions: only 3 MFEV values meet the threshold error for AWGPSO and 2 for LDIWGPSO, whereas 9 MFEV values of SOMGPSO reach the threshold error, representing higher reliability. However, it should be noted that all algorithms, including SOMGPSO, perform relatively poorly on f9 and poorly on f10, which deserves further analysis.

5) SUCCESS RATE & RELIABILITY RATE
Based on the SR and RR metrics, the stability differences of the three algorithms are studied further. The SR and RR of the three GPSO algorithms over 20 independent runs are listed in Table 10. From Table 10, it can be seen that except for f10, SOMGPSO achieves considerable success on these benchmark functions, while AWGPSO and LDIWGPSO have non-zero SR on only part of them. In terms of RR, SOMGPSO is significantly higher than the other two methods, providing further evidence of its higher performance stability relative to conventional modified GPSOs.

6) CONVERGENCE CHARACTERISTICS
Since the iteration termination mechanisms of the three algorithms are highly linked, the WFV levels of the three algorithms are the same on all of these benchmark functions. To verify whether SOMGPSO converges faster, the f1-f3 benchmark functions are chosen: on these relatively simple convex optimization tasks, the algorithms can complete the solution with high probability within the maximum iterations of their separate processes, which magnifies the gap for observational evaluation. From the solutions of f1-f3, it is not difficult to see that SOMGPSO reaches the optimal solutions with fewer NFE. On the non-convex tasks, such as f6-f8 and f10 (where it misses the target but still finds relatively better solutions), this advantage is maintained even though the stopping-iteration setting may be slightly unfavorable for this control test. Continuing the discussion of why all three algorithms fail on f10: the independent variables of the Schwefel function exhibit epistasis, so its gradient direction is not aligned with the coordinate axes, which makes the optimum harder to locate. Moreover, all the algorithms reach their preset maximum iteration before converging to the minimum within the threshold error, which suggests that the stopping-iteration mechanism should be further refined.
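For reference, the Schwefel function that defeats all three algorithms here has a simple closed form; its many deep, deceptive local optima lie far from the global one, which is what makes it the hardest case (f10):

```python
import math

def schwefel(x):
    """Schwefel benchmark function (f10).

    f(x) = 418.9829*D - sum_i x_i * sin(sqrt(|x_i|)),
    with global minimum f ~= 0 at x_i ~= 420.9687 on [-500, 500]^D.
    Its best basin sits near the boundary, far from the deceptive
    local optima, which makes the global optimum hard to locate."""
    d = len(x)
    return 418.9829 * d - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)
```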
In summary, the proposed SOMGPSO has higher accuracy, stability and faster convergence rate compared with the traditional improved GPSO algorithm.

1) CASE OVERVIEW
The F4 fault section of the Wushaoling Tunnel is located south of Mount Wushaoling. Its main fault zone is 200 m long and is flanked on both sides by fault influence zones, for a total section length of 450 m. The influence zones are dominated by fractured rock with poor stability; the tunnel face is prone to collapse during excavation, but there is no obvious groundwater. The main fault zone is dominated by faulted mud gravel and breccia, judged as grade VI-V surrounding rock with poor stability, and is subject to strong high-ground-stress squeezing. After the long-bench method used for the construction of No. 5 inclined shaft led to a series of engineering problems, such as abnormal convergence deformation and support damage, the main cavern was changed to a horseshoe-shaped section with an invert arch and constructed by the three-bench method [46]. In this paper, the machinery configuration for the cyclic construction process of this section is optimized in order to obtain a balanced configuration that satisfies both the schedule and carbon emission objectives.

2) CASE ANALYSIS
According to the excavation and support process flow, the main processes that control the single cycle time can be further divided into advance support works, drilling and blasting works, excavation and mucking works, loading works, and support works.
After the advance support, drilling and jacking operations of the upper section are completed, the drilling and explosive-loading operations of the upper, middle and lower sections are carried out simultaneously, followed by smooth blasting and smoke removal, which directly require rock drills and air compressors respectively.
The first digging and loading operation forms a single part of the cycle: mucking for the upper and middle sections proceeds simultaneously, and the second digging and loading operation in the lower section follows the completion of the digging and mucking work in the upper and middle sections. The machinery required for this phase comprises the excavator (digging and loading machine) and dump trucks.
The concrete spraying operations for the upper section start at the end of the cycle; the machinery required at this point comprises the concrete spraying machine, concrete transfer pump truck, and automatic metering and mixing machine.
The advance support for the upper section runs in parallel with the concrete spraying of the lower section, which requires the concrete sprayers and concrete transfer pump truck. The mucking and loading operation is divided into two successive parts, which run in parallel with the installation of the steel frames, anchor rods and reinforcing mesh and require the drilling machine and grouting pump. The concrete spraying of the upper section runs in parallel with the steel frame and anchor rod installation of the lower section, which likewise requires the drilling machine and grouting pump. For safety, the ventilation work after smooth blasting also requires the axial flow fan.

3) COLLECTION OF NECESSARY DATA
After deconstructing all the steps in the cycle and confirming the machines and equipment to be deployed for each, the machinery and equipment available on site were counted, and their quantitative consumption performance was obtained by consulting the engineering staff involved in the project and the upstream suppliers providing equipment service support, as well as by reviewing organizational process assets (OPA) and the relevant literature [46], as shown in Table 11. The carbon emissions of mechanical energy consumption should be considered from three perspectives: energy production, transportation and use. Since the transportation and production of energy cause region-dependent differences in carbon emissions, this paper mainly uses the Chinese localized life cycle basis database (CLCD) embedded in the software. Since the geographical differences in carbon emissions from energy use are not significant, the default values from CEAD are adopted. The carbon emission factors for gasoline, diesel and electricity, obtained by integrating and aggregating the three main GHGs, are shown in Table 12.
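Once the factors of Table 12 are known, the carbon emission factor method reduces to multiplying each energy carrier's consumption by its factor and summing. A minimal sketch; the factor values used in the test are illustrative placeholders, not the CLCD/CEAD values of Table 12:

```python
def machinery_carbon(energy_use, factors):
    """Carbon-equivalent emissions of a machinery system from its energy
    consumption, via the carbon emission factor method:

        CO2e = sum over carriers of (consumption x emission factor).

    energy_use: dict carrier -> consumption (e.g. kWh or kg per cycle).
    factors:    dict carrier -> kgCO2e per unit (per Table 12; values
                passed in here are the caller's responsibility)."""
    return sum(q * factors[carrier] for carrier, q in energy_use.items())
```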

III. RESULTS
With the incorporation of encoding and decoding measures, the SOMGPSO algorithm flow becomes more complex: the number of loop iterations nested within it increases, which in turn raises its computational complexity. It is therefore necessary to select adapted parameter setting rules. For the whole process, the swarm population is N = 50, which is sufficient for the search task when the particle length (dimension) is below 40. For the LDIW and adaptive weight (AW) phases, the iteration counts are both set to 50, the acceleration coefficients c1 and c2 are both set to 2.05 under inflexible control, and the weights vary according to the rules above.
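The two inertia weight rules underlying the phases can be sketched as follows. The 0.9 to 0.4 range is the common PSO setting, assumed here rather than taken from the paper, and the AW formula is one widespread adaptive-inertia-weight variant, offered as a sketch of the AWGPSO-style rule:

```python
def ldiw(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight for the first (LDIW) phase:
    decays from w_max to w_min over t_max iterations."""
    return w_max - (w_max - w_min) * t / t_max

def adaptive_w(f, f_min, f_avg, w_max=0.9, w_min=0.4):
    """A common adaptive inertia weight for the AW phase (minimization):
    particles better than the swarm average get a smaller, exploitative
    weight scaled by how close they are to the swarm best; worse-than-
    average particles keep w_max for exploration."""
    if f <= f_avg and f_avg != f_min:
        return w_min + (w_max - w_min) * (f - f_min) / (f_avg - f_min)
    return w_max
```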
For this case, 35 objects of 5 types of machinery were selected directly as adjustable variables, while the supply and drive machines (2 further types) followed the other adjustment rule mentioned above; the particle length is therefore 35. Given the optimization model above, the preferred weights were determined by trial and error, with the final selection (0.15, 0.85). Twenty independent calculations were performed, and the configuration with the best performance was selected as the final optimal configuration. It is worth noting that under the selected condition, the fitness values of the objective function in the 20 independent trials fluctuate continuously rather than converging to a fixed or nearly fixed state. In fact, when the preferred weight configuration was set to (0.1, 0.9), (0.2, 0.8) or other configurations, several similar fitness values appeared, and sometimes all fitness values remained similar to one another, indicating that with the current configuration of (0.15, 0.85) local optima have been avoided as far as possible. Table 15 shows the statistical results of the independent trials, which further indicate the fluctuation effect under this condition. These results still cannot prove that the theoretically best fitness value and ideal configuration were obtained through this computing process, which is beyond the capability of all heuristic algorithms; however, since the final goal of this paper is to reduce carbon emissions as far as possible, the calculation results meet expectations.
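A possible form of the scalarized bi-objective fitness is sketched below, combining cycle duration and carbon emission with the preferred weights (0.15, 0.85) reported above. The normalization by reference values (original cycle duration 660 min; `carbon_ref` is a placeholder) is an assumption for illustration; the paper's exact scalarization is not reproduced here:

```python
def combined_fitness(duration, carbon, w_time=0.15, w_carbon=0.85,
                     duration_ref=660.0, carbon_ref=1.0):
    """Weighted-sum fitness of a machinery configuration (to minimize):
    each objective is normalized by a reference value, then combined
    with the preferred weights (0.15 for duration, 0.85 for carbon)."""
    return w_time * duration / duration_ref + w_carbon * carbon / carbon_ref
```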
The optimized configuration combination is as follows:

IV. DISCUSSION
According to the proposed configuration scheme, the theoretical cycle duration is 432 minutes, compared with the original 660 minutes, and the carbon emission per cycle decreases by 88.4% relative to the original cycle, a significant improvement. The highly compact energy distribution pattern is the main reason the recommended scheme reduces carbon emissions effectively: by strictly controlling the direction and flow of the energy stream and shutting down redundant internal-combustion drives in the system in a timely manner, the direct carbon emissions of the process are reduced. In addition, the optimization of consumption time is an important means of controlling carbon emissions: it appropriately reduces the transport load in the tunnel to eliminate the blockage effect, and improves efficiency while curbing the accumulation of carbon emissions in both respects, namely the number of units and their operating durations. In fact, for most tunnels constructed by the drill-and-blast method in China, the environmental performance of the excavation and construction machinery systems has been largely ignored; this optimization case can serve as a reference for subsequent similar studies. Nevertheless, only one case (excavated by the three-bench method) was considered in this study, which is insufficient to establish a universal framework for carbon emission reduction in all drill-and-blast tunneling.
As for the program, the SOMGPSO code runs slowly when the encoder and decoder modules are combined, incurring extra computing time due to the nested loops. Moreover, the procedure of SOMGPSO is somewhat tedious compared with other mainstream improved GPSOs; more adaptive strategies and fewer parameters should be considered in subsequent improvements. It is worth considering letting the SOM network recognize and adjust the particles' trajectories dynamically, with the recognition process running in parallel with the GPSO process without interruption and re-engagement. The proposed SOMGPSO requires users to determine the first-stage iteration count from prior experience, which is extremely inconvenient in application; the above improvement idea would eliminate this defect, though it poses certain challenges for the design and programming of the algorithm's procedure.

V. CONCLUSION
A further comprehensive discussion of the main work and findings of each section of this paper is presented. By setting up the control trial on whether or not to retain the global best particle of the LDIW phase in SOMGPSO, the necessity of retention is demonstrated clearly in terms of the final fitness values.
For the parameter selection of the full SOMGPSO process, some parameters can be guided by previous work, while the remainder must be defined by trial and error; the SOM iteration number is the example presented in this paper. Balancing accuracy and time consumption, 100 is selected as the default value for search tasks with dimensionality below 30, with ideal accuracy (mean d̄ of 0.3634, 0.3265 and 0.4797 for D = 10, 20 and 30, respectively) and time consumption (mean SOM CPU time of 1.9719 s, 1.8930 s and 2.4305 s for D = 10, 20 and 30, respectively).
Comprehensive comparisons of the proposed SOMGPSO and other improved GPSOs have been carried out in terms of robustness, accuracy and reliability, showing that SOMGPSO holds an obvious superiority in these aspects, especially in higher-dimensional tasks.
As for the case study and the final optimization result for the machinery system, the feasibility of the result has been ensured and validated: with a compact energy supply and accurate allocation of the machinery system, the proposed configuration is fully achievable according to the analysis, owing to the fineness of the modeling. However, a more universal framework to quantify and optimize the carbon emissions of drill-and-blast tunnels should be proposed in further study.