A Convolutional Neural Network Approach to Predicting Network Connectedness Robustness

Abstract—To quantitatively measure the connectedness robustness of a complex network, one can record a sequence of values of the remaining connectedness after a sequence of node- or edge-removal attacks. However, measuring the network connectedness robustness by attack simulations is computationally time-consuming for large-scale networked systems. In this paper, an efficient method based on a convolutional neural network (CNN) is proposed for estimating the network connectedness robustness. The new approach is motivated by two facts: 1) the adjacency matrix of a network can be converted to a gray-scale image, and CNNs are very powerful for image processing; 2) CNNs have proved very effective in predicting the controllability robustness of complex networks. Extensive experimental studies on directed and undirected, as well as synthetic and real-world, networks suggest that: 1) the proposed CNN-based methodology performs excellently in predicting the connectedness robustness of complex networks as a process; 2) it performs fairly well as an indicator of the connectedness robustness, compared to other predictive measures.

For a complex network, its connectedness is guaranteed by a sufficient number of edges that properly connect the nodes. Connectedness is necessary for the network to perform fundamental tasks such as controllability and synchronizability, although the specific measures of these functions mostly differ. Since random failures and malicious attacks, which typically destroy the connectedness of the network, are unavoidable in real-world applications, strengthening the network connectedness against such destructive failures and attacks has become a major concern [5]-[10]. Typically, destructive failures and attacks take place in the form of node- or edge-removals, which cause significant damage to the network functioning or even lead to a complete network crash. In these scenarios, the ability of a network to maintain its connectedness against failures or attacks is referred to as the connectedness robustness, or simply the robustness, in this paper.
Network attacks can be classified as random and targeted attacks, both of which can be modeled and analyzed in computer simulations. Targeted attacks aim at removing intentionally selected objects (e.g., the node with the largest degree), while random attacks perform such removals at random. Here, for targeted attacks it is presumed that a targeted node or edge is more crucial than the others in maintaining the connectedness. However, evaluating the importance of nodes or edges is computationally intensive, and often practically intractable, especially for large-scale networks. Conceptually, this requires quantifying the importance by some centrality measure, such as degree, betweenness, closeness, or eigenvector centrality [11]. A selected centrality measure is used as the indicator of (nodal or edge) importance in an attack or defence strategy. Among the existing centrality measures, degree and betweenness are the two most frequently used [12], [13]. Besides centrality, other commonly used measures of importance include neighborhood similarity [14], branch weighting [15], and structural holes [16].
From an attacker's point of view, the module-based attack strategy [17], [18] is particularly effective: it selectively attacks the inter-community nodes and edges, which have been demonstrated to be important for maintaining the connectedness among communities. Also, the damage-based attack strategy [19] uses a measure of damage to describe the destruction level of an attack, where the damage is defined as the change in the size of the largest connected component (LCC) before and after the attack. Along this line, the (normalized) size of the LCC is widely used as a measure of connectedness robustness [7]. Furthermore, it has been observed that an iterative attack-and-defend process can enhance the network robustness in an evolutionary manner [20]. It is commonly known that onion-like structured heterogeneous networks are robust against attacks [7], [21]-[23]. In this research direction, there are extensive studies on various issues regarding network robustness, including the robustness of other types of networks such as networks of networks [24], [25] and multiplex networks [26], with encouraging real-world applications in, e.g., power grids [27], [28].
Given fixed numbers of nodes and edges, the network robustness against various attacks can be improved by rewiring [21], [27], [29]-[33]. If there is no restriction on the number of edges, then, quite intuitively, properly adding extra edges can enhance the robustness [34]. Spectral measures offer easy-to-access indicators of network robustness, with which meta-heuristic algorithms can be applied to optimize the robustness [33], [35]-[39].
Regarding robustness optimization, deep neural networks provide a useful tool, having shown powerful capabilities in image processing. Successful applications of deep learning techniques include network controllability robustness prediction [40]-[42] and critical node identification [43]. As an effective kind of deep neural network [44], the convolutional neural network (CNN) is able to automatically analyze the inner features of a dataset and output the desired classification or regression results, without human interference.
Traditionally, the network robustness is evaluated by attack simulations, which, however, are extremely time-consuming computationally, especially for large-scale complex networks. The major computational cost comes from: 1) searching for the node to attack, e.g., the node with maximum betweenness; 2) calculating the connectedness measure, e.g., the size of the LCC. Both have to be computed iteratively, therefore consuming a large amount of computing resources and time. To deal with these technical problems and improve the computational efficiency, a CNN-based robustness predictor (CNN-RP) is proposed in this paper. The CNN-RP predicts the network robustness through the entire process of attacks, by computing and visualizing the (normalized) LCC size curve against node-removal attacks. Edge-removal attacks are very different in nature and therefore will be studied elsewhere.
The design of CNN-RP is motivated by the following observations: 1) although some features and indicators (e.g., spectral measures) reliably describe the overall robustness, they cannot reflect the sequential details throughout the entire attack process; 2) the detailed robustness information about the process against sequential attacks may be obtained via attack simulations, which, however, are very time-consuming and can even be infeasible; 3) complex networks can be equivalently converted to gray-scale images, and CNN techniques have proved efficient in processing such images. Here, the designed CNN-RP follows the same CNN structure used in the controllability robustness predictor [40], [42], but with different objectives and functions. Compared to controllability robustness prediction, it is more challenging to predict the connectedness robustness, since the variation of the connectedness can be greater than that of the controllability. Thus, an additional filter is designed and used, as detailed in Subsection III-B. Extensive experimental studies demonstrate that 1) the designed CNN-RP can well predict the evolving LCC size curves against sequential node-removals for both directed and undirected, synthetic and real-world networks, with good generalization ability; 2) the CNN-RP not only approximates the entire attack process, but also provides a good (or even better) predictive measure compared with the classical spectral measures.
The remainder of this paper is organized as follows: Section II reviews the measure of network connectedness robustness against destructive attacks. Section III introduces the new CNN-RP. In Section IV, experimental results are presented with analysis and comparison. Finally, Section V concludes the investigation.

II. NETWORK ROBUSTNESS
In this paper, the network connectedness robustness is measured by the normalized LCC [7]. The LCC of a directed network is its largest weakly connected subnetwork, where a directed graph is weakly connected if it remains connected after all the directed edges are changed to undirected ones. Two LCC-based robustness measures are used, one for the attack process and the other for the resultant network. The former is represented by a real vector (a normalized LCC curve) while the latter is represented by a real value.
Specifically, the measure of the network robustness in terms of a normalized LCC curve (NLC) is calculated by

s(i) = N_LCC(i) / (N − i),  i = 0, 1, …, N − 1,  (1)

where N_LCC(i) represents the number of nodes in the LCC, and s(i) is its normalized value (NLC) obtained after a total number of i nodes have been removed from the network; N is the original number of nodes in the network before being attacked.
The overall measure of the network robustness is then calculated by

s̄ = (1/N) Σ_{i=0}^{N−1} s(i).  (2)

With the above measure, for two given complex networks under the same sequential attacks, the one with a larger s̄ value is considered to have better connectedness robustness. Now, given two NLCs s_1 = [s_1(0), …, s_1(N−1)] and s_2 = [s_2(0), …, s_2(N−1)], the difference between the two curves is calculated by

ξ(i) = |s_1(i) − s_2(i)|,  i = 0, 1, …, N − 1,  (3)

where ξ = [ξ(0), ξ(1), …, ξ(N−1)] represents the sequential differences (or errors) between the two curves. Finally, the average error is calculated by

ξ̄ = (1/N) Σ_{i=0}^{N−1} ξ(i).  (4)

Thus, the vector ξ measures the errors of the NLC prediction throughout the attack process, while the scalar ξ̄ measures the overall error of the NLC prediction.
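To make the measure concrete, the following is a minimal Python sketch (not from the paper) of how the NLC of Eq. (1) and its mean can be obtained by attack simulation. The graph is given as an undirected adjacency dict, so for a directed network the caller should first symmetrize the edges (weak connectivity); the removal order is supplied by the caller, and all names here are illustrative.

```python
from collections import deque

def lcc_size(adj):
    """Size of the largest connected component, found by BFS."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        comp, queue = 0, deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best

def nlc_curve(adj, order):
    """s(i) = N_LCC(i) / (N - i) after removing the first i nodes in `order`."""
    adj = {u: set(vs) for u, vs in adj.items()}  # local mutable copy
    n = len(adj)
    curve = []
    for i, node in enumerate(order):
        curve.append(lcc_size(adj) / (n - i))    # record before the next removal
        for v in adj.pop(node):                   # remove `node` and its edges
            adj[v].discard(node)
    return curve

def overall_robustness(curve):
    """Mean of the NLC, the scalar overall robustness measure."""
    return sum(curve) / len(curve)
```

For a path 0-1-2-3, removing the middle node 1 first disconnects the network, which shows up as a dip in the curve.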

III. NETWORK ROBUSTNESS PREDICTOR
Different from the predictors for the network controllability robustness against destructive attacks [40], [42], in this paper a CNN is used to predict the connectedness robustness, which turns out to have a greater variation than the controllability robustness. An illustrative example comparing the connectedness robustness and the controllability robustness will be given later in Subsection IV-D. To deal with the large variation in the prediction, a filter is useful, which is installed in CNN-RP following the CNN output, as detailed below.

Fig. 1: The framework using CNN to predict network robustness. The input is an adjacency-matrix-converted image; the output is the predicted NLC curve.

A. Convolutional Neural Network
The general framework of the proposed CNN-RP is shown in Fig. 1, where a CNN is trained for network robustness prediction. As can be seen from Fig. 2, the structure of this CNN-RP is relatively simple, consisting of several groups of a convolutional layer, rectified linear units (ReLUs), and a max-pooling layer, where ReLU is the activation function. The structure of the CNN-RP is shown in Fig. 2, and the detailed parameter settings are given in Table I. The VGG architecture [45] is employed, which incorporates a greater network depth and a smaller kernel size. The 7 feature-map (FM) processing layers are denoted as FM 1 to FM 7, respectively.
In the simulations, for inputs of size around 1000 × 1000 as in the experiments reported below, the number of FM groups is set to 7; it should be set greater for inputs of larger sizes.
Each FM group consists of a convolutional layer, a ReLU, and a max-pooling layer. Convolutional layers are adopted here because of their efficiency in dealing with large-sized images. ReLU (with f(x) = max{0, x}) is a widely used activation function for 2D data [46]. The pooling layers reduce the dimensions from the input to the next layer. Since only the lighter pixels of the images are of interest in this work, max pooling is used, which works especially well when the image background is dark. Following the 7 FM groups, two fully-connected layers are configured to process the output.
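As a rough illustration of how the spatial size shrinks through the 7 FM groups, the following sketch assumes 'same'-padded convolutions and 2 × 2 max pooling that halves (with floor) each dimension, a common VGG-style convention; the paper's own size formula for N_i may differ, so this is an assumption, not the authors' exact setting.

```python
def feature_map_sizes(n, groups=7):
    """Spatial size after each conv + max-pool group, assuming the
    convolution preserves size and each 2x2 max pooling halves it."""
    sizes = []
    for _ in range(groups):
        n = n // 2  # each pooling layer halves the spatial dimension (floor)
        sizes.append(n)
    return sizes
```

Under this assumption, a 1000 × 1000 input shrinks to 7 × 7 after the 7 groups, which explains why more FM groups would be needed for much larger inputs.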
The mean-squared error between the predicted NLC and the true NLC is employed as the loss function:

L = (1/N) ||ŝ − s||²,  (5)

where ŝ(i) is the i-th value of the predicted NLC, s(i) is the i-th value of the true NLC obtained by simulation, and ||·|| represents the Euclidean norm. The training process of CNN-RP aims to minimize Eq. (5).

B. Filter for LCC-curves
Due to the nature of data-driven algorithms, the CNN may output some logically unreasonable data. For instance, the number of nodes in the LCC of a network under attack must be monotonically non-increasing, but the output of CNN-RP may violate this principle. To regularize the output of CNN-RP, a filter is used, designed based on prior knowledge. Specifically, the upper and lower bounds of the LCC size are imposed onto the output of CNN-RP, and logically unreasonable data are replaced by interpolated values. The filter consists of two parts: the first part limits the upper and lower bounds, while the second enforces the monotonic non-increase, as formulated by Eqs. (6) and (7), respectively.
After each attack, the number of nodes in the LCC of the remaining network is at least 1 and at most the current (temporal) network size. Thus, each LCC value must be constrained by

1 ≤ N_LCC(i) ≤ N − i,  (6)

where N_LCC(i) represents the number of nodes in the LCC, as in Eq. (1).
Regarding a local increase in the size of the LCC: if there is any position in the LCC curve returned by CNN-RP where the value is greater than its left-neighboring value (a local increase), then the interpolation formulated by Eq. (7) is applied. Specifically, suppose that N_LCC(k) > N_LCC(i) with k ≥ i + 1 is detected, which violates the monotonically non-increasing condition. In this situation, the algorithm continues to search along j = k + 1, k + 2, …, until N_LCC(j) < N_LCC(i) is detected. Then, an interpolation is applied as follows:

N_LCC(m) = N_LCC(i) + (m − i) · (N_LCC(j) − N_LCC(i)) / (j − i),  m = i + 1, …, j − 1,  (7)

where the integers i, j, and k satisfy k ≥ i + 1, k ≤ j − 1, and i ≤ j − 2. An example of the interpolation is shown in Fig. 3.
Note that: 1) the filter does not check the correctness of the predicted data, but only deals with the logically unreasonable data; for example, it does not check whether N_LCC(k) in Fig. 3 is overestimated or underestimated, since the true values are unknown to the filter; 2) only the size of the LCC is monotonically non-increasing during sequential attacks; the normalized LCC curve of Eq. (1) is not.
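The two-part filter can be sketched as follows. This is a minimal Python illustration under stated assumptions, not the authors' implementation: it clips each value into the bounds of Eq. (6), then repairs local increases by linear interpolation in the spirit of Eq. (7), using a non-strict stopping condition and flattening the tail when no later anchor value exists.

```python
def filter_lcc(pred, n):
    """Filter a predicted LCC-size curve `pred` of length n:
    1) clip each value into [1, n - i]  (Eq. (6));
    2) enforce monotone non-increase by linear interpolation (Eq. (7))."""
    # Part 1: upper and lower bounds.
    out = [min(max(v, 1), n - i) for i, v in enumerate(pred)]
    # Part 2: whenever out[i+1] > out[i], search forward for the first j
    # with out[j] <= out[i] and interpolate over the gap.
    i = 0
    while i < len(out) - 1:
        if out[i + 1] > out[i]:
            j = i + 1
            while j < len(out) and out[j] > out[i]:
                j += 1
            if j == len(out):              # no anchor found: flatten the tail
                for m in range(i + 1, j):
                    out[m] = out[i]
            else:                           # linear interpolation between i and j
                for m in range(i + 1, j):
                    out[m] = out[i] + (m - i) * (out[j] - out[i]) / (j - i)
        i += 1
    return out
```

For instance, the curve [5, 2, 3, 3, 1] on a 5-node network is first clipped to [5, 2, 3, 2, 1] by the bounds and then repaired to the monotone curve [5, 2, 2, 2, 1].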

IV. EXPERIMENTAL STUDIES
The performance of CNN-RP is demonstrated by extensive numerical experiments.
Four representative synthetic (directed and undirected) network models are simulated: the Erdös-Rényi (ER) random-graph [47], generic scale-free (SF) [48]-[50], q-snapback (QS) [51], and Newman-Watts small-world (SW) [52] networks. The detailed generation methods for these network models can be found in [40] and [42]. CNN-RP is trained for predicting the network robustness using data collected from these synthetic networks, and is then tested on synthetic network data from the same or different distributions, as well as on 12 real-world networks.
Specifically, for directed networks, the following four cases are studied: 1) the training and testing data are drawn from the same dataset; 2) the testing data are drawn from a different dataset than the training samples (with different average degrees); 3) the CNN-RP trained on synthetic network data is tested on 12 real-world networks, and the study of the first case is also extended to undirected networks; 4) CNN-RP is compared to the spectral measures in predicting the overall network robustness under the same attacks.
In the experiments, the network size is set to N = 1000 for synthetic networks, while for the real-world networks the actual data sizes are used.
The training data are drawn from a set of randomly-generated network instances, where the average degree k is set to 5, 8, and 10, respectively. The total number of training samples is 9600 = 4 × 3 × 800, covering 4 topologies, 3 average degrees, and 800 random instances for each configuration. Another set of instances, with average degree k set to 4, 7, and 9, respectively, is used for the case where the training and testing data are taken from different distributions.
For the real-world networks used, basic information is summarized in Table II. Three node-removal attack strategies are simulated, namely the random attack (RA), the targeted betweenness-based attack (TB), and the targeted degree-based attack (TD). RA removes randomly-selected nodes, while TB and TD remove the nodes with maximum betweenness and maximum degree, respectively. For TB and TD, if two or more nodes share the same maximum value (of betweenness or degree), one of them is randomly selected for removal.
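The TD strategy described above, including random tie-breaking among maximum-degree nodes, can be sketched in a few lines of Python. This is an illustrative implementation (not the authors' code) on an undirected adjacency dict; TB would be analogous with betweenness recomputed at each step.

```python
import random

def degree_attack_order(adj):
    """Targeted degree-based (TD) attack: repeatedly remove a node of
    maximum degree, breaking ties uniformly at random."""
    adj = {u: set(vs) for u, vs in adj.items()}  # local mutable copy
    order = []
    while adj:
        dmax = max(len(vs) for vs in adj.values())
        candidates = [u for u, vs in adj.items() if len(vs) == dmax]
        u = random.choice(candidates)            # random tie-breaking
        order.append(u)
        for v in adj.pop(u):                     # remove u and its edges
            adj[v].discard(u)
    return order
```

On a star graph, the hub is always removed first, after which the isolated leaves are removed in random order.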
The experiments are performed on a PC with an Intel(R) Core i7-8750H CPU @ 2.20GHz and 16 GB of RAM, running the Windows 10 Home 64-bit operating system.

In the figures, ξ̄ represents the prediction error calculated by Eq. (3), and σ represents the standard deviation of the randomly collected testing data; the shadow in the same color represents the range of the standard deviation. These figures show that CNN-RP predicts the NLCs well for ER, SF, SW, and QS networks, not only in the general shapes but also in details such as the turning points of the curves. The prediction error is small, though slightly higher than the standard deviation of the testing data. In addition, compared to attack simulations, CNN-RP returns the network connectedness robustness within a significantly shorter run time. For example, for ER networks with N = 1000 and k = 5 under random attacks, the average run time of an attack simulation is seconds, while it is only 0.12 second for CNN-RP.
In contrast to Figs. 4, 6, and 7, which show that ER, SW, and QS networks maintain good robustness against random and targeted attacks, Fig. 5 shows that SF networks are more fragile than the other three when the network sizes are the same. Nevertheless, in all cases, CNN-RP predicts the NLCs well. The overall prediction error is small, but relatively large in the period when the network becomes drastically disconnected (where the curve drops abruptly).

2) Training and testing data from different distributions: Fig. 9 shows the results of CNN-RP predicting the NLCs of networks with average degree k = 4, 7, and 9, respectively, under random attacks. Table III shows the prediction error ξ̄ and the standard deviation σ of the testing data. Together with Fig. 8, the overall errors and standard deviation values are mostly of the same order of magnitude, about 10^-2. For SF networks, the prediction errors are slightly lower than the standard deviations, while for ER, SW, and QS networks they are slightly higher.

B. Undirected and Real-world Networks
Fig. 10 shows the results of CNN-RP predicting the robustness of 12 undirected networks under RA. Again, CNN-RP shows competitive performance with a low error level. Here, the CNN-RP is newly trained using a set of undirected networks as the training data.
Fig. 11 shows the results of CNN-RP predicting the robustness of 12 real-world networks under RA, using the CNN-RP trained on the synthetic networks as described in Subsection IV-A. Since the sizes of some real-world networks are slightly larger than 1000, as shown in Table II, resizing is performed on the graph-converted images: a pair consisting of one row and the corresponding column is randomly picked and removed until the size reaches N = 1000. For each network, the random resizing is repeated 20 times, and the prediction results and errors are averaged.
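The resizing step above can be sketched as follows. This is a minimal illustration (not the authors' code): repeatedly deleting random row/column pairs is equivalent to sampling the subset of indices to keep in one shot, which is what this sketch does on a plain list-of-lists matrix.

```python
import random

def resize_adjacency(A, target):
    """Randomly delete matching row/column pairs from the square matrix A
    (list of lists) until it has `target` rows, mirroring the image
    resizing used for networks slightly larger than the trained size."""
    n = len(A)
    keep = sorted(random.sample(range(n), target))  # indices that survive
    return [[A[i][j] for j in keep] for i in keep]
```

Because rows and columns are removed in matching pairs, the result is still a valid adjacency matrix of the induced subnetwork on the kept nodes.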
The figure shows that CNN-RP can predict the rough contour, while the details of the NLCs are not well revealed. This implies a lack of real-world data in the training set. However, choosing representative real-world data for training is itself a non-trivial problem.

C. Compared to Spectral Measures
Spectral measures are commonly used to predict or quantify the network robustness regarding connectedness. Here, 6 typical spectral measures are compared in predicting the network robustness: the spectral radius (SR), spectral gap (SG), natural connectivity (NC), algebraic connectivity (AC), effective resistance (ERe), and spanning tree count (STC). Details (definitions and calculations) of these spectral measures can be found in, e.g., [32]. In the above-discussed comparisons, CNN-RP is used to predict the entire NLC, which can be reduced to a scalar by taking its mean value.
In this work, the above predictive measures (namely, SR, SG, NC, AC, ERe, STC, and CNN-RP) are used to predict the ordinal ranks of network robustness. As mentioned, there are 4 network types (ER, SF, SW, and QS). For each type, there are 5 average degrees (k = 5, 7, 8, 9, 10), and for each network type and each average degree there are 100 randomly-generated instances, giving 2000 = 4 × 5 × 100 instances in total. The rank error is defined element-wise as σ_r = |rl − rl_t|, where rl represents the predicted rank-list (by either a spectral measure or CNN-RP), and rl_t represents the true rank-list obtained from simulations. The resulting rank error information is summarized in Table IV. For example, given two predicted rank-lists, rl_1 = [1, 4, 5, 3, 2] and rl_2 = [5, 1, 2, 4, 3], and a true rank-list rl_t = [2, 1, 5, 4, 3], the rank errors are σ_r1 = [1, 3, 0, 1, 1] and σ_r2 = [3, 0, 3, 0, 0], respectively. The numbers of '0' entries in σ_r1 and σ_r2 are counted as the 'correct rank' in the table; the 'average rank error', 'max rank error', and 'min rank error' are calculated accordingly. Moreover, the number of network instances that are predicted to be within the top 10% (in ordinal rank of connectedness robustness) and are truly within the top 10% is also counted.

As shown in Table IV, AC achieves the minimum 'average rank error' of 190.72, followed by CNN-RP with 272.44. AC also obtains the smallest 'max rank error', again followed by CNN-RP. Only AC, ERe, and CNN-RP achieve a 'min rank error' of 0, implying that each of these measures predicts at least one rank exactly; CNN-RP predicts 3 ranks correctly. Overall, the predictive measures AC and STC, as well as the proposed CNN-RP, return good prediction results, better than the other spectral measures. More importantly, CNN-RP returns not only the predictive results, but also predictive values throughout the entire LCC-changing process, while the spectral measures return only a single quantitative value. However, CNN-RP requires a substantial amount of training data, while the spectral measures do not.
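The rank-error statistics above can be reproduced with a few lines of Python; this sketch simply takes element-wise absolute differences between a predicted rank-list and the true one, matching the worked example in the text.

```python
def rank_errors(predicted, true):
    """Element-wise absolute rank differences between a predicted
    rank-list and the true rank-list."""
    return [abs(p - t) for p, t in zip(predicted, true)]

def rank_summary(predicted, true):
    """Summary statistics as used in the comparison table."""
    err = rank_errors(predicted, true)
    return {
        "correct rank": sum(e == 0 for e in err),  # number of exact ranks
        "average rank error": sum(err) / len(err),
        "max rank error": max(err),
        "min rank error": min(err),
    }
```

With rl_2 = [5, 1, 2, 4, 3] and rl_t = [2, 1, 5, 4, 3], this yields the error list [3, 0, 3, 0, 0] and a 'correct rank' count of 3, as in the text.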

D. Compared to Predictor for Controllability Robustness
CNN-RP uses the same CNN structure as the predictor for controllability robustness (PCR) [40], and its computational complexity is similar to that of PCR. Both CNN-RP and PCR use a single CNN to perform the regression task for all the networks, but the task of CNN-RP is more difficult, since the variation of the LCC is greater.
Fig. 12 shows an example comparing the connectedness robustness and the controllability robustness. In Fig. 12 (a), the network requires a proportion 4/6 of driver nodes and has a proportion 6/6 of its nodes in the LCC; in Fig. 12 (b), it requires a proportion 5/5 of driver nodes and has a proportion 1/5 in the LCC. The change of "controllability" from 0.667 to 1 is not as drastic as the change of "connectedness" from 1 to 0.2. Removing a node increases the number of driver nodes by at most 1 regarding the controllability, but it may reduce the number of nodes in the LCC by a number as large as N regarding the connectedness. The installed filter helps relieve this variation burden in the connectedness robustness prediction. Note that PCR obtains an average error clearly lower than the standard deviation of the testing data, while CNN-RP obtains an average error slightly higher than the standard deviation on the testing dataset.
Conventional spectral measures for predicting the connectedness robustness have been developed over a long period, while there is no evidence that these spectra are suitable for predicting the controllability robustness. On the other hand, CNNs are effective and efficient in predicting many general features and performances of networked systems that have no analytical solutions. In fact, in the comparison discussed in [42], the CNN methods outperform the spectral measures in predicting the controllability robustness. In the present work, CNN-RP achieves the overall rank-2 performance, following the algebraic connectivity, but it performs better than all the other spectral measures. Therefore, the results obtained in Subsection IV-C are satisfactory and indeed quite encouraging.

E. Utilities of the Filter
The purpose of the installed filter is to filter out the unreasonable data predicted by the CNN. Fig. 13 (a) shows the LCC predictions with and without the filter, respectively. Clearly, without the filter, the blue curve violates the fact that the number of nodes in the LCC of a network under attack must be monotonically non-increasing. In contrast, the green curve filters out these unreasonable data, becoming closer to the true curve. It is worth mentioning that although the number of nodes in the LCC is monotonically non-increasing, the NLC curve is not, as illustrated in Fig. 13 (b) when δ approaches 1. Although a precision check is not the purpose of the filter, it is observed that the prediction precision can be improved after installing the filter. Table V shows a comparison of the CNN-RP predictions, where ξ̄ (see Eq. (4)) represents the average error of the prediction and ∆ξ̄ represents the average error difference with and without the filter.

As shown in Fig. 14, the generation mechanism of synthetic networks may impose some visible features onto the adjacency-matrix-converted images. For example, for an SF network, due to the preferential-attachment mechanism, the 'old' nodes (with smaller node indices) have higher degrees, and thus there is a bright spark in the upper-left corner, as shown in Fig. 14 (a). These features can be removed by performing random shuffling, as shown in Fig. 14 (b), which means randomly exchanging the rows and columns of the adjacency matrices. The simulation results in [42] show that the existence of these visible features does not affect the CNN performance in either network classification or controllability robustness prediction. Note that exchanging the rows and columns of an adjacency matrix only affects the image, not the network topology.
In the following experiment, the CNN-RP performance is investigated when the training data are unshuffled while the testing data are shuffled. Let n_sh be the number of random shuffles; n_sh = 1 means that a randomly selected pair of nodes exchange their indices (namely, their rows and columns in the adjacency matrix are exchanged). Table VI shows that the average prediction error, calculated by Eq. (4), is generally not sensitive to the shuffling of the adjacency matrices. Specifically, for SF networks, the prediction error becomes larger only when n_sh = 500; for QS networks, the prediction is degraded when the input is the transpose of the original image. As can be seen from Fig. 14 (e), the QS transpose image is significantly different from the QS unshuffled image (although the network topology remains the same), while the SF transpose image is not significantly different from the SF unshuffled image. The degraded performance is likely caused by this significant image difference. Nevertheless, although the images are clearly different after shuffling, CNN-RP still performs well on these shuffled images, and the number of shuffles generally does not affect the prediction error.
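The shuffling operation described above can be sketched as follows. This is an illustrative Python version (not the authors' code): each shuffle swaps a random pair of rows and the same pair of columns, which relabels two nodes and therefore changes the image but not the topology.

```python
import random

def shuffle_adjacency(A, n_sh, seed=None):
    """Apply n_sh random index swaps to the square matrix A: each swap
    exchanges a random pair of rows AND the same pair of columns, so the
    image changes while the underlying network topology does not."""
    rng = random.Random(seed)
    A = [row[:] for row in A]  # copy, leave the input untouched
    n = len(A)
    for _ in range(n_sh):
        i, j = rng.sample(range(n), 2)
        A[i], A[j] = A[j], A[i]                  # swap rows i and j
        for row in A:
            row[i], row[j] = row[j], row[i]      # swap columns i and j
    return A
```

Since shuffling is only a node relabeling, topology invariants such as the degree sequence and the total number of edges are preserved, which is easy to verify.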

V. CONCLUSIONS
This paper proposes a fast and effective approach to predicting the connectedness robustness of complex networks against node-removal attacks. Conventionally, the network robustness is determined by attack simulations, from which a sequence of measure values is collected to record the connectedness of the remaining network after a sequence of attacks; this is computationally very time-consuming when the network size is large. In this paper, CNN-RP is proposed to predict the connectedness robustness of various complex networks, building on the successful applications of CNNs to image processing and network controllability robustness prediction. Extensive numerical experiments on directed and undirected, synthetic and real-world networks demonstrate the effectiveness of CNN-RP: 1) CNN-RP can predict the network connectedness robustness with a low average error, of the same order of magnitude as the standard deviation of the testing dataset; 2) the CNN-based predictor provides a good, and sometimes even better, predictive measure compared with the traditionally powerful spectral measures. This paper demonstrates once again that the CNN-based prediction technique has good potential for generalization, with a wide range of applications to complex networks.

Fig. 2: The structure of the CNN-RP. FM stands for feature map and FC for fully connected. The input is the adjacency-matrix-converted image; the output is a 1 × N vector that represents the predicted LCC curve. The data size N_i = N/(i + 1), i = 1, 2, …, 7. The concatenation layer reshapes the matrix to a vector, from FM 7 to FC 1, i.e., N_FC1 = N_7 × N_7 × 512. N_FC2 is a hyperparameter with N_FC2 ∈ (N_FC1, N). N_FC2 = 4096 is always used for the networks of size N = 1000 in this paper.
A. Directed Synthetic Networks

1) Training and testing data from the same dataset: Figs. 4-7 show the results when the average degree k is set to 5, 8, and 10, respectively, for both training and testing data. In each figure, pv represents the CNN-RP predicted curve and tv represents the true curve obtained by attack simulations.

Fig. 4 :
Fig. 4: [color online] Results of CNN-RP NLC prediction for ER networks under RA, TD, and TB, respectively. δ represents the proportion of removed nodes; s(δ) represents the ratio of the LCC size versus the current network size, as shown in Eq. (1).

Fig. 5 :
Fig. 5: [color online] Results of CNN-RP NLC prediction for SF networks under RA, TD, and TB, respectively.δ represents the proportion of removed nodes; s(δ) represents the ratio of LCC versus the current network size, as shown in Eq. (1).

Fig. 6 :
Fig. 6: [color online] Results of CNN-RP NLC prediction for SW networks under RA, TD, and TB, respectively. δ represents the proportion of removed nodes; s(δ) represents the ratio of the LCC size versus the current network size, as shown in Eq. (1).

Fig. 8 :
Fig. 8: [color online] Comparison of the mean prediction error (ξ̄) versus the standard deviation (σ) of the testing data. The average degree for both training and testing data is set to k = 5, 8, and 10, respectively.

Fig. 9 :
Fig. 9: [color online] Results of CNN-RP NLC prediction for synthetic networks under RA, where the testing data (k = 4, 7, 9) are different from the training data (k = 5, 8, 10). δ represents the proportion of removed nodes; s(δ) represents the ratio of the LCC size versus the current network size, as shown in Eq. (1).

Fig. 10 :
Fig. 10: [color online] Results of CNN-RP NLC prediction for synthetic undirected networks under RA, where the average degrees are set to k = 5, 8, and 10, respectively.δ represents the proportion of removed nodes; s(δ) represents the ratio of LCC versus the current network size, as shown in Eq. (1).

Fig. 11 :
Fig. 11: [color online] Results of CNN-RP NLC prediction for real-world networks under RA, where δ represents the proportion of removed nodes; s(δ) represents the ratio of the LCC size versus the current network size, as shown in Eq. (1). Basic information on these networks is presented in Table II.

Fig. 12 :
Fig. 12: An example of the difference between connectedness robustness and controllability robustness: (a) a weakly connected network that has 6 nodes and requires 4 driver nodes; (b) after the hub node is removed, it becomes a network of 5 isolated nodes that requires 5 driver nodes.

Fig. 13 :
Fig. 13: Comparison of the predictions with and without the filter: (a) LCC prediction and (b) NLC prediction. The network is an ER network with k = 8, under random attacks.

TABLE I :
Parameters in seven groups of convolutional layers.

TABLE III :
The mean prediction error and the standard deviation of the testing data. The average degree of the training data is set to k = 5, 8, and 10, respectively, while that of the testing data is set to k = 4, 7, and 9, respectively.

TABLE IV :
Comparison of the prediction error information for the 7 predictive measures. Bold numbers indicate the best-performing predictive measures.

TABLE V :
Comparison of the average errors with and without the filter, for ER networks with k = 8, under random attacks.

TABLE VI :
Average error (ξ̄) of the CNN-RP prediction, as the number of random shuffles n_sh changes. The networks have average degree k = 8, under random attacks.