Fast 3D visualization of massive geological data based on clustering index fusion

With the development of 3D visualization technology, the amount of seismic data is increasing, and the interactive display of big data faces severe challenges. Because traditional volume rendering methods cannot load large-scale data entirely into memory owing to hardware limitations, a visualization method based on variational deep embedding clustering fused with the Hilbert R-tree is proposed to solve the slow display and stuttering issues that arise when rendering massive seismic data. By constructing an efficient data index structure, deep clustering algorithms and space-filling curves are integrated into the data structure to improve indexing efficiency. In addition, the method combines time-forecasting, data-scheduling, and loading modules to improve the accuracy and real-time display rate of the data, thereby improving the stability of 3D visualization of large-scale seismic data. Real geological data are used as the experimental dataset, and existing index structures and time-series prediction methods are compared and analyzed. The experimental results indicate that the index time of the variational deep embedded clustering Hilbert R-tree (VDEC-HRT) is reduced by up to 55.67% compared with the K-means Hilbert R-tree (KHRT), and the viewpoint prediction accuracy of the proposed method is improved by up to 22.7% compared with the Lagrange interpolation algorithm. The overall rendering performance and quality of the system achieve the expected results. Our experiments prove the feasibility and effectiveness of the proposed scheme for the visualization of large-scale seismic data.


I. INTRODUCTION
Three-dimensional (3D) visualization technology has always been an indispensable part of the development of computer graphics. It is an effective method for the multi-dimensional presentation and analysis of data objects in various industries, such as medicine, remote sensing, geology, and oil and gas exploration [1][2][3][4]. Among these, the volume-rendering algorithm [5][6][7] is applied in geological exploration, where it can clearly depict the internal levels of detail and characteristics of the geological body and provides a data research platform for researchers in related fields.
However, when dealing with such varied data, the development of 3D visualization faces several challenges. For example, traditional volume rendering algorithms are relatively complex and require a large amount of memory, necessitating relatively advanced computer hardware. In addition, for large-scale volume data, the calculation speed is slow, delays arise in the response time of interactive displays, and browsing lags, among other issues. Therefore, many 3D visualization solutions have certain limitations.
Currently, a major problem in the 3D visualization process is that large-scale data cannot be displayed quickly and with high quality using traditional volume rendering technology. Many related optimization solutions have been proposed. The literature [8][9][10][11][12][13][14][15] proposes modified R-tree structures to improve query efficiency. The literature [16][17][18] further optimizes the Hilbert R-tree structure based on the R-tree; however, this method has shortcomings in processing large-scale data. The main problem is that mapping with a space-filling curve generates significant overlapping space during the construction of the tree structure, which affects the efficiency of data retrieval. Considering the problems of the Hilbert R-tree, the literature [19][20] shows that the coverage overlap between nodes can be reduced to a certain extent by clustering, enabling the formation of a compact and efficient data structure. Beyond the data structure itself, there are also solutions that address display lag when the data object is large. The Lagrange interpolation algorithm is used to predict viewpoint trajectories, with different interpolation steps determining the final prediction accuracy and rendering effect. Moreover, deep learning models [21][22][23][24][25][26] are also used to predict viewpoint trajectories, improving prediction accuracy. The literature [27][28][29][30] introduces view-frustum clipping algorithms into the real-time rendering of large-scale terrain, clipping data objects as the viewpoint range changes so that data objects are loaded quickly and accurately. In addition, level-of-detail (LOD) technology [31][32][33][34] reduces the detail of the data according to the viewpoint position and object distance, thus improving rendering efficiency.
To this end, this study proposes a fast 3D visualization method based on deep clustering for massive seismic data. First, in terms of the data structure, deep clustering is adopted to reduce the partial overlap of the tree structure, improving the efficiency of overall index traversal. Moreover, a deep learning time-series model is constructed to accurately predict the viewpoint position, and an improved data scheduling scheme accelerates volume rendering; this strategy reduces operational complexity, loads and renders potential data in advance, and avoids sluggish browsing. Together, these techniques improve visualization efficiency in several ways to achieve rapid 3D visualization of massive seismic data. The experiments prove that the proposed scheme is feasible and has research value.

II. METHOD
This study proposes a method that combines deep clustering with an efficient index structure, uses deep learning to predict the viewpoint trajectory, and applies an improved field-of-view culling technique to the rapid 3D visualization of massive seismic data. As shown in Figure 1, the algorithm is mainly divided into three modules: a) the establishment of an efficient data index structure, b) the prediction of the motion trajectory of the viewpoint, and c) the scheduling and loading of volume data under the field-of-view culling technique. First, the seismic data file is read, the original seismic data format is mapped onto the 3D spatial structure, and the data are divided into sub-blocks bounded by minimum bounding cubes. Then, mutual information maximization is applied to clearly distinguish the samples in these datasets, the Hilbert curve is used to reduce the dimensionality, and variational deep embedded clustering (VDEC) performs the clustering operation; the code values of the cluster centers are then used to build the Hilbert R-tree. The next module determines the coordinates of the current viewpoint position. This process has two branches: the first uses the frustum model to crop and render the spatial data corresponding to the current viewpoint position, and the second performs prediction based on the current viewpoint position. The position coordinates of the next viewpoint are predicted, the frustum model is applied at the predicted position, and the potential data are loaded and rendered by comparing the divided regions. The new viewpoint coordinates are then re-determined to draw the next frame.
The following subsections introduce the relevant algorithms of each module.

A. INDEX STRUCTURE OF DATA
Considering the index inefficiency caused by the scaling up of data, improving the hardware alone is insufficient; it is more effective to address the rapid 3D visualization of massive seismic data by improving the algorithm and combining software and hardware optimizations. The index module in this study uses the Hilbert R-tree structure: the Hilbert space-filling curve passes through the high-dimensional data space in a fixed pattern, encoding and sorting the location coordinates into one-dimensional code values; adjacent elements are then grouped into minimum bounding rectangles, and nodes are merged upward into progressively larger bounding regions to form the Hilbert R-tree. However, when the Hilbert R-tree structure is applied to large-volume data, spatial overlap between nodes still occurs. In other words, within the same leaf node there may be spatial data objects that do not originally belong to the same category but whose code values are adjacent after the space-filling-curve transformation, which is essentially a clustering problem. For large-scale datasets, the data with relatively similar structure can first be gathered using a suitable clustering algorithm to avoid the misclassification of adjacent code values after the subsequent conversion and to enable fast indexing.
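To illustrate how a space-filling curve linearizes spatial coordinates into sortable one-dimensional codes, the following sketch implements the classic 2D Hilbert mapping (the method in this paper uses the 3D analogue on MBC block centers); the function names and the grid size `n` are illustrative assumptions, not from the paper.

```python
def _rot(n, x, y, rx, ry):
    """Rotate/reflect a quadrant so the curve pattern recurses correctly."""
    if ry == 0:
        if rx == 1:
            x = n - 1 - x
            y = n - 1 - y
        x, y = y, x
    return x, y


def xy2d(n, x, y):
    """Map cell (x, y) on an n-by-n grid (n a power of two) to its
    distance d along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)       # quadrant contribution to the code
        x, y = _rot(n, x, y, rx, ry)
        s //= 2
    return d
```

Sorting block centers by such code values keeps spatially adjacent blocks adjacent in the sorted order, which is the property the Hilbert R-tree construction relies on.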

1. VARIATIONAL DEEP EMBEDDING CLUSTERING
To optimize the indexing technology, this study uses variational deep embedding clustering (VDEC). Compared with traditional clustering algorithms, on the one hand it uses a variational autoencoder (VAE) [35][36] to model the input data distribution with an unsupervised algorithm; on the other hand, it learns the feature representation and cluster assignment of the data in the latent variable space through deep embedded clustering (DEC) and iteratively optimizes the objective, thereby improving clustering performance. The VAE can approximate the true high-dimensional distribution of complex data in an unsupervised manner and can reconstruct the data characteristics from the latent variable space. Given a dataset X = {x_1, x_2, ..., x_n} ∈ R^{n×d}, a latent variable z is sampled from the prior distribution p(z) over the latent feature space, and the reconstruction x̂ is generated by p(x|z). We denote the weights and biases of the encoder by φ and those of the decoder by θ. VDEC uses the autoencoder as the network architecture and the clustering-assignment loss as a regularizer. The network parameters are initialized through the autoencoder, and a two-layer network is defined as

x̃ = Dropout(x), h = g(W_1 x̃ + b_1)
h̃ = Dropout(h), y = g(W_2 h̃ + b_2)

where Dropout(·) randomly sets part of the input dimensions to zero, g is the activation function of the encoder, and the model parameters φ are W_1, b_1, W_2, b_2. The encoder is first pre-trained by minimizing the reconstruction loss of the autoencoder, after which the decoder part is discarded. For the given dataset X, the corresponding features z_i = f_φ(x_i) are obtained through the initial mapping f_φ between the data space and the feature space; the algorithm then iteratively improves the clustering by minimizing the divergence between the soft label distribution and the auxiliary target distribution.
To effectively improve clustering performance so that each sample can be clearly distinguished within the entire dataset, the concept of mutual information maximization [37] is introduced into the VAE to identify the most distinctive information of each sample. Let p(x) be the distribution of the original samples; the larger the divergence between p(z|x)p(x) and p(z)p(x), the higher the correlation between x and z. This means that for each data point x, the encoder p(z|x) can encode a unique z, and the optimization goal of the feature encoder is to maximize the mutual information:

I(x; z) = KL( p(z|x)p(x) ‖ p(z)p(x) ) (2)

Because the true posterior distribution is difficult to calculate, the approximate posterior q_φ(z|x) is introduced to estimate p(z|x), which requires minimizing

KL( q_φ(z|x) ‖ p(z|x) ) (3)

From equations (2) and (3), the total objective of the encoder is obtained. Using Bayes' rule and the non-negativity of the KL divergence, the evidence lower bound of the log-likelihood follows:

log p(x) ≥ E_{q_φ(z|x)}[ log p(x|z) ] − KL( q_φ(z|x) ‖ p(z) ) (4)

The latent variable z is defined to obey a standard normal prior:

p(z) ∼ N(z; 0, I) (11)

The variational encoding network outputs the two parameter vectors of the latent distribution, the mean μ and the variance σ², in the hidden layer. The reparameterization trick is used to sample z from the latent distribution space:

z = μ + σ ⊙ ε, ε ∼ N(0, 1) (13)

The sampled z is input into the generative model p(x|z) to generate a new x̂, and the KL divergence between q_φ(z|x) and the prior distribution p(z) is taken as the loss of the encoding model:

L_KL = KL( q_φ(z|x) ‖ p(z) ) (14)

For the generative model, the reconstruction error of the decoder is defined as its loss:

L_rec = ‖x − x̂‖² (15)

The encoder part of the autoencoder is thus trained to produce the learned feature representation z, and the similarity between it and the cluster centers is computed.
We define the similarity (soft assignment) q_ij between feature z_i and cluster center μ_j, and the auxiliary target distribution p_ij, as follows:

q_ij = (1 + ‖z_i − μ_j‖²)^{−1} / Σ_{j'} (1 + ‖z_i − μ_{j'}‖²)^{−1} (16)

p_ij = (q_ij² / Σ_i q_ij) / Σ_{j'} (q_{ij'}² / Σ_i q_{ij'}) (17)

In addition, we define the loss function between the target distribution P and the soft assignment Q as

L_c = KL(P ‖ Q) = Σ_i Σ_j p_ij log( p_ij / q_ij ) (18)

According to the similarity between the feature representation z_i and the cluster centers, as in formula (16), the cluster label of x_i is obtained as

s_i = argmax_j q_ij (19)

We use the gradient-descent iterative method to optimize the overall objective function.
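The soft-assignment and target-distribution step can be sketched in NumPy; this is a minimal illustration in the spirit of equations (16)–(17) with one degree of freedom, and the function names are ours, not the paper's:

```python
import numpy as np


def soft_assign(Z, centers):
    """Student's-t similarity q_ij between latent points Z and cluster centers."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
    q = 1.0 / (1.0 + d2)
    return q / q.sum(axis=1, keepdims=True)                    # rows sum to 1


def target_distribution(q):
    """Sharpened auxiliary distribution p_ij that emphasizes confident assignments."""
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)
```

Minimizing KL(P ‖ Q) with P periodically recomputed from Q is the self-training loop that DEC-style methods use to refine both the features and the centers.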

2. VARIATIONAL DEEP EMBEDDING CLUSTERING FUSION HILBERT R-TREE
The construction of the variational deep embedding clustering fusion Hilbert R-tree (VDEC-HRT) index structure is shown in Algorithm 1:

Algorithm 1. Algorithm flow of constructing the VDEC-HRT index structure.
1: Determine the overall minimum bounding cube (MBC) according to the size occupied by the target object in space, and divide it equally to obtain n MBC sub-blocks
2: for i in 1 ... n:
3:   Fill the MBC block with the Hilbert curve
4:   Compute the Hilbert code value corresponding to the center point of the i-th minimum bounding cube
5: end for
6: Sort the MBC center code values to obtain Hcode = (c_1, c_2, ..., c_n)
7: Sample the data objects and, according to the size of the data volume, calculate the boundary coordinates and center coordinates of the MBC block of each data object
8: Pre-train the variational autoencoding model with the data samples and the losses L_KL and L_rec
9: Compute the latent variable representation z of the data
10: Update the target distribution p_ij with z and equations (16) and (17)
11: Save the cluster center labels and update them in each iteration
12: for epoch in 1 ... T:
13:   Optimize the objective function by gradient descent
14: end for
15: Return the weights of p_θ and q_φ and the cluster centers μ
16: Determine the maximum number of child nodes that a Hilbert R-tree node can store according to memory constraints
17: if the number of elements in a category exceeds this maximum:
18:   Treat all elements of the current category as nodes of the current level of the tree structure
19: else:
20:   Arrange the Hilbert code values corresponding to the data centers in ascending order to form a leaf node
21: According to the order in which the nodes are generated, build the intermediate nodes and the root node from bottom to top to construct an efficient Hilbert R-tree index structure

B. PREDICTION OF VIEWPOINT MOVEMENT TRAJECTORY
A certain amount of data can be loaded directly into memory; however, when the data size increases, only the essential data must be loaded and the redundant data must be blocked. If the data range can be predicted in advance, it can be loaded into memory for rendering before it is needed, avoiding memory-access stalls during browsing and making the image smoother and more stable.
The viewpoint prediction module in this study predicts the expected position of future viewpoints according to the continuously changing viewpoint positions and perspectives, which is a typical time-series prediction problem. A temporal convolutional network (TCN) is used for prediction; its basic network structure consists of the following three parts: 1. Causal convolution: a one-way time-constrained structure, meaning that the convolution operation at the current moment t is based only on the information at and before the historical moment t−1. The structure is shown in Figure 2. 2. Dilated convolution: this mainly solves the problem of the causal convolution requiring too many stacked layers while being limited by the convolution kernel size. Its structure is shown in Figure 3, where d represents the dilation factors 1, 2, and 4, and k is the convolution kernel size. 3. Residual connection: the basic unit of the TCN uses causal and dilated convolution as the standard convolution layer and adds layer normalization and a nonlinear function. Every two of these unit blocks are connected with an identity mapping as a residual module so that the network model can transmit information across layers.
In contrast to recurrent networks such as the RNN and LSTM, the TCN has the characteristics of parallelism, gradient stability, and a flexible receptive field. The forecasting process begins by selecting points from the dataset, each corresponding to the viewpoint coordinates of the continuous motion track at different times. Each position has three dimensions (x, y, z), and the coordinates at time t are P_t(x_t, y_t, z_t).
To capture long time-series information with the dilated convolution, the stability of the network model must be maintained as the layers deepen. Therefore, an identity mapping is added to increase network stability, and the output is

o = F(x_t) + x_t (25)

Two of these unit blocks are used as a residual module together with the identity map:

o = Activation( F(x_t) + x_t ) (26)

The deep network is stacked using residual modules, and the structure is shown in Figure 4. The blocks are repeatedly connected, and the output of each stack is used as the input of the next to deepen the network and extract essential features. Finally, a nonlinear activation is applied to the output feature, and a one-dimensional convolution layer replaces the fully connected layer to output the predicted coordinate position of the next viewpoint.
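The causal, dilated convolution at the heart of the TCN can be sketched in plain Python; this is an illustrative single-channel version (names are ours), showing that output t depends only on inputs at or before t:

```python
def causal_dilated_conv(x, w, dilation):
    """y[t] = sum_j w[j] * x[t - j*dilation]; left zero-padding keeps it causal."""
    pad = (len(w) - 1) * dilation
    xp = [0.0] * pad + list(x)          # pad only on the left (the past)
    return [sum(w[j] * xp[t + pad - j * dilation] for j in range(len(w)))
            for t in range(len(x))]


def residual_block(x, w, dilation):
    """Identity-mapped residual unit, in the spirit of o = F(x_t) + x_t."""
    return [c + xi for c, xi in zip(causal_dilated_conv(x, w, dilation), x)]
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is why the TCN can use long viewpoint histories without deep recurrence.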

C. SCHEDULED LOADING OF VOLUME DATA
As the amount of data increases, the number of objects to be drawn during the loading of 3D data also increases significantly. Visibility judgment is an effective way to reduce unnecessary drawing when loading large-scale data, thereby accelerating the image rendering speed. Frustum clipping is a visibility-judgment method whose overhead is relatively small and which is easy to modify. Therefore, an efficient frustum clipping algorithm is conducive to the fast and accurate loading of graphics objects, greatly improving display performance.
In the first part of this article, the original data are equally divided into MBC sub-blocks, and the original seismic data structure is then recombined using the Hilbert R-tree algorithm. Judging whether the current data object is visible therefore reduces to judging the spatial relationship between the viewpoint and each of the MBC data sub-blocks. When a data object is found to be outside the viewpoint area, it is cropped immediately, shortening the traversal time in the structure and improving the rendering speed.
In this part, a two-layer frustum clipping algorithm is used as follows: 1). First, the rough clipping algorithm is adopted. The viewing frustum is simplified into a simple cone, the positional relationship between the cone and the space object is judged, and the overall number of judgments is reduced to improve the clipping efficiency. The flow of the rough clipping algorithm is as follows. A: Simplify the viewing-frustum hexahedron into a simple cone; the cross-sectional view is shown in Figure 5. From the viewpoint position, construct the smallest cone encompassing the viewing frustum, and use this simple cone to check the positional relationship between it and the minimum bounding cube MBC. B: If the center A(x, y, z) of an MBC block is not inside the cone and AB ≥ d, the MBC block is judged to be outside the cone, and the algorithm ends. Otherwise, proceed to C. C: A finer check based on the cone half-angle θ/2 is then applied; if the MBC block still lies entirely outside the cone, the space objects contained in the MBC block are cut. Otherwise, the MBC block is judged to intersect the cone, the fine clipping algorithm can be used for further clipping, and the algorithm ends.

FIGURE 5. The position profile of the MBC block and simple cone (where A is the central coordinate of an MBC block, C is A point on the visual axis, AC is perpendicular to the cone surface at point B, and d is the distance from the center of MBC block to any vertex).
2). The fine clipping algorithm builds on the previous rough clipping and uses the standard viewing-frustum truncated pyramid to finely crop the space objects, improving the accuracy of the clipping. When the standard frustum is used for clipping, the MBC block of the data is first tested against the spatial positions of the six faces of the frustum (i.e., the top, bottom, left, right, near, and far faces of the hexahedron in Figure 6). The overall idea is that when the MBC block is located outside any plane equation of the frustum, it is invisible and the object is cropped; if the MBC block is not cropped in this process, its positional relationship with the frustum is containment or intersection, specifically in the following two cases: (1) Containment: when the MBC block is inside all six planes, the space object is located in the viewpoint area and is sent directly to the drawing pipeline for rendering and display.
(2) Intersection: when the inner side of a certain plane of the frustum contains part of the MBC block and the outer side of that plane contains another part, the space object and the frustum intersect; the underlying objects are then judged further and all the space objects are traversed in turn. Specifically, the viewpoint position is located at the origin of the world coordinate system and the viewing-frustum model is placed along the positive z-axis; the set projection matrix is used to transform the vertices, allowing the six plane equations corresponding to the truncated pyramid to be obtained.
After obtaining the six planes, the general approach is to calculate the distance from each vertex to each plane; however, calculating all eight vertices this way is relatively expensive. The method in this study first determines the vertices p and n of the MBC block: point p is the vertex closest to the plane, and point n is the diagonally opposite vertex (the vertex furthest from p). Substituting p(x_p, y_p, z_p) into each of the six plane equations ax + by + cz + d = 0: if ax_p + by_p + cz_p + d > 0, the vertex of the MBC block closest to the plane is outside the plane, so the MBC block is outside the plane and is clipped.
Similarly, if p(x_p, y_p, z_p) is inside the plane and n(x_n, y_n, z_n) is outside it, that is, ax_p + by_p + cz_p + d < 0 and ax_n + by_n + cz_n + d > 0, the MBC block intersects the frustum. If points p and n are both inside the frustum plane equations, that is, ax_p + by_p + cz_p + d < 0 and ax_n + by_n + cz_n + d < 0, the MBC block is inside the plane and is sent directly to the rendering pipeline for rendering and display.
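The two-vertex test above can be sketched as follows. This is a minimal illustration assuming axis-aligned MBC blocks given by (bmin, bmax) corners and planes ax + by + cz + d = 0 whose inside half-space is the negative side, as in the text; the function and label names are ours:

```python
def classify_box(planes, bmin, bmax):
    """Classify an axis-aligned MBC block against a set of frustum planes
    (a, b, c, d) with the frustum interior on the negative side."""
    result = "inside"
    for a, b, c, d in planes:
        normal = (a, b, c)
        # p: corner with the smallest signed distance (the text's "closest" vertex)
        p = [bmin[i] if normal[i] > 0 else bmax[i] for i in range(3)]
        # n: the diagonally opposite corner, with the largest signed distance
        nv = [bmax[i] if normal[i] > 0 else bmin[i] for i in range(3)]
        if a * p[0] + b * p[1] + c * p[2] + d > 0:
            return "outside"        # even the closest corner is outside: clip
        if a * nv[0] + b * nv[1] + c * nv[2] + d > 0:
            result = "intersect"    # box straddles this plane
    return result
```

Only two corners per plane are evaluated instead of eight, which is exactly the saving the p/n-vertex scheme provides.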
The viewpoint movement process is regarded as a dynamic viewpoint model that divides the entire range of data objects into visible, potential, and unloading areas. After the predicted viewpoint is obtained, frustum clipping is performed as above, and unloading and loading for rendering are performed by comparing the divided regions. Specifically, when the viewpoint is at position A, parts 1 and 2 in the figure are the visible areas, which are rendered and displayed; the potential areas are the visible areas 2 and 3 of the predicted next viewpoint B, which need to be rendered in advance, and the data in this area are marked as visible data and loaded into the rendering pipeline, so that the user can directly read and display the data objects during continuous browsing. When the viewpoint moves from A to B, area 1 is marked as an unloading domain to release memory, as shown in Figure 7. Because the direction and coordinates of the viewpoint change constantly while browsing the image, a viewpoint trajectory is formed. Therefore, the current viewpoint position and its historical data in module (b) can be used to predict the next viewpoint position, and the data objects of the potential area can be loaded into memory for display, improving the smoothness of the image display during the entire loading process. The overall process combining the viewpoint prediction and frustum clipping algorithms is shown in Figure 8.

III. EXPERIMENTS

A. EXPERIMENTAL PLATFORM
The processor used in this experiment was an Intel Core i5-9300H with a base frequency of 2.40 GHz, the operating system was Windows 10, the graphics chip was an NVIDIA GeForce GTX 1650, and the memory was 8 GB; the code was written using PyCharm 2019, Visual Studio 2019, and Qt 5.7.
The data used in the experiment are a subset of the seismic data of an oil field in China, stored in SEG-Y format. They are divided into three groups: dataset A is 458.3 MB; dataset B is 3219.5 MB (the data in group B are more evenly distributed); and dataset C is 13984.1 MB (the data in group C are larger in amount and more scattered). The data contain buried information such as depth, range, thickness, and stretching trend.

B. DATA RECORD
Experiments were conducted to evaluate the effectiveness of this method by comparing the relevant modules. Specifically, the evaluation is divided into the following parts, as shown in Figure 9:

(1) Indexing efficiency of data structures

Because the clustering model is partially integrated with the index data structure in this study, validation was first carried out on the MNIST handwritten-digit dataset. K-means was used to initialize the cluster centers, and the encoder was pre-trained. The Adam optimizer was used with a learning rate of 0.003, and the parameters were updated after every 10 training epochs. The output dimension was set to 10, and training was conducted for 300 iterations. The latent feature space z was constructed according to the mean and variance returned by the encoding layer. For visualization, t-SNE was used to map the sampled latent features to a two-dimensional space. The aggregation of the data in the latent space is shown in Figure 10, which indicates that the latent representation is suitable for clustering; the clustering accuracy of the reconstructed samples against the real labels reaches 94.3%, with a Jaccard similarity coefficient of 0.959 and an NMI score of 0.956. In addition, to evaluate the overall indexing effect of the data structure, the minimum granularity was set to 1 MB in this part, to compare not only with the related Hilbert-tree structures but also with the indexing efficiency of the octree (OT) structure. Specifically, we take data blocks of the same proportions (1%, 3%, and 6%) from the three groups of data A, B, and C, and compare the query times of the OT, HRT, KHRT, and VDEC-HRT structures for these blocks. The recorded results are shown in Table 1. As the results show, for the small amount of data in group A, the query time of the proposed method is less than that of the other methods, but the difference is not obvious.
For the group B data, compared with the OT algorithm, the index time of VDEC-HRT is reduced by 65.34%-68.87%; compared with the HRT algorithm alone, it is reduced by 64.30%-66.60%; and compared with KHRT, it is reduced by 46.63%-57.54%. Compared with OT, HRT, and KHRT on the group C data, the query time of the sub-blocks is reduced by 65.34%-72.07%, 59.85%-65.63%, and 49.26%-55.67%, respectively. Therefore, even for large data, the proposed algorithm makes the nodes of data close to the original data more compact through improved clustering, effectively reducing frequent disk accesses, significantly improving query efficiency, and reducing index time.

(2) Evaluation of viewpoint predictions
To evaluate the accuracy of the prediction algorithm, we compared the accuracy of Lagrange interpolation, stacked long short-term memory (stacked LSTM) networks, and the proposed method for predicting data blocks over durations of 1, 4, and 8 min. The results are listed in Table 2. It can be concluded from the data that the accuracy of the proposed algorithm is 11.74%-23.01% higher than that of the Lagrange interpolation algorithm and 2.16%-5.51% higher than that of the stacked LSTM. With the average frame rate unchanged, we set the step size to 8, 16, and 24 frames, and randomly changed the position and direction of the camera while keeping the viewpoint moving at a uniform speed for 15 min. Moreover, to avoid interference between experiments with different average frame rates, we chose average frame rates of 12, 24, and 48 frames to calculate the step lengths and compare the prediction accuracy. The recorded data are presented in Table 3. The analysis shows that the accuracy decreases as the selected step size increases, but an overly small step size also weakens the prediction effect. Therefore, the prediction accuracy can be improved by tuning the step-size parameter.
In addition, when the average frame rate is approximately 30 fps, components of the partial position coordinates were sampled during prediction, and the errors between the positions predicted by the TCN and LSTM models and the actual position coordinates were compared. The time step was set to 15, and the curves were obtained after 15 iterations, as shown in Figure 11. According to the experimental results, the prediction accuracy of the TCN on the coordinate dataset of this study was 97.34%, and that of the LSTM was 93.86%.
It is evident that the prediction effect of the TCN is better than that of the LSTM on the coordinate components of the viewpoint in this study, indicating that the TCN is effective in long time-series tasks.
(3) Evaluation of data loading and rendering

In addition, to test the stability of the algorithm, frame sampling points were selected to evaluate the frame rates on the three datasets under the no-prediction and the proposed predictive scheduling-loading algorithms. The results are listed in Table 4. The experimental results show that, compared with no prediction, the frame rate remains higher and more stable even when the data size is large, making the rendering smoother.

(4) Overall performance evaluation

Here, with the index structure and the preloading model working together, the interactive performance of the whole system is tested on the entire group B dataset. For example, the seismic data are sliced every 10 samples and the time needed by each algorithm to extract a section is recorded; the results, shown in Table 5, indicate that the proposed method significantly reduces the slice query time compared with the other algorithms. Finally, we test the system's overall display with the different display modes provided by the entire rendering platform.
Figure 12 shows the volume data in a blue color-map mode, and Figure 13 shows the rendering of actual seismic data along the survey-line directions. It can be seen that the system platform in this study reflects the internal structure and information of geological data with high quality.

IV. CONCLUSION
In this study, based on the original Hilbert R-tree structure, variational deep embedding clustering is integrated to reduce the overlap of the data node space and directly improve the efficiency of the index structure. In addition, a temporal convolutional network is used to predict the viewpoint coordinates, which improves on the accuracy of the original prediction methods. Combined with the data scheduling and loading module, the data that need to be drawn and displayed are preloaded into memory. Comparative experiments on seismic datasets prove that the proposed scheme solves the problems of real-time display and lag during large-scale data visualization through the use of multiple modules, and improves the accuracy and fluency of the entire data loading process.
In the future, the proposed method can be further improved and optimized, and the algorithm can be explored in greater depth on more complex and larger datasets. Moreover, the method proposed in this study has only been tested on seismic datasets; the system can be extended to other domains to provide researchers with more convenient tools.

V. AUTHOR CONTRIBUTIONS
Yu-Hang Zhang conceived the algorithms, and designed the experiments; Chang Wen reviewed the paper; Min Zhang checked the spelling and made suggestions; Kai Xie conducted the comparative experiment; Jian-Biao He is responsible for data collection.