Dynamic Visualization of Uncertainties in Medical Feature of Interest

In a medical context, uncertainty visualization of a Feature of Interest (FOI) is important, as it provides medical experts with informative feedback for better diagnostic or preoperative planning decisions. Traditional uncertainty visualization techniques allow the exploration of the uncertainty visualization of the FOI. However, when the intensity of the FOI and other materials is similar or overlapping, and/or the FOI is occluded by other materials, they find it difficult or impossible to reveal the FOI and its uncertainties. To address this problem, we propose an uncertainty visualization technique that includes two main components: Multiattribute-rule-based FOI Segmentation (M-rule Seg. for short) and FOI Dynamic Uncertainty Visualization. To demonstrate its effectiveness, we performed an experiment comparing it with two traditional uncertainty visualization techniques. The experimental results show that our proposed technique better reveals the FOI and all its uncertainties, which are difficult or impossible to visualize with traditional uncertainty visualization techniques. In addition, we performed another experiment to compare our proposed M-rule Seg. technique with three traditional segmentation techniques. The experimental results show that our M-rule Seg. technique generates more accurate segmentation results at the cost of longer computing time.


I. INTRODUCTION
The medical visualization pipeline ranges from the initial medical data acquisition step, through intermediate data processing steps such as registration or segmentation, to the final data visualization step. Each of these steps may introduce a certain amount of uncertainty. In many medical applications, visualization of the uncertainties associated with a certain FOI is a desired and important task. For example, in radiofrequency ablation, it is desirable to visualize the uncertainties associated with a tumor, as underestimation of the tumor may cause incomplete ablation and thus regrowth, while overestimation may damage the patient's healthy tissues. Similarly, in neurosurgery, it is desirable to visualize the uncertainties associated with fiber tracking, as a slight mistake in the fiber tracking approach may lead medical experts to make wrong decisions, which may damage healthy brain regions responsible for vision and motion. Therefore, it is necessary to perform research on uncertainty visualization of a FOI, which provides medical experts with informative feedback on the FOI and enables them to make reasonable diagnostic or preoperative planning decisions. (The associate editor coordinating the review of this manuscript and approving it for publication was Mu-Yen Chen.)
Probably the simplest and most popular technique for uncertainty visualization of a FOI is 1D Transfer Function (TF)-based Visualization. By manually adjusting the 1D TF, medical experts can study possible visualization results of the FOI. However, this technique has three main drawbacks. First and foremost, it is difficult or impossible to reveal the FOI and its uncertainties when the intensity of the FOI and other materials is similar or overlapping, and/or the FOI is occluded by other materials. Second, it is performed in an uncontrollable manner, so it cannot reveal all uncertainty visualizations of the FOI. As a result, some important uncertainty visualizations of the FOI may be missing, misleading medical experts in making decisions. Third, it is performed in a non-automatic manner and thus is time-consuming.
Lundstrom et al. [1] proposed a novel uncertainty visualization technique called Probabilistic Animation, which partially solves the issues mentioned above. Compared with 1D TF-based Visualization, this technique has the following advantages: first, it can reveal all uncertainty visualizations associated with a FOI and thus will not mislead medical experts into making inappropriate decisions; second, it can reveal all uncertainty visualizations of the FOI automatically and thus is less time-consuming. However, it remains difficult or even impossible to reveal the FOI and its uncertainties when the intensity of the FOI and other materials is similar or overlapping, and/or the FOI is occluded by other materials.
To address the above-mentioned issues, we propose a novel uncertainty visualization technique, which can be used for uncertainty visualization of a certain FOI in medical volume data obtained from any medical imaging modality, e.g., MRI or CT. The contributions of this paper are summarized as follows:
• A novel framework is proposed which allows users to automatically extract a FOI from medical volume data based on a few hand-drawn labels in the familiar slice space and dynamically visualize its uncertainties.
• A novel uncertainty visualization technique is proposed which better reveals the FOI and all its uncertainties that are difficult or impossible to visualize with traditional uncertainty visualization techniques.
• An M-rule Seg. technique is proposed which generates more accurate segmentation results for FOIs compared to traditional segmentation techniques.

II. RELATED WORK
A. UNCERTAINTY VISUALIZATION
Uncertainty visualization has become an active research field in the visualization community since its significance was identified by some leading experts [2]-[4]. Since then, plenty of studies have proposed various uncertainty modelling and uncertainty visualization techniques. Several review papers that classify these techniques have also been published; they classify existing uncertainty visualization techniques based on either the types of data [5], [6] or the techniques used to depict uncertainty [7], [8]. The works most relevant to our research concern uncertainty visualization in the medical domain; compared to uncertainty visualization in other fields, there is less uncertainty visualization research focusing on medical data. This is because ''we lack a comprehensive understanding of what types of uncertainty exist in medical visualization and what their characteristics in terms of mathematical models are'', according to Ristovski et al. [9], Ristovski [10]. In their recent research [9], [10], they summarized and classified the uncertainty types in medical visualization and described them mathematically. Al-Taie et al. [11] proposed a novel, information theory-based uncertainty estimation method that considers all probabilities to compute a single uncertainty value for each voxel, and presented corresponding uncertainty visualization methods to highlight uncertainties in the segmentation results. Raidou et al. [12] presented an uncertainty visualization method that overlays black circle-type glyphs on top of the colors to indicate the uncertainties of the radiation dose used to irradiate tumors. The drawback of this method is that the colors can be occluded by the black circle-type glyphs on top of them, which causes perception difficulties. Ristovski et al. [13] presented an uncertainty visualization method that can be used to visualize the uncertainty of the stenotic regions in vascular structures.
They also conducted an evaluation to compare their uncertainty visualization method with some state-of-the-art uncertainty visualization techniques. Their evaluation results demonstrate that, compared to those state-of-the-art techniques, their method is preferred by medical experts and does not cause any wrong decision-making. Recently, Ristovski et al. [14] proposed the world's first glyph-based uncertainty visualization approach, used to visualize the outcomes of Radiofrequency ablation simulations together with their uncertainties. Kniss et al. [15] introduced an uncertainty visualization technique used to visualize the combined fuzzy classification results from multiple segmentations. However, the drawback of their technique is that it needs user interaction and cannot reveal the uncertainties automatically. Lundstrom et al. [1] proposed an uncertainty visualization technique named Probabilistic Animation. This technique is used to visualize uncertainty caused by material classifications and can automatically reveal all possible appearances of each material according to an explicitly probabilistic TF. However, this technique has one main drawback: when the intensity of the FOI and other materials is similar or overlapping, and/or the FOI is occluded by other materials, it is difficult or even impossible to reveal the FOI and its associated uncertainties. In comparison, our proposed uncertainty visualization technique overcomes this drawback and thus provides better uncertainty visualization.

B. TRANSFER FUNCTION DESIGN
One of the most active research topics in the visualization community is TF design, which is used to classify the voxels of volume data into different materials and assign them appropriate optical properties. The simplest and most widely used TF is the 1D TF, where a single intensity attribute is used for both classification and optical property assignment. However, the main drawback of the 1D TF is its inability to separate materials with similar intensity, and many works have been proposed to extend its capability. Kindlmann and Durkin [16] introduced a TF which includes the gradient magnitude and allows for the isolation of boundaries. Sereda et al. [17] proposed a TF based on the Low-High histogram, where boundaries appear as blobs instead of arches in the intensity-gradient magnitude histogram. Mean and standard deviation [18], curvature [19], feature size [20] and ambient occlusion [21] have also been suggested as useful attributes for constructing TFs. To further enhance the feature distinction ability of the TF, many works have been proposed for higher-dimensional TF design. Higher-dimensional TF design generally takes advantage of clustering [22]-[24], dimension reduction [25], [26] or effective interactions [27] to convert the high-dimensional attribute space into a low-dimensional attribute space with which users can easily interact. Alternatively, multivariate visualization techniques such as the Parallel Coordinate Plot [26], [28] can be used as the higher-dimensional TF space, where users can directly interact with high-dimensional attributes. Machine learning methods [29], [30], such as neural networks and support vector machines, have also been applied to classify higher-dimensional data.

C. VOLUME SEGMENTATION
Segmentation is a commonly used technique but also a long-standing challenge. One of the most classic and well-known segmentation methods is Fuzzy C-means (FCM) [31], which classifies either 2D or 3D images by grouping similar data points in the feature space into clusters. This clustering is achieved by iteratively minimizing a cost function that depends on the distance of the pixels to the cluster centers in the feature domain. However, the classic FCM segmentation method does not fully utilize the spatial information of images. Chuang et al. [32] proposed an improved FCM method named Spatial Fuzzy C-means (SFCM), which incorporates the spatial information of images into the membership function for segmentation. Compared to the classic FCM, their method generates more homogeneous segmentation results and is less sensitive to noise. Nguyen et al. [33] introduced a clustering-based framework that first applies mean-shift clustering to over-segment the volume boundaries according to their low-high values and their spatial coordinates, then uses hierarchical clustering to group similar voxels. Wang et al. [34] introduced a work that decomposes the intensity-gradient magnitude histogram into valley cells based on Morse theory. Ip et al. [35] presented a hierarchy of normalized-cut-assisted visual segmentations of an intensity-gradient histogram to assist in the volume exploration process. Kniss and Wang [36] presented a simple and robust method that treats the volume as a 3D manifold and performs segmentation based on manifold distance metrics. Zhou and Hansen [37] proposed an automated feature extraction method that is capable of automatically extracting a FOI from volume data and creating segmentation results similar to expert-extracted features and ground-truth segmentations. Unlike our method, their segmentation method consists of two main steps: automated TF tuning and 3D connected component extraction.
Although rule-based expert systems have long been widely used in the field of Artificial Intelligence to help people reach appropriate conclusions in place of human experts, few studies have utilized rule-based methods to automatically and intelligently segment medical volume data.
Cai et al. [38] proposed a work that utilizes rules obtained from user-labelled FOIs on a few slices to automatically and intelligently segment the FOI in the entire medical volume data. We borrow their rule-based idea to automatically segment the FOI. However, compared to their Single-attribute-rule-based Segmentation (S-rule Seg. for short) method, our segmentation method uses multiattribute rules and produces a more accurate FOI segmentation result. There are also a variety of studies that segment volume data using neural networks, but that topic is beyond our scope.

III. METHOD
This section introduces the method used to construct our proposed uncertainty visualization technique, and Fig. 1 shows its framework, which consists of two main components. The first component is called M-rule Seg. (illustrated in green), which automatically extracts the FOI from medical volume data based on the multiattribute-rule selected from the target FOI labelled by users. The output of this component is the FOI probability volume data, which indicates each voxel's probability of being the target FOI and provides the uncertainty information for the second component. Therefore, the ''uncertainty'' in this paper explicitly refers to the probability that each voxel belongs to the target FOI. The second component is called FOI Dynamic Uncertainty Visualization (illustrated in red), which utilizes animation to automatically reveal all probabilities of the segmented FOI that are unclear or impossible to visualize with traditional uncertainty visualization techniques. Therefore, the word ''dynamic'' in this paper means animated and automatic. As Fig. 1 shows, each component consists of several sub-steps. Section III.A and Section III.B introduce the two components in detail.

A. M-RULE SEG
1) ATTRIBUTES GENERATION, SELECTION AND BACKGROUND EXTRACTION
Given medical volume data, the first step of our framework is to generate its derived attributes, select which attributes are suitable to make up the multiattribute-rule, and separate the background voxels from the foreground voxels. For attribute generation, we consider a total of seven attributes: intensity, gradient, second-order derivative along the gradient direction [39], standard deviation, feature size [20], three-element distance and four-element distance. More specifically, the three-element distance refers to the distance between each voxel and the FOI center specified by users, and the four-element distance adds an extra intensity distance between each voxel and the FOI center to the three-element distance. In the current work, due to the lack of an efficient algorithm for determining the optimal combination of multiple attributes for a dataset, we perform the attribute selection empirically by visualizing attribute values on 2D slices and observing how well they distinguish different FOIs in the medical volume data. If an attribute is good at distinguishing different FOIs, we select it for the multiattribute-rule; otherwise we exclude it. For background extraction, we apply the region-growing algorithm to each 2D slice of the medical volume data to obtain the background voxels.
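The background-extraction step can be sketched as a simple region-growing pass over each 2D slice. The following Python sketch is illustrative only (the paper's implementation uses Matlab and CUDA); the corner seed and the intensity tolerance `tol` are assumptions, not the paper's parameters:

```python
from collections import deque

def extract_background(slice2d, tol=0.05, seed=(0, 0)):
    """Flood-fill region growing from a seed on the slice border:
    4-connected voxels whose intensity lies within `tol` of the seed's
    intensity are labelled as background."""
    h, w = len(slice2d), len(slice2d[0])
    seed_val = slice2d[seed[0]][seed[1]]
    background = [[False] * w for _ in range(h)]
    queue = deque([seed])
    background[seed[0]][seed[1]] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not background[ny][nx]
                    and abs(slice2d[ny][nx] - seed_val) <= tol):
                background[ny][nx] = True
                queue.append((ny, nx))
    return background
```

Running this per slice and stacking the results yields the background mask used later to skip voxels during rule evaluation.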

2) FOI LABELLING AND CANDIDATE MULTIATTRIBUTE-RULE GENERATION, PRE-PROCESSING
For the second step, users can observe their wanted FOI throughout the 2D slices of any selected attribute and then mark out their target FOI on a few 2D slices (typically two). This generates a number of candidate multiattribute rules which meet the user-labelled target FOI. As introduced in Johnson et al.'s research [40], the local frequency distribution captures the quantitative characteristics of the neighbourhood centered at each voxel and can be used to effectively extract FOIs. Therefore, we compute the local frequency distribution for each selected attribute and combine them to compose our multiattribute-rule, defined as:

<attribute name_1, attribute values_1, frequency range_1> AND <attribute name_2, attribute values_2, frequency range_2> AND ... AND <attribute name_n, attribute values_n, frequency range_n> => target FOI. (Def. 1)

The above multiattribute-rule means that if any voxel simultaneously meets each condition encapsulated within <>, then it belongs to the target FOI, e.g., Aneurism. More specifically, each condition <> corresponds to a different attribute and consists of three parts: normalized attribute name, normalized attribute values and corresponding frequency range. Because the normalized attribute values and corresponding frequency range can take any values, infinitely many multiattribute rules can be generated according to Def. 1. To simplify the problem and generate a smaller number of multiattribute rules, for each normalized attribute we divide its normalized attribute values and corresponding frequency range into 10 equal intervals. Therefore, each normalized attribute can generate 100 rules, and a multiattribute-rule with n attributes can generate 100^n multiattribute rules. From the experiments we found that this step usually generates a mass of repeated and unsorted candidate multiattribute rules, and it is extremely time-consuming to use them directly in the next step's computation for the most effective multiattribute-rule selection.
Thus, to accelerate the computation of the most effective multiattribute-rule selection, we pre-process the candidate multiattribute rules by first sorting them and then removing repetitions.
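The discretize-sort-deduplicate step above can be sketched as follows. This Python sketch is a simplification under stated assumptions: each condition is modelled as a single (attribute, value-bin, frequency-bin) tuple drawn from a labelled voxel's local frequency distribution, rather than the full interval-pair form of Def. 1:

```python
def candidate_rules(foi_voxels, n_bins=10):
    """foi_voxels: list of dicts mapping attribute name -> (normalized
    value, normalized local frequency), both in [0, 1].  Each labelled
    voxel contributes one (attribute, value-bin, frequency-bin) condition
    per attribute.  Deduplicating via a set and returning a sorted list
    mirrors the pre-processing that enables binary search later."""
    def bin_of(x):
        # divide [0, 1] into n_bins equal intervals; clamp x == 1.0
        return min(int(x * n_bins), n_bins - 1)

    rules = set()
    for voxel in foi_voxels:
        for attr, (value, freq) in voxel.items():
            rules.add((attr, bin_of(value), bin_of(freq)))
    return sorted(rules)
```

With 10 value bins and 10 frequency bins, a single attribute can contribute at most 100 distinct conditions, matching the 100 rules per attribute noted in the text.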

3) MOST EFFECTIVE MULTIATTRIBUTE-RULE SELECTION AND WEIGHT COMPUTING
Not all candidate multiattribute rules generated from the last step can be used to effectively predict the target FOI. This is because different combinations of them generate different prediction results for the FOI, including both true positives inside the FOI region and false positives outside it. Therefore, we need to find an optimal combination of them (known as the most effective multiattribute rules) which minimizes the false positives outside the FOI region while preserving the true positives inside. To achieve this goal, we apply the Genetic Algorithm (GA) [38] to the candidate multiattribute rules to pick out the most effective multiattribute rules. This algorithm can be summarized as follows: first, we encode the candidate multiattribute rules into a binary string, with each bit of the string corresponding to a specific candidate multiattribute rule. If a bit value is 1, the corresponding candidate multiattribute rule is among the most effective multiattribute rules; if it is 0, it is not. Second, based on this encoding, we generate a binary string array, with each bit value randomly assigned as 1 or 0. This binary string array is known as the parent population. For each binary string in this array, we employ Equations 1 and 2 to compute its fitness score, which indicates how well this combination of candidate multiattribute rules predicts the target FOI. Here, v denotes any non-background voxel on the user-labelled FOI 2D slices; n_s(v) denotes how many multiattribute rules in an encoded binary string s are met by voxel v; t denotes the user-labelled FOI. Third, we randomly select those binary strings with high fitness scores from the parent population, and apply crossover and mutation to them to generate a new binary string array.
This new array is known as the offspring population, and we keep its size the same as that of the parent population. Again, for each binary string in the new array, we employ Equations 1 and 2 to compute its fitness. Fourth, we let the offspring population become the parent population, which is used to generate the next generation. Fifth, the third and fourth steps are repeated until the maximum fitness of each generation converges. Finally, by decoding the binary string with the maximum fitness in the last generation, we obtain the most effective multiattribute rules. One thing worth mentioning is that, as our candidate multiattribute rules were sorted in the last step, we can utilize the Binary Search algorithm to accelerate their comparison with the multiattribute rules met by any voxel. This significantly reduces the computation time of the most effective multiattribute-rule selection. Another thing worth mentioning is that for each most effective multiattribute rule, we compute its weight by counting how many user-labelled FOI voxels meet it.
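The GA loop above can be sketched as a small elitist GA over rule subsets. This Python sketch is illustrative: rules are modelled as plain predicates, and the fitness used here (true positives minus false positives over the labelled voxels) is a simplified stand-in for the paper's Equations 1 and 2:

```python
import random

def ga_select(rules, voxels, foi_labels, generations=40, pop_size=20, seed=0):
    """Toy GA: each bitstring enables a subset of `rules`; a voxel is
    predicted as FOI when it satisfies any enabled rule.  Elitist
    truncation selection plus one-point crossover and bit-flip mutation."""
    rng = random.Random(seed)
    n = len(rules)

    def fitness(bits):
        score = 0
        for voxel, is_foi in zip(voxels, foi_labels):
            hit = any(b and r(voxel) for b, r in zip(bits, rules))
            if hit:
                score += 1 if is_foi else -1   # reward TPs, penalize FPs
        return score

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(n)
            child[i] ^= rng.random() < 0.1      # occasional bit flip
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [r for b, r in zip(best, rules) if b]
```

On a toy problem with one rule that covers exactly the labelled FOI voxels, the GA converges to enabling that rule alone, since any rule firing outside the FOI lowers the fitness.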

4) WEIGHTED MOST EFFECTIVE MULTIATTRIBUTE-RULE EVALUATION, REFINEMENT AND FOI PROBABILITY VOLUME DATA GENERATION
For each non-background voxel of the medical volume data, we evaluate its likelihood of being the target FOI by computing the total weight of the weighted most effective multiattribute rules it meets. The higher this weight, the more likely the voxel is the target FOI. For each background voxel of the medical volume data, its likelihood of being the target FOI is 0 by default. Consequently, we obtain the FOI likelihood volume data. Because the FOI likelihood volume data always includes many misclassified voxels, it needs to be further refined to produce a good evaluation result. We followed the three-step algorithm in [38] for the refinement. However, unlike their algorithm's first step, which computes the threshold as an average of all user-labelled FOI likelihoods, we compute the threshold using a histogram-based method, which gives us better control to filter out outlier voxels' likelihoods. Our algorithm is illustrated in Algorithm 1 and consists of three steps: first, according to the user-specified cutoffLeft and cutoffRight, we compute the new leftmost boundary lowerHistBoundary and rightmost boundary upperHistBoundary of the user-labelled FOI's likelihood histogram, which is stored in the hist() function. hist() takes two input parameters: the first indicates the bin number corresponding to a specific likelihood of this histogram; the second is always 1, as hist() is a column vector. The hist() function returns the number of voxels at a specific likelihood. Second, we compute the total number and total likelihood of the remaining user-labelled FOI voxels falling within the new leftmost and rightmost boundaries. Third, we compute the average likelihood of the remaining FOI voxels and use it as the threshold. The time complexity of Algorithm 1 is O(n), where n is the number of bins in hist(), and its runtime is less than 0.003 seconds for all datasets.
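The three-step thresholding described above can be sketched in Python as follows. This is a sketch in the spirit of Algorithm 1, not a transcription of it: the interpretation of the cutoffs as fractions of voxels trimmed from each tail, and the use of bin centres for the mean, are assumptions:

```python
def histogram_threshold(foi_likelihoods, n_bins=100,
                        cutoff_left=0.05, cutoff_right=0.05):
    """Trim up to the given fraction of outlier voxels from each tail of
    the user-labelled FOI's likelihood histogram, then return the mean
    likelihood (by bin centre) of the remaining voxels as the threshold."""
    lo, hi = min(foi_likelihoods), max(foi_likelihoods)
    width = (hi - lo) / n_bins or 1.0
    hist = [0] * n_bins
    for x in foi_likelihoods:
        hist[min(int((x - lo) / width), n_bins - 1)] += 1
    total = len(foi_likelihoods)
    # walk inwards from both ends until the requested fractions are trimmed
    lower, trimmed = 0, 0
    while lower < n_bins and trimmed + hist[lower] <= cutoff_left * total:
        trimmed += hist[lower]
        lower += 1
    upper, trimmed = n_bins - 1, 0
    while upper >= 0 and trimmed + hist[upper] <= cutoff_right * total:
        trimmed += hist[upper]
        upper -= 1
    # mean likelihood of voxels inside [lower, upper], using bin centres
    count = weight = 0.0
    for b in range(lower, upper + 1):
        count += hist[b]
        weight += hist[b] * (lo + (b + 0.5) * width)
    return weight / count if count else (lo + hi) / 2
```

A single pass over the bins gives the O(n) behaviour reported for Algorithm 1.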
As a result of the refinement step, we obtain refined FOI likelihood volume data. We further normalize it into the FOI probability volume data, which indicates each voxel's probability of being the target FOI.

B. FOI DYNAMIC UNCERTAINTY VISUALIZATION
1) DEFAULT VISUALIZATION TF GENERATION
Given the normalized attributes and a user-specified colormap, the first step towards the FOI Dynamic Uncertainty Visualization is to generate the TF of the default visualization, as follows: for each voxel v of the medical volume data, its default color c_d(v) is proportional to its normalized intensity value, as shown in Equation 3; its default transparency t_d(v) is proportional to both its normalized intensity value and its gradient value, as shown in Equation 4. Here, k and l refer to two user-specified positive constants. Although simple, this TF is capable of generating a powerful default visualization that can not only reveal internal features, but also highlight their boundaries.
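Equations 3 and 4 are not reproduced in this version of the text, so the forms below are assumptions consistent with the description: colour is looked up from the user colormap at the normalized intensity, and opacity grows with both the normalized intensity and the gradient magnitude via the constants k and l (an additive clamped form is one plausible reading):

```python
def default_tf(intensity, gradient, colormap, k=1.0, l=1.0):
    """Sketch of the default-visualization TF.  `intensity` and
    `gradient` are normalized to [0, 1]; `colormap` maps a scalar to an
    RGB tuple.  The exact combination of k and l is an assumption."""
    color = colormap(intensity)                      # Eq. 3 sketch: c_d(v)
    alpha = min(1.0, k * intensity + l * gradient)   # Eq. 4 sketch: t_d(v)
    return color, alpha
```

Making opacity depend on the gradient term is what lets this simple TF highlight boundaries while still revealing internal features.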

2) FOI ENHANCED VISUALIZATION TF GENERATION
In this step, we design a TF that utilizes the extracted FOI probability volume data to enhance the FOI in the default visualization and thus generate a FOI enhanced visualization. This TF is designed as follows: for each voxel v, if its FOI probability prob(v) is 0, then its FOI enhanced color c_e(v) remains its default color c_d(v); if its FOI probability is greater than 0, then its FOI enhanced color c_e(v) can be either the default color c_d(v) or a new user-specified color c_n(v), as illustrated in Equation 5; its FOI enhanced transparency t_e(v) is computed by Equation 6, where ω_1 is a user-adjustable non-negative constant. From Equation 6 it is clear that if a voxel v's prob(v) = 0, its transparency remains unchanged; the bigger its prob(v), the more opaque the voxel. In this way, we can highlight those voxels with high probabilities of being the FOI.
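The enhanced TF can be sketched directly from this behaviour. Since Equations 5 and 6 appear only as figures here, the additive clamped form of t_e below is an assumption that matches the stated properties (unchanged at prob = 0, increasingly opaque as prob grows, scaled by ω_1):

```python
def enhanced_tf(default_color, default_alpha, prob, new_color=None, omega1=0.5):
    """Sketch of Equations 5 and 6: zero-probability voxels keep the
    default appearance; FOI voxels may take a user-specified colour, and
    their opacity grows with prob(v), scaled by the non-negative omega1."""
    if prob == 0:
        return default_color, default_alpha
    color = new_color if new_color is not None else default_color  # Eq. 5
    alpha = min(1.0, default_alpha + omega1 * prob)                # Eq. 6 sketch
    return color, alpha
```

Feeding the FOI probability volume through this function per voxel yields the FOI enhanced visualization used by the animation step.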

4) FOI DYNAMIC UNCERTAINTY VISUALIZATION TF GENERATION
In this step, we encode the FOI probability volume data into an animation that fuses both the FOI enhanced visualization and the FOI removed visualization so as to generate the FOI Dynamic Uncertainty Visualization. Its TF is designed as follows: for each voxel v, its FOI Dynamic Uncertainty Visualization color c_uv(v) remains its default color c_d(v), as shown in Equation 9; its FOI Dynamic Uncertainty Visualization transparency t_uv(v) is computed by Equation 10.
Here, letting Θ denote the total number of frames in an animation cycle and n_a(v) denote how many frames of the FOI enhanced visualization a voxel v displays in an animation cycle, we can compute n_a(v) using Equation 11, where round refers to the round-off function. Then let θ denote an animation index in an animation cycle, with range [1, Θ]. Every time θ changes, the animation advances to a new frame. Now we can design a 2D animation matrix A(θ, n_a(v)), as illustrated in Fig. 2, which determines whether a voxel v should display the FOI enhanced visualization (denoted by 1 in this matrix) or the FOI removed visualization (denoted by 0) at a specific animation index θ. An advantage of this animation matrix is that it guarantees to reveal the FOI's probabilities in an organized way, because all its 1s and 0s are arranged contiguously.
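The animation logic above can be sketched as follows, with n_a(v) computed as in Equation 11 and the 1s of each column placed contiguously at the start of the cycle (a Python sketch; the paper implements this on the GPU):

```python
def frames_enhanced(prob, total_frames):
    """Eq. 11 sketch: n_a(v) = round(prob(v) * total number of frames)."""
    return round(prob * total_frames)

def animation_matrix(total_frames):
    """Binary matrix A(theta, n_a): entry is 1 when a voxel that should
    show the FOI enhanced visualization for n_a frames per cycle does so
    at animation index theta.  Rows cover theta = 1..total_frames and
    columns cover n_a = 0..total_frames; 1s are contiguous per column."""
    return [[1 if theta <= n_a else 0
             for n_a in range(total_frames + 1)]
            for theta in range(1, total_frames + 1)]
```

A voxel with prob(v) = 0.75 in a 4-frame cycle thus shows the enhanced visualization for 3 consecutive frames and the removed visualization for the remaining one, so higher-probability voxels persist longer on screen.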

IV. RESULTS AND DISCUSSION
This section introduces two experiments. Section IV.A introduces the first experiment, which compares our proposed uncertainty visualization technique with two traditional uncertainty visualization techniques, known as 1D TF-based Visualization and Probabilistic Animation, to demonstrate its effectiveness. Section IV.B introduces the second experiment, which compares our proposed M-rule Seg. technique with three traditional segmentation techniques, known as S-rule Seg., FCM and SFCM, to demonstrate its accuracy. In Section IV.C, we discuss the limitations of our method. All experiments were conducted on an HP desktop with the following configuration: Intel Core i5-8400 CPU, 16GB RAM, NVIDIA GeForce GTX 1060 GPU with 6GB video memory. We used a combination of Matlab and CUDA to implement the different steps of this research. Moreover, we utilized Dynamic Parallelism, an advanced feature of CUDA, for further acceleration. The left images in Fig. 3(a)-3(c) show three different uncertainty visualization results of the VisMale dataset [41] from the 1D TF-based Visualization technique, and the right images in Fig. 3(a)-3(c) show their corresponding 1D TFs. Employing the 1D TF-based Visualization technique for uncertainty visualization of this dataset has three drawbacks. First, no matter how the 1D TF is refined, it fails to reveal the human brain and its uncertainties hidden in this dataset. This is because the intensity of the brain and other soft tissues overlaps, and the brain is occluded by the skull. Second, obtaining TFs that clearly show the structure of this dataset is manual rather than automatic, and thus can be time-consuming. Third, even if it could reveal the hidden brain, it is still impossible to reveal all uncertainties of the brain, as the TF adjustment is performed in an uncontrollable way.
This could lead to some important uncertainty visualization results of the brain being missed, and thus may cause medical experts to make inappropriate decisions. Fig. 4(a) and 4(b) show two frames from our FOI Dynamic Uncertainty Visualization method. Compared to the left images in Fig. 3(a)-3(c), it is clear that our method can clearly reveal the hidden brain and its uncertainties, which cannot be visualized by the 1D TF-based Visualization technique. Also, our method can automatically generate the TF used to show the structure and the brain of this dataset, and thus is less time-consuming. Finally, our method can dynamically show all uncertainties of the brain without losing any results. As a result, our method produces a more informative visualization.
The left images in Fig. 5(a)-5(c) show three different uncertainty visualization results of a dataset containing a liver tumor from the 1D TF-based Visualization technique, and the right images in Fig. 5(a)-5(c) show their corresponding 1D TFs. Employing the 1D TF-based Visualization technique for uncertainty visualization of the liver tumor has three drawbacks. First, no matter how the 1D TF is refined, it is difficult to clearly reveal the liver tumor and its uncertainties. This is because the intensity of the liver tumor and the liver partially overlaps, and the liver tumor is occluded by the liver. Second, obtaining TFs that clearly show the structure of this dataset is manual rather than automatic, and thus can be time-consuming. Third, it is never possible to reveal all uncertainties of the liver tumor, as the TF adjustment is performed in an uncontrollable manner. This could lead to some important uncertainty visualization results of the liver tumor being missed, and thus may cause medical experts to make inappropriate decisions. Fig. 6(a)-6(d) show four frames from our FOI Dynamic Uncertainty Visualization method. In comparison with the left images of Fig. 5(a)-5(c), it is clear that our method can better isolate and highlight the liver tumor and its uncertainties, which are difficult to visualize clearly with the 1D TF-based Visualization technique. Also, our method can automatically generate the TF used to show the structure and the liver tumor of this dataset, and thus is less time-consuming. Finally, our method can dynamically show all uncertainties of the liver tumor without missing any results (note how the liver tumor changes in the four frames). Fig. 7(a)-7(c) show three frames of the CT-head dataset [41] from the Probabilistic Animation technique, and Fig. 7(d) shows their corresponding explicitly probabilistic TF. It is clear from Fig. 7(d) that we try to classify the dataset into four materials: brain, bones, teeth and other soft tissues.
However, as the intensity of the brain and other soft tissues overlaps, and the brain is occluded by the skull, it is impossible for the Probabilistic Animation technique to reveal the brain and its uncertainties hidden in this dataset. Fig. 8(a)-8(d) show four frames from our FOI Dynamic Uncertainty Visualization method. Compared to Fig. 7(a)-7(c), it is clear that the four frames from our method can clearly reveal the hidden brain and all its uncertainties (note how the brain shrinks in the four frames), which cannot be visualized by the Probabilistic Animation technique. Fig. 9(a)-9(c) show three frames of the CTNeck dataset [1] from the Probabilistic Animation technique, and Fig. 9(d) shows their corresponding explicitly probabilistic TF. It is clear from Fig. 9(d) that we try to classify the dataset into three materials: thyroid tumor, carotid arteries and bones. However, as the intensities of the thyroid tumor, carotid arteries and bones overlap, from the three frames it is clear that it is difficult for the Probabilistic Animation technique to clearly isolate and reveal the thyroid tumor and its uncertainties; for example, in Fig. 9(a), the left carotid artery and parts of the bones are colored in green, which indicates that they are considered part of the thyroid tumor; in Fig. 9(b) and 9(c), parts of the carotid arteries (as illustrated in the blue rectangles) and bones are colored in green, which indicates they are considered part of the thyroid tumor. Fig. 10(a)-10(d) show four frames from our FOI Dynamic Uncertainty Visualization method. Compared to Fig. 9(a)-9(c), it is clear that the four frames from our method can better isolate and highlight the thyroid tumor and all its uncertainties (note how the thyroid tumor changes in the four frames), which are difficult to visualize clearly with the Probabilistic Animation technique. Fig.
11(a)-11(c) show three frames of the Tumor-Breast dataset [41] from the Probabilistic Animation technique, and Fig. 11(d) shows their corresponding explicitly probabilistic TF. It is clear from Fig. 11(d) that we try to classify the dataset into three materials, which are breast tumor, vessels and other tissues. However, as the intensity of the breast tumor and other tissues is overlapped, from the three frames it is clear that it is difficult for the Probabilistic Animation technique to clearly isolate and reveal the breast tumor and its uncertainties e.g., in Fig. 11(a) and 11(b), the breast tumor is incompletely revealed because partial breast tumor is colored in white, which indicates it is considered as part of other tissues; in Fig. 11(c), the breast tumor is overly revealed because some other tissues are colored in green, which indicates they are considered as part of the breast tumor. Fig. 12(a)-12(c) show three frames from our FOI Dynamic Uncertainty Visualization method. Compared to Fig. 11(a)-11(c), it is clear that the three frames from our method can better isolate and highlight the breast tumor and all its uncertainties (note how the breast tumor changes in the three frames) that are difficult to be clearly visualized by the Probabilistic Animation technique. Fig. 13(a)-13(c) show three frames of the MRBrainTumor1 dataset [43] from the Probabilistic Animation technique, and Fig. 13(d) shows their corresponding explicitly probabilistic TF. It is clear from Fig. 13(d) that we try to classify the dataset into four materials, which are human brain, brain tumor, skin and bones. However, as the intensity of the brain tumor, brain and skin are overlapped, from the three frames it is clear that it is difficult for the Probabilistic Animation technique to clearly isolate and reveal the brain tumor and its uncertainties e.g., in Fig. 13(a), partial brain tumor is colored in blue, which indicates it is considered as part of the skin and thus is wrong; in Fig. 
13(b), partial skin is colored in green, which indicates it is considered as part of the brain tumor and thus is wrong; in Fig. 13(c), the brain tumor is barely seen, and also, partial skin is colored in green, which indicates it is considered as part of the brain tumor. Fig. 14(a)-14(d) show four frames from our FOI Dynamic Uncertainty Visualization method. Compared  to Fig. 13(a)-13(c), it is clear that the four frames from our method can better isolate and highlight the brain tumor and all its uncertainties (not how the brain tumor changes in the four frames) that are difficult to be clearly visualized by the Probabilistic Animation technique. Table 1 shows the rendering performance of various datasets in terms of Frames-Per-Second (FPS) for the 1D TF-based Visualization technique, the Probabilistic Animation tech-  nique and our FOI Dynamic Uncertainty Visualization method. All three techniques are implemented by using CUDA and are tested on the HP desktop as described at the beginning of Section IV. The window size used to render these datasets is 512 × 512. From this table it is clear that for each dataset, the Probabilistic Animation technique has the fastest rendering performance, and the 1D TF-based Visualization has the second fastest rendering performance, and our FOI Dynamic Uncertainty Visualization method has the slowest rendering performance. On average, our FOI Dynamic Uncertainty Visualization method is about 1.4 times slower than the 1D TF-based Visualization technique, and 1.5 times slower than the Probabilistic Animation technique, as illustrated in Table 1.
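Average slowdown figures of this kind can be computed directly from per-dataset FPS values: a frame rendered at F FPS takes 1/F seconds, so the slowdown of one method relative to another on a dataset is the ratio of the baseline's FPS to the slower method's FPS. A minimal sketch (the FPS numbers in the usage comment are hypothetical, not values from Table 1):

```python
def average_slowdown(fps_ours, fps_baseline):
    """Mean per-dataset slowdown factor of 'ours' relative to a baseline.

    A frame at F FPS takes 1/F seconds, so the per-dataset slowdown is
    (1/fps_ours) / (1/fps_baseline) = fps_baseline / fps_ours.
    """
    ratios = [b / o for o, b in zip(fps_ours, fps_baseline)]
    return sum(ratios) / len(ratios)

# Hypothetical example: two datasets rendered at 10 and 20 FPS by our
# method vs. 15 and 28 FPS by a baseline -> (1.5 + 1.4) / 2 = 1.45
```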

4) USER STUDY
An initial user study was conducted to demonstrate the effectiveness of our proposed FOI Dynamic Uncertainty Visualization technique. We applied our uncertainty visualization technique and two traditional uncertainty visualization techniques (1D TF-based Visualization and Probabilistic Animation) to six datasets to compare their effectiveness. Six medical experts were involved in this user study, and for each dataset, each medical expert was asked to answer two questions regarding each uncertainty visualization technique (thus a medical expert needed to answer a total of 36 questions). The two questions are: (1) how easy it is to identify a given FOI with this technique; (2) how easy it is to identify the uncertainties of a given FOI with this technique. Each medical expert answered a question by subjectively rating a numerical score on a scale of 1 to 10 (1 denotes extremely difficult or impossible, and 10 denotes extremely easy). Before the user study, we spent 10 minutes introducing how to use each uncertainty visualization technique. To eliminate rating bias, the three uncertainty visualization techniques were presented one by one in random order during the user study.

Fig. 15 and Fig. 16 show the average rating scores for Question 1 and Question 2, respectively, for the three compared uncertainty visualization techniques. The given FOI of each dataset is also listed in brackets in both figures. From both figures it is clear that, for each dataset, our proposed FOI Dynamic Uncertainty Visualization technique receives the highest rating scores among the three uncertainty visualization techniques. This indicates that it is the uncertainty visualization technique that most easily reveals the given FOI and its associated uncertainties.
Moreover, from both figures it is also clear that for some datasets, such as VisMale [41] and CT-head [41], both 1D TF-based Visualization and Probabilistic Animation receive extremely low rating scores that are close to the minimum rating score of 1. This indicates that it is extremely difficult or impossible to use them to reveal the given FOIs and their uncertainties for those datasets. However, for MRBrainTumor1 [43] and LiverTumor [42], they receive relatively high rating scores, which indicates that they can be used to reveal the given FOIs and their uncertainties, but only to a very limited extent.
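The per-technique averages plotted in Fig. 15 and Fig. 16 are simple means over the six experts' ratings for each (technique, dataset, question) combination. A minimal sketch of that aggregation (the tuple layout and names here are our own assumptions, not the study's actual data format):

```python
from collections import defaultdict

def average_ratings(responses):
    """Mean rating per (technique, dataset, question) group.

    responses : iterable of (technique, dataset, question, score) tuples,
                with score on the study's 1-10 scale.
    Returns   : dict mapping (technique, dataset, question) -> mean score.
    """
    sums = defaultdict(lambda: [0.0, 0])   # key -> [sum, count]
    for tech, dataset, question, score in responses:
        acc = sums[(tech, dataset, question)]
        acc[0] += score
        acc[1] += 1
    return {k: s / n for k, (s, n) in sums.items()}
```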

B. PROPOSED SEGMENTATION VS. TRADITIONAL SEGMENTATION
We evaluated the segmentation accuracy of our M-rule Seg. technique and three traditional segmentation techniques (S-rule Seg. [38], FCM segmentation [31], and SFCM segmentation [32]) by comparing their segmentation results with the ground truths of six datasets. Table 3 summarizes the attributes used for our M-rule Seg. and the S-rule Seg. for each dataset; the intensity attribute is used for both FCM segmentation and SFCM segmentation. We applied both FCM segmentation and SFCM segmentation only to the foreground voxels of each dataset, and the cluster that has the biggest overlap with the corresponding ground truth is retained as the final segmentation result. For each dataset, we show the two user-labelled slices, the segmentation result from our segmentation method, and the corresponding ground truth in Fig. 17(a) to 17(f), respectively.
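One standard way to quantify agreement between a binary segmentation result and its ground truth is the Dice coefficient; the sketch below is an illustrative accuracy measure assuming binary voxel masks, not necessarily the exact metric used in our evaluation:

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity between a binary segmentation and its ground truth.

    Returns 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0
```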

C. DISCUSSION
The major limitation of our method is that not every attribute combination produces good segmentation results; one has to empirically search for the optimal attribute combination in order to obtain an accurate segmentation result. Another limitation is that we used the optimization algorithm GA to select the most effective multiattribute rules for predicting the FOI. However, because the GA is computationally expensive due to the repeated calculation of fitness values, obtaining the most effective multiattribute rules with it is very time-consuming, even though we used CUDA for acceleration. This situation becomes worse when we apply the GA to extract FOIs that involve more candidate rules and more user-labelled voxels. We thus need to explore alternative optimization methods that allow us to select the most effective multiattribute rules faster.
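As a rough illustration of why fitness evaluation dominates the GA's runtime, the sketch below runs an elitist GA over 0/1 rule-selection masks and caches the fitness of each distinct individual so repeated evaluations are avoided. Everything here (the operators, rates, and toy fitness in the test) is a hypothetical simplification, not our actual GA implementation:

```python
import random

def ga_select_rules(num_rules, fitness, generations=50, pop_size=20, seed=0):
    """Toy GA sketch for selecting a subset of candidate multiattribute rules.

    Individuals are 0/1 tuples of length num_rules; fitness is evaluated
    once per distinct individual and cached, since in practice the fitness
    calculation is the expensive step.
    """
    rng = random.Random(seed)
    cache = {}

    def fit(ind):
        if ind not in cache:
            cache[ind] = fitness(ind)   # expensive call happens here
        return cache[ind]

    pop = [tuple(rng.randint(0, 1) for _ in range(num_rules))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fit, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, num_rules)   # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = rng.randrange(num_rules)        # occasional bit-flip mutation
            child[i] ^= rng.random() < 0.1
            children.append(tuple(int(g) for g in child))
        pop = parents + children
    return max(pop, key=fit)
```

Because the best half of each generation is carried over unchanged, the best cached fitness never decreases, and the cache grows only with the number of distinct individuals ever evaluated.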

V. CONCLUSIONS AND FUTURE WORK
In this paper, we proposed an uncertainty visualization technique that consists of two main parts: the first part is the M-rule Seg., which automatically segments the FOI from volume data according to the user-labelled FOI on a few 2D slices; the second is the FOI Dynamic Uncertainty Visualization, which utilizes time to fuse both FOI Enhanced Visualization and FOI Removed Visualization so as to generate automatic uncertainty visualization of the FOI. To demonstrate the effectiveness of our proposed uncertainty visualization technique, we conducted an experiment comparing it with two traditional uncertainty visualization techniques; the experimental results show that our proposed technique outperforms both. In addition, we performed an experiment comparing our proposed segmentation technique with three traditional segmentation techniques; the experimental results show that our proposed segmentation technique is more accurate, but at the cost of a longer computing time.
For future work, we would like to study length-changeable-rule-based FOI segmentation, which may generate more accurate segmentation results than the current approach. We would also like to incorporate more state-of-the-art uncertainty visualization techniques into our evaluation and utilize more advanced statistical methods to analyze their effectiveness.
JINJIN CHEN received the bachelor's and master's degrees from Jiangnan University. She is currently a Lecturer with the School of Design and Art, Communication University of Zhejiang. Her major research interests include virtual reality and interactive design.
LIYE CHEN is currently a Radiologist with the Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University.
LINJIANG JIN is currently pursuing the master's degree with the School of Computer Science and Technology, Zhejiang University of Technology. His research interest includes data visualization.
XUJIA QIN is currently a Professor with the School of Computer Science and Technology, Zhejiang University of Technology. His research interests include data visualization, computer graphics, digital image processing, and geometric modeling. VOLUME 8, 2020