
Perception-Based Transparency Optimization for Direct Volume Rendering

Abstract

The semi-transparent nature of direct volume rendered images is useful for depicting layered structures in a volume. However, obtaining a semi-transparent result with the layers clearly revealed is difficult and may involve tedious adjustment of opacity and other rendering parameters. Furthermore, the visual quality of the layers also depends on various perceptual factors. In this paper, we propose an auto-correction method for enhancing the perceived quality of the semi-transparent layers in direct volume rendered images. We introduce a suite of new measures based on psychological principles to evaluate the perceptual quality of transparent structures in the rendered images. By optimizing rendering parameters within an adaptive and intuitive user interaction process, the quality of the images is enhanced such that specific user requirements can be met. Experimental results on various datasets demonstrate the effectiveness and robustness of our method.

SECTION 1

Introduction

Volume visualization is a useful means to discover meaningful structures in volumes. It relies on proper transfer function specification to deliver the expected results according to user requirements. In typical scientific volumes, structures to be visualized may be layered or partially occluded by others in the rendered images. Instead of completely removing the occluding structures or exterior layers and assigning an opaque property to the target structures, the structures are often rendered in a semi-transparent manner to preserve their appearances and spatial information in the image, which is an advantageous characteristic of volume rendering. Despite its attractiveness, producing satisfying direct volume rendered images (DVRIs) is still a challenging research issue, as witnessed by the large amount of literature on transfer function design and volume rendering.

There are several problems in obtaining satisfactory rendered images of volumes with semi-transparent structures. First, all the constituent structures should receive a balanced and sufficiently high opacity in order to be visible in the image. As the opacity of a structure affects the visibility of other layered structures, such mutual effects are difficult to resolve when the structural complexity of the image is high. A well-balanced opacity specification or adaptive adjustment thus becomes a non-trivial problem. In fact, visibility is a necessary criterion for expressive visualization, but not a sufficient one. The perception of structure and transparency plays a more important role in viewers' understanding of the volume. Meanwhile, other visual properties like color and lighting are among the crucial factors influencing our visual perception of the structures. These factors lead to a high-dimensional parameter space, which is complicated and tedious to explore or manipulate. Therefore, an automatic or interactive adjustment method is necessary to maintain the quality of the rendered image.

Enhancing the perception of semi-transparent structures has been studied for decades. Non-photorealistic lighting [10] and visual cues [1] are often integrated in typical approaches. Psychological studies [25] have identified that luminance and contrast are two major factors in transparency perception, while the contextual information is useful to provide visual hints. Other factors including lighting, shadow, reflection, and contours are also evaluated in the psychological studies. It is thus reasonable to exploit these psychological principles to facilitate the enhancement of the visualization process.

The goal of this paper is to develop a unified framework for automatic specification of rendering parameters and interactive enhancement for DVRIs to reveal layered structures in a semi-transparent manner. Based on the perception principles, transfer functions for illustrating layered volumetric structures can be obtained by means of novel image quality measures on visibility, shape, and transparency in conjunction with a parameter optimization procedure. The result is an image quality improvement that preserves the meaningful structures while revealing the context and spatial relation of these structures.

The contributions of this paper are as follows:

  • A suite of image quality measures for assessing the effectiveness of the rendered image in revealing the layered semi-transparent structures in the volume

  • An adoption of psychological principles to derive rules to estimate the perceived transparency of structures in an image

  • A novel optimization framework for enhancing the rendering parameters and consequently the perceived quality of semi-transparent structures in the image

  • An adaptive and interactive refinement solution to obtain specific refinements on transparent structures

SECTION 2

Previous Work

In this section, we will first review some recent methods for transfer function design. The typical techniques for layer and surface visualization will then be briefly surveyed. Related work on transparency perception in psychology and perception fields will also be discussed.

Transfer Function Design. Transfer functions [20] can be categorized as data-centric or image-centric. The former determines the visual properties based on the volume data values and their derived attributes. Multi-dimensional transfer functions [14] can be defined on the local properties of the volume to reveal the target structures. Properties like curvature [13] and size [3] have also been used. Alternatively, an image-centric transfer function is designed based on the rendered images. For example, a transfer function can be searched for based on specific features [29] or the visibility [4] of structures in the rendered image. Image processing operations have also been incorporated into transfer function design [6]. To facilitate transfer function specification, many intelligent approaches have been proposed, including semi-automatic generation [5] and semantic layers [21], [24]. Our approach is both data- and image-centric, and focuses on optimizing the perception of transparent structures.

Surface/Layer Visualization. Effective surface visualization not only concerns making a surface visible in an image but also maintaining properties of the shape such as curvature, orientation, and texture. Shape-from-shadow [11] is a common approach. To enhance shape perception, Gooch et al. [10] proposed a non-photorealistic lighting model for technical illustration. Light Collages [15] use multiple lights and local illumination to adaptively enhance the shape of different parts of the structures. Lighting methods can emphasize shape perception through features such as shadows, highlights, and silhouettes in an image. Specular reflection [8] has also been proven to be an effective channel for shape perception. Another common class of techniques is shape-from-motion [27]. Through the spatial and temporal changes in a sequence of images, shape details can be reconstructed by viewers. Kinetic visualization was proposed by Lum et al. [16] to add motion cues to static objects using a particle system. For layered surfaces, texturing has also been extensively explored. Different textures were tested and a guideline on texture synthesis [1] was developed for effective layered surface visualization. Interrante et al. [12] also used textures to enhance the relative depth and features of layered surfaces. Other visual cues like boundary or silhouette contours, stereo, and occlusion can also be used to encode the shape information of surfaces. Volume illustration approaches [23] have also been proven to be an effective way to convey the structural details in volumes. In this paper, we do not work on non-photorealistic rendering or visual cues for shape but focus on the enhancement of direct volume rendered images. We believe a well-balanced rendered image is necessary before any visual effects or techniques can be applied.

Transparency Perception. The perceived transparency of structures depends on subjective human perception. This topic has been studied in psychology for decades. Various factors [7] like shadow, lighting, contrast, and color have been considered critical visual cues for revealing transparency. Metelli et al. [18] used a simple physical model to rationalize the visual perception of transparency. Luminance [9] is considered an important channel in conveying transparency information. Based on this theory, Singh and Anderson [25] formulated an extension using contrast and proposed the transmittance anchoring principle (TAP) to evaluate the transparency of layers in images. This principle was tested under various conditions [26] and has been widely used in the perception field. Commonly used visual cues for transparency actually emphasize the luminance profile of the image to enhance the transparency. For example, lighting and color [28] can be used to give distinct luminance and contrast to transparent structures. The effects on the image can be explained and evaluated by these perception rules. Our work is closely inspired by these principles, which lead to a new transparency measure and guide the optimization of appropriate transparency configurations.

SECTION 3

Enhancement Framework

The proposed enhancement framework consists of several image quality measures and an optimization process. Given a volumetric dataset, we assume that the structures in the volume are defined and assigned color and importance (opacity) values. Our objective is to automatically adjust the rendering parameters to reveal the structures in a semi-transparent manner. This semi-transparent appearance helps preserve the context and the spatial relations among layered or hierarchical structures in the volume.

Structures in Volume. To evaluate the perception of semi-transparent layers in a rendered image, the structures have to be implicitly or explicitly defined in the volume or in another feature space. In this paper, we implicitly segment the data based on intensity and treat the segmented regions as structures in the following discussion. We focus on the perceivable quality enhancement and regard the volume segmentation as an input. The discussion of volume segmentation or volume classification via transfer function design is beyond our scope. Likewise, we define the boundaries of structures as the structural layers, which can be computed with previous methods like opacity peeling [22] and the volume catcher [19].

Structural Layers in DVRI. Each structure in a volume can be treated as a structural layer projected onto the rendered image with a certain transparency. A layer can reveal the shape and appearance of the corresponding structure in the volume. However, the layers may not be perceived effectively due to various factors like poor lighting and rendering parameter settings. Image quality may also deteriorate with overlapping or adjacent layers. Our objective is to enhance the visual perception of layers in conveying the underlying structural information. Users can specify the expected visual properties (i.e., color and opacity) of each class of structures using a transfer function interface. The opacity is treated as the importance value, and the transparency of the layers in the final image is optimized with respect to it.

An overview of the framework is shown in Fig. 1. Given the volume and structural information, some invariant volume and image metrics are first pre-computed. In the optimization process, the quality of the rendered images is assessed based on three aspects of layered visual perception, namely visibility, shape, and transparency. Quantitative measures are proposed to evaluate these perceptions. The fundamental idea is to ensure that the layer and shape information of the structures is faithfully conveyed in the rendered image, i.e., the deviation between the volume content and the perceived image is minimized. We formulate the perceptual deviation as a set of energy terms for a least squares optimization, yielding an optimal rendering setting. An enhanced DVRI is produced using the optimized rendering parameters.

Fig. 1. Flow chart showing the optimization pipeline.
SECTION 4

Perceptual Quality Measures

Visibility is the first necessary condition for a structure to be clearly shown in an image. To reveal the layered structures in an image, each layer should acquire a sufficiently high opacity, and the visibility of the structures should be balanced. Provided good visibility, the shape and transparency should also be faithfully presented in the image to depict the details of structures and ensure a distinguishable appearance and correct layer perception. Several perception rules were derived from these factors in previous psychological studies [7], [25] and were supported with extensive experimental evidence. Based on these investigations, we formulate three measures to assess the quality of rendered images with layered structures.

4.1 Visibility

While a structure may be assigned a significantly high opacity, its visibility in the rendered image may still be low. In the ray casting process, each ray may pass through various structures and thus have a different composition. A layer at the back may be severely occluded if the opacity accumulated in front of it is high. The ideal situation is to have all the constituent structures contribute to the composite value of a ray in proportion to their structural composition. Our solution is to equalize the opacity of structures with respect to the portion of constituent structures (layers) in the image and the originally assigned opacity (importance). The optimal opacity setting should make the visibility of structures proportional to their constituent portion in every ray. The visibility measure is proposed to evaluate the overall deviation between the structural constituent and visibility profiles of the image.

4.1.1 Structural Opacity and Visibility Histograms

Given a volume with defined structures and associated importance (opacity), a volume analysis is performed to evaluate the composition of structures in the rendered image by considering the content of the cast rays during rendering. The opacity in the image is accumulated by the front-to-back compositing equation $$\alpha_{accum} = \alpha_o (1 - \alpha_{accum}) + \alpha_{accum} \quad (1)$$ where αo is the opacity value of the current sample. Each sample point contributes to the final image to a different degree; its contribution, or visibility αv, is defined as αo(1 − αaccum), with αaccum being the opacity accumulated in front of the sample. For each ray, we compute the voxel opacity αo (importance) of every sample point on the ray and its visible opacity αv. The voxel opacity profile along the ray indicates which voxels would be visible in the ideal case without occlusion. By comparing these two ray opacity profiles, we can compute the deviation between the actual and perceived ray constituents.

The visibility of each structure class should be kept proportional to its contribution to the ray opacity. For each ray r, the opacity αo and visibility αv of a structure s ∈ S are defined as the normalized sums of the assigned opacity and the visible opacity, respectively, of all sampled voxels belonging to that structure along the ray. The distributions of opacity and visibility of the constituent structures can be represented as histograms [4] (Fig. 2).
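As a concrete illustration, the following NumPy sketch accumulates the two per-ray profiles during front-to-back compositing. The function and variable names are ours, not from the paper, and the per-ray sampling interface is an assumption:

```python
import numpy as np

def ray_histograms(sample_opacity, sample_label, num_structures):
    """Accumulate the per-structure opacity (importance) and visibility
    profiles of one ray under front-to-back compositing (Eq. 1)."""
    h_o = np.zeros(num_structures)   # assigned opacity per structure
    h_v = np.zeros(num_structures)   # visible opacity per structure
    alpha_accum = 0.0
    for a_o, s in zip(sample_opacity, sample_label):
        a_v = a_o * (1.0 - alpha_accum)   # visibility of this sample
        alpha_accum += a_v                # Eq. 1
        h_o[s] += a_o
        h_v[s] += a_v
    # Normalize so the two distributions are directly comparable.
    if h_o.sum() > 0.0:
        h_o /= h_o.sum()
    if h_v.sum() > 0.0:
        h_v /= h_v.sum()
    return h_o, h_v
```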

Fig. 2. The opacity (importance) and visibility of structures are sampled in the ray casting process and the values from all the rays are grouped into different classes of structures and represented as histograms. The overall visibility deviation of the image is defined as the difference between the two measures and is minimized in the equalization process.

4.1.2 Deviation Measure and Equalization

The difference between the structural opacity and the perceived visibility is derived to indicate the visibility deviation. As occlusion is inevitable in ray composition, the visibility of a voxel is always lower than its initially assigned opacity. However, we can still strive for a low variance and a low average of the deviation in the optimization process. Similar to histogram equalization in image processing, we equalize the visibility of structures by minimizing the visibility deviation of the rays. The visibility deviation δv of a ray is defined as |αo − αv| per structure, and the overall deviation of the image is given by $$E_v = \frac{1}{|R|}\sum_{r \in R}\sum_{s \in S} \delta_v(s,r) = \frac{1}{|R|}\sum_{r \in R}\sum_{s \in S} \left| \alpha_o(s,r) - \alpha_v(s,r) \right| \quad (2)$$ Minimizing this deviation achieves a balanced visibility distribution and yields the first criterion for transparency enhancement. We describe the other two measures below.
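Building on the previous sketch, Eq. 2 can be evaluated directly over the per-ray histograms; again this is a minimal sketch under our own naming, not the paper's implementation:

```python
def visibility_deviation(rays, num_structures):
    """Overall visibility deviation E_v (Eq. 2): the mean over all rays of
    the summed per-structure |alpha_o - alpha_v| differences.
    `rays` is an iterable of (sample_opacity, sample_label) pairs."""
    total = 0.0
    for sample_opacity, sample_label in rays:
        h_o, h_v = ray_histograms(sample_opacity, sample_label, num_structures)
        total += np.abs(h_o - h_v).sum()
    return total / len(rays)
```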

4.2 Shape

Shape is another important perception factor that influences the perception of layered transparency. The shape can be interpreted as the variation of surface orientation. Geometric or topological details of structures provide useful information and must be faithfully depicted in the image. In fact, shape visualization is a typical perceptual problem and is generally attributed to various psychological factors [1].

Fig. 3. Shape deviation measure: volume and image variations are computed based on the volume curvature and image gradient. The composite measure estimates the deviation in the strength of variations.

The structural shape should have a strong correlation with visual variations in the image. For example, crests and valleys on a structure should result in gradual changes in image intensity. Any deviation in this correlation can affect the discrimination of the shape. To evaluate the expressiveness of the image in revealing the shapes, a measure of the overall visual shape deviation of an image is needed.

We define the structural shape vs and the image variation vi as the curvature of structures and the gradient of the image, respectively. We follow the curvature measurement methodology of [13] for volume data. In particular, we use the mean curvature H = (κ1 + κ2)/2 for the structural shape and the Sobel operator to estimate the image gradient G. The scalar magnitudes of the two measures (i.e., |H| and |G|) are normalized to [0..1]. The value of vs at a pixel is given by the weighted sum of vs of all voxels on the boundaries of structures along the ray. The deviation δs between these measures at each pixel is derived by a composite function (Eq. 3), and the overall shape deviation Es of an image I is defined as their average (Eq. 4): $$\delta_s = 1 - \exp\left(-\frac{(v_s - v_i)^2}{s}\right) \quad (3)$$ $$E_s = \frac{1}{|I|}\sum_{i \in I} \delta_s(i) \quad (4)$$ where s controls the steepness of the exponential function. The response is high if the shape value vs is high while the image variation vi is low, or vice versa. It indicates unclear structural shape or false shape cues in the image. For example, a focused spot light with a high spot exponent and a low cut-off angle gives an illusion of strong shape variation on a plane. In fact, lighting [15] and shading are the dominant shape perception factors in such images and are adjusted in the optimization process. Moreover, we agree that shading in combination with visual cues provides powerful emphasis on shapes. Based on our evaluation results, textures [1] or shape cues can be adaptively applied to the poorly perceived structures in the image.
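The measure maps directly to a few array operations. Below is a hedged NumPy/SciPy sketch of Eqs. 3 and 4; the choice of Sobel gradients follows the text, while the input packaging (a pre-projected |H| map and a luminance image) and the default steepness are our assumptions:

```python
import numpy as np
from scipy import ndimage

def shape_deviation(v_s, luminance, s=0.1):
    """Shape deviation E_s (Eqs. 3-4). `v_s` holds the projected curvature
    magnitude |H| per pixel; `luminance` is the rendered image. Both are
    2-D arrays, normalized to [0, 1] before comparison."""
    gx = ndimage.sobel(luminance, axis=0)
    gy = ndimage.sobel(luminance, axis=1)
    v_i = np.hypot(gx, gy)                      # image gradient magnitude |G|
    if v_i.max() > 0.0:
        v_i = v_i / v_i.max()
    if v_s.max() > 0.0:
        v_s = v_s / v_s.max()
    delta_s = 1.0 - np.exp(-((v_s - v_i) ** 2) / s)   # Eq. 3
    return float(delta_s.mean())                      # Eq. 4
```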

By minimizing Es, image variations are reinforced at the regions of structures in the image to reveal the underlying shape information and fine structural details. This boosts the shape perception, ensuring the correct discrimination of shape variations in the rendered image and maximizing the expressive power of the image in conveying the shape information.

An illustration using a sinc function plane is shown in Fig. 3. Random noise is added to the center peak to generate high frequency shape variations, indicated by the bright regions in the volume variation image. The composite image is generated by combining the volume variation with the image gradient (Eq. 3); its bright regions indicate unclear structural shape.

4.3 Transparency

4.3.1 Perception Theory

Equalized visibility is not always sufficient to obtain a good perception of transparent layers. Given the same opacity, the perceived transparency of a layer can still vary dramatically. For example, Wang et al. [28] showed that the color design of the layers is crucial and transparency can change with the color saturation of layers. Furthermore, lighting on the layer and context can provide hints to the recognition of the layers. Other image cues [7] like highlight, contrast, and blur also play an important role in human visual perception.

Metelli's episcotister model [18] is a widely adopted theory of transparency perception. Given a transparent layer on a bipartite background of regions A and B (Fig. 4(a)), the transmittance αmet of the layer, which indicates the fraction of light passing through the layer, is derived from the physical equations (Talbot's law) as follows: $$p = \alpha_t a + (1 - \alpha_t)t$$ $$q = \alpha_t b + (1 - \alpha_t)t$$ $$\alpha_{met} = (p - q)/(a - b) \quad (5)$$ where p, q, a, and b are the reflectances of regions P, Q, A, and B, respectively, and t is the reflectance of the layer. The reflectance can be replaced by luminance, which has been proven to be an intuitive channel to the human visual system and is more effective in rationalizing contrast perception [9]. Fig. 4(c) shows an example of two spheres overlapping in an image. Both spheres have identical opacity but different colors. The perceived transmittances of the yellow and purple layers in the overlapping region P, derived using Metelli's equation, are 0.49 and 0.36 respectively. This indicates that less light is allowed to pass through the purple layer, resulting in a higher opacity. This complies with our visual perception that P looks more purple than yellow.
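Eq. 5 reduces to a one-line computation once the four region luminances are measured; a minimal sketch (names ours):

```python
def metelli_transmittance(p, q, a, b):
    """Perceived transmittance alpha_met of a layer over a bipartite
    background (Eq. 5), from the luminances of regions P, Q, A, and B.
    Applied to the sphere example above, the luminances of the overlap
    region measured against each sphere yield the reported 0.49 and 0.36."""
    return (p - q) / (a - b)
```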

Fig. 4. Example of Metelli's theory: transparent layer on (a) bipartite and (b) inhomogeneous regions; (c) illustration using overlapping spheres.

This model can explain the transparency perception of layers with uniform luminance. For textured or complex layers, Singh et al. [25] extended the original theory into a generative model to tackle inhomogeneity in transmittance and reflectance. Inferring transparency from the luminance distribution involves scaling and anchoring problems: how the luminance ratio is mapped to the ratio of perceived transparency, and how this relative scale is anchored to an absolute one. Based on psychological observation, scaling is implied in the contour contrast, which changes linearly with the transparency. Moreover, according to the transmittance anchoring principle (TAP), the highest-contrast segment along a contour serves as an anchor for determining the absolute scale of lower-contrast regions. This model was validated in experiments on various inhomogeneous surfaces and media. The transmittance is defined in terms of the contrast (range of luminance I) of the background A and the transparent region P (Fig. 4(b)): $$\alpha_{tap} = \frac{I_{max}(P) - I_{min}(P)}{I_{max}(A) - I_{min}(A)} = \frac{I_{range}(P)}{I_{range}(A)} \quad (6)$$ This implies that the contrast of the underlying contour is reduced by the overlaid transparent layer. Singh et al. [26] further improved the model by replacing the luminance range with the Michelson contrast: $$C = \frac{I_{max} - I_{min}}{I_{max} + I_{min}} \quad (7)$$ To avoid the luminance range being affected by noise, Imin and Imax are defined as max(0, μ − 2σ) and min(IMAX, μ + 2σ), given that I is in [0, IMAX] and μ, σ are the mean and standard deviation of the luminance.
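A sketch of the TAP transmittance with the noise-robust extrema and Michelson contrast described above; the guard clauses for degenerate regions are our addition:

```python
import numpy as np

def robust_extrema(I, I_max_display=1.0):
    """Noise-robust luminance extrema: clamp to mean +/- 2 sigma (Sec. 4.3.1)."""
    mu, sigma = float(I.mean()), float(I.std())
    return max(0.0, mu - 2.0 * sigma), min(I_max_display, mu + 2.0 * sigma)

def michelson_contrast(I):
    """Michelson contrast (Eq. 7) of a region's luminance samples."""
    lo, hi = robust_extrema(I)
    return (hi - lo) / (hi + lo) if (hi + lo) > 0.0 else 0.0

def tap_transmittance(I_overlay, I_background):
    """TAP transmittance (Eq. 6), with contrast replacing the raw range [26]."""
    c_bg = michelson_contrast(I_background)
    return michelson_contrast(I_overlay) / c_bg if c_bg > 0.0 else 1.0
```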

4.3.2 Transparency Measure

In our paper, we adopt a hybrid approach combining TAP and Metelli's model. In homogeneous regions with low contrast, Metelli's model can accurately estimate the transparency, while TAP (Eq. 6) becomes unstable for small luminance ranges. Therefore, a modulation is performed on the transparency: $$\alpha_t = \alpha_{met} h + \alpha_{tap}(1 - h) \quad (8)$$ where h = (IMAX − Irange)/IMAX, so that αt tends to αmet as the region becomes homogeneous. The perceived opacity is interpreted as 1 − αt.
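The blend itself is straightforward; a minimal sketch of Eq. 8, assuming the two estimates and the region's luminance range have already been computed:

```python
def hybrid_transparency(alpha_met, alpha_tap, I_range, I_max=1.0):
    """Hybrid transmittance estimate (Eq. 8): trust Metelli's model in
    homogeneous (low-contrast) regions and TAP where contrast is high."""
    h = (I_max - I_range) / I_max        # homogeneity; 1 for flat regions
    alpha_t = alpha_met * h + alpha_tap * (1.0 - h)
    return alpha_t                       # perceived opacity is 1 - alpha_t
```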

With this transparency perception model, we can compute the transparency of the layers as well as the perception deviation in different regions of the image. In the preprocessing procedure, the structural layer composition at different parts of the image is computed and the image is decomposed into sub-regions based on this composition. The regions are then classified into different types (empty, plain view, or overlay) and the perceived transparency of each constituent layer is computed, as shown in Fig. 5(a). An empty region contains no layer, while a plain view region contains only one layer. An overlay region contains more than one layer and is either an overlapping or an enclosed region of structural layers. A structural layer may be decomposed into several sub-regions, and the relations between the structural layers are defined as separate, touch, overlap, or enclose using the region connection calculus [2]. Based on this information, we can determine the relation property of the constituent layers in each region, which in turn determines the perceived transparency as well as the transparency perception deviation δt of the region. An illustration of the perceived transparency computation is shown in Fig. 5(a) and the rules are summarized as follows:

Fig. 5. Structural layers and sub-regions in image: (a) shows different kinds of relations between the structural layers. The image is divided into sub-regions based on the layer composition. Another example of overlapping and enclosed layers is shown in (b).

Case 1: For an empty region (R0) or a plain view region (R3) with a separate or touching layer, the deviation is zero because no layered structure is present.

Case 2: For a plain view region (R1, R4, R6) with an enclosing or overlapping layer, the transparency of the layer can be derived by Eq. 6 for the enclosing layer (L1) or Eq. 8 for the overlapping layers (L3, L4).

Case 3: For an overlay region (R2, R5), the transparency of each layer is determined by its relation property. If a layer belongs to an enclosing layer (L1 in R2), the transparency is the same as that in the enclosing region (L1 in R1). If the layer belongs to an overlapping layer (L3 and L4 in R5), the transparency can be derived by Eq. 8.

Fig. 5(b) shows a more complicated example of regions with three or more layers. Regions R0, R1, R2, R3, and R8 can be evaluated by the above rules. Regions R4, R7, and R6 consist of three layers. The enclosing layer L3 is the same as that in R1 and is derived by Case 2, where R1 is the background and all regions enclosed by L3 are treated as a transparent layer. The overlapping layers in R4, R7, and R6 can be reduced to the simple case by ignoring the enclosing layer L3. Region R5 contains three overlapping layers L = {L0, L1, L2}. The transparency of a layer Li can be resolved by treating the other overlapping layers L − Li as a single layer. Metelli's equation can then be generalized to $$\alpha_{L_i} = \frac{I_{\{L\}} - I_{\{L_i\}}}{I_{\{L - L_i\}} - I_{\emptyset}} \quad (9)$$ where I∅ is the luminance of the empty view excluding any enclosing layer. For example, the transparency of L2 in R5 is computed as (IR5 − IR3)/(IR6 − IR1).
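Eq. 9 is again a single ratio once the four region luminances are identified; a small sketch (argument names ours):

```python
def multilayer_transmittance(I_all, I_layer, I_others, I_empty):
    """Generalized Metelli estimate (Eq. 9) for one of several overlapping
    layers: the remaining layers are treated as a single background layer.
    E.g., for L2 in R5: multilayer_transmittance(I_R5, I_R3, I_R6, I_R1)."""
    return (I_all - I_layer) / (I_others - I_empty)
```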

Given the perceived transparency αt of the non-empty regions, the transparency deviation δt of a region is derived as the sum of squared differences between the perceived opacity 1 − αt and the structural opacity αo over all the constituent layers. The overall transparency deviation Et is given by $$\delta_t = \sum_{i \in L} \left(1 - \alpha_t(i) - \alpha_o(i)\right)^2 \quad (10)$$ $$E_t = \frac{1}{|R|}\sum_{r \in R} \omega_r \delta_t(r) \quad (11)$$ where R is the set of all non-empty regions in the image and ωr is the weight of region r, given by its portion of the image area. By minimizing the transparency deviation, the perceived transparency of the structures is brought closer to the expected transparency or composition of the structures in the image.
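A hedged sketch of Eqs. 10 and 11 over the decomposed regions; the data layout (area weight plus per-layer transmittance/opacity pairs) is our assumption:

```python
def transparency_deviation(regions):
    """Overall transparency deviation E_t (Eqs. 10-11). Each entry of
    `regions` is (area_weight, [(alpha_t, alpha_o), ...]) listing the
    perceived transmittance and assigned opacity of each constituent layer."""
    E_t = 0.0
    for w, layers in regions:
        delta_t = sum(((1.0 - a_t) - a_o) ** 2 for a_t, a_o in layers)  # Eq. 10
        E_t += w * delta_t
    return E_t / len(regions)                                           # Eq. 11
```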

SECTION 5

Optimization

Recall that the perception of a semi-transparent structure in a DVRI is driven by visibility, shape, and transparency, which are governed by rendering parameters including opacity, lighting, and color. To faithfully depict the structures in the image, we have to solve for an optimal parameter setting for rendering. The three measures are used to drive the optimization of the rendering parameters towards an optimal rendered result. More specifically, the rendering parameters (transfer functions) of the structures are formulated as an objective function f and optimized as a least squares problem [17]. Our objective is to minimize the perception deviation (the measures) of the overall image by fitting an optimal parameter configuration for the volume at a specific viewpoint. For each ray r, we derive the energy as $$E = \omega_v E_v + \omega_s E_s + \omega_t E_t \quad (12)$$ where ωv, ωs, and ωt are the weights of the measures. We set up an over-determined system of all the ray quality equations and compute the sum of residuals $$S = \sum_{r \in R} E(f, r)^2 \quad (13)$$ The optimal solution with the minimum residual is derived by finding an f for the given DVRI such that $$\arg\min_f \{E(f)\} \quad (14)$$ To solve this non-linear least squares problem, the parameters are refined by an iterative solver [17]. We adopt the conjugate gradient method, a widely used search method with good convergence behavior. Because it is difficult to explicitly compute analytical expressions for the partial derivatives of the measure equations, we approximate them empirically by sampling the image with different f. Based on the steepest descent direction −∇f E(f) and the derived conjugate direction Λf, the transfer function is updated as $$f_{n+1} = f_n + \alpha_n \Lambda f_n \quad (15)$$ where αn is given by argminα E(fn + α Λfn).
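To make the update rule concrete, here is a minimal sketch of a nonlinear conjugate gradient loop with empirically sampled derivatives. It uses a Fletcher-Reeves update and a coarse grid line search as stand-ins, since the paper does not specify these details; `E` is any callable that renders with parameter vector `f` and returns the combined energy of Eq. 12:

```python
import numpy as np

def numerical_gradient(E, f, eps=1e-3):
    """Empirical partial derivatives of the image energy: re-evaluate E with
    perturbed parameter vectors (central differences)."""
    g = np.zeros_like(f)
    for i in range(f.size):
        d = np.zeros_like(f)
        d[i] = eps
        g[i] = (E(f + d) - E(f - d)) / (2.0 * eps)
    return g

def conjugate_gradient(E, f0, iters=10):
    """Iterative refinement in the spirit of Sec. 5 [17]: conjugate
    directions with a coarse line search for alpha_n."""
    f = np.asarray(f0, dtype=float).copy()
    g = numerical_gradient(E, f)
    d = -g                                    # steepest-descent start
    for _ in range(iters):
        alphas = np.linspace(0.0, 1.0, 11)    # alpha_n = argmin E(f + alpha d)
        alpha = min(alphas, key=lambda a: E(f + a * d))
        f = f + alpha * d                     # Eq. 15
        g_new = numerical_gradient(E, f)
        beta = g_new.dot(g_new) / max(g.dot(g), 1e-12)   # Fletcher-Reeves
        d = -g_new + beta * d                 # conjugate direction
        g = g_new
    return f
```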

Optimizing all the rendering parameters simultaneously is inefficient; thus, a hierarchical approach is adopted. The visibility of structures is first optimized by adjusting the opacity. Once every structure is basically visible in the image, the shape of the structures is preserved by proper lighting. The transparency perception of the structures is finally enhanced with proper color. The optimization is performed sequentially and each step involves only a subset of the parameters. The importance and color of structures provided by the users, together with the default light parameters, serve as the initial guess for the optimization. To avoid local optima, simulated annealing is applied to the parameter optimization. For example, a high transition probability is assigned to opacity if the current visibility deviation is high or does not show a significant improvement over its initial value.
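The staged scheme can be expressed as repeated sub-optimizations over parameter subsets, reusing the solver sketched above. How the opacity, lighting, and color parameters are packed into one vector is our assumption, and the simulated annealing moves are omitted for brevity:

```python
def hierarchical_optimize(f0, stages):
    """Stage-wise refinement (Sec. 5): optimize opacity against E_v, then
    lighting against E_s, then color against E_t. `stages` is a list of
    (parameter_slice, energy_fn) pairs over a flat parameter vector."""
    f = np.asarray(f0, dtype=float).copy()
    for sl, energy in stages:
        def stage_energy(sub, sl=sl):
            g = f.copy()
            g[sl] = sub                  # vary only this stage's parameters
            return energy(g)
        f[sl] = conjugate_gradient(stage_energy, f[sl])
    return f
```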

Fig. 6. (a) Our user interface for selective enhancement of DVRIs; (b) Luminance chart showing the relation between luminance and transparency; (c) Calibrated chart based on user perception.
SECTION 6

Adaptive Refinement

A globally optimized solution may not be appropriate for all parts of the image. The optimized configuration may be biased towards the dominating structures in the image and leave the less significant structures unenhanced. Our system allows users to selectively enhance a specific part of the image or specific structures. To guide the interactive enhancement process, an image quality map interface shows the deviation measure values at different parts of the image. Users can use a lens-like tool to specify the regions with poor quality with reference to the quality map. These regions are then enhanced individually. Users can also specify an expected visibility or transparency for the structures of interest within the region to ensure that they are clearly shown and enhanced in the refined image. An example is shown in Fig. 6(a).

In addition, as the perceived transparency may vary slightly between viewers and may not always change linearly with the luminance mean and contrast, a calibration tool is provided to estimate the transparency-contrast relation (Fig. 6(b)). According to the perception theory, the expected luminance values of different transparencies are represented by the straight line between the background luminance (L0, R0) and the origin. From the psychological experiments [25], we can observe that user perception falls within the region shown in the chart; the exact perception can deviate slightly from the straight red line in Fig. 6(c). To calibrate the curve, a test on sample images is performed to record the user's perception of layers with different transparencies. After the test, a calibrated curve representing the user's perception is computed, and the perceived transparency can be located on this curve. The transmittance value derived from the contrast ratio (Eq. 6) is adjusted according to this curve in the optimization process.
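One simple realization of this correction is a monotone lookup fitted to the calibration responses; piecewise-linear interpolation here is our choice, as the paper does not state the curve model:

```python
import numpy as np

def calibrated_transmittance(alpha_tap, measured, reported):
    """Correct a contrast-derived transmittance (Eq. 6) using calibration
    data (Sec. 6): `measured` holds the model's transparencies for the
    sample images, `reported` the user's perceived values."""
    order = np.argsort(measured)
    xs = np.asarray(measured, dtype=float)[order]
    ys = np.asarray(reported, dtype=float)[order]
    return float(np.interp(alpha_tap, xs, ys))
```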

SECTION 7

Experiments

We conducted experiments on several datasets to demonstrate the quality measures and the optimization of rendered images. Our system ran on a Dell machine (Intel Core 2 Duo 6400, 2 GB RAM) equipped with an NVIDIA GeForce 7600GTS graphics card. The volumes were pre-segmented. Results for the individual measures are discussed first, followed by two comprehensive results.

We first used a carp dataset (256 × 256 × 512), shown in Fig. 7, to demonstrate visibility equalization (opacity optimization). The skin and bones of the carp were assigned importance values of 0.2 and 0.8, and our objective was to balance the overall visibility of each structure according to this weighting. The equalization of the opacity was performed by minimizing the visibility deviation from the importance weighting (i.e., the visibility measure). The measure image indicating the deviation of the original DVRI shows that the bones were occluded. After the opacity of the structures was optimized, the overall deviation of the image was lowered and the bone structures were revealed according to the importance ratio. Optimized results with other importance ratios are also shown in the figure. By observing the resulting quality image and the refined DVRI, we can see that the carp was rendered in a semi-transparent manner with balanced visibility after the optimization based on our measure.

Fig. 7. Experiment on the visibility equalization using a carp dataset. The skin and bone structures are given the importance values of 0.2 and 0.8. Results of other importance ratios are shown at the bottom.

To demonstrate the shape enhancement result, we conducted an experiment on a CT head dataset (128 × 128 × 231), shown in Fig. 8. The shape measure image indicates the shape variations and the perception deviation in the DVRI. To obtain a better shape perception of the face and skull, the lighting parameters of the structures have to be optimized to emphasize the shape variations on both layers. In the optimization process, the parameters obtained for different structures can differ. For example, a large specular highlight (small specular reflection exponent) is exerted on a large and smooth surface (e.g., the skin). By optimizing the lighting parameters of the structures, the shapes as well as the features on different layers of the head were enhanced. This is reflected in the reduced overall value of the resulting shape measure image. From the experiment, we can see that illumination is important to shape enhancement and that our measure effectively drives the optimization of illumination to achieve better shape-revealing results.

To demonstrate the transparency measure, an experiment was conducted on a protein (Neghip) molecule dataset (64 × 64 × 64) obtained by simulation. To show the effect of color on transparency perception, DVRIs of the molecule were generated with different color saturation values (Fig. 9). In the DVRIs, the layers of structures overlap, and the change in color saturation results in different transparencies of the outer layer. The perceived transparency of the outer layer decreases with the saturation; thus, the overlapped inner structures become less visible. Based on the TAP, we can derive the perceived transparency of the structures in the DVRIs. These results show that the transparency perception depends not only on visibility but also on the color and appearance of the layer. To faithfully present the structures in the image, we should optimize the transparency of structures in the image in addition to the opacity of each constituent structure. From the experiment, we can see that the TAP can effectively estimate the perceived transparency of layers in the volume, and the result follows the previous psychological findings. Various transparency effects can be achieved by optimizing the color using our measure.

Fig. 8. Experiment on the shape enhancement using a CT head dataset. (a) The original DVRI; (b) An image that indicates the unclear or missing details (deviation) on the DVRI and measures the shape; (c)-(d) Optimizing the lighting parameters for each structure in the data; (e) Final shape-enhanced result.
Fig. 9. Experiment on the effect of color on transparency perception using Neghip dataset. The outer layer structures in pink are assigned with different saturation values. From left to right: 100%, 75%, 50%, and 25%. The results show different transparency effects.

Comprehensive experiments were conducted to illustrate the complete optimization pipeline. We first demonstrate how to generate a semi-transparent style DVRI of a CT engine dataset (256 × 256 × 128) with layered structures, as shown in Fig. 10. The opacity equalization was first performed to balance the visibility of the interior and exterior structures, generating semi-transparent layers of structures. Afterwards, the lighting parameters were optimized to enhance the overall shape perception of the transparent layers. The results show that the features of the structures were better preserved in the image. To ensure that the perceived transparency complies with the composition (structural opacity) of the layers, the colors of the structures were optimized with respect to the TAP-based transparency measure and the expected transparency. Another experiment was conducted on a CT chest dataset (384 × 384 × 240), as shown in Fig. 11. The result shows that the structures became more distinguishable and the details were better preserved after the optimization of visibility, shape, and transparency. From these experiments, we can see that our optimization method allows high quality semi-transparent style DVRIs to be generated with optimal visibility, shape, and transparency perception.

Fig. 10. Experiment on a CT engine dataset. (a) The original DVRI; (b) Equalizing visibility through an opacity optimization; (c) Optimizing the lighting parameters for shape enhancement; (d) Adjusting the color for correct transparency perception.
Fig. 11. Experiment on a CT chest dataset. (a) The original DVRI; (b) The bones are visible after an opacity optimization; (c) The visible structures are further enhanced after a shape enhancement; (d) The transparent structures are more distinguishable after a transparency adjustment.

The performance of the system benefits from the hierarchical optimization, which involves only a subset of the parameters in each step. The results usually converge within 10 iterations per step. In our experiments, we found that the expected opacity (importance values), the colors of structures, and the default lighting parameters already provide a good initial guess for the optimization, so the enhancement can be carried out very efficiently from these settings. As the partial derivatives are empirically estimated on the rendered image, the process can be sped up by sampling the image. The performance depends on the computation in the ray and structural analysis, which increases with the size and complexity of the volume as well as the image resolution. For the CT chest dataset, each iteration takes about 0.2 s and the whole process completes in 10 s at an image resolution of 512 × 512.

SECTION 8

Evaluation and Discussions

To validate our method, we invited 20 graduate students to participate in a user study. Before the test, they were given sample images illustrating different degrees of transparency as a reference for quantitative judgment of perceived transparency. While the effects of opacity and lighting on visibility and shape perception have been studied [4], [15], we specifically studied the correlation between transparency and color by conducting controlled experiments. The subjects were first asked to rank a set of DVRIs, generated by adjusting the colors of different structures, based on the perceived transparency. The rankings were compared with the measured transparency (Section 4.3). The results showed that the rankings of 85% of the subjects coincided with those of the measured values.

To study the transparency perception more quantitatively, the subjects were then asked to evaluate the degrees of perceived transparency of the structures in 10 DVRIs of the 5 datasets used in our experiments (Section 7). The reported values were compared with the measured transparency of the structures. The results showed that most subjects could perceive the correct degree of transparency of the structures. While visual perception varied among viewers, the means of the perceived transparency were close to the measured transparency, with relative differences between 3.7% and 8.2%. The relative standard deviations of the results were between 9% and 16%. A single-sample t-test was also conducted on the results for each image, with the measured transparency value as the hypothetical mean. The p-values were between 0.02 and 0.21; 2 results generated from the CT head dataset fell below the significance level of 0.05. The minor errors could be attributed to the users' varied experience with transparent structures and to deviations due to existing image cues [7]. In general, there was no significant difference between the perceived and measured values. The user study demonstrated that the transparency measure can correctly estimate the perceived transparency.

Finally, the subjects were asked to rate the improvement and quality of the images throughout the optimization process. The feedback indicated that layered transparent structures (e.g., the ribs in the chest image) might not be distinguishable even after the visibility equalization, but an improvement was observed after the shape and transparency adjustments. This indicates that a good perception of transparent structures relies not only on visibility (opacity) but also on color and lighting. All the subjects agreed that color and opacity are important visual factors, and 70% also thought that lighting can improve the visual quality of transparent layers. 90% of the subjects rated the improvement in visual quality as good or significant. This study demonstrated that all three measures introduced in the paper have value and can improve the perception of transparent layers.

Our method is an improvement over conventional methods. Compared with manual specification of transfer functions, our approach does not require any user involvement. Manual manipulation highly depends on user expertise; many end-users, such as doctors and scientists, do not have the expertise to directly manipulate transfer functions and lighting parameters, so it is unlikely that they could obtain results of similar quality manually. By using the proposed measures, optimal and objective results based on human perception can be obtained automatically. Moreover, the specific and adaptive refinements on each layer cannot be achieved manually using typical intensity- or class-based transfer function interfaces. Compared with the semi-automatic approach [5], which generates the opacity transfer function based on the histogram volume structure (i.e., boundaries), our solution is a data- and image-centric approach and can provide comprehensive optimization over more rendering parameters, including color and lighting. While the semi-automatic approach provides a high-level interface for opacity specification, our method can automatically generate semi-transparent layers with proper opacity, color, and lighting as well. Recall that visibility (opacity) is only one of the factors in transparency perception, while color also plays an important role [28]. Our method can comprehensively optimize different rendering parameters such that the perception of each layered structure in the image is better reinforced. Different from typical transfer function design approaches [3], [13], in which specific data features are incorporated into a multidimensional transfer function [14], we focus on the effectiveness of the resulting images in conveying the layered features in a semi-transparent manner, ensuring that specific features are not only enhanced and rendered properly but also faithfully perceived in the images. Our solution provides an additional perception-based quality enhancement of the image, which has not been addressed in previous approaches.

There are several limitations in applying the measures to rendered images. Recall that the layers of structures have to be implicitly or explicitly defined for the perception measurement; an intuitive segmentation or feature specification tool is necessary for this purpose. Furthermore, determining the transparency of a single layer against a plain background is fundamentally an ill-posed problem. Our perception measures use the visual hints available in the image to estimate its quality based on user perception; such hints must be available, as they are also required by humans to make correct visual judgments. Moreover, the improvement may be limited if many layered structures are coupled in the image. Human vision can only handle a limited number of layered structures, and the perceived quality of each layer deteriorates in complex data. Thus, visual cues should be added to the poorly perceived regions indicated by our measures, in addition to optimizing the rendering parameters.

SECTION 9

Conclusion

In this paper, we proposed a DVRI enhancement solution based on perception principles. Three measures were designed to evaluate the perception of semi-transparent layers from the visibility, shape, and transparency aspects. Rendering parameters were optimized based on these measures to deliver results complying with viewers' perception. High quality semi-transparent style DVRIs with faithfully revealed structures can be generated automatically using our method. Although our work focuses on the optimization of direct volume rendering, the measures provide good indications of structural perception, so additional visual cues like textures and shape cues can be adaptively applied to enhance the expressiveness of the image.

While opacity is usually considered the determinant factor for the transparency of structures, our experiments showed that color and contrast also affect the visual perception of transparency. Our optimization method takes these factors into account to deliver results that faithfully reveal the layered structures in a semi-transparent manner and ensure a correct perception. Our method also eases the fine-tuning of parameters for transparent results. In the future, we will extend the current static-viewpoint solution to efficient image refinement of dynamic views for interactive exploration.

Acknowledgments

This research was partially supported by grant HK RGC CERG 618706, the 973 program of China (2010CB732504), and NSF China (No. 60873123). We thank the reviewers for their valuable comments.

Footnotes

Ming-Yuen Chan, Yingcai Wu, Wai-Ho Mak, and Huamin Qu are with the Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, E-mail: pazuchan@cse.ust.hk | wuyc@cse.ust.hk | nullmak@cse.ust.hk | huamin@cse.ust.hk.

Wei Chen is with State Key Lab of CAD&CG, Zhejiang University, E-mail: chenwei@cad.zju.edu.cn.

Manuscript received 31 March 2009; accepted 27 July 2009; posted online 11 October 2009; mailed on 5 October 2009.

For information on obtaining reprints of this article, please send email to: tvcg@computer.org.

References

[1] A. Bair, D. H. House, and C. Ware. Texturing of layered surfaces for optimal viewing. IEEE Transactions on Visualization and Computer Graphics, 12(5):1125-1132, 2006.

[2] M.-Y. Chan, H. Qu, K.-K. Chung, W.-H. Mak, and Y. Wu. Relation-aware volume exploration pipeline. IEEE Transactions on Visualization and Computer Graphics, 14(6):1683-1690, 2008.

[3] C. Correa and K.-L. Ma. Size-based transfer functions: A new volume exploration technique. IEEE Transactions on Visualization and Computer Graphics, 14(6):1380-1387, 2008.

[4] C. Correa and K.-L. Ma. Visibility-driven transfer functions. In IEEE Pacific Visualization Symposium, pages 177-184, 2009.

[5] G. Kindlmann and J. W. Durkin. Semi-automatic generation of transfer functions for direct volume rendering. In IEEE Symposium on Volume Visualization, pages 79-86, 1998.

[6] S. Fang, T. Biddlecome, and M. Tuceryan. Image-based transfer function design for data exploration in volume visualization. In IEEE Visualization, pages 319-326, 1998.

[7] R. W. Fleming and H. H. Bülthoff. Low-level image cues in the perception of translucent materials. ACM Transactions on Applied Perception, 2(3):346-382, 2005.

[8] R. W. Fleming, A. Torralba, and E. H. Adelson. Specular reflections and the perception of shape. Journal of Vision, 4(9):798-820, 2004.

[9] W. Gerbino, C. Stultiens, J. Troost, and C. de Weert. Transparent layer constancy. Journal of Experimental Psychology: Human Perception and Performance, 16:3-20, 1990.

[10] A. Gooch, B. Gooch, P. S. Shirley, and E. Cohen. A non-photorealistic lighting model for automatic technical illustration. In SIGGRAPH, pages 447-452, 1998.

[11] M. Hatzitheodorou. The derivation of 3-D surface shape from shadows. In Proc. Image Understanding Workshop, pages 1012-1020, 1989.

[12] V. Interrante, H. Fuchs, and S. M. Pizer. Conveying the 3D shape of smoothly curving transparent surfaces via texture. IEEE Transactions on Visualization and Computer Graphics, 3(2):98-117, 1997.

[13] G. L. Kindlmann, R. T. Whitaker, T. Tasdizen, and T. Möller. Curvature-based transfer functions for direct volume rendering: Methods and applications. In IEEE Visualization, pages 513-520, 2003.

[14] J. Kniss, G. Kindlmann, and C. Hansen. Multidimensional transfer functions for interactive volume rendering. IEEE Transactions on Visualization and Computer Graphics, 8(3):270-285, 2002.

[15] C. H. Lee, X. Hao, and A. Varshney. Light collages: Lighting design for effective visualization. In IEEE Visualization, pages 281-288, 2004.

[16] E. Lum, A. Stompel, and K.-L. Ma. Kinetic visualization. IEEE Transactions on Visualization and Computer Graphics, 9(2):115-126, 2003.

[17] K. Madsen, H. B. Nielsen, and O. Tingleff. Methods for Non-Linear Least Squares Problems. 2004.

[18] F. Metelli, O. D. Pos, and A. Cavedon. Balanced and unbalanced, complete and partial transparency. Perception and Psychophysics, 38(4):354-366, 1985.

[19] S. Owada, F. Nielsen, and T. Igarashi. Volume catcher. In Symposium on Interactive 3D Graphics, pages 111-116, 2005.

[20] H. Pfister, B. Lorensen, C. Bajaj, G. Kindlmann, W. Schroeder, L. S. Avila, K. Martin, R. Machiraju, and J. Lee. The transfer function bake-off. IEEE Computer Graphics and Applications, 21(3):16-22, 2001.

[21] P. Rautek, S. Bruckner, and E. Gröller. Semantic layers for illustrative volume rendering. IEEE Transactions on Visualization and Computer Graphics, 13(6):1336-1343, 2007.

[22] C. Rezk-Salama and A. Kolb. Opacity peeling for direct volume rendering. Computer Graphics Forum, 25(3):597-606, 2006.

[23] P. Rheingans and D. Ebert. Volume illustration: Nonphotorealistic rendering of volume models. IEEE Transactions on Visualization and Computer Graphics, 7(3):253-264, 2001.

[24] C. R. Salama, M. Keller, and P. Kohlmann. High-level user interfaces for transfer function design with semantics. IEEE Transactions on Visualization and Computer Graphics, 12(5):1021-1028, 2006.

[25] M. Singh and B. L. Anderson. Toward a perceptual theory of transparency. Psychological Review, 109(3):492-519, 2002.

[26] M. Singh, J. D. Kadt, and B. L. Anderson. Predicting perceived transparency in textured displays. Journal of Vision, 1(3):277, 2003.

[27] S. Treue, M. Husain, and R. Andersen. Human perception of structure from motion. Vision Research, 31:59-75, 1991.

[28] L. Wang, J. Giesen, K. T. McDonnell, P. Zolliker, and K. Mueller. Color design for illustrative visualization. IEEE Transactions on Visualization and Computer Graphics, 14(6):1739-1754, 2008.

[29] Y. Wu and H. Qu. Interactive transfer function design based on editing direct volume rendered images. IEEE Transactions on Visualization and Computer Graphics, 13(5):1027-1040, 2007.


Keywords

rendering (computer graphics), user interfaces.
