Hue-Preserving Color Blending

Abstract

We propose a new perception-guided compositing operator for color blending. The operator maintains the same rules for achromatic compositing as standard operators (such as the over operator), but it modifies the computation of the chromatic channels. Chromatic compositing aims at preserving the hue of the input colors; color continuity is achieved by reducing the saturation of colors that are to change their hue value. The main benefit of hue preservation is that color can be used for proper visual labeling, even under the constraint of transparency rendering or image overlays. Therefore, the visualization of nominal data is improved. Hue-preserving blending can be used in any existing compositing algorithm, and it is particularly useful for volume rendering. The usefulness of hue-preserving blending and its visual characteristics are shown for several examples of volume visualization.

SECTION 1

Introduction

Color and transparency play fundamental roles in visualization. For example, color mapping is frequently and effectively used to display quantitative and qualitative data. Color is particularly effective for visual grouping, which can be utilized for labeling regions of a data set and, thus, for visualizing nominal data. Transparency is almost indispensable when displaying 3D structures because it is one way of alleviating occlusion problems. For example, direct volume visualization depends heavily on transparency to show the complete 3D structure of a volumetric scalar field.

This paper focuses on the combination of color and transparency for the visualization of nominal data. An important application example is volume visualization with a transfer function that classifies different materials and shows them by clearly distinct colors. For example, 3D medical images like CT or MRI are often classified according to their material components that are then labeled by different colors. Since we propose a modification of the basic Porter and Duff compositing operators [27], our approach is generic and may be applied to any transparent image overlay for visualization or illustrative application.

Individually, color and transparency have been extensively investigated in both the visualization and perception literature. However, the interaction between color and transparency in terms of perception has previously played a very limited role in visualization. The most significant and directly related prior visualization work is by Wang et al. [31], who propose that opposite colors should be used for two semi-transparent layers to avoid hue shift after alpha blending. In addition, the saturation of the input colors may be modified to reduce hue effects. We follow their rationale that hue shift should be avoided to prevent problems from false colors and the respective mislabeling of nominal data, and we extend their color-design guidelines to a complete and robust computational model.

We contribute a parameter-free model for generic image compositing that handles hue, saturation, and brightness separately in order to have independent control over achromatic and chromatic compositing (Section 4). Blending of brightness resembles existing and established blending schemes, whereas for hue we keep the hue of the dominant input color (the color with the strongest impact on the final image). Color discontinuities that might be introduced by naive hue preservation are avoided by smoothly adjusting saturation. An important practical benefit is that our approach only affects the additive aspect of color compositing and, thus, any kind of Porter and Duff compositing operator can be extended to observe hue preservation. In particular, the over operator for the discretization of volume rendering can be made hue-preserving. Several examples of hue-preserving volume rendering are shown and discussed in Section 5. Hue-preserving blending is founded on results from research in perceptual psychology and psychophysics, which we review in Section 2. Those results guided the development of our blending model, as documented in the form of respective design criteria and requirements (Section 3).

Figure 1 compares traditional blending with hue-preserving blending for a typical example of direct volume visualization. The rendered data set shows the scan of a tomato. The transfer function was chosen with variable opacity, but only three different, discrete colors: red, orange, and yellow. Traditional blending (Figure 1 (left)) leads to a mix of those input colors so that different materials tend to be hard to distinguish. In particular, the outer peel (red) and next layer (orange) are indistinguishable. In contrast, hue-preserving blending (Figure 1 (middle)) separates all regions clearly—even the outer peel from the rest of the tomato.

SECTION 2

Background

Semi-transparency has played a relevant role in previous research on human visual perception, visualization, and computer graphics alike. We review related work from those research areas.

In computer graphics and visualization, the focal point is the algorithmic aspect and efficiency of semi-transparent rendering, either for rendering semi-transparent surface geometry or participating volumetric media (see Engel et al. [12]). The conceptual basis for both surface and volume rendering with transparency is usually related to alpha blending, also known as the over operator in image compositing according to Porter and Duff [27]. Alpha blending can be directly applied to semi-transparent surface overlays and indirectly to volume rendering, where the volume rendering integral can be discretized by iterative application of alpha blending in back-to-front order. While global illumination effects for translucent materials are highly relevant for photorealistic rendering (e.g., subsurface scattering [32] or translucent volume visualization [22]), volume rendering with single scattering is the de facto standard for direct volume visualization due to its efficiency and ease of interpretation by the user. This paper also relies on the algorithmic basis of semi-transparent rendering, adopting the fundamental idea of a compositing operator that (subsequently) blends two images. Our main goal is volume visualization, but we also consider generic image overlays for visualization or illustrative purposes.

This paper focuses on a modification of the Porter and Duff compositing operator, guided by human visual perception. We consider previous work in the perception literature because perceptual transparency is not identical to physical transparency, which is typically the starting point in computer graphics. Perceptual transparency relates to the perception of two objects, where one object is recognized as being in front of the other background object [23]. A key observation is that perceived transparency is not based on inverse physical optics, but is influenced by low-level, mid-level, and high-level components of visual perception. In fact, physical transparency is neither sufficient nor necessary for perceptual transparency. Transparency perception is affected by many different aspects, including luminance, chromaticity, apparent motion, stereo depth, subjective contours, and figural organization. In particular, figural organization and geometric aspects play an important role, such as x-junctions [5], part boundaries [30], or Gestalt aspects [25], [20].

We restrict ourselves to per-pixel compositing of images and, therefore, focus on the low-level color blending aspect of transparency. Other conditions for transparency are complementary to, and can be combined with, our approach; typically, these conditions are related to parameters beyond image compositing, such as scene configuration, camera parameters, or lighting conditions. There is strong and ample empirical evidence that image luminance has the greatest impact on transparency perception. In fact, most studies have focused on investigating achromatic configurations; see, for example, [25], [6], [16], [21]. Based on empirical results, different variants of psychophysical models of transparency were developed. These models can be broadly classified as additive or subtractive, referring to their underlying compositing of colors. The prototypical additive model is the episcotister model by Metelli [25]: conceptually, a disc with an open sector rotates in front of the background object; the perceptual effect of the overlay of the weighted foreground (disc) and background colors is achieved by fusion. This model is identical to alpha blending by the over operator [27]. The Metelli model can be generalized in a couple of ways, for example to the model of linear atmospheres that modify the luminance shining through them [2], or to generic addition (translation) and mixing of colors [11]. Alternatively, subtractive models rely on the idea of a light-transmitting material such as a colored screen (e.g., [6], [14]). There are conflicting empirical results favoring either additive or subtractive models. However, for the achromatic case, both types of models are hard to distinguish and lead to comparable results [6]. As we adopt the luminance computation from the literature (and modify only the chromatic computation), we may as well reuse any of the previous luminance models. In accordance with traditional blending in computer graphics, we follow the approach of (additive) alpha blending.

While the crucial role of the achromatic channel is undisputed for perceptual transparency, a large portion of the perception literature indicates that chromatic information has very limited influence on transparency perception. For example, Nakayama et al. [26] report that transparency perception is robust under a wide range of color configurations, both for the occluder and the occludee. Similarly, Anderson [3] identifies achromatic contrast as the primary determinant of scission. An extreme view would remove chromatic information completely from a transparency model. Such a view is widely accepted for the perception of motion, where chromatic contrast apparently plays (almost) no role; see, for example, [28], [24]. However, there is also some evidence that special configurations of chromatic contrast alone can trigger transparency perception [11]. For example, the color of the overlay image should share hue properties with the images underneath [9]. As a consequence of the unclear role of chromatic information, we favor a conservative approach to perceptual transparency by focusing on well-accepted models of luminance composition and by reducing the impact of the chromatic channels. In particular, we "synchronize" the hue characteristics in an extreme way: by favoring complete preservation of hue.

There is much previous work on perceptual transparency in psychology and psychophysics, but only little related work in computer graphics and visualization. Most relevant for our work is the recent publication by Wang et al. [31]. They investigate and provide guidelines and rules for color design for illustrative visualization. In particular, they describe the appropriate choice of colors for semi-transparent layers: colors should have opposite hues in order to avoid hue shift after blending. In the case of more than two semi-transparent layers, they propose further constraints on the input colors. One of their guideline variants is to assign two colors with opposite hues to the two most important image elements and a more neutral color to the less important element(s). An alternative guideline is to change the input colors locally: they recommend reducing the saturation of the background element in overlap regions. The (geometric) overlap is detected by depth peeling. We adopt the very idea that hue shift should be avoided, but we guarantee hue preservation by a generic blending model that allows for an arbitrary number and configuration of input colors. In particular, we provide a complete, parameter-free computational model that may be applied to any kind of compositing problem, without constraints on the color maps.

Another recent example of transparency research is the perceptual evaluation of volume rendering techniques by Boucheny et al. [8], who report that motion parallax and perspective projection are well suited to improve depth perception. Their work does not consider the impact of color, but their findings can be used to improve volume visualization in general and, thus, can be immediately combined with our approach to volume visualization. Fleming and Bülthoff [15] investigate low-level image cues for the perception of translucency, as produced by subsurface scattering. They particularly focus on achromatic aspects and the image blurring introduced by subsurface scattering, as compared with traditional image blending. They only briefly touch on color, reporting that saturation is neither necessary nor sufficient to generate the impression of translucency. Finally, Bair et al. [4] present guidelines for perceptually optimal visualization of layered surfaces, focusing on suitable texture patterns, but not on image compositing.

We aim at using color for visual grouping and labeling, which is most effective by means of chromatic information, not luminance information [19], [34]. In general, the design of appropriate color maps for visualization has been studied extensively in the literature. There are useful guidelines for generating effective color maps (e.g., [17], [18], [29], [33]). For this paper, we assume that an effective color palette is provided for the visualization of nominal data, i.e., for clearly separable elements or regions in the visualization. Typically, a small number of distinct colors is easily discriminated and, therefore, can be used for visual labeling. For example, up to roughly seven different colors may be used effectively [18]. Similarly, basic color names, which are known across cultures, could be used for color labels [7]. Distinguishable colors can also be used to design color palettes that lead to reduced display energy [10]. On a technical level, this paper makes use of computations in color space to guarantee hue preservation during the construction of visual overlays. A description of color spaces and tristimulus theory can be found in related books like the ones by Fairchild [13] or Wyszecki and Stiles [35].

SECTION 3

Design of Hue-Preserving Blending

We discuss the perceptual motivation and the design considerations for the development of hue-preserving blending before we present the respective computational model in Section 4. Since we target perceptual transparency, our compositing approach is not subject to any physical constraints, but can be formulated as an algebraic model. The discussion is initially restricted to compositing two overlaid images; it will later be extended to compositing several images and even to continuous compositing in volume rendering. The primary goal of the new compositing model is to support easy perception of distinct colors for labeling, in combination with a good perception of transparent overlays.

Summarizing previous work in perceptual psychology and psychophysics (see Section 2), the following observations can be made:

[O1]

Perceptual research indicates that luminance is most important for the perception of transparency.

[O2]

Shape perception by shape-from-shading is based on luminance information.

[O3]

The chromatic channels play a major role in visual grouping; hue is particularly well suited for visual labeling, e.g., of nominal data.

[O4]

Chromatic information and especially saturation play a minor—at least unclear—role for transparency perception.

From these observations, we arrive at the following design criteria:

[D1]

Any new compositing model has to exhibit the same behavior for the luminance channel as established compositing models. According to [O1], luminance is critical for transparency perception, and there exist models with demonstrated effectiveness. In addition, the achromatic channel may carry important information, such as shape-from-shading information [O2], that should not be interfered with.

[D2]

The same, constant hue should be used for each nominal data entry to facilitate visual grouping [O3].

[D3]

Artificial color discontinuities should be avoided for continuously varying input colors, so that artificial perceptual contours are avoided.

These design criteria guide the construction of a generalized compositing operator. According to Porter and Duff [27], a wide range of compositing strategies can be formulated as the weighted sum of two colors. In particular, their approach includes alpha blending (the over operator) typically used for computing transparent overlays according to the Metelli model. We adopt the compositing idea by Porter and Duff and make just one small modification: instead of a direct, componentwise sum of two colors C1 and C2, a new "add" operator is proposed that meets the above design criteria. We denote traditional addition of colors by the symbol "+" and the new operator by "⊕". In this notation, the hue-preserving sum of colors is:

$$C_{\rm new} = C_1 \oplus C_2 \quad (1)$$

From the above design criteria, we impose the following requirements that hue-preserving color addition has to meet:

[R1]

The same luminance behavior as in traditional summation for the achromatic case should be achieved: the luminance of (C1 ⊕ C2) should be identical to the sum of the luminances of C1 and C2.

[R2]

The hue of Cnew is either equal to the hue of C1 or C2: Hue(Cnew) ∈ {Hue(C1), Hue(C2)}. The hue of Cnew is chosen as the dominant hue of the two colors C1 and C2. The dominant color is the one whose hue would be closest to the blended color in traditional color summation.

[R3]

Saturation variations are used to avoid color discontinuities. When the dominant color, and thus the final hue, is to change, Cnew should go through the gray point with vanishing saturation, so that even an abrupt change of hue does not imply a discontinuity in chromaticity.

The requirements [R1] and [R3] correspond directly to the design criteria [D1] and [D3]. However, the design criterion [D2] cannot be implemented completely because it asks for conflicting choices of hue: if two different nominal data entries are composited, not both of their hues can survive. The requirement [R2] approximates [D2] by choosing the dominant hue.

The semantics and mathematical structure of the new ⊕ operator are designed to resemble the traditional + operator as much as possible, so that it can be used in any existing blending algorithm, especially in compositing schemes for volume rendering. The ⊕ operator is binary: it takes two input colors. The extension to compositing several image layers, or to many samples along viewing rays in volume rendering, is possible by applying ⊕ several times along the image compositing stack, as sketched below. The mechanics and mathematical definition of the ⊕ operator are presented in the following section.
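As a minimal illustration of this iterated application, the following Python sketch folds the over operator, with + replaced by ⊕, over a back-to-front stack of layers. It assumes a function oplus(c1, c2) implementing the ⊕ operator of Section 4 and colors given as NumPy arrays; these names and the layer structure are illustrative, not part of the paper.

```python
# Sketch: extending the binary ⊕ operator to a whole compositing stack.
# Assumes an oplus(c1, c2) function (see Section 4) and linear-RGB colors
# stored as NumPy arrays; names and structure are illustrative.

def composite_stack(background, layers, oplus):
    """Back-to-front over compositing with ⊕ in place of componentwise +.

    background: opaque backdrop color; layers: (color, alpha) pairs,
    ordered back to front.
    """
    color = background
    for c, a in layers:
        # over operator: C_acc = (1 - a) * C_acc  ⊕  a * C_layer
        color = oplus((1.0 - a) * color, a * c)
    return color
```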

SECTION 4

Mechanics Of Hue-Preserving Blending

This section presents the computational model of hue-preserving blending that follows the requirements [R1]–[R3]. We aim at a generic compositing model, modifying the Porter and Duff image compositing approach. In its original form, any Porter and Duff operator can be written as a weighted sum of two input colors CA and CB [27]:

$$(\alpha_{A} F_{A})\,C_{A} + (\alpha_{B} F_{B})\,C_{B} \quad (2)$$

where αA and αB are the alpha values associated with the two colors and FA and FB are the respective fractional components. The scalar values (αAFA) and (αBFB) can be interpreted as combined weights for the two input colors. The original version of those compositing operators assumes colors in RGB color space. However, any other color space related to RGB by linear transformation may be employed, e.g., CIE XYZ. The basis of color computation is the tristimulus theory, which interprets color as elements of a 3D vector space.
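For concreteness, here is a small sketch of Eq. (2), with the over operator as its best-known instance (for over, FA = 1 and FB = 1 − αA [27]); the function names are illustrative:

```python
import numpy as np

# Eq. (2): any Porter and Duff operator is a weighted sum of two colors.
def porter_duff(C_A, alpha_A, F_A, C_B, alpha_B, F_B):
    return (alpha_A * F_A) * C_A + (alpha_B * F_B) * C_B

# The over operator as a special case: F_A = 1, F_B = 1 - alpha_A.
# (The composited alpha is alpha_A + alpha_B * (1 - alpha_A).)
def over(C_A, alpha_A, C_B, alpha_B):
    return porter_duff(C_A, alpha_A, 1.0, C_B, alpha_B, 1.0 - alpha_A)
```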

Equation (2) contains two relevant arithmetic operations: the multiplication of a scalar weight with a 3D color, and the sum of two 3D colors. With hue-preserving blending, multiplication with a scalar weight remains unchanged. The only difference is that the traditional componentwise addition by the + operator is replaced by the new operator ⊕ from Eq. (1).

The hue-preserving ⊕ operator is based on computations in a set of appropriate color representations: hue, saturation, and brightness components are modified separately. We start from linear RGB as the basis for our color computations and apply transformations to separate the hue, saturation, and luminance aspects.

Figure 2 illustrates and compares traditional blending with hue-preserving blending. Figure 2(a) sketches the geometry of blending in the hue–saturation plane, with hue as angle and saturation as radial distance from the center. The two exemplary input colors, teal and orange, are marked by small white circles. Depending on the relative weights assigned to the two colors, traditional blending yields a color on the long dashed line crossing several color hues. The possible resulting hues are also shown in the color bar in Figure 2(b). Our aim is to modify the traditional blending + operator so that when two colors are blended, the resulting color only has the same hue as either of the original ones, as shown in Figure 2(c). The basic idea is to blend two colors through the middle gray point (or the central axis, where color saturation equals zero), as illustrated by the red dotted line in Figure 2(a).

Figure 1
Fig. 1. Volume rendering of a tomato data set using traditional (left) and hue-preserving (middle) color blending. The data histogram, transfer function, and color legend are shown on the right.
Figure 2
Fig. 2. (a) Traditional blending of two colors yields various color hues (indicated by white dotted line). In contrast, hue-preserving color blending mixes the two colors so that they go through the gray point (red dotted line), avoiding any extraneous hues. (b) Traditional alpha blending of teal and orange. (c) Hue-preserving alpha blending of teal and orange. Note the presence or absence of the yellow hue in both color profiles.
Figure 3
Fig. 3. Blending opposite (i.e., complementary) colors in the traditional color blending model leads to a more neutral color and preserves either original hue. We follow the same idea in our hue-preserving color blending model. Given two arbitrary colors (circled in white) that are not necessarily opposite to each other, we modify only the hue component of the non-dominant color to be the opposite hue of the dominant color (circled in red), then they are added as before.

In other words, hue-preserving blending can essentially be split into two pieces: blending from one input color C1 towards the gray axis (which keeps the hue of C1), or blending from the other input color C2 towards the gray axis (which keeps the hue of C2). We decide which of the two pieces is used by examining the relative "strengths" of the two input colors; the hue of the dominant color determines the hue of the blended color. The actual compositing step has to ensure that the dominant hue does not change. This is achieved by modifying the non-dominant color in a way that it becomes the opposite of the dominant color; the saturation and luminance of the non-dominant color stay the same. By adding opposite colors, the color moves towards the gray point, and we guarantee that the original hue does not change. Figure 3 illustrates this idea.

Algorithm 1 describes our hue-preserving blending model. In particular, we do not require that hue values are explicitly available; we just need a mechanism that provides the notion of equal hue (i.e., colors that are on the same straight line from the gray axis in 3D color space), opposite hue (i.e., colors that are on a straight line on opposite sides of the gray axis), and isoluminance. Here, the gray axis denotes the line that goes from black through the white point; it corresponds to the completely desaturated center point in Figure 2(a) and Figure 3. Luminance is explicitly available in CIE XYZ or indirectly in CIELAB, and it can easily be computed in RGB by a weighted sum of the RGB components.
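The following sketch shows this gray-axis machinery in linear RGB. The Rec. 709 luminance weights are an assumption made here for concreteness; the model only requires some fixed weighted sum.

```python
import numpy as np

# Luminance as a weighted sum of linear-RGB components (Rec. 709 weights,
# chosen here for concreteness; the weights sum to 1).
LUM = np.array([0.2126, 0.7152, 0.0722])

def luminance(c):
    return float(LUM @ c)

def gray_point(c):
    # The point on the gray (black-to-white) axis at the luminance of c.
    # Because the weights sum to 1, (Y, Y, Y) has luminance Y.
    y = luminance(c)
    return np.array([y, y, y])
```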

In particular, the following abstracted functions are required. The function equal_hue(C1, C2) yields the Boolean value "true" if the two input colors C1 and C2 have the same hue. This can be implemented by checking whether the color difference vector between C1 and the gray axis (at the same luminance level as C1) is a positive multiple of the difference vector between C2 and the gray axis (at the same luminance level as C2). Alternatively, the hue values of C1 and C2 may be directly computed and compared, provided that a color system with an explicit notion of hue is used.
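A sketch of this difference-vector test, building on the gray_point helper above; the tolerance handling and the convention that achromatic colors match any hue are assumptions of this sketch:

```python
import numpy as np

def equal_hue(c1, c2, eps=1e-6, tol=1e-4):
    # Chroma vectors: color minus its isoluminant point on the gray axis.
    d1 = c1 - gray_point(c1)
    d2 = c2 - gray_point(c2)
    n1, n2 = np.linalg.norm(d1), np.linalg.norm(d2)
    if n1 < eps or n2 < eps:
        return True  # an achromatic input imposes no hue (assumption)
    # d1 is a positive multiple of d2 iff their unit vectors coincide.
    return np.linalg.norm(d1 / n1 - d2 / n2) < tol
```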

The other required function is opposite_color(C1, C2): it computes a color that is opposite to C1 (opposite with respect to the gray axis) and that has the same luminance and saturation as C2. The opposite color is obtained by negating the difference vector between C1 and the gray axis (at the same luminance level as C1). Isoluminance is crucial to implement [R1] of Section 3. The same luminance and saturation as C2 are achieved by attaching the negated difference vector to the gray axis at the luminance level of C2 (for isoluminance) and scaling it to the same distance from the gray axis as C2 (for isosaturation). Alternatively, in color systems with an explicit notion of hue and saturation, the hue angle can be rotated by 180 degrees to obtain the opposite color.
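Continuing the sketch, again with the helpers above; out-of-gamut handling is left open, as the paper does not specify it:

```python
import numpy as np

def opposite_color(c1, c2, eps=1e-6):
    d1 = c1 - gray_point(c1)           # chroma vector of c1
    n1 = np.linalg.norm(d1)
    if n1 < eps:
        return c2                       # c1 achromatic: nothing to oppose
    n2 = np.linalg.norm(c2 - gray_point(c2))
    # Negated chroma direction of c1, scaled to c2's saturation and
    # anchored at c2's isoluminant gray point. Since the chroma vector
    # carries zero luminance, the result has exactly c2's luminance.
    return gray_point(c2) + (-d1 / n1) * n2
```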

The code in Algorithm 1 first checks whether the two input colors have the same hue; if so, the result is identical to traditional blending. Otherwise, hue preservation has to be explicitly ensured by modification of color addition. Here, we first assume that C1 is the dominant color, leading to a tentatively assigned mixing color Cnew. If Cnew has a different hue than C1 (in fact, the opposite hue), the assumption that C1 is dominant is wrong. In this case, Cnew is computed by using C2 as the dominant color. Put differently, the dominant color is indirectly determined by testing the two potential alternatives for hue-preserving mixing of colors.
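A sketch of Algorithm 1, reconstructed from this description on top of the equal_hue and opposite_color helpers above:

```python
def oplus(c1, c2):
    """Hue-preserving color addition ⊕ (Algorithm 1, as described above)."""
    if equal_hue(c1, c2):
        return c1 + c2                        # same hue: traditional sum
    c_new = c1 + opposite_color(c1, c2)       # tentatively: c1 dominant
    if not equal_hue(c_new, c1):              # wrong guess: c2 dominant
        c_new = c2 + opposite_color(c2, c1)
    return c_new
```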

The example images of this paper are produced using the functions equal_hue and opposite_color computed in linear RGB space, along with calculations of luminance values from CIELAB. For the final display, linear RGB colors are transformed to sRGB.
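For the display step, the standard sRGB encoding (IEC 61966-2-1) can be applied to the linear RGB result; this is a generic conversion, not specific to the paper:

```python
import numpy as np

def linear_to_srgb(c):
    # Standard sRGB transfer function applied componentwise;
    # clipping to [0, 1] is an assumption about gamut handling.
    c = np.clip(c, 0.0, 1.0)
    return np.where(c <= 0.0031308,
                    12.92 * c,
                    1.055 * np.power(c, 1.0 / 2.4) - 0.055)
```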

SECTION 5

Results

We illustrate the effects of hue-preserving blending for several different examples of image compositing and compare them to traditional blending. First, we start with the simple case of alpha blending two colors in a hue-preserving way. Two colors C1 and C2 are alpha-blended:

$$C_{\rm new} = (1 - \alpha)\, C_{1} \oplus \alpha\, C_{2} \quad (3)$$

Figure 4 compares pairs of alpha-blended color profiles using traditional and hue-preserving blending. The two input colors are at opposite ends of each color profile, and alpha ranges from 0 to 1. It is easy to see that hue-preserving blending produces no extra hues other than the original ones. A nice property of our method is that blending opposite colors or blending same-hue colors yields the same result as traditional blending, as shown in Figure 4(c)–(e).
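Such a color profile can be generated directly from Eq. (3) with the oplus sketch from Section 4; the teal and orange values below are illustrative assumptions, not the paper's exact colors:

```python
import numpy as np

teal   = np.array([0.0, 0.5, 0.5])   # assumed example inputs (linear RGB)
orange = np.array([1.0, 0.5, 0.0])

# Eq. (3) sampled over alpha in [0, 1]: a Fig. 4-style color profile.
ramp = [oplus((1.0 - a) * teal, a * orange)
        for a in np.linspace(0.0, 1.0, 11)]
```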

Figure 4
Fig. 4. In each pair, traditional (left) and hue-preserving (right) alpha blending for two colors are compared side by side. Images (a) and (b) show the typical cases where hue-preserving blending employs color transitions through gray to avoid extraneous hues. Images (c) and (d) show that for blending opposite colors our method gives the same result as traditional blending. Image (e) demonstrates blending two colors of the same hue, which also yields the same result as traditional blending.
Figure 5
Fig. 5. Traditional (left) and hue-preserving (right) color compositing of red, green, and blue. Since luminance is preserved, color transparency remains perceivable.

Figure 5 demonstrates the additive mixing of three distinct color lights: red, green, and blue. Traditionally, the red, green, and blue lights combine to form yellow, magenta, and cyan. In our hue-preserving approach, only the original colors are present. Transparency is still perceived because our method maintains the original luminance.

Next, we examine the more complex case of blending several colors, as typically encountered in volume rendering. In volume rendering applications, users typically choose a few distinct colors for visual labeling of classified materials during data exploration (usually 1–7 material colors). However, as the number of colors exceeds two, the colors that can result from traditional blending cover a large and continuous range of different hues. Figure 6 compares the possible colors that can result from blending up to four colors in both the traditional and hue-preserving methods. The colors are displayed at their respective coordinates in the HSL double cone, viewed from above (i.e., looking down the HSL double cone from where L = 1.0). The traditional approach covers a large portion of the HSL color space as many mixed colors are introduced, whereas the hue-preserving approach is limited to its distinct input hues.

Figure 6
Fig. 6. In volume rendering, many colors of various hues may be mixed. At the top, we show the colors used in the blending. The next (middle) row shows all possible colors that can result using traditional color blending. The last (bottom) row shows the colors that can result from hue-preserving blending. Note that the possible colors are viewed in the HSL color cone from above (showing hue by angle and saturation by radius), so that the lower-lightness colors are occluded. We surround the colors with isoluminant HSL color circles to aid readers in identifying color hues.

We are now ready to apply our blending technique to actual volume visualization of 3D scalar data sets. Images are rendered by front-to-back raycasting, with optical properties such as color and opacity assigned to data values via a 1D transfer function. Figure 7 shows a tooth model using traditional blending and hue-preserving blending. In the traditional tooth model, the three input colors yellow, red, and blue mix to produce tints of orange and purple. The presence of these off-hue colors is quantitatively documented in the color hue histogram in Figure 7(b). Using the new blending method, only the original three colors are present, as confirmed by the hue histogram in Figure 7(c). In this way, color labeling is improved at no loss of feature identification.
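A minimal sketch of such a front-to-back raycasting loop with ⊕ substituted into the color accumulation, using the oplus function from Section 4; the sample generation and early-termination threshold are illustrative assumptions:

```python
import numpy as np

def raycast_composite(samples):
    """samples: (color, alpha) pairs along a ray, ordered front to back."""
    acc_c = np.zeros(3)                # accumulated (associated) color
    acc_a = 0.0                        # accumulated opacity
    for c, a in samples:
        w = (1.0 - acc_a) * a          # remaining transmittance * opacity
        acc_c = oplus(acc_c, w * c)    # hue-preserving accumulation
        acc_a += w
        if acc_a >= 0.99:              # early ray termination (assumed)
            break
    return acc_c, acc_a
```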

Figure 7
Fig. 7. (a) Traditional (left) and hue-preserving (right) rendering of a tooth data set. In the traditional rendering, orange colors can be seen where red and yellow mix. There are also purple hues where red and blue mix. These extraneous hues completely disappear in the hue-preserving rendering. The color hue histograms for both renderings are shown in (b) and (c). Note the three vertical lines in the hue-preserving histogram, representing the original color hues.

Figure 8 shows the volume rendering of a human chest data set. In Figure 8(a), we use the opposite colors blue and yellow and show that our approach produces the same result as traditional blending. However, when the hue of the blue flesh color is moved towards cyan, traditional blending produces an undesirable tint of green, whereas our approach does not. This example was designed to resemble the color choice by Wang et al. [31] in their Figure 8. If opposite colors are chosen according to their guidelines, hue-preserving blending is identical to traditional blending. However, we have essentially given the user the freedom to select arbitrary colors without having to worry about generating extraneous hues and false, mixed colors.

Figure 8
Fig. 8. (a) Traditional (left) and hue-preserving (right) rendering of a chest data set, using opposite colors blue and yellow. Since the original colors are already opposite to each other, the traditional method does not suffer from extraneous hues, and in fact looks just like the hue-preserving rendering. (b) The blue hue of the flesh is offset towards cyan, and we immediately see that traditional blending produces tints of green. This, however, does not pose a problem for hue-preserving blending, which still maintains only cyan and yellow.

The smooth transition of colors through gray, as required by [R3], is demonstrated in Figure 9. Here, the opacity of the brain is gradually increased (from left to right). With increasing opacity, the inner part of the volume data set becomes more and more pronounced, and the respective color (green) becomes increasingly dominant. The transition from the dominant exterior color (red) to the dominant interior color goes through gray (instead of yellow, as in traditional blending), with smooth variations of saturation. Additional comparisons of traditional and hue-preserving blending are shown and documented in Figures 10 through 13.

Figure 9
Fig. 9. (a) Volume rendering of a segmented frog data set with only the flesh and brain shown. (b) We illustrate the effect of increasing the brain opacity (left to right) in both the traditional (top) and hue-preserving (bottom) methods. The gray colors in the hue-preserving approach indicate the smooth transitions between the two colors. The yellow hue from traditional blending is eliminated in our approach.
Figure 10
Fig. 10. Traditional (left) and hue-preserving (right) rendering of a fish data set. On the right, both red and orange are more distinguishable from each other, and the fish bone structure is more pronounced.
Figure 11
Fig. 11. Traditional (left) and hue-preserving (right) rendering of a piggy bank data set. On the right, the blue color of the coins is much more distinguishable from the purple color of the piggy bank.
Figure 12
Fig. 12. Traditional (left) and hue-preserving (right) rendering of a foot data set. Traditional blending produces shades of purple, whereas hue-preserving blending does not.
Figure 13
Fig. 13. Traditional (left) and hue-preserving (right) rendering of a segmented frog data set, with five features identified and color-coded. Input colors light green, yellow, and orange blend in the traditional approach to produce various shades of orange across the entire image. On the other hand, in the hue-preserving approach, we see that only the frog eyes are orange. This comparison shows that hue-preserving blending can work with a larger number of input hues.

Figure 14 shows the tooth data set using energy-aware colors [10]. The hue-preserving method fixes a problem of the volume-rendering application in the original paper on energy-aware colors, where colors shifted dramatically due to blending. The original design goal was to specify a palette of discrete, distinguishable colors, which was achieved completely only for 2D maps. With hue-preserving blending, we can now maintain constant hue even in volume rendering. An additional side benefit of hue-preserving blending is that it tends to lower energy consumption further, because desaturated colors tend to be more energy-efficient.

Figure 14
Fig. 14. Traditional (left) and hue-preserving (right) rendering of a tooth data set using energy aware colors. The inset shows a zoomed-in view, where traditional blending exhibits a substantial hue shift, producing tints of blue and green. In contrast, hue-preserving blending shows constant teal.

There are some drawbacks to using hue-preserving color blending in practice. Gray colors in hue-preserving blended images can be confusing, as gray can come from blending various hues (see Figure 15). One possible solution for reducing the amount of gray is to incorporate a bias function during the blending so that colors tend to be at the saturated end of the color vector (rather than in the less saturated, gray regions). Another drawback is that hue-preserving blending is order-dependent. When blending more than two colors, our method can produce different results depending on the blend order. For example, consider blending three different colors one after another, where any pair of those colors blends to gray: there are then three possible results, depending on the blend order.
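The order dependence is easy to exhibit with the oplus sketch from Section 4; the color values below are illustrative assumptions:

```python
import numpy as np

red   = np.array([0.5, 0.0, 0.0])
green = np.array([0.0, 0.5, 0.0])
blue  = np.array([0.0, 0.0, 0.5])

# Two groupings of the same three colors; because the dominant hue is
# re-decided at every step, the results can differ: ⊕ is not associative.
a = oplus(oplus(red, green), blue)
b = oplus(red, oplus(green, blue))
```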

Figure 15
Fig. 15. Traditional (left) and hue-preserving (right) rendering of an engine data set. Although colors are much more distinguishable in the hue-preserving case, gray colors can be confusing as they can come from blending various colors. One possible solution to reduce gray is discussed at the end of Section 5.

Hue-preserving color blending in other color spaces produces similar results, as documented on our web page [1]. We also encourage the reader to view our supplementary videos because motion parallax improves depth perception in any variant of volume rendering [8].

SECTION 6

Conclusion

We have presented hue-preserving blending as a modification of general Porter and Duff image compositing. The goal of hue preservation is to avoid false colors and improve visual labeling by color, even in transparent rendering. Our model is based on results from previous perception research indicating that perceptual transparency may be treated separately for achromatic and chromatic information. Accordingly, we have reused existing blending models for the achromatic channel and just modified chromatic compositing. Here, the main idea is to identify the dominant color whose hue survives blending; continuous color transition is achieved by gradually changing saturation, instead of hue. A practical benefit of our approach is that it may be readily included in any visualization system using Porter and Duff compositing because only minimal algorithmic changes are required. We have targeted direct volume visualization as the main application, but any kind of non-photorealistic image overlay may benefit, too.

Hue-preserving blending and the color guidelines by Wang et al. [31] share the same basic motivation of avoiding hue shifts. A fundamental difference is that our approach provides a generic blending model with parameter-free mathematical expressions, whereas Wang et al. focus on guidelines for color design, not on mathematical models. A related difference is that we target a replacement of arbitrary Porter and Duff image compositing, while Wang et al. rely on the specific geometric computation of the overlap of surface geometry by means of depth peeling. Therefore, hue-preserving blending can be included in any transparency computation, including volume rendering. On the perceptual level, our model achieves a brightness behavior analogous to the Metelli model for the achromatic case, by separate compositing of brightness values. In contrast, the approach by Wang et al. does not target separate control of brightness. Furthermore, we have proposed the concept of the dominant color to identify which color should shine through. In contrast, the local blending approach by Wang et al. always chooses the hue of the foremost color. That choice is not robust under small changes of input colors or scene geometry, because even small tints of nearby colors could completely change the final image, which is particularly problematic for volume rendering. Therefore, hue-preserving blending can be considered the extension of the guidelines by Wang et al. to a robust, generic, versatile, and parameter-free computational model.

Hue-preserving blending is motivated by previous perception research that provides a reliable basis for our approach to handling achromatic information and visual labeling. However, it should be pointed out that the exact role of the chromatic channels for perceptual transparency, and corresponding computational models, are still under investigation in perceptual psychology. Following indications in previous work, we have ignored the chromatic channels for perceptual transparency. Yet, there are other, conflicting publications indicating that color may play a role, too. Therefore, further perceptual studies in that area are necessary and left as future work. It may turn out that transparency perception can be improved by loosening the hard restriction to complete hue preservation and by tuning further blending parameters. Such research might have to dive deeply into complex questions of perceptual psychology. Since our main goal is to improve visual color labeling in combination with transparency rendering, the subtle factors for optimizing transparency perception have been outside the scope of this paper.

Another area of future research could implement hue-preserving blending based on color systems different from RGB. In particular, a strict computational separation of the different perceptual channels could be achieved by more sophisticated color systems. One advantage of our approach is that we do not rely on measures of perceptual difference between different hue values, but only on a mechanism that provides identical hue or opposite hue. Therefore, perceptual uniformity is not strictly needed. However, care must be taken that hue does not change along a straight line from the gray axis. Finally, applications outside direct volume visualization could be investigated.

Acknowledgments

This work is made possible through the support of the Natural Sciences and Engineering Research Council of Canada.

Footnotes

Johnson Chuang and Torsten Möller are with GrUVi (Graphics, Usability, and Visualization Lab) at Simon Fraser University, Burnaby, Canada, E-mail: jca54@cs.sfu.ca, torsten@cs.sfu.ca.

Daniel Weiskopf is with VISUS (Visualization Research Center) at Universität Stuttgart, Germany, E-mail: weiskopf@visus.uni-stuttgart.de.

Manuscript received 31 March 2009; accepted 27 July 2009; posted online 11 October 2009; mailed on 5 October 2009.

For information on obtaining reprints of this article, please send email to: tvcg@computer.org.

References

1. Additional results of hue-preserving color blending on the web. http://www.cs.sfu.ca/gruvi/Projects/HuePreservingColorBlending/, [last access: 27 Jul 2009].

2. E. H. Adelson. Lightness perception and lightness illusions. In M. Gazzaniga, editor, The New Cognitive Neurosciences, pages 339–351. MIT Press, Cambridge, MA, 2000.

4. A. Bair, D. H. House, and C. Ware. Texturing of layered surfaces for optimal viewing. IEEE Transactions on Visualization and Computer Graphics, 12(5):1125–1132, 2006.

5. J. Beck and R. Ivry. On the role of figural organization in perception of transparency. Perception & Psychophysics, 44(6):585–594, 1988.

6. J. Beck, K. Prazdny, and R. Ivry. The perception of transparency with achromatic colors. Perception & Psychophysics, 35(5):407–422, 1984.

7. B. Berlin and P. Kay. Basic Color Terms: Their Universality and Evolution. University of California Press, Berkeley, 1969.

8. C. Boucheny, G.-P. Bonneau, J. Droulez, G. Thibault, and S. Ploix. A perceptive evaluation of volume rendering techniques. ACM Transactions on Applied Perception, 5(4):1–24, 2009.

9. V. J. Chen and M. D'Zmura. Test of a convergence model for color transparency perception. Perception, 27(5):595–608, 1998.

10. J. Chuang, D. Weiskopf, and T. Möller. Energy aware color sets. Computer Graphics Forum (Eurographics 2009), 28(2), 2009.

11. M. D'Zmura, P. Colantoni, K. Knoblauch, and B. Laget. Color transparency. Perception, 26(4):471–492, 1997.

12. K. Engel, M. Hadwiger, J. M. Kniss, C. Rezk-Salama, and D. Weiskopf. Real-Time Volume Graphics. A. K. Peters, Natick, MA, 2006.

13. M. D. Fairchild. Color Appearance Models. Wiley & Sons, 2nd edition, 2006.

14. F. Faul and V. Ekroll. Psychophysical model of chromatic perceptual transparency based on subtractive color mixture. Journal of the Optical Society of America A, 19(6):1084–1095, 2002.

15. R. W. Fleming and H. H. Bülthoff. Low-level image cues in the perception of translucent materials. ACM Transactions on Applied Perception, 2(3):346–382, 2005.

16. W. Gerbino, C. I. Stultiens, J. M. Troost, and C. M. de Weert. Transparent layer constancy. Journal of Experimental Psychology: Human Perception and Performance, 16(1):3–20, 1990.

17. M. A. Harrower and C. A. Brewer. ColorBrewer.org: an online tool for selecting color schemes for maps. The Cartographic Journal, 40(1):27–37, 2003.

18. C. G. Healey. Choosing effective colours for data visualization. In Proceedings of the IEEE Conference on Visualization, pages 263–270, 1996.

19. E. R. Kandel, J. H. Schwartz, and T. M. Jessell, editors. Essentials of Neural Science and Behavior. Appleton & Lange, Norwalk, 1995.

20. G. Kanizsa. Organization in Vision. Praeger, 1979.

21. R. Kasrai and F. A. A. Kingdom. Precision, accuracy, and range of perceived achromatic transparency. Journal of the Optical Society of America A, 18(1):1–11, 2001.

22. J. Kniss, S. Premoze, C. Hansen, P. Shirley, and A. McPherson. A model for volume lighting and modeling. IEEE Transactions on Visualization and Computer Graphics, 9(2):150–162, 2003.

23. K. Koffka. Principles of Gestalt Psychology. Routledge, 1999.

24. Z.-L. Lu, L. Lesmes, and G. Sperling. Perceptual motion standstill from rapidly moving chromatic displays. Proceedings of the National Academy of Sciences, 96:15374–15379, 1999.

25. F. Metelli. The perception of transparency. Scientific American, 230(4):91–98, 1974.

26. K. Nakayama, S. Shimojo, and V. S. Ramachandran. Transparency: Relation to depth, subjective contours, luminance, and neon color spreading. Perception, 19(4):497–513, 1990.

27. T. Porter and T. Duff. Compositing digital images. Computer Graphics (ACM SIGGRAPH 1984 Conference), 18(3):253–259, 1984.

28. V. S. Ramachandran and R. L. Gregory. Does colour provide an input to human motion perception? Nature, 275:55–57, 1978.

29. P. Rheingans and B. Tebbs. A tool for dynamic explorations of color mappings. In Symposium on Interactive 3D Graphics, pages 145–146, 1990.

30. M. Singh and D. D. Hoffman. Part boundaries alter the perception of transparency. Psychological Science, 9(5):370–378, 1998.

31. L. Wang, J. Giesen, K. T. McDonnell, P. Zolliker, and K. Mueller. Color design for illustrative visualization. IEEE Transactions on Visualization and Computer Graphics (IEEE Visualization 2008), 14(6):1739–1754, 2008.

32. H. Wann Jensen, S. R. Marschner, M. Levoy, and P. Hanrahan. A practical model for subsurface light transport. In Proceedings of ACM SIGGRAPH 2001, pages 511–518, 2001.

33. C. Ware. Color sequences for univariate maps. IEEE Computer Graphics and Applications, 8(5):41–49, 1988.

34. C. Ware. Information Visualization. Morgan Kaufmann, 2nd edition, 2004.

35. G. Wyszecki and W. S. Stiles. Color Science. John Wiley & Sons, New York, 2nd edition, 1982.


Media

Supplementary videos: Engine (7,105 KB) and Tooth (7,733 KB).