
Image-Based Modeling of the Human Eye

Rendering realistic organic materials is a challenging issue. The human eye is an important part of nonverbal communication which, consequently, requires specific modeling and rendering techniques to enhance the realism of virtual characters. We propose an image-based method for estimating both iris morphology and scattering features in order to generate convincing images of virtual eyes. In this regard, we develop a technique to unrefract iris photographs. We model the morphology of the human iris as an irregular multilayered tissue. We then approximate the scattering features of the captured iris. Finally, we propose a real-time rendering technique based on the subsurface texture mapping representation and introduce a precomputed refraction function as well as a caustic function, which accounts for the light interactions at the corneal interface.



RESEARCH on human face rendering has principally focused on the modeling and rendering of skin. The human eye, as an important part of nonverbal communication, also needs to be carefully modeled and rendered to enhance the realism of virtual characters. Recent improvements to graphics hardware offer the opportunity of rendering complex organic materials in real time. However, recovering or approximating the morphological properties and optical behaviors of organic materials remains a challenging problem. Our aim is to mimic the appearance of human eyes using an image-based method that quickly generates believable virtual eyes, visually close to the originals. Film production mostly relies on artistic design to provide aesthetically pleasing renderings. Nevertheless, taking advantage of physical and anatomical studies makes modeling and rendering methods more general and easier to use. This is what we aim to provide in this paper.

We propose to take advantage of domain knowledge from the field of ophthalmology to infer anatomical properties of the iris. As with most organic materials, the iris is translucent and composed of several layers. While the overall iris color is due to specific pigment concentrations, the local variations of its shades are mostly governed by the nonuniform thickness of its tissues. We propose a pipeline, based on in vivo eye photographs, that allows us to approximate the iris structure and scattering features so that rendered irides closely imitate real ones. We have developed a technique to unrefract iris photographs. These processed images allow us to acquire the relief of the iris (Section 4). In a second step, we estimate the scattering properties of the captured iris from the eye photographs (Section 5). Finally, we propose a real-time eye rendering algorithm by representing the iridal structure as a Subsurface Texture [1] and by introducing a refraction function as well as a caustic function (Section 6). Our results are presented and discussed in Section 7.



2.1 Related Work

Most of the previous work in human eye modeling and rendering only partially relies on the anatomical structure of the eye. Furthermore, only a few papers address the problem of generating visually pleasing models of human irides. Instead, classical methods model the eye features using textures that encode color variations: they assume that the variation of colors in the iris is only due to a variation of the densities of its pigments, hence ignoring the light scattering within its tissues.

Using a simple Gouraud shading model and textures, Sagar et al. [2] represent and render the human eye for surgical training purposes.

Lefohn et al. [3] present a method based on ocularists' studies: they introduce an aesthetically pleasing method for human iris cloning, based on multiple hand-designed colored layers overlaid to create convincing irides. Nevertheless, painting many semitransparent layers can be a tedious task. To minimize user interaction, we propose a different approach that takes advantage of the RGB and scale information contained in iris photographs.

Zuo and Schmid [4] generate human irides using synthetic fibers to mimic the fibrous nature of the iris layers. However, this method has been developed for the evaluation of iris recognition algorithms and does not address the problem of mimicking and rendering given irides.

Lam and Baranoski [5] propose a Monte Carlo based rendering technique to accurately simulate the light scattering within the iris. This method, namely ILIT, allows accurate modeling and rendering of the human iris using the anatomical and biophysical properties of its layers. Baranoski and Lam [6] show that this rendering method can be used to accurately infer physical and chemical information of the human iris, such as the iridal melanin distribution, useful for medical purposes. However, this is an offline rendering method and does not address the problem of the human eye cloning.

In this regard, we developed a semiautomatic method, based on photographs and inspired by the ophthalmic literature. We approximate the irregular structure of the iridal layers as well as their respective scattering properties by comparing iris photographs with rendered virtual irides. Even though less accurate than the ILIT model proposed in [5], our method allows fast and simple human eye modeling that requires only a few user interactions. We also propose a real-time rendering method by introducing a refraction function and a caustic function for a fast and accurate estimation of the light interactions with the cornea. This rendering technique enables real-time modifications of the iridal thickness and scattering parameters, while keeping believable results. This feature is particularly useful for graphics designers, who, at times, lack biophysical knowledge about human eyes.

Fig. 1. Image-based modeling of the human eye: (a) eye photograph, (b) real-time rendering of the modeled eye with estimated pigment concentrations $C_{ABL}^{em} = 7.56$, $C_{ABL}^{pm} = 5.84$, $C_{stroma}^{em} = 4.61$, and $C_{stroma}^{pm} = 2.11$, and (c) rendering of a caustic on the iris.

2.2 Anatomy of the Human Eye

The human eye has been studied for decades. It is a complex organ with dedicated mechanisms for adapting vision and focusing on objects. In this section, we describe the visible ocular tissues (Fig. 2), such as the iris which, like a fingerprint, is unique to every person. This particularity makes the iris very important for realistic eye rendering.

Fig. 2. Schematic profile of the human eye and closeup of the iris. Almost spherical, the human eye is about 24 mm long by 22 mm across.

2.2.1 The Sclera

The sclera is the white outer coating of the eye that gives the eye its shape. It is a tough, fibrous tissue consisting of highly compacted flat bands of collagen bundles which scatter light. This characteristic makes the sclera a highly translucent tissue. Its scattering properties are described in [7]. The sclera is overlaid by the episclera, a vascularized fibrous tissue that exhibits blood vessels and irrigates the cornea at the sclera/cornea interface, namely, the limbus. Furthermore, the episclera is coated by a tear film, providing its surface with a highly reflective appearance.

2.2.2 The Cornea and Anterior Chamber

At the front of the eye and overlying the iris, the cornea is a dehydrated and avascularized transparent tissue that serves as the first and strongest convex element of the human eye lens system. Thus, most of the light refraction occurs at the air/cornea interface, while the lens only "tunes" the focus. The human cornea has an average diameter of 11 mm and an average index of refraction of 1.376 [8]. A precorneal tear film coats the outer surface of the cornea, giving its surface a distinctive thin film reflection. The posterior corneal surface is bathed in the aqueous humor having an index of refraction of 1.336. However, note that due to the similarities of refraction indices of the cornea and the aqueous humor, most of the light refraction occurs at the outer corneal interface [9].

2.2.3 The Iris

Overlaid by the cornea and the aqueous humor, the iris is the colored diaphragm which serves as an aperture controlling the amount of light entering the eye. Made up of circular and radial muscles, the iris is on average 11.6 mm in diameter and can expand or contract the pupil from about 2 mm in bright conditions to 8 mm in darkness.

The iris is a translucent, layered tissue (Fig. 2). The outermost layer of the iris, the Anterior Border Layer (ABL) caps the stroma which, made up of collagen, is the thickest layer of the iris. The innermost layer, the Iris Pigment Epithelium (IPE), is an opaque tissue that prevents the light from entering the eye.

Multiple pigments are responsible for the color of the iris, which ranges from shades of gray, blue, green, and hazel to brown. However, even though the iridal cells contain hemoglobin and carotenoid pigments, melanin provides the most significant qualitative and quantitative contributions to the iridal chromatic characteristics [10]. As in human skin and hair, the iris contains two types of melanin: eumelanin, a brown-black pigment, and the red-yellow pheomelanin [11]. The IPE is maximally pigmented in every iris and absorbs all the incoming light. We can therefore consider that the iridal color is determined by the pigmentation of its ABL and stroma [12]. In blue or lightly pigmented irides, as the ABL and stroma have a low concentration of melanin, mostly blue wavelengths return to the observer due to Rayleigh scattering by the collagen fibers. Dark irides are generally thicker and heavily pigmented [12].

Depending on the phenotype, human irides exhibit different and complex morphologies. The irregularities of the iris tissues have an important influence on the iridal appearance. We are interested in recovering the structural features visible in Fig. 3 using photographs: the crypts of Fuchs, a series of openings located on either side of the collarette; the pupillary ruff, small ridges at the pupillary margin; and the circular contraction furrows, folds about midway between the collarette and the origin of the iris.

Fig. 3. Iris imaging by Scanning Electron Microscopy shows the iris morphology in detail. Images courtesy Ralph C. Eagle.

3 Overview of the Method

This section describes the overall method for mimicking the human eye appearance. Our iris modeling and rendering algorithm depends on three main groups of unknown parameters: an iris morphology parameter T, describing the structure of the iris layers, the iridal pigment concentrations C, and the environment lighting or light probe, Li.

In the remainder of this paper, the rendered iris image is denoted as Isynth(T,C,Li). This image is not only used for display purposes: Isynth is also required by our recovering algorithms, based on image comparisons (Fig. 4).

Fig. 4. Iris cloning scheme.

We adopt different solutions and assumptions to infer these iridal parameters with a simple and fast image-based technique so that rendered images of one's iris are visually close to the real iris. In this paper, we use a real-time rendering technique to assess the iridal structural and scattering features. This rendering process is described in Section 6. However, the cloning scheme presented hereafter does not constrain the user to the use of a particular rendering technique. Our method generates an anatomically inspired clone adapted to the rendering method used for the assessments, yielding an optimal match of the rendered and actual irides.

Our cloning method is organized as follows: We take an iris photograph with corneal reflections and another without reflections, obtained using polarizing filters. We also acquire a high dynamic range (HDR) photograph of the environment by concatenating multiple exposures [13]. Note that this step also allows us to calibrate the camera.

We first position and register the virtual eye in the captured environment Li so that its corneal reflections are similar to the reflections visible in the photograph with corneal reflections. Once adjusted, we use Li in our renderer to illuminate the virtual eye (we assume that the indirect lighting from the nose and eyelids is negligible). Note that Nishino et al. [14] automatically recover the environment lighting from corneal reflections. Nevertheless, this method does not distinguish the iridal reflectance from the corneal reflection, constraining its use to dark-brown irides. We still take advantage of their findings to estimate the camera position by using only the information contained in our eye photographs (Section 4.1).

After setting up the lighting environment, we approximate the iridal structure T using the information contained in the reflection-free photograph. We make use of this photograph so that the corneal reflections do not mask the iridal texture. This image is first unrefracted and reoriented to match the iris picture with reflections. This last step allows us to compare the rendering of the reconstructed iris to the iris photograph with reflections. When used in our renderer, the details of the unrefracted image do not directly match the real iris morphology due to light scattering. Therefore, we address this issue by deconvolving the iris texture to mimic the real iridal structure (Section 4).

Once the iridal morphology is estimated, we automatically assess the iridal pigment concentrations C by comparing rendered irides with the iris photograph with reflections (Section 5).

Fig. 5 summarizes the steps illustrated in Fig. 4 and presented throughout this paper. Each step is described in the sections hereafter and presented as a short algorithm at the end of each section.

Fig. 5. Image-based modeling of the human eye.

4 Capturing and Reconstructing the Eye Morphology

Ophthalmologists often use a biomicroscope, or slit lamp, to study the different parts of the eye in extensive detail.

We propose to capture the visually significant iridal features using a simple digital camera and a macro lens. The corneal reflections are avoided using polarizing filters: we cap both the lighting device and the camera lens with orthogonally polarized filters so that the light reflected off the cornea (which keeps its polarity) does not appear in the captured image. Wang et al. [15] propose a computational method to remove the corneal reflections without using any filters. However, owing to the limitations of the capture device, this image-based method cannot accurately recover the iridal regions that are frequently clipped (burned out) by corneal reflections.

4.1 Unrefracting an Iris Photograph

The iris is overlaid by highly refractive elements: the cornea and the aqueous humor. Consequently, an eye photograph shows a distorted image of the iris that needs to be unrefracted (Fig. 7a).

4.1.1 Recovering the Camera Parameters

We first determine the unknown relative distance and orientation between the iris and the camera. The limbus, the boundary between the sclera and the cornea, appears as an ellipse in the photograph. The polar angle θ between the vertical axis of the eye and the camera direction can be estimated as $${\theta = \arccos \left({r_{min} \over r_{max}}\right).}$$ The major and minor axes rmax and rmin are estimated from the detected limbus as in [14]. As the major axis of the ellipse in the image corresponds to the real diameter of the limbus, the distance D from the center of the iris to the camera position is $$D = r_{limbus}{f \over r_{max}},$$ where rlimbus = 5.5 mm is the radius of the limbus and f is the focal length in pixels. Once the orientation and distance from the camera are known, we can unrefract the iris photograph. Note that these camera parameters are also used in our renderer to produce comparable images.
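The two estimates above can be sketched as follows, assuming the limbus ellipse has already been detected; the helper name is hypothetical.

```python
import math

# Sketch of the camera-pose recovery from the limbus ellipse. r_min and
# r_max are the detected minor/major axes of the limbus in pixels, f is the
# focal length in pixels, and r_limbus_mm = 5.5 mm is the anatomical limbus
# radius used in the paper.
def camera_pose_from_limbus(r_min, r_max, f, r_limbus_mm=5.5):
    # Polar angle between the vertical axis of the eye and the camera.
    theta = math.acos(r_min / r_max)
    # Distance from the iris center to the camera: the major axis projects
    # the true limbus diameter.
    distance_mm = r_limbus_mm * f / r_max
    return theta, distance_mm
```

A frontal view (rmin = rmax) gives θ = 0, as expected.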

4.1.2 Unrefraction Algorithm

Unrefracting an iris photograph consists of finding the relation between each pixel and its corresponding point on the real iris surface. At this stage, we assume a planar iris, as the iridal thickness variations are small compared to the size of the anterior chamber. We use the generic corneal profile derived in [16]: $$pz^2 - 2Rz + x^2 + y^2 = 0,$$ where p = 0.75 and R = −0.78.

The refractive index of the tear film being very similar to the corneal index, we disregard its influence on the overall refraction. Even though the refractive indices of the cornea and aqueous humor are different, the main light refraction occurs at the interface between the air and the cornea [9]. Therefore, we approximate these refractions considering only the refraction at the upper corneal interface.

Fig. 6. Refraction at the corneal interface.

Under these assumptions, we estimate the actual light path from each point P on the iris surface to the camera C (Fig. 6). Following Fermat's principle, we search for the point K on the cornea at which the optical path L(K) is minimum: $$\mathop{argmin}_{K \in cornea} L(K), \quad L(K) = n_{air}\|CK\| + n_{cornea}\|KP\|.$$ We solve this minimization problem by numerically finding the root of the derivative of the optical path using a classical Newton-Raphson algorithm, yielding a reliable estimate of the point K. Fig. 7 shows that the captured iris image and the original iris morphology differ significantly.
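The minimization can be sketched in a 2D meridional plane through the eye axis; the finite-difference derivatives and the branch chosen for the conic are implementation choices of this sketch, not details from the paper.

```python
import math

# 2D sketch of the unrefraction step: find the corneal point K minimizing
# the optical path from an iris point P to the camera C (Fermat's
# principle), via Newton-Raphson on the path derivative. The conic profile
# p*z^2 - 2*R*z + x^2 = 0 uses the paper's constants.
P_CONIC, R_CONIC = 0.75, -0.78
N_AIR, N_CORNEA = 1.0, 1.376

def cornea_z(x):
    # Solve the conic for z, taking the branch with z = 0 at the apex.
    return (R_CONIC + math.sqrt(R_CONIC**2 - P_CONIC * x**2)) / P_CONIC

def optical_path(x, cam, pt):
    kx, kz = x, cornea_z(x)
    return (N_AIR * math.hypot(kx - cam[0], kz - cam[1])
            + N_CORNEA * math.hypot(kx - pt[0], kz - pt[1]))

def refraction_point(cam, pt, x0=0.1, h=1e-4, iters=50):
    # Newton-Raphson on dL/dx = 0, derivatives by central differences.
    x = x0
    for _ in range(iters):
        d1 = (optical_path(x + h, cam, pt)
              - optical_path(x - h, cam, pt)) / (2 * h)
        d2 = (optical_path(x + h, cam, pt) - 2 * optical_path(x, cam, pt)
              + optical_path(x - h, cam, pt)) / h**2
        step = d1 / d2
        x -= step
        if abs(step) < 1e-10:
            break
    return x, cornea_z(x)
```

For a camera and iris point both on the eye axis, the minimizing K is the corneal apex, which the iteration recovers.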

Fig. 7. Creation of the iris subsurface map: (a) refracted iris photograph, (b) unrefracted iris, and (c) deconvolved iris texture.

4.2 Recovering the Iris Morphology

We are interested in manipulating the pixel values of the unrefracted image I(x,y) so that we obtain a believable estimation of the three-dimensional shape of the iris.

Since the image comparison stage is performed using the iris image with reflections, we first register the unrefracted iris texture to the iris photograph. Note that the captured iris morphology may differ between the two photographs due to the reflex pupil adaptation of the human eye. Nevertheless, as the two captures are performed in the same environment, the small variations of the pupil size do not significantly modify the iridal structure.

Computing a depth map from a single image is a classic shape from shading problem. Unfortunately, the iris being a multilayered translucent tissue, this problem is severely underconstrained. As we propose to infer the iridal structure from photographs, we must introduce approximations of the structure of the iridal layers.

First, we consider the stromal collagenic fibers to be uniformly distributed, as accurate measurements of their complex distribution can only be performed ex vivo, using a scanning electron microscope. Second, we consider the anterior border layer to have a constant thickness TABL = 0.05675 mm [17] and a uniform pigmentation.

We previously stated that the stroma is a highly scattering tissue due to its collagenic structure. Furthermore, light gets absorbed when reaching the pigment epithelium (IPE), whose thickness is determined from ultrasound biomicroscopy: TIPE = 0.07 mm.

As a result, and as we considered the ABL to have a constant thickness and uniform pigment concentrations, we make the assumption that the light is scattered more in regions of the iris where the stroma is thick and less in thinner regions. Consequently, the brighter regions of the iris photograph correspond to thicker parts of the stroma whereas the darker ones correspond to its thinner parts. Our results in Section 7 show that this assumption provides us with virtual eyes visually close to the original ones.

When approximating the iridal thickness using these assumptions, we must consider some common iridal imperfections such as freckles or a higher melanin pigmentation in the collarette region. These lesions, which appear as darker brown spots on the iris, do not correspond to a thin part of the stroma but are due to a high density of melanin pigments in the ABL [12].

We first locate these particularities in the unrefracted iris photograph using their color range. We then remove them from the picture, and fill the holes using the neighboring iris color. Although we have extracted the freckles by hand, sophisticated techniques have recently become available to assist the user in this task [18]. Note that the pigment concentration assessments presented in Section 5 are not performed in these regions, the freckles being specifically handled in our renderer (Section 6.4).

4.2.1 Stromal Thickness Estimation

Each iris has a unique morphology. We divide the stromal thickness into two components: one representing the iridal generic thickness and another one describing the individual iris specificities.

The generic thickness of the human iris varies slowly along its radius r. We use the average of accurate measurements obtained using ultrasound biomicroscopy [19] to generate a generic iris profile IT(r), by fitting a quadratic polynomial to those measurements (Fig. 8). As the thicknesses of the ABL and IPE are considered constant, we deduce the expression of the generic stromal thickness, which depends only on the radius: $\bar{T}_{stroma}(r) = I_T(r) - T_{ABL} - T_{IPE}$.

Fig. 8. Iridal thickness measured by Pavlin and Foster [19] used to generate the iridal generic profile: IT1 = 372 ± 58 μm (at 500 μm from the scleral spur), IT2 = 457 ± 80 μm (at 2 mm from the iris root), and IT3 = 645 ± 103 μm (maximum thickness near the pupillary edge).
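The quadratic fit to the three mean measurements of Fig. 8 can be sketched as follows; the radial coordinates below are illustrative placements of the measurement sites, not values from the paper, and the IPE thickness is the 0.07 mm ultrasound value quoted earlier.

```python
import numpy as np

# Sketch of the generic iris profile I_T(r): a quadratic polynomial fitted
# to the ultrasound-biomicroscopy means of Fig. 8.
r_mm = np.array([0.5, 3.0, 5.0])                 # hypothetical positions
thickness_um = np.array([645.0, 457.0, 372.0])   # IT3, IT2, IT1 means

generic_profile = np.poly1d(np.polyfit(r_mm, thickness_um, deg=2))

# Generic stromal thickness: subtract the constant ABL and IPE thicknesses.
T_ABL_UM, T_IPE_UM = 56.75, 70.0
def generic_stromal_thickness(r):
    return generic_profile(r) - T_ABL_UM - T_IPE_UM
```

With three samples and a degree-2 polynomial, the fit interpolates the measurements exactly.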

We then consider the individual specificities of the iris: localized thickness variations around the generic profile. We derive those variations from the unrefracted iris image. As we chose to use an arbitrarily polarized lighting environment, we first normalize the pixel intensities: $$I_{norm}(x,y) = I(x,y) - \bar{I}(x,y),$$ where, for a given pixel (x,y), I(x,y) is the pixel intensity of the unrefracted iris image and Ī(x,y) is the average intensity of the neighboring pixels.

We must consider that due to light scattering, the iridal features appear "blurred" in the photograph and larger than they are in reality. D'Eon et al. [20] estimate light scattering within layered materials by convolving the incident radiance with a sum of Gaussian filters.

Even though we consider nonuniform layered tissues, we found that applying a Gaussian deconvolution to our unrefracted iris image leads to a reasonably accurate estimate of the latent stromal thickness variations, Ideconv (Fig. 7c). We use the Richardson-Lucy deconvolution method: given a Gaussian filter G(σ,μ), the deconvolved image is $$I_{deconv} = Deconv(I_{norm}, G(\sigma,\mu)).$$ The stromal thickness Tstroma(x,y) is then estimated as a weighted sum of the generic stromal thickness and the pixel intensities of the deconvolved iris image: $$T_{stroma}(x,y) = \bar{T}_{stroma}(r) + \alpha I_{deconv}(x,y),$$ where α is a scalar value controlling the importance of the local individual features of the stroma. We set α = 350 μm, which produces convincing results when using our rendering technique.
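A dependency-free sketch of this deconvolution step follows, with stated simplifications: a naive spatial convolution, a global mean instead of the neighborhood mean for normalization, and Richardson-Lucy applied to the nonnegative image (the method expects nonnegative input).

```python
import numpy as np

def gaussian_kernel(sigma_px, radius=None):
    # Normalized 2D Gaussian kernel; radius defaults to ~3 sigma.
    radius = radius if radius is not None else int(3 * sigma_px)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma_px**2))
    k = np.outer(g, g)
    return k / k.sum()

def conv2(img, k):
    # Same-size correlation with zero padding; for the symmetric Gaussian
    # this equals convolution.
    kr = k.shape[0] // 2
    padded = np.pad(img, kr)
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-kr, kr + 1):
        for dx in range(-kr, kr + 1):
            out += k[dy + kr, dx + kr] * padded[
                kr + dy: kr + dy + img.shape[0],
                kr + dx: kr + dx + img.shape[1]]
    return out

def richardson_lucy(observed, psf, iters=30, eps=1e-8):
    # Classic multiplicative Richardson-Lucy iteration (symmetric PSF).
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(iters):
        blurred = conv2(estimate, psf)
        ratio = observed / np.maximum(blurred, eps)
        estimate = estimate * conv2(ratio, psf)
    return estimate

def stromal_thickness(i_deconv, generic_um, alpha_um=350.0):
    # T_stroma = generic profile + alpha * normalized deconvolved image
    # (global mean used here as a simplification of the local mean).
    return generic_um + alpha_um * (i_deconv - i_deconv.mean())
```

Deconvolving a Gaussian-blurred impulse sharpens it back toward the original spike, which is the behavior exploited to recover thickness variations.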

The parameters of the deconvolution, σ and μ, can be deduced by comparing the iris photograph with high-quality renderings. We adjust the parameter σ of the Gaussian deconvolution until the iris details in the rendered images fit the details in the photograph. Fig. 9 shows the impact of the deconvolution on the rendered iris. In our experiments, we found that a standard deviation σ = 0.5 mm in the deconvolution yields compelling results compared to the original photograph (see Section 7). The parameter μ is set to zero. Note that the deconvolution may produce ringing artifacts; however, these are barely visible, as the size of the filter kernel is very small compared to the size of the iridal features.

Fig. 9. Deconvolution of the iris subsurface map: (a) detail of the iris photograph, (b) rendered iris without deconvolving the unrefracted iris texture, (c) rendering with a deconvolution of σ = 0.5 mm, α = 350 μm, and (d) rendering with a deconvolution of σ = 1.0 mm, α = 350 μm.

The steps of the iridal reconstruction presented in this section are summarized in Fig. 10. We first estimate a coarse thickness $\tilde{T}$ by comparing iris renderings using generic pigment concentrations (such as the ones proposed in [5]) with the iris photograph. This allows us to create synthetic images used for estimating the scattering parameters of the individual iris, presented hereafter. Once the iridal scattering properties are estimated, we perform a more precise deconvolution by readjusting the filter variance σ, which leads to a more accurate estimation of the stromal thickness variations.

Fig. 10. Iris structure recovery.

5 Recovering the Iris Scattering Parameters

Directly recovering the wavelength-dependent scattering parameters of a layered tissue is a difficult task. We propose to approximate these parameters from assessments of the pigment concentrations, assuming that both the pigments and the small-scale tissues are uniformly distributed throughout the ABL and the stroma.

5.1 Light Scattering within the Iris Layers

Multiple pigments contribute to the iris appearance. However, melanin is the chromophore most responsible for the iridal color. Therefore, following Lam and Baranoski [5], we consider constant concentrations of hemoglobin and carotenoids for every iris specimen.

In that paper, the absorption coefficients of oxyhemoglobin and deoxyhemoglobin, $\sigma_a^{ox}$ and $\sigma_a^{deox}$, as well as those of lutein, $\sigma_a^{lut}$, and zeaxanthin, $\sigma_a^{zea}$, are estimated from their spectral molar absorption curves and concentrations Cox, Cdeox, Clut, Czea available in the medical literature (see Table 1). Note that all the spectral absorption curves are projected into the RGB space using the CIE RGB Matching Functions.

TABLE 1 Notations for the Absorption Coefficients and Concentrations of the Iris Pigments

The iridal melanin pigments differ only slightly from the melanin within human hair and skin. The spectral absorption coefficients of eumelanin and pheomelanin can be well approximated by power laws: $${\sigma_a^{em}(\lambda) = 6.6 \times 10^{10} \times \lambda^{-3.33}\;{\rm mm}^{-1}},$$ $${\sigma_a^{pm}(\lambda) = 2.9 \times 10^{14} \times \lambda^{-4.75}\;{\rm mm}^{-1},}$$ where λ is the wavelength of light in nanometers, $\sigma_a^{em}$ is the spectral absorption coefficient of eumelanin, and $\sigma_a^{pm}$ that of pheomelanin. Equation (8) fits data from [21], while (9) is a fit to data proposed by Donner et al. [22].
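The two power laws, (8) and (9), translate directly into code (wavelength in nanometers, result in mm⁻¹):

```python
# Melanin absorption power laws as given in the text.
def sigma_a_eumelanin(lam_nm):
    return 6.6e10 * lam_nm ** -3.33

def sigma_a_pheomelanin(lam_nm):
    return 2.9e14 * lam_nm ** -4.75
```

Both decay steeply toward long wavelengths, so blue light is absorbed far more strongly than red.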

The net spectral absorption of the stroma is calculated as the sum of the spectral absorptions of the pigments, weighted by their respective volume fractions: $$\eqalign{ \sigma_a(\lambda) &= C_{em}\sigma_a^{em}(\lambda)+C_{pm}\sigma_a^{pm}(\lambda) \cr &\quad+C_{ox}\sigma_a^{ox}(\lambda)+C_{deox}\sigma_a^{deox}(\lambda) \cr &\quad+C_{lut}\sigma_a^{lut}(\lambda)+C_{zea}\sigma_a^{zea}(\lambda).\cr}$$

The baseline absorption, being very small compared to the melanin absorption, is neglected. The ABL spectral absorption follows the same equation, but we assume this layer to be free of hemoglobin and carotenoids.

We consider that light scattering only occurs in the stroma, which is made up of collagen, and follows the Rayleigh theory. The scatterer density is estimated considering spherical collagen fibrils with a radius rc = 30 nm [23]: $$N = \left({4 \over 3}r_{c}^3\pi \right)^{-1} f_{collagen},$$ where fcollagen = π/(4 sin(π/3)) is the volume fraction of the stroma occupied by the collagen fibrils [5]. The scattering coefficient can then be derived as $$\sigma_s(\lambda) = {8\pi^3 \over 3N} \left(\left({n_{c} \over n_{b}}\right)^2 - 1\right)^2\lambda^{-4},$$ where nc and nb are the refractive indices of the collagen fibrils and of the stromal baseline, equal to 1.47 and 1.5, respectively [7], [24].
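Equations (10)-(12) can be sketched as follows; the dict-based pigment interface is an illustration, and consistent length units are assumed throughout.

```python
import math

# Total stromal absorption as a concentration-weighted sum of per-pigment
# absorption curves, and the Rayleigh scattering coefficient of the
# collagen fibrils, following the paper's symbolic form.
def stromal_absorption(lam, concentrations, absorption_curves):
    return sum(c * absorption_curves[p](lam)
               for p, c in concentrations.items())

def rayleigh_sigma_s(lam, r_c=30.0, n_c=1.47, n_b=1.5):
    f_collagen = math.pi / (4.0 * math.sin(math.pi / 3.0))  # volume fraction
    N = f_collagen / ((4.0 / 3.0) * r_c**3 * math.pi)       # fibril density
    return (8.0 * math.pi**3 / (3.0 * N)) \
        * ((n_c / n_b)**2 - 1.0)**2 * lam**-4.0
```

The λ⁻⁴ dependence means halving the wavelength multiplies the scattering coefficient by 16, which is why lightly pigmented irides return mostly blue light.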

Following (10), we evaluate the ABL and stromal absorption coefficients using assessments on the melanin concentrations Cem and Cpm as described in the next section.

5.2 Comparing Iris Photographs and Renderings

Once the iridal morphology T is recovered, we estimate the iris scattering parameters using an iterative method based on image comparisons. As described above, the iridal chromatic appearance mostly depends on the concentrations in eumelanin and pheomelanin pigments.

We previously stated that, setting aside the freckles and the collarette hyperpigmentation, the luminance values of the iris photograph can be related to the thickness of the stromal layer. However, the iridal color itself depends on the light interactions in both the ABL and the stroma simultaneously. As stated in the previous section, we assume the hemoglobin and carotenoid concentrations to be constant and identical for every type of iris. Under this assumption, the concentration parameter C introduced in Section 3 depends only on the melanin concentrations. Therefore, we need to estimate the quadruplet of concentrations ($C_{ABL}^{em}$, $C_{ABL}^{pm}$, $C_{stroma}^{em}$, $C_{stroma}^{pm}$).

Considering the melanin concentrations used by Lam and Baranoski [5] in the ILIT simulations, the ratio of pheomelanin to eumelanin, βm, is almost constant: βm ≈ 0.226. Thus, we first constrain the pheomelanin concentrations to 0.226 Cem.

We search for the concentrations Cem of eumelanin in the ABL and stroma that minimize the differences between the iris photograph and the corresponding rendered iris.

Comparing the mean iridal color from the images leads to an underconstrained problem with multiple solutions, as several concentration configurations can produce a similar mean iridal color. We take advantage of the morphological characteristics of the iris, namely the thickness variations of the stroma. Fig. 11a shows several different local profiles of the same iris. Finding a solution that simultaneously minimizes the image differences for regions with different thicknesses reduces the number of optimal concentration configurations.

Fig. 11. (a) Iris profile and local iridal thicknesses and (b) image after the ortho-radial Gaussian filtering and selected windows used in the search for the concentrations.

The difference image is defined as follows: $$I_{dif\!f} = \vert I_{photo} - I_{synth}(\tilde{T},C,L_i)\vert,$$ where C is the only varying parameter, as we have already estimated $\tilde{T}$ and $L_i$.

Nevertheless, the rendered iris suffers from a coarse preestimation of the iridal structure $\tilde{T}$ (Section 4.2), and therefore cannot be directly compared to the iris photograph.

As presented in Section 2.2, the iridal features are mostly radial furrows running from the pupil to the limbus, ortho-radial particularities being less common. An ortho-radial Gaussian filtering step allows us to disregard the errors made in the estimation of the stromal thickness. The ortho-radial Gaussian filter, $g(\sigma_r, \sigma_\theta, \mu_r, \mu_\theta)$, smoothes the iris details without modifying its overall color variations. The variances σr, σθ and mean values μr, μθ of the Gaussian filter are chosen arbitrarily so as to reasonably blur the image Idiff (values for the filter parameters are provided in Section 7).

This filtering step has several other advantages. First, it allows us to compare photographs and renderings that cannot match pixelwise. Moreover, applying this filter minimizes the noise present in both the iris photograph and the rendered image: in the first case, this noise is due to the sensor sensitivity; in the second, it appears when generating the synthetic image using a Monte Carlo rendering method such as ILIT [5]. Note that the filter also allows us to neglect the photographic blur caused by the reduced depth of field common in macrophotography.

Once the image is filtered, $$I_{dif\!f}^{\prime }(x,y) = g(\sigma_r,\sigma_{\theta },\mu_r,\mu_{\theta })\ast I_{dif\!f}(x,y),$$ we search for the concentrations ($C_{ABL}^{em}$, $C_{stroma}^{em}$) that minimize the RGB values of each pixel of $I_{dif\!f}^{\prime}$. To speed up the computation of the synthetic image and the convergence of the search, we limit our comparisons to a set of selected windows of the image Iphoto, as shown in Fig. 11b.

We solve this minimization problem using the Simplex algorithm [25], making no additional assumptions about the concentration variations, such as their quantitative and qualitative influence on the iridal color. Having assessed the eumelanin concentrations, we finally estimate the stromal and ABL pheomelanin concentrations more accurately by modifying the ratios between eumelanin and pheomelanin, namely $\beta_m^{ABL}$ and $\beta_m^{stroma}$, using the same approach.
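As an illustration of the search itself, the sketch below runs the derivative-free simplex (Nelder-Mead) method on a stand-in objective; in the actual pipeline the objective would render the iris with the candidate concentrations and return the RMS of the filtered difference image. The `target` values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical "true" concentrations (C_ABL, C_stroma); in the paper these
# are unknown and the objective is an image comparison, not this bowl.
target = np.array([0.35, 0.12])

def rms_difference(concentrations):
    """Stand-in for the RMS of the filtered difference image I'_diff."""
    return float(np.sqrt(np.mean((concentrations - target) ** 2)))

# Nelder-Mead is the derivative-free simplex method used for the search.
result = minimize(rms_difference, x0=[1.0, 1.0], method='Nelder-Mead',
                  options={'xatol': 1e-6, 'fatol': 1e-9})
```

Because the objective is evaluated by rendering, a derivative-free method like this avoids any need for gradients of the renderer with respect to the pigment concentrations.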

Fig. 12 summarizes the steps of the search for the iridal pigment concentrations. Note that the convergence of this method is not guaranteed. Nevertheless, we found it effective in our experiments, and it yielded visually pleasing iris models (see Section 7).

Figure 12
Fig. 12. Recovering iridal pigment concentrations.

Iris Rendering

We developed a real-time rendering technique that enables fast assessment of the iridal thickness and pigment concentrations. Nevertheless, any iris rendering method can be used to perform the estimations presented in this paper. Monte Carlo methods, such as the one presented in [5], while slower, can more accurately estimate the light scattering within the iridal tissues. This light propagation is governed by the Radiative Transport Equation [26]. For the sake of real-time frame rates, we limit our subsurface scattering estimation to the single scattering term, hence neglecting the multiple scattering contribution.

Figure 13
Fig. 13. Iris rendering using a Subsurface Texture Mapping approach.

6.1 Iridal Subsurface Map

The layered structure of the human iris is modeled and rendered using the Subsurface Texture Mapping technique [1].

A subsurface map encodes the thickness of four layers in the four RGBα channels of a two-dimensional texture, which allows us to represent highly detailed irides with very low memory and computational costs. As in the Relief Mapping technique proposed by Policarpo et al. [27], we encode the distance between a plane above the iris surface and the surface of the ABL in the red channel, as presented in Fig. 13. The thickness of the ABL itself is encoded in the green channel, while the blue channel encodes the thickness variations of the stromal layer. The IPE thickness is not encoded in the map, as its contribution to the resulting iridal color is neglected.
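The channel packing above can be sketched in a few lines; the function name, the 8-bit quantization, and the normalization constant are our assumptions for illustration:

```python
import numpy as np

def pack_subsurface_map(depth_to_abl, abl_thickness, stroma_thickness,
                        max_depth_mm=1.0):
    """Pack per-texel layer thicknesses (in mm) into an 8-bit RGBA texture.

    R: distance from a reference plane down to the ABL surface,
    G: ABL thickness, B: stromal thickness, A: unused here.
    Values are normalized by max_depth_mm before quantization.
    """
    h, w = depth_to_abl.shape
    tex = np.zeros((h, w, 4), dtype=np.uint8)
    for c, layer in enumerate((depth_to_abl, abl_thickness,
                               stroma_thickness)):
        tex[..., c] = np.clip(layer / max_depth_mm, 0.0, 1.0) * 255
    return tex

# Example: uniform 0.5 mm depth, 0.1 mm ABL, 0.25 mm stroma.
tex = pack_subsurface_map(np.full((2, 2), 0.5),
                          np.full((2, 2), 0.1),
                          np.full((2, 2), 0.25))
```

On the GPU, a single texture fetch then recovers all three thicknesses at once, which is what keeps the per-fragment cost of the ray marcher low.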

Our rendering method, like the Subsurface Texture Mapping algorithm, is implemented on graphics hardware and uses a ray-marching algorithm to estimate the single scattering within eye tissues, as illustrated in Fig. 13.

Each camera ray is refracted when passing through the cornea. The ray is then sampled when reaching the first iridal layer (the reaching point P is obtained using the iridal subsurface map). At each sample point M, our algorithm estimates the single scattering contribution using the scattering and absorption coefficients derived from the iridal pigment concentrations (Section 5.1).
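The ray-marching accumulation can be written down compactly. The following is a minimal, isotropic-phase sketch that ignores the attenuation of the incoming light inside the tissue (which the refraction function handles separately); all names and parameters are ours:

```python
import numpy as np

def single_scatter_along_ray(entry_point, direction, sigma_s, sigma_a,
                             incoming_radiance, n_samples=16, depth=0.5):
    """Accumulate the single-scattering term along a refracted camera ray.

    At each sample M, the radiance scattered toward the camera is the
    incoming light at M times the scattering coefficient, attenuated by
    extinction over the path already travelled inside the tissue.
    """
    sigma_t = sigma_s + sigma_a            # extinction coefficient
    dt = depth / n_samples                 # marching step length
    radiance = 0.0
    for k in range(n_samples):
        t = (k + 0.5) * dt                 # midpoint of the k-th segment
        attenuation = np.exp(-sigma_t * t) # path loss back to the surface
        radiance += (incoming_radiance(entry_point + t * direction)
                     * sigma_s * attenuation * dt)
    return radiance

L = single_scatter_along_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                             sigma_s=2.0, sigma_a=1.0,
                             incoming_radiance=lambda p: 1.0)
```

For constant incoming radiance over a slab of depth d, this sum converges to the closed form (σs/σt)(1 − e^(−σt d)), which makes the marcher easy to sanity-check.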

The single scattering evaluation also needs an estimate of the incoming light arriving at point M. While the Subsurface Texture Mapping rendering technique only considers the attenuation of the light as it enters multilayered materials, we also take into account the light refraction at the corneal interface. To do so, and based on the observation that any point light source can reasonably be approximated as a directional light source due to the small size of the iris, we introduce a precomputed refraction function to achieve real-time performance.

Under grazing lighting conditions, caustics commonly appear on the iris. Therefore, we also define a caustic function to handle this phenomenon in our real-time renderer.

6.2 Refraction Function

The appearance of the iris depends strongly on the refraction of the incident light at the cornea (Fig. 14). Since computing the minimal optical path between two fixed points through the cornea is time-consuming, we define a precomputed refraction function fr(P, ωi).

Figure 14
Fig. 14. (b) A rendering without refraction which lacks the typical darkening on the outer edge of the iris due to the (a) corneal refraction of light. (c) The simulation of light refraction at the cornea adds a key aspect to the realism of the synthetic eye.

For a given point P on the iris and a given incoming light direction ωi, this function returns the direction of the refracted light ray ωr such that the optical path is minimal (Fig. 15): $$f_r(P, \omega_i) = T_c\,(1-F_{i})\,e^{-\sigma_t^{ah} \Vert KP\Vert }\,\omega_r.$$ The norm of fr(P, ωi) gives the amount of light power carried by the refracted ray, with $F_i = F(\omega_i\cdot N,\eta)$ being the Fresnel factor at the corneal interface, N the normal at the light entry point K, and η the ratio between the air and corneal refractive indices. The term $e^{-\sigma_t^{ah}\Vert KP\Vert}$ accounts for the light absorption within the anterior chamber of the eye, with $\sigma_t^{ah}$ being the absorption coefficient of the aqueous humor, considered close to pure water (provided in [28]). Tc is the corneal transmittance: $$\log (T_{c}) = -0.016 - c\lambda^{-4},$$ in which c depends on the incoming light angle [29].
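The per-ray geometry behind fr reduces to Snell's law plus a Fresnel factor. A sketch follows, using Schlick's approximation as a stand-in for the exact Fresnel term Fi (an assumption on our part), with 1.376 as a common textbook value for the corneal refractive index:

```python
import numpy as np

def refract(incident, normal, eta):
    """Refract a unit incident direction at a surface (Snell's law).

    eta is the ratio of refractive indices (outside / inside, e.g.
    air over cornea = 1.0 / 1.376). Returns None on total internal
    reflection.
    """
    cos_i = -np.dot(incident, normal)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                        # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

def schlick_fresnel(cos_i, n1, n2):
    """Schlick's approximation of the Fresnel reflectance F_i."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_i) ** 5

# At normal incidence the ray passes straight through, and roughly
# 2.5 percent of the light is reflected at the air/cornea interface.
d = refract(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]),
            1.0 / 1.376)
f = schlick_fresnel(1.0, 1.0, 1.376)
```

The (1 − Fi) factor in fr is exactly the transmitted fraction complementary to this reflectance.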

Figure 15
Fig. 15. Refraction and caustic functions.

We precompute the values of this function for a set of directions ωi and a set of points P on the iris, using an approach similar to the one presented in Section 4.1. The radiance incoming at point P from the direction ωr is equal to fr(P, ωi)Is, with Is being the radiance of the light source as seen from direction ωi.

The cornea is symmetrical around the vertical axis. As the iridal thickness variations are small compared to the length of the light path through the anterior chamber of the eye, we consider the iris surface equation to be equal to IT(r). This amounts to considering an iris symmetrical around the vertical axis through P. The refraction function being then also symmetrical around the z-axis, we evaluate it at NP sample points on a single radius of the iris. For each sample point P, we uniformly sample Nωi directions on the surrounding hemisphere to compute the refraction function. At runtime, these values are linearly interpolated for in-between values of P and ωi. As shown in (3), the profile of the cornea is smooth and continuous. Consequently, the refraction function can be accurately represented using a small number of sample points and directions.
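The precomputed table and its runtime interpolation can be sketched with a regular (radius, elevation) grid; the resolutions follow the numbers above, but the stored values here are stand-ins for the actual optical-path solver output:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical precomputed table: N_P radii x elevation angles, storing
# the components of f_r (refracted direction scaled by transmittance).
radii = np.linspace(0.0, 6.0, 100)            # mm along one iris radius
elevations = np.linspace(0.0, np.pi / 2, 50)  # light elevation angles

# Stand-in values: each cell stores its own (r, elevation) pair, so
# linear interpolation should reproduce query coordinates exactly.
table = np.stack(np.meshgrid(radii, elevations, indexing='ij'), axis=-1)

lookup = RegularGridInterpolator((radii, elevations), table)
sample = lookup([[2.5, 0.3]])   # linear interpolation between grid cells
```

Because the corneal profile is smooth, such a piecewise-linear lookup stays accurate even at these modest grid resolutions, which is consistent with the small interpolation error reported below.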

We tested our refraction function by recovering the light entry point K for nonprecomputed values of P and ωi. The interpolation error remains below 0.08 mm with 100 sample points on the iris and 2,500 sample directions, while the raw storage cost is only 300 KB.

6.3 Caustic Function

For grazing lighting angles, light enters the anterior chamber of the eye and reflects off the internal surface of the cornea, creating a caustic on the iris (Fig. 16). We developed a caustic function, fc(P,ωi), computed, as for the refraction function, on a sampled radius of the iris for every sample direction ωi of the upper hemisphere.

Figure 16
Fig. 16. (a) Eye photograph exhibiting a caustic on the iris and (b) iris rendered in real time using the caustic function: due to light scattering, the caustic color depends on the iridal pigmentation.

For a given point P on the iris and a set of rays parallel to the direction ωi, this function returns a weighted average direction of the rays incoming at P due to the refractions by the cornea and the reflections off the internal surface of the cornea.

The caustic function is surjective (Fig. 15). To tackle this problem, fc(P, ωi) returns a single direction $\tilde{\omega}_c$, considered as the resulting direction of the incoming radiance, a weighted sum of the caustic ray directions ωc: $$f_c(P,\omega_{i}) = \sum T_c\,(1-F_i)\,F_r\,e^{-\sigma_t^{ah}(\Vert KK^{\prime }\Vert +\Vert K^{\prime }P\Vert)}\,\omega_c = \kappa\, \tilde{\omega }_{c}.$$ The norm of fc(P, ωi), κ, is the cumulative refraction/reflection factor for the point P and the incoming direction ωi. The incoming radiance carried by each caustic ray is attenuated by the product of Fresnel terms: $F_i = F(\omega_i\cdot N,\eta)$ for the refraction of the incoming ray and $F_r$ for the internal reflection. Tc accounts for the corneal transmittance, while $e^{-\sigma_t^{ah}(\Vert KK'\Vert +\Vert K'P\Vert)}$ represents the light absorption within the aqueous humor.
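Collapsing a bundle of caustic rays into the single direction and cumulative factor κ is a weighted vector sum; a minimal sketch, where the weights stand in for the per-ray Tc(1 − Fi)Fr attenuation products:

```python
import numpy as np

def weighted_average_direction(directions, weights):
    """Collapse a bundle of caustic rays into the single unit direction
    and cumulative weight kappa used by the caustic function f_c."""
    summed = (np.asarray(weights)[:, None]
              * np.asarray(directions)).sum(axis=0)
    kappa = np.linalg.norm(summed)
    return summed / kappa, kappa

# Two equally weighted caustic rays arriving along x and y.
d, kappa = weighted_average_direction([[1.0, 0.0, 0.0],
                                       [0.0, 1.0, 0.0]],
                                      [1.0, 1.0])
```

Note that κ is the norm of the sum, not the sum of the weights, so opposing caustic rays partially cancel, matching the definition of fc above.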

The total radiance incoming at point P from the direction $\tilde{\omega}_c$ is then equal to κIs, with Is being the radiance of the light source as seen from direction ωi.

For 2,500 incoming directions ωi on the surrounding hemisphere and 100 points P on a radius of the iris, we estimate each fc(P, ωi) by shooting 400,000 rays parallel to the direction ωi toward the corneal surface. This yields a reliable estimate of the caustic function and allows real-time rendering of iris caustics by interpolating the precomputed values.

6.4 Freckles and Collarette Hyperpigmentation

Small flecks of pigment, ranging from yellow-tan to deep brown, are observed on the anterior surface of approximately half the adult population [30]. Iris freckles appear as relatively discrete colonies of cells that sit on the surface of the ABL (Fig. 17).

Figure 17
Fig. 17. Rendering of freckles lying on the surface of the iris.

As explained in Section 4.2, we locate these iridal imperfections and create a freckle texture. We include the freckles in our iris model by locally changing the thickness of the ABL, and we locally increase the ABL concentrations of eumelanin or pheomelanin, depending on the freckle color. We model hazel irides, which exhibit hyperpigmentation in the region of the collarette, following the same texturing method.

6.5 Volumetric Veins in Episclera

The white outer coating of the eye, the sclera, is overlaid by the episclera. We model both of these tissues as a subsurface texture map [1] to allow volumetric rendering of the veins in the episclera (Fig. 18). The scleral scattering coefficients are derived from the data presented in [7], while the episcleral veins are modeled considering a maximal concentration of oxyhemoglobin. Finally, the rendering of the interface between the sclera and the cornea, i.e., the limbus, is achieved using classical blending.

Figure 18
Fig. 18. Volumetric veins on the sclera rendered using the Subsurface Texture Mapping technique.

Results and Validation

Our iris images were captured using a 12.3-megapixel Nikon D300 with a 105 mm macro lens and an additional 36 mm extension tube. Note that all the synthetic irides used for the assessments are rendered using the estimated camera position and focal length. In order to minimize the indirect unpolarized lighting, all the specimens were captured in a dark room lit by two collimated polarized spot lights. Even though our unrefraction algorithm works for arbitrary camera positions, we chose to capture all the eyes from the front to maximize the iris texture resolution and sharpness. Our cloning process was performed using our real-time rendering method.

The cloned irides presented in Fig. 19 were rendered using 25 point light sources approximating the environment lighting. The Simplex algorithm, used to estimate the iridal pigment concentrations, provided visually satisfying results after about 50 iterations. Increasing the number of steps in the concentration search did not significantly change the iridal appearance.

Figure 19
Fig. 19. Convergence analysis of the concentration estimation stage for a brown and a blue eye. From left to right: initial synthetic image using coarse concentration values, mid-convergence result, and final rendered iris with visually fitting concentrations. The second and fourth rows show the cumulative histograms of the image of difference between the photograph and the rendered image at each iteration. The abscissa represents the pixel intensity from 0 to 255, and the y-axis the percentage of pixels.

Comparing photographs with computer-generated images is a complex issue, as the images cannot match pixelwise. We chose to base our search algorithms on the root-mean-square (RMS) distance for its simplicity and speed. Nevertheless, the image differences have to be interpreted carefully. Fig. 19 illustrates the iterative process that determines the different pigment concentrations of the iris, providing the cumulative histogram of the image of difference at the initialization, at an intermediate step, and at the final step of our search algorithm. Note that, to demonstrate the robustness of the iterative process, we chose very coarse initial guesses of the pigment concentrations. The tables in Fig. 19 provide the RMS error for each channel at each step of our iterative search. The final error averages about 5 percent, which is not visually significant. Notice that the more the algorithm converges, the closer the RGB histograms are to the y-axis, which means that the image difference decreases progressively and uniformly. Moreover, the RGB histogram curves get closer and closer to each other, meaning that the same amount of error occurs for each channel, hence recovering the chrominance of the real iris.
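The per-channel RMS errors and cumulative difference histograms reported in Fig. 19 can be computed as follows (a sketch assuming 8-bit RGB images; the function names are ours):

```python
import numpy as np

def per_channel_rms(photo, render):
    """RMS error per RGB channel between photograph and rendering
    (both 8-bit images, values in 0-255)."""
    diff = photo.astype(float) - render.astype(float)
    return np.sqrt((diff ** 2).mean(axis=(0, 1)))

def cumulative_histogram(diff_image):
    """Fraction of pixels whose absolute difference is below each
    intensity level, i.e. the curves plotted in Fig. 19."""
    counts, _ = np.histogram(np.abs(diff_image), bins=256, range=(0, 256))
    return np.cumsum(counts) / counts.sum()

# Toy example: a uniform offset of 10 intensity levels on every channel.
photo = np.zeros((4, 4, 3), dtype=np.uint8)
render = np.full((4, 4, 3), 10, dtype=np.uint8)
rms = per_channel_rms(photo, render)
cum = cumulative_histogram(photo.astype(float) - render.astype(float))
```

A curve hugging the y-axis means most pixels differ by only a few intensity levels, which is how convergence is read off the histograms.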

Fig. 20 compares the radial luminance profile of a real iris to that of an iris rendered using different estimates of the iridal morphology. Note that a radial luminance profile gives the luminance of the points lying on an arc on the iris. When describing the stromal thickness with the unrefracted, nondeconvolved texture, the luminance of the rendered iris appears "smoothed out" (Fig. 20a) and does not match the real iridal morphology. Rendering irides using a deconvolved texture provides sharper results. However, Fig. 20c shows that a deconvolution with σ = 1 mm tends to overestimate the thickness variations of the stroma. Therefore, we chose to deconvolve our iris texture using σ = 0.5 mm (Fig. 20b), which, while not exactly matching the original iridal luminance variations, provides virtual irides visually close to the original.

Figure 20
Fig. 20. Comparison between the luminance profile of the iris photograph (in red) and the luminance profile of the corresponding rendered iris (in blue): (a) rendered using a nondeconvolved subsurface map, (b) using a deconvolved map with σ = 0.5 mm, and (c) using a deconvolved map with σ = 1 mm.

Our rendering algorithm was tested on a Pentium 4 2.6 GHz with a GeForce 8800 GTX, yielding frame rates of about 35 fps with two point light sources and 12 fps with 25 light sources at a resolution of 800 × 600. Our real-time rendering method allows the user to interactively modify the stromal thickness and pigment concentrations while still generating visually convincing images of irides. It also enables a fast search for the pigment concentrations, in about 1 minute, as illustrated in the tables in Fig. 19.

As our technique necessarily makes use of approximations, it has some limitations. The freckle removal step may introduce errors in the concentration recovery when some iridal flecks are missed, which may lead to incoherent estimates of the iridal thickness and pigment concentrations. However, the ortho-radial Gaussian filter (σr = 1 mm, σθ = 7 mm, μr = μθ = 0) applied to the image of difference, presented in Section 5, makes the recovery more robust. Even though only a few parameters have to be chosen by the user, they may be a source of artifacts. However, we believe that using the parameters σ and α provided in this paper leads to acceptable virtual irides, close to the real ones.

While not physically accurate, our technique remains coherent with the ophthalmic literature. We found that the melanin within the ABL has more influence on the iridal color than the melanin within the stroma, which is in accordance with the statements in [6]. We also noticed that green irides, such as the one presented in Fig. 1, tend to have a higher ratio βm than either blue or brown eyes. This higher pheomelanin ratio within green specimens has been noticed and measured in [31]. Considering an ABL of constant thickness, while only the stromal thickness varies, proved reliable for recovering the overall color variations of the iris. Nevertheless, we add a freckle texture to handle the very localized particularities of the individual iris. Finally, we ran our cloning algorithm with an increasing number of point light sources approximating the environment lighting, without noticing significant changes in the melanin concentrations within either the ABL or the iridal stroma.



In vivo assessment of anatomical features is a complex matter with no simple solution. We proposed a user-friendly and affordable process to mimic the human eye using a simple digital camera and a classical macro lens. We also took advantage of polarizing filters to avoid light reflections on the cornea. We modeled the human iris based on its anatomical features. The light scattering within the iris creates color variations that are not only due to variations of the pigment concentrations: the iridal reflectance also depends on the thickness variations of its layers. Therefore, we approximated the thickness of the iris layers using ophthalmological knowledge of generic irides and an unrefracted, deconvolved iris photograph.

While the estimation of scattering features in layered materials such as skin is problematic, the iris morphology is more favorable: we took advantage of the iridal thickness variations and proposed a method to approximate the ABL and stromal melanin concentrations.

We achieved real-time rendering of the synthetic eye by leveraging graphics hardware and introducing precomputed refraction and caustic functions. We believe that the use of such functions could speed up the rendering of refractive objects which have symmetry properties.

Future work will focus on improving the estimation of the iridal morphology and scattering parameters for potential medical purposes. We would also like to achieve real-time estimation of multiple scattering in layered materials with nonuniform thicknesses, consider the cloning of other organic tissues, and extend our method to nonhuman eyes.


The authors would like to thank Ralph C. Eagle for his SEM images as well as Mohamed Sobhy from MPC for his helpful suggestions. Special thanks to P. O'Connells for his help and to the people who opened their eyes for them.


G. François is with IRISA/INRIA Rennes, France, and also with Orange Labs, 3 A pl 50e Régiment d'Artillerie, 35000 Rennes, France.

P. Gautron and G. Breton are with Orange Labs, 3 A pl 50e Régiment d'Artillerie, 35000 Rennes, France.

K. Bouatouch is with IRISA/INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France.

Manuscript received 16 July 2008; revised 21 Nov. 2008; accepted 14 Jan. 2009; published online 26 Jan. 2009.

Recommended for acceptance by P. Dutre.

For information on obtaining reprints of this article, please send e-mail to:, and reference IEEECS Log Number TVCG-2008-07-0096.

Digital Object Identifier no. 10.1109/TVCG.2009.24.


1. G. François, S. Pattanaik, K. Bouatouch, and G. Breton, "Subsurface Texture Mapping," IEEE Computer Graphics and Applications, vol. 28, no. 1, pp. 34-42, Jan./Feb. 2008.

2. M.A. Sagar, D. Bullivant, G.D. Mallinson, and P.J. Hunter, "A Virtual Environment and Model of the Eye for Surgical Simulation," Proc. ACM SIGGRAPH, pp. 205-212, 1994.

3. A. Lefohn, B. Budge, P. Shirley, R. Caruso, and E. Reinhard, "An Ocularist's Approach to Human Iris Synthesis," IEEE Computer Graphics and Applications, vol. 23, no. 6, pp. 70-75, Nov./Dec. 2003.

4. J. Zuo and N.A. Schmid, "A Model Based, Anatomy Based Method for Synthesizing Iris Images," Advances in Biometrics, vol. 3832, pp. 428-435, 2005.

5. M.W. Lam and G.V. Baranoski, "A Predictive Light Transport Model for the Human Iris," Proc. Eurographics Workshop, pp. 359-368, 2006.

6. G.V.G. Baranoski and M.W.Y. Lam, "Qualitative Assessment of Undetectable Melanin Distribution in Lightly Pigmented Irides," J. Biomedical Optics, vol. 12, no. 3, p. 030501, 2007.

7. V.V. Tuchin, I.L. Maksimova, A.A. Mishin, and A.K. Mavlutov, "Scleral Tissue Light Scattering and Matter Diffusion," Proc. SPIE, vol. 3246, pp. 249-259, 1998.

8. T. Saude, Ocular Anatomy and Physiology. Blackwell, 1993.

9. E. Hecht and A. Zajac, Optics, second ed. Addison-Wesley, 1987.

10. P. Imesch, C. Bindley, Z. Khademian, B. Ladd, R. Gangnon, D. Albert, and I. Wallow, "Melanocytes and Iris Color. Electron Microscopic Findings," Archives of Ophthalmology, vol. 114, no. 4, pp. 443-447, 1996.

11. I. Menon, P. Basu, S. Persad, M. Avaria, C. Felix, and B. Kalyanaraman, "Is There Any Difference in the Photobiological Properties of Melanins Isolated from Human Blue and Brown Eyes?" British J. Ophthalmology, vol. 71, no. 7, pp. 549-552, 1987.

12. R.C. Eagle, "Iris Pigmentation and Pigmented Lesions: An Ultrastructural Study," Trans. Am. Ophthalmological Soc., vol. 86, pp. 581-687, 1988.

13. P.E. Debevec and J. Malik, "Recovering High Dynamic Range Radiance Maps from Photographs," Proc. ACM SIGGRAPH '97, pp. 369-378, 1997.

14. K. Nishino and S.K. Nayar, "Eyes for Relighting," Proc. ACM SIGGRAPH '04, pp. 704-711, 2004.

15. H. Wang, S. Lin, X. Liu, and S.B. Kang, "Separating Reflections in Human Iris Images for Illumination Estimation," Proc. Int'l Conf. Computer Vision (ICCV), pp. 1691-1698, 2005.

16. T.Y. Baker, "Ray Tracing through Nonspherical Surfaces," Proc. Physical Soc., vol. 55, no. 5, pp. 361-364, 1943.

17. J. Nolte, "Iris and Pupil," Physiology of the Human Eye and Visual System, pp. 217-231, Harper and Row Publishers, Inc., 1979.

18. M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, "Simultaneous Structure and Texture Image Inpainting," Computer Vision and Pattern Recognition (CVPR), vol. 2, p. 707, 2003.

19. C.J. Pavlin and F. Foster, Ultrasound Biomicroscopy of the Eye. Springer, 1994.

20. E. d'Eon, D. Luebke, and E. Enderton, "Efficient Rendering of Human Skin," Proc. Eurographics Symp. Rendering (EGSR), 2007.

21. S.L. Jacques and D.J. McAuliffe, "The Melanosome: Threshold Temperature for Explosive Vaporization and Internal Absorption Coefficient during Pulsed Laser Irradiation," Photochemistry and Photobiology, vol. 53, no. 6, pp. 769-775, 1991.

22. C. Donner and H.W. Jensen, "A Spectral BSSRDF for Shading Human Skin," Proc. Eurographics Symp. Rendering (EGSR), pp. 409-418, 2006.

23. R.S. Snell and M.A. Lemp, Clinical Anatomy of the Eye, second ed. Blackwell Science, 1997.

24. N.G. Khlebtsov, I.L. Maksimova, V.V. Tuchin, and L.V. Wang, "Introduction to Light Scattering by Biological Objects," Handbook of Optical Biomedical Diagnostics, pp. 31-169, Soc. of Photo Optical, 2002.

25. T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein, Introduction to Algorithms, second ed. MIT Press.

26. S. Chandrasekhar, Radiative Transfer. Clarendon Press, 1950.

27. F. Policarpo, M.M. Oliveira, and J.L.D. Comba, "Real-Time Relief Mapping on Arbitrary Polygonal Surfaces," Proc. Symp. Interactive 3D Graphics and Games (SI3D '05), pp. 155-162, 2005.

29. T. VanDenBerg and K. Tan, "Light Transmittance of the Human Cornea from 320 to 700 nm for Different Ages," Vision Research, vol. 34, pp. 1453-1456, 1994.

30. A.B. Reese, Tumors of the Eye, third ed. Harper and Row, 1976.

31. G. Prota, D. Hu, M.R. Vincensi, S.A. McCormick, and A. Napolitano, "Characterization of Melanins in Human Irides and Cultured Uveal Melanocytes from Eyes of Different Colors," Experimental Eye Research, vol. 67, no. 3, pp. 293-299, 1998.


Guillaume François

Guillaume François received the engineering degree in computer science and the PhD degree in computer science from the University of Rennes 1, France, in collaboration with Orange Labs. He also worked in collaboration with the University of Central Florida. His research interests include high-quality real-time rendering and subsurface scattering.

Pascal Gautron

Pascal Gautron received the master's degree in computer science from the University of Poitiers and the PhD degree in computer science from the University of Rennes 1, France. He also worked in collaboration with the University of Central Florida and the University of Poitiers. His main research interest is high-quality real-time rendering using graphics hardware.

Gaspard Breton

Gaspard Breton received the PhD degree in computer science from the University of Rennes 1, France. He is a research engineer at Orange Labs. His interests are real-time facial animation as well as embodied conversational agents.

Kadi Bouatouch

Kadi Bouatouch received the PhD degree in 1977 and the higher doctorate degree in computer science in the field of computer graphics in 1989. He is an electronics and automatic systems engineer (ENSEM 1974). He is a professor at the University of Rennes 1, France, and a researcher at IRISA-INRIA. His research interests are global illumination, rendering of complex environments, real-time high-fidelity rendering, virtual and augmented reality, and computer vision. He is a member of the Eurographics, the ACM, and the program committees of several conferences and workshops, and a senior member of the IEEE and the IEEE Computer Society.
