Computational Super-Resolution Full-Parallax Three-Dimensional Light Field Display Based on Dual-Layer LCD Modulation

Due to the physical resolution limitation of display devices, it is usually difficult for a three-dimensional (3D) light field display to achieve a high spatial resolution and a high angular resolution simultaneously. Here, a computational super-resolution full-parallax 3D light field display is demonstrated, which achieves both high spatial resolution and high angular resolution. The proposed display consists of a specially designed backlight unit with a controlled scattering angle, two cascaded LCDs with a resolution of <inline-formula> <tex-math notation="LaTeX">$3840\times 2160$ </tex-math></inline-formula>, an aberration-suppressed compound lens array, and a holographic functional screen. The optical image formation of the display is analyzed and modeled as a whole process, and a super-resolution light field synthesis method is presented. By co-designing the optical elements with the computation processing, the proposed display architecture improves the spatial resolution and angular resolution by a factor of <inline-formula> <tex-math notation="LaTeX">$2\times $ </tex-math></inline-formula> in both the horizontal and vertical directions, compared with those of a conventional light field display with a single LCD and a standard lens array. Finally, a super-resolution full-parallax 3D light field display with a designed spatial resolution of about <inline-formula> <tex-math notation="LaTeX">$640\times 360$ </tex-math></inline-formula> in a display area of 21.5 inches and an angular resolution of <inline-formula> <tex-math notation="LaTeX">$166\times 166$ </tex-math></inline-formula> in a viewing angle of <inline-formula> <tex-math notation="LaTeX">$45\times 45$ </tex-math></inline-formula> degrees is constructed, and an excellent 3D visual experience with improved image quality can be perceived.


I. INTRODUCTION
Recently, glasses-free three-dimensional (3D) display has attracted considerable attention, and various efforts have been made to achieve a natural and realistic 3D visual experience [1]-[16]. An ideal glasses-free 3D display should restore the light field of the objective world in multiple dimensions such as depth, color, brightness, contrast, etc., similar to how humans observe the real world. However, most glasses-free 3D displays are based on stereoscopic vision, which can only provide two different parallax images for the viewer's eyes [1], [2]. Suffering from the disadvantages of unsmooth parallax, convergence-accommodation conflict, and a small viewing angle, it is difficult for a stereoscopic 3D display to provide viewers with a comfortable 3D visual experience. Holographic 3D display, which can in theory restore all information of the real world, is considered an alternative to the current stereoscopic 3D display, but it remains challenging to provide a dynamic 3D image with true color and high resolution [3]-[5]. The 3D light field display (LFD) is another alternative to the current stereoscopic 3D display. By reconstructing the emitted light rays of the real world with different intensities and colors in different directions, a 3D LFD can reproduce the light field of the real world with all depth cues in a full-color and dynamic way, which provides viewers with a natural and comfortable 3D visual experience [6]-[16]. A 3D LFD is normally built by combining display devices with a series of optical elements, such as the integral imaging LFD based on an LCD and a lens array [8]-[12], the projection-type LFD based on multi-projection and a holographic functional screen (HFS) [13], [14], and the compressive LFD based on multi-layer LCDs [15], [16]. Integral imaging LFDs can demonstrate a full-parallax and full-color 3D experience with a more compact structure compared with projection-type LFDs, and provide a larger display depth of field and viewing angle compared with compressive LFDs, but they suffer from low spatial resolution due to the large pitch of the lens array. By introducing an HFS into the configuration as shown in Fig. 1(a), our previously proposed integral imaging LFDs overcame this disadvantage and demonstrated a full-parallax and full-color 3D experience with improved spatial resolution [17], [18]. However, due to the limited physical resolution of displays [19], there always exists an inherent trade-off among viewing angle, spatial resolution, angular resolution, and depth range for a glasses-free 3D display. In a multi-view glasses-free display, the number of multi-view pixels in the display area is defined as the spatial resolution, and the number of view-dependent sub-pixels per multi-view pixel is defined as the angular resolution [20]. A 3D LFD which can reproduce a 3D scene with a high spatial resolution and a high angular resolution in a large viewing angle is always expected. To improve spatial or angular resolution, researchers have attempted to display 3D images by combining multiple LCDs or projectors [21]-[23], such as our previously demonstrated dynamic 3D LFD with a viewing angle of 90 degrees based on a compound lenticular lens array and three projectors [23]. Even though those methods can improve the spatial or angular resolution of 3D images by introducing more display units, the resulting display systems are usually costly, energy inefficient, or even bulky. The resolution can also be improved using computational super-resolution methods [24], which can overcome the physical resolution limitation by combining the available display device with computation processing. Those methods can synthesize high-resolution images more efficiently than simply stacking spatial light modulator (SLM) units together, for example improving the spatio-temporal resolution by a factor of two over the original display with two display units [25]. However, all existing computational super-resolution methods are intended for improving the resolution of 2D displays.

(The associate editor coordinating the review of this manuscript and approving it for publication was Chao Zuo.)

VOLUME 8, 2020. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Here, a computational super-resolution full-parallax 3D LFD method is proposed, which can achieve both high spatial resolution and high angular resolution. The 3D LFD is comprised of a specially designed backlight unit with a controlled scattering angle, two cascaded LCDs with a resolution of 3840 × 2160, an aberration-suppressed compound lens array, and an HFS. The optical image formation of the display is analyzed and modeled as a whole process, and the super-resolution light field synthesis method is provided. By co-designing the optical elements with the computation processing, a 3D LFD with a designed spatial resolution of 640 × 360 in a display area of 21.5 inches and an angular resolution of 166 × 166 in a viewing angle of 45 × 45 degrees is demonstrated, which shows clearly improved spatial and angular resolution compared with those of a conventional 3D LFD with a single LCD, and an excellent 3D visual experience with improved image quality can be perceived.

II. COMPUTATIONAL SUPER-RESOLUTION 3D LFD
The computational super-resolution 3D LFD is built upon our previously proposed integral imaging LFD architecture, and the configuration is provided in Fig. 1(d). The proposed computational super-resolution 3D LFD is composed of a specially designed backlight with a controlled scattering angle, two cascaded LCDs denoted by LCD 1 and LCD 2 , an aberration-suppressed compound lens array, and a HFS denoted by HFS 2 . The backlight is designed by combining a LED array, a Fresnel lens array, and a HFS denoted by HFS 1 . HFS 2 is placed at the center depth plane, which is determined by the Gaussian imaging formula 1/d h + 1/d l = 1/f (1), where d h is the distance between the front LCD 2 and the compound lens array, d l is the distance between the compound lens array and HFS 2 , and f is the focal length of the compound lens.
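As an illustration, the center depth plane location follows directly from the Gaussian imaging formula; the Python sketch below solves for d l given d h and f (the numeric values in the example are illustrative, not the prototype's parameters):

```python
def center_depth_distance(d_h, f):
    # Gaussian imaging formula: 1/d_h + 1/d_l = 1/f, solved for d_l
    if d_h <= f:
        raise ValueError("object distance must exceed the focal length for a real image")
    return 1.0 / (1.0 / f - 1.0 / d_h)
```

For example, with f = 10 mm and d h = 12 mm, the center depth plane lies at d l = 60 mm.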
In the remainder of this section, we describe the details of our proposed approach and analyze the performance and limitations.

A. OPTICAL IMAGE FORMATION
In this section, the image formation process is analyzed by considering 2D light fields and 1D layers, but the extension to full 4D light fields and 2D layers is straightforward. The whole process of optical image formation is shown in Fig. 2(a). First, light rays with a specially designed scattering angle are emitted by the backlight unit. These light rays then pass through the two cascaded LCDs and are modulated twice in a multiplicative way, leading to emergent light rays with improved angular resolution. Afterward, the emergent light rays pass through the compound lens in front of each elemental image and focus on HFS 2 . Each illuminated point on the HFS then emits multiple light beams with various intensities and colors in different directions in a controlled way, as if they were emitted from a point of a real 3D object at a fixed spatial position. The introduction of the backlight with a controlled scattering angle and the two cascaded LCD layers increases the degrees of freedom of our proposed 3D LFD, making it possible to achieve super-resolution light field reconstruction. The specially designed backlight with a controlled scattering angle is composed of a LED array, a Fresnel lens array, and a HFS denoted by HFS 1 . All the LEDs are placed on the focal planes of the corresponding Fresnel lenses, as shown in Fig. 2(a), and the emergent collimated light from the Fresnel lens array then goes through HFS 1 . The HFS is an optical device which diffuses the incident light within a controlled angle, and it is holographically printed with speckle patterns exposed on a suitable photosensitive material [13]. The light field with a controlled scattering angle emitted by the backlight can be denoted as b (ξ 0 ) = rect (ξ 0 /ω 0 ) (2), where ω 0 is the diffusing angle of HFS 1 , ξ 0 is the angle between the emitted light and the normal of the backlight, and rect (·) is the rectangular function. The reconstructed light field on the viewer side is engineered to be view-dependent.
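The rectangular angular gate in (2) can be sketched numerically as follows (a toy discretization; the angles are treated in degrees, which is an assumption of this sketch):

```python
import numpy as np

def backlight_field(xi0_deg, omega0_deg):
    # Eq. (2): unit radiance inside the controlled scattering cone, zero outside
    xi0 = np.asarray(xi0_deg, dtype=float)
    return np.where(np.abs(xi0) <= omega0_deg / 2.0, 1.0, 0.0)
```

With a 50-degree diffusing angle, rays within ±25 degrees of the backlight normal carry light and all others are dark.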
To formulate the light field, we adopt a two-plane parameterization of the light field, as given in Fig. 2(a). Each light ray (x, v) is defined by its intersections with HFS 2 and a relative plane at unit distance: x is the spatial coordinate of the intersection on HFS 2 , and v = tan(θ) is the spatial coordinate of the intersection on the relative plane relative to x, which also denotes the direction of the light ray. For greater generality, we assume that the LCD layers support a refresh rate higher than that perceivable by the human eye, and M time-multiplexed frames can be presented to the viewer at a refresh rate above the critical flicker fusion threshold of the human eye, such that the viewer perceives the temporal average of an M-frame sequence [15], [25]. Then the incident light field l (x, v) on HFS 2 emitted by the compound lens array is given by the multiplication of the light field b with controlled scattering angle emitted by the backlight, the pattern g displayed on LCD 1 , and the pattern h displayed on LCD 2 :

l (x, v) = (1/M) Σ_{m=1}^{M} b (θ (x, v)) g m (φ (x, v)) h m (ψ (x, v)), (3)

where b (ξ 0 ) is the light field emitted by the backlight defined in (2), g m (ξ 1 ) is the transparency of LCD 1 during frame m at position ξ 1 , and h m (ξ 2 ) is the transparency of LCD 2 during frame m at position ξ 2 . φ (x, v) : R × R → R is the mapping function which maps the light ray (x, v) incident upon HFS 2 to the position ξ 1 on LCD 1 , ψ (x, v) is the mapping function which maps the light ray (x, v) to the position ξ 2 on LCD 2 , and θ (x, v) is the mapping function which maps the light ray (x, v) to the backlight layer. M is the number of time-multiplexed frames. Due to the fact that we have designed an aberration-suppressed compound lens array, as given in the Appendix, an excellent MTF can be achieved within the designed viewing angle.
Thus an assumption of perfect optics is taken to model the ray transfer process. The mapping operations can be derived based on the ray transfer matrix [26], and are given in (4) and (5), where d g is the distance between LCD 1 and the primary principal plane of the compound lens array, d h is the distance between LCD 2 and the primary principal plane of the compound lens array, d l is the distance between the secondary principal plane of the compound lens array and HFS 2 , and T s is the transfer matrix of the compound lens, which is given in the Appendix.
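Under the perfect-optics assumption, the mapping functions in (4) and (5) follow from inverting the system's ray transfer matrix. The sketch below uses a thin-lens stand-in for the compound-lens matrix T s, so the numbers are illustrative rather than the prototype's:

```python
import numpy as np

def translate(d):
    # Free-space propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    # Thin-lens refraction with focal length f
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def map_to_layer(x, v, d_layer, d_l, f):
    # Forward system from the layer plane to HFS2: propagate, refract, propagate
    M = translate(d_l) @ thin_lens(f) @ translate(d_layer)
    # Invert it to map a ray (x, v) observed on HFS2 back to its layer position xi
    xi, _ = np.linalg.inv(M) @ np.array([x, v])
    return xi
```

When the layer satisfies the imaging condition 1/d_layer + 1/d_l = 1/f, every ray leaving a layer point arrives at its magnified image point on HFS 2 , so the recovered position ξ is independent of the ray direction v.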
The diffusion characteristic of HFS 2 is important to precisely recompose the emitted light field from the lens array according to the system geometry, and the diffusing angle is determined by the shape and size of the speckle through controlling the mask aperture [13]. As shown in Fig. 2(b), assuming a HFS with a diffusing angle of ω 1 is used, the image observed on HFS 2 is the integration of the incident light field over the angular domain, and the observed light field s (x, v) at position x and direction v on HFS 2 is denoted as (6). The integration process defined in (6) can be modeled as a direction-dependent 2D convolution with a kernel ρ, and the kernel is denoted as (7), where ω 1 is the diffusing angle of HFS 2 , u (·) is the step function, and b (·) is the backlight distribution, whose input parameter is the light ray direction denoted by v 0 = tan(ξ 0 ).
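In a simplified view, the angular integration in (6) reduces to a box-filter convolution over the direction coordinate. The sketch below applies a uniform box kernel over v, ignoring the direction-dependent backlight weighting b(·) and taking the kernel half-width directly in v units; both are simplifying assumptions of this sketch:

```python
import numpy as np

def hfs_diffuse(l_xv, v_axis, omega1):
    # l_xv: discretized light field, rows indexed by x, columns by direction v
    dv = v_axis[1] - v_axis[0]
    half = int(round((omega1 / 2.0) / dv))
    kernel = np.ones(2 * half + 1)
    kernel /= kernel.sum()                     # normalized box kernel of angular width omega1
    # Convolve each spatial row over its angular dimension
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode='same'), 1, l_xv)
```

A uniform incident field stays uniform away from the angular boundary, which is a quick sanity check on the kernel normalization.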

B. SUPER-RESOLUTION LIGHT FIELD SYNTHESIS
Super-resolution light field synthesis requires decomposing a target super-resolution light field s̃ (x, v) into an M-frame sequence of patterns g m (ξ 1 ) and h m (ξ 2 ). This process can be formulated as the following nonlinear least squares problem:

arg min_{g m , h m } ∫∫ | s̃ (x, v) − s (x, v) |² dx dv, (8)

where s (x, v) is the emitted light field in (6).
To solve this optimization problem, we need to transform the optical image formation defined in (6) into a discrete representation, with the light field image formation process extended to the full 4D light fields and 2D layers. Specifically, the light field formation process can be discretized as:

S = P B ( (1/M) Σ_{m=1}^{M} (Φ G m ) ∘ (Ψ H m ) ), (9)

where S ∈ R L is the observed rank-M light field, L is the total pixel number of the super-resolution light field, and M is the number of multiplexed frames presented to the viewer at a rate above the critical flicker fusion threshold. G m ∈ R N and H m ∈ R N are the discrete patterns displayed on LCD 1 and LCD 2 respectively, where the 2D patterns are rearranged into vectors row by row, and N is the number of pixels. Φ ∈ R L×N is the transfer matrix which permutes the rows of G m according to the mapping operator φ (x, v), and Ψ ∈ R L×N is the transfer matrix which rearranges the rows of H m according to the mapping operator ψ (x, v). The matrices Φ and Ψ are sparse, usually with one nonzero value per row, and can be constructed via ray tracing according to (4) and (5). ∘ denotes the Hadamard or element-wise product. B ∈ R L×L is a diagonal matrix with values of 0 or 1, and an element is set to zero only if the corresponding light ray intersects the backlight at an angle exceeding the diffusing angle limitation, where the intersection angle is determined by the mapping function θ (x, v). P ∈ R L×L is the matrix which denotes the integral operation on light rays in (6) with the constraint of the filter operator ρ (·) in (7). P is a Toeplitz matrix, with each row consisting of a shifted copy of the point spread function of HFS 2 ; P is also sparse, since each observed pixel on HFS 2 depends on a small number of pixels in each of the front and rear LCD panels. Then the super-resolution light field synthesis process defined in (8) can be expressed as:

arg min_{G m , H m ∈ [0, 1]} ‖ β S̃ − P B ( Σ_{m=1}^{M} (Φ G m ) ∘ (Ψ H m ) ) ‖², (10)

where S̃ ∈ R L is the target super-resolution light field, and β is the brightness scaling factor with the range [0, 1]; the factor 1/M is absorbed in β.
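The discrete forward model in (9) can be sketched with dense NumPy arrays as follows; in practice Φ, Ψ, and P are large sparse matrices built by ray tracing, so this toy version only illustrates the order of operations:

```python
import numpy as np

def forward_model(P, b_diag, Phi, Psi, G, H):
    # Eq. (9): S = P B ((1/M) sum_m (Phi G_m) o (Psi H_m))
    # G, H: (M, N) stacked layer patterns; Phi, Psi: (L, N); b_diag: (L,); P: (L, L)
    M = G.shape[0]
    acc = np.zeros(Phi.shape[0])
    for m in range(M):
        acc += (Phi @ G[m]) * (Psi @ H[m])   # element-wise product of the two modulated layers
    return P @ (b_diag * acc / M)            # backlight gating, then HFS integration
```

With identity projection matrices and all-white layer patterns, the output reduces to the backlight gating vector, which is an easy consistency check.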
The non-negativity constraints on G m and H m ensure that the optimized patterns are physically feasible. Although this is a non-linear and non-convex problem, it is biconvex in G and H: fixing one results in a convex problem for updating the other. Such updates are usually performed in an alternating and iterative manner, and the multiplicative matrix update rules derived for our problem are given in (11), where ⊙ denotes the Hadamard or element-wise product, ⊘ denotes the Hadamard or element-wise division, and ε = 10 −12 is added to prevent division by zero. Although the projection matrices are introduced into the solver, the update rules in (11) are mathematically distinct from, but numerically equivalent to, those of the non-negative matrix factorization problems [27], [28]. The above operators can be hardware-accelerated with a GPU and can be implemented in real time. To evaluate the maximum achievable spatial resolution, we assume that a 3D image with constant parallax is displayed at the HFS 2 plane, which is also the center depth plane defined in (1). In this case, the same image can be observed from different viewpoints, and the observed light field s (x, v) defined in (6) can be denoted as (12).
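To illustrate the alternating multiplicative scheme, here is a stripped-down sketch that omits the projection matrices Φ, Ψ, P, and B, i.e., it factorizes a target S directly as (1/M) Σ m G m ⊙ H m ; the full solver applies the same ratio-style updates with the transfer matrices in place:

```python
import numpy as np

def synthesize_layers(S, M=2, iters=50, eps=1e-12, seed=0):
    # Multiplicative updates for min || S - (1/M) sum_m G_m o H_m ||^2 with G, H >= 0
    rng = np.random.default_rng(seed)
    G = rng.random((M, S.size))
    H = rng.random((M, S.size))
    for _ in range(iters):
        S_hat = (G * H).mean(axis=0)
        G *= (S * H) / (S_hat * H + eps)   # ratio update keeps G non-negative
        S_hat = (G * H).mean(axis=0)
        H *= (S * G) / (S_hat * G + eps)
    return G, H
```

Because the ratio is the same for every frame in this simplified case, the toy problem converges almost immediately; the projected version behaves analogously but needs more iterations.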

III. ANALYSIS

A. UPPER BOUND OF DISPLAY SUPER-RESOLUTION
v 0 can be defined as any given direction in the viewing zone. The emitted light field spectrum in terms of the spatial frequency ω x and the angular frequency ω v is thus given in (13), where l (ω x ) is the Fourier transform of the light field distribution l (x, v 0 ) and ∗ denotes the convolution operator. g m (ω x ) is the spectrum of the light displayed on HFS 2 emitted by LCD 1 , and h m (ω x ) is the spectrum of the light displayed on HFS 2 emitted by LCD 2 . The spectrum of the super-resolution light field s (ω x , ω v ) is the convolution of g m (ω x ) and h m (ω x ). According to the properties of convolution, the maximum spatial frequency of s (ω x , ω v ) is the sum of the maximum spatial frequencies of g m (ω x ) and h m (ω x ):

ω smax = ω gmax + ω hmax , (14)

where ω smax is the maximum spatial frequency of s (ω x , ω v ), ω gmax is the maximum spatial frequency of g m (ω x ), ω hmax is the maximum spatial frequency of h m (ω x ), p is the pixel pitch of the LCD, and d h and d l are the distances defined in Fig. 3(a). Assuming that cosine patterns with the maximum spatial frequency are displayed on LCD 1 and LCD 2 , the results of the super-resolution are provided in Fig. 3(c).
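The frequency-doubling claim of (14) can be checked numerically: multiplying two non-negative cosine patterns of frequency f produces a spectral component at 2f. A toy 1D check:

```python
import numpy as np

n, f = 256, 20                                   # sample count and per-panel peak frequency (toy values)
x = np.arange(n)
g = 0.5 * (1 + np.cos(2 * np.pi * f * x / n))    # non-negative pattern on the rear layer
h = 0.5 * (1 + np.cos(2 * np.pi * f * x / n))    # non-negative pattern on the front layer
spectrum = np.abs(np.fft.rfft(g * h))
# Multiplication in space is convolution in frequency, so g*h carries energy at 2f
```

The product spectrum has peaks at DC, f, and 2f, with nothing beyond 2f, matching the convolution argument above.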
Although the projected image of LCD 1 on HFS 2 is out of focus, the defocus blur does not change the fact that the maximum achievable spatial frequency ω gmax is equal to ω hmax , so the maximum achievable spatial frequency is ω smax = 2ω hmax . The maximum achievable spatial resolution is twice the maximum spatial frequency, thus the improvement in spatial resolution has a maximum factor of 2× in both the horizontal and vertical directions. For a light field display, the maximum spatial frequency of the light field changes according to the depth of the reconstructed voxel, and the expression for the maximum spatial frequency can be written for a plane oriented parallel to the HFS 2 screen and separated from it by a distance d. This expression can be derived using a frequency-domain analysis of the emitted light field [20], as given in (16), where ω s (d) is the displayed maximum spatial frequency of the light field at a depth d, ω smax is the maximum spatial frequency of the emitted light field given in (14), p l is the pitch of the compound lens, N 0 is the angular resolution of our proposed LFD, and d l and d h are denoted in Fig. 3(a). The distribution of spatial frequency according to the display depth of our proposed computational super-resolution LFD is given in Fig. 3(d). As given in (17), the angular resolution of our proposed LFD improves with the improvement of the maximum spatial frequency ω smax . Thus, the improvement in angular resolution has a maximum factor of 2× in both directions compared with a conventional 3D LFD with a single LCD.

B. SUPER-RESOLUTION LIGHT FIELD SYNTHESIS RESULTS
Based on the upper bound analysis of display super-resolution, the simulated result of super-resolution synthesis and a comparison with the conventional 3D LFD with a single LCD are given in Fig. 4. In this experiment, a target light field with full parallax over a field of 45 × 45 degrees is given. The target light field is decomposed into two pairs of time-multiplexed patterns with the super-resolution light field synthesis method, as shown in Fig. 4(a). This choice simulates 60Hz LCDs that create a rank-2 light field approximation for an observer with a critical flicker frequency of 30Hz, which matches our prototype in Section IV. The light field can be reproduced with a high image quality using this configuration. The quantitative performance indicators of structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) are adopted: the PSNR of the reproduced light field is 28.9dB, and the SSIM is 0.9815. In comparison, the simulated result of displaying the target light field with a conventional 3D LFD is also given in Fig. 4(b). The elemental images based on the conventional image coding method [9] and the reproduced light field are also provided. Apparently, by employing our proposed method, the resolution is significantly improved. In our proposed architecture, the diffusing angle of the backlight ω 0 given in (2) and the distance between the two LCDs determine the number of pixels in LCD 1 covered by one pixel in LCD 2 , as shown in Fig. 3(b). The covered pixels in LCD 1 are the ones which emit light rays passing through the same pixel in LCD 2 . We show a quantitative evaluation of the super-resolution light field synthesis performance according to the change of the covered pixel number in Fig. 5(a), where the PSNR of the reproduced light field of the super-resolution light field synthesis experiment in Fig. 4(a) is measured.
Due to the limited degrees of freedom of the proposed super-resolution LFD architecture, the reproduced super-resolution light field shows a decreased quality with the increase of the covered pixel number, as shown by the blue curve in Fig. 5(a). The lower bound of the PSNR of the reproduced super-resolution light field is the PSNR of the light field reproduced by the conventional 3D LFD with a single LCD. The upper bound of the PSNR of the reproduced super-resolution light field is achieved when only two pixels in LCD 1 are covered by one pixel in LCD 2 .
The convergence of the proposed super-resolution light field synthesis algorithm is also quantitatively evaluated in Fig. 5(b). One iteration takes about 110ms with GPU acceleration on an Intel Core i9-7980XE PC with an Nvidia GeForce RTX 2080Ti. After about 60 iterations, no significant improvements in image quality can be observed. Real-time light field super-resolution synthesis can be achieved with 9 iterations, and the achieved PSNR is about 27.5dB for the scene in Fig. 4.

IV. IMPLEMENTATION
The implementation of the proposed computational super-resolution full-parallax 3D LFD is given in Fig. 6. The backlight and the two LCDs in the display architecture are combined together as the super-resolution display unit, as shown in Fig. 6(a). In the backlight unit, the LEDs are placed on the focal planes of Fresnel lenses with a focal length of 15mm, and the diffusing angle of HFS 1 is 50 degrees. Two BOE MV238QUM-N20 LCDs are affixed together as the cascaded dual-layer LCD unit using an optically clear adhesive (OCA) substance; the OCA has the same refractive index as the LCD cover glass, and the fabricated dual-layer LCD unit is shown in Fig. 6(b). Each LCD has a native resolution of 3840 × 2160, a refresh rate of 60Hz, and a display size of 23.8 inches. All diffusing and polarizing films are removed from the top LCD 2 , and its front polarizer is replaced by a clear linear polarizer with an angle of 135 degrees, which is crossed with the front polarizer on the bottom LCD 1 . The thickness of the OCA substance is d s = 1mm, thus 7 × 7 pixels in LCD 1 are covered by one pixel in LCD 2 , given the diffusing angle of the backlight unit. The prototype is shown in Fig. 6(c). The compound lens is designed based on the glass material H-ZLAF90 with a refractive index of n = 2.0, and the effective focal length of the compound lens is f = 13.28 mm, as shown in Fig. 11 in the Appendix. The diffusing angle of HFS 2 is 5 degrees. The compound lens array comprises 46 × 26 lenses, and the pitch of two adjacent lenses is 11.17 mm. The distance from LCD 2 to the principal plane of the compound lens is d h = 8.9 mm, and the distance between HFS 2 and the compound lens array is d l = 160 mm.
Besides, the display gamma curves and the geometric misalignment of the dual-layer LCD unit are calibrated. The gamma curves of both native LCDs are measured by photographing flat-field images (a series of uniform images with varying intensities) with a camera in RAW format. These curves are inverted such that each display operates in a linear radiometric fashion. The alignment of the dual-layer LCD is first performed mechanically, and the remaining misalignment is then fine-tuned in software. The same image, with five crossbars distributed at the center and four corners, is displayed on both LCDs, and the misalignment is corrected by warping the image displayed on the bottom LCD 1 to align with the image displayed on the top LCD 2 .
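The gamma inversion step can be sketched as an inverse lookup table built from a measured response curve (assuming the measurement is monotonically increasing and normalized to [0, 1]; the function and variable names are illustrative):

```python
import numpy as np

def inverse_gamma_lut(measured):
    # measured: luminance for each panel input level, ascending, scaled to [0, 1]
    levels = measured.size
    targets = np.linspace(0.0, 1.0, levels)            # desired linear outputs
    # For each target, pick the smallest input level whose measured luminance reaches it
    lut = np.searchsorted(measured, targets).clip(0, levels - 1)
    return lut.astype(np.uint16)
```

Applying the LUT to an image before display makes the panel's output approximately linear in the requested value.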

V. EXPERIMENTAL RESULTS
In this section, the performance of our proposed computational super-resolution 3D LFD is evaluated, both in simulation and in experiment. Assuming a critical flicker frequency of 30Hz, our prototype with two 60Hz LCDs emits a rank-2 light field, and the human observer perceptually averages over two time-multiplexed frames. Besides, the results of a conventional 3D LFD with a single LCD are also provided for comparison. The conventional 3D LFD shares the same structure with our proposed display, and is built by displaying the elemental images on the top LCD 2 while the bottom LCD 1 is set to be fully transparent. The designed maximum spatial resolution of the reproduced super-resolution light field of our proposed display is 1.331 cycles/mm in both the horizontal and vertical directions, and a super-resolution light field with a spatial resolution of 640 × 360 can be reproduced in a display area of 21.5 inches at a viewing distance of 2.1m. The designed angular resolution of our proposed display is 166 × 166, and the designed viewing angle is 45 × 45 degrees. All photographic results are captured using an EOS 60D camera, with an f-number of 14 and an exposure time of 1/10s.
To evaluate the resolution characteristic of the proposed computational super-resolution 3D LFD, a test pattern consisting of multiple stripes with varying spatial frequency is introduced and displayed on the center depth plane with the maximum spatial frequency. The value of 640 in the test pattern corresponds to a spatial resolution of 1.331 cycles/mm. Assuming an observing distance of 2.1m in front of the HFS, the reconstructed test patterns are provided in Fig. 7. The reproduced super-resolution test pattern has a resolution of 640 × 360 in a display area of 21.5 inches, while the test pattern reproduced by the conventional 3D LFD has a resolution of 320 × 180. As shown in Fig. 7, both the simulation result and the display result of the computational super-resolution 3D LFD show significantly improved clarity and consistency compared with those of the conventional 3D LFD. By introducing our computational super-resolution LFD, the SSIM of the reproduced test pattern with respect to the target pattern is improved from 0.9166 to 0.957, and the PSNR is improved from 20.52dB to 23.49dB, compared with the conventional 3D LFD. Due to the compressive nature of our display design, the target light field is in general not fully achieved in the actual result: the actual discriminable stripes with the maximum spatial frequency in our reproduced super-resolution test pattern have a value of about 500, and the corresponding maximum spatial resolution is about 1.040 cycles/mm. The actual discriminable stripes with the maximum spatial frequency of the test pattern reproduced by the conventional 3D LFD have a value of about 300, and the corresponding maximum spatial resolution is 0.624 cycles/mm. The actual spatial resolution improvement factor of our built prototype is thus about 1.667×.
The reproduced results of the full-parallax target light field with a field of view of 45 × 45 degrees are provided in Fig. 8. The simulation results and photographed results of four different views are given, together with the depth distributions of the reconstructed light fields. The results of our proposed computational super-resolution 3D LFD outperform those of the conventional 3D LFD in both simulation and photographs. Improved SSIM and PSNR values are achieved in different views with our proposed display, and those advantages translate into a visible improvement of spatial resolution in the reproduced light field, which can be observed in the close-ups. As the close-ups show, the result of our proposed super-resolution 3D LFD exhibits an improved resolution over that of the conventional 3D LFD even beyond the depth of field given in Fig. 3(d). This improvement in resolution benefits from the higher angular resolution of our 3D LFD: more light rays with high angular frequency are combined into the observed view by the diffusion of HFS 2 . Besides, the reproduced light field shows apparent full parallax, and the 3D structure and the relative 3D positions of different parts can be perceived. More experimental results with different scenes are provided in Fig. 9; the resolution improvement of the reproduced light field achieved by introducing our proposed computational super-resolution 3D LFD over the conventional 3D LFD is significant. Different views of the corresponding super-resolution light fields are given in Fig. 10.

VI. CONCLUSION
In summary, a computational super-resolution full-parallax 3D LFD architecture is demonstrated, which can improve the spatial resolution and angular resolution by a factor of 2× in both directions compared with the conventional 3D LFD. The proposed display architecture is comprised of a backlight unit with a controlled scattering angle, two cascaded LCDs, an aberration-suppressed compound lens array, and a HFS. The whole process of optical image formation in the display is modeled, and the super-resolution light field synthesis method is presented. Based on the proposed computational super-resolution full-parallax 3D LFD architecture, a display prototype is constructed. The demonstrated prototype can reproduce the light field with a designed spatial resolution of 640 × 360 in a display area of 21.5 inches and an angular resolution of 166 × 166 in a viewing angle of 45 × 45 degrees. An excellent 3D visual experience with improved image quality can be provided with this prototype.

APPENDIX
Due to the aberration of the lens array, a conventional 3D LFD usually suffers from degraded 3D imaging quality, since lens aberrations reduce the reconstruction accuracy of the light beams from the lens array. To address this problem, an aberration-suppressed compound lens array is introduced. The compound lens consists of two lenses, and the structure is optimized with the damped least-squares method to decrease the aberrations. The optimized structure and the corresponding parameters are illustrated in Fig. 11. The compound lens is designed based on the glass material H-ZLAF90 with a refractive index of n = 2.0, and the effective focal length of the compound lens is f = 13.28 mm. The ray transfer matrix of the compound lens is given in (18), where R 1 , R 2 , R 3 and R 4 are the radii of the different surfaces, d 1 and d 3 are the thicknesses of the two lenses, and d 2 is the interval between the two lenses. Compared with a standard single lens with the same focal length, the aberrations of the compound lens are well suppressed within the designed viewing angle of 45 degrees, and the modulation transfer function of the compound lens is significantly improved.
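For reference, the transfer matrix of a two-element compound lens in air is the ordered product of four spherical refraction matrices and three translation matrices. The sketch below uses the paraxial ray-transfer convention with ray vector [height, angle]; the radii and thicknesses in the example are placeholders, not the optimized values of Fig. 11:

```python
import numpy as np

def refraction(R, n1, n2):
    # Paraxial refraction at a spherical surface of radius R, from index n1 into n2
    return np.array([[1.0, 0.0], [-(n2 - n1) / (R * n2), n1 / n2]])

def translation(d):
    # Propagation over distance d (actual-angle convention)
    return np.array([[1.0, d], [0.0, 1.0]])

def compound_lens_matrix(R, d, n):
    # Surfaces in order: R1 (air->glass), d1, R2 (glass->air), d2, R3 (air->glass), d3, R4 (glass->air)
    T = refraction(R[0], 1.0, n)
    T = translation(d[0]) @ T
    T = refraction(R[1], n, 1.0) @ T
    T = translation(d[1]) @ T
    T = refraction(R[2], 1.0, n) @ T
    T = translation(d[2]) @ T
    T = refraction(R[3], n, 1.0) @ T
    return T
```

In the thin-lens limit the element T[1][0] equals −1/f, which provides a quick consistency check against the lensmaker's equation.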