Impacts of Retina-Related Zones on Quality Perception of Omnidirectional Image

Virtual Reality (VR), which brings immersive experiences to viewers, has been gaining popularity in recent years. A key feature of VR systems is the use of omnidirectional content, which provides 360-degree views of scenes. In this work, we study human quality perception of omnidirectional images, focusing on different zones surrounding the foveation point. For that purpose, an extensive subjective experiment is carried out to assess the perceptual quality of omnidirectional images with non-uniform quality. Through experimental results, the impacts of different zones are analyzed. Moreover, nineteen objective quality metrics, including foveal quality metrics, are evaluated using our database. It is quantitatively shown that the zones corresponding to the fovea and parafovea of the human eyes are extremely important for quality perception, while the impacts of the other zones corresponding to the perifovea and periphery are small. In addition, most of the investigated metrics are found to be insufficiently effective at reflecting the quality perceived by viewers. Our database has been made available to the public.


Introduction
In order to bring immersive experiences to viewers, virtual reality (VR) systems employ omnidirectional content, which contains 360-degree views of scenes. Unlike traditional content displayed on a flat screen, omnidirectional content is usually consumed using Head Mounted Displays (HMDs). Also, only a small part of the full content (called the viewport), corresponding to the current viewing direction, is actually seen by the viewer at any moment [1].
Because omnidirectional (or 360-degree) content requires very high bitrates, a key challenge in omnidirectional content delivery is how to optimize system resources while still ensuring a satisfactory user experience. To that end, many encoding and delivery solutions have been proposed in the literature, where the (estimated) viewport is provided at high quality and the remaining part at low quality [2][3][4]. Moreover, in VR systems, foveated imaging, which decreases the quality of zones far from the viewer's foveation point [5,6], can be used to further reduce resource consumption [6,7]. However, the estimated viewing direction can be very different from the actual one when the system delay is large [8]. The viewer may even suddenly turn to look backward. In these cases, the actual viewport may have low quality in the central part and high quality in the periphery. In other words, the central part may have higher quality (called scenario S#1) or lower quality (called scenario S#2) than the periphery, both resulting in omnidirectional content with non-uniform quality.
It is well known that human visual acuity is spatially variable [9,10]. In particular, when a person gazes at a point, called the foveation point, a zone closer to this point is perceived to be sharper than the others. This means that the human eyes have a higher sensitivity to distortions in the central region than in the periphery. Hence, understanding the impacts of different zones on perceptual quality is indispensable in the context of omnidirectional content.
In the literature, there are only a few existing studies on subjective quality assessment of images/videos with non-uniform quality [7,11,12], and most of them are devoted to traditional content [11,12]. In [11], each image is divided into four zones of equal width. The quality of these zones gradually decreases with a fixed step size. It is found that, when the step size is small, the difference in perceptual quality between non-uniform and uniform videos is insignificant. In addition, the maximum step size that does not cause significant quality differences depends on content characteristics. In [12], each image is divided into three zones: a foveal, a blending, and a peripheral zone. Through experimental results, it is found that participants barely notice quality decreases in peripheral zones at eccentricities larger than 7.5 degrees. Also, an evaluation of four subjective assessment methods is presented, which indicates that the Absolute Category Rating (ACR) method is the best method for evaluating the subjective quality of non-uniform images.
In the literature, there have been some studies on subjective quality assessment of omnidirectional content [13][14][15][16]. In these studies, various distortion types, such as compression and Gaussian blur, are considered. However, the distortions are distributed uniformly in [13][14][15][16]. The work in [7] is the only previous study on omnidirectional content with non-uniform quality. In [7], the authors focus on the question of how to spatially reduce image quality without affecting user perception. For that purpose, they propose to divide an omnidirectional image into three areas according to three regions of the human retina, namely the macula, the near periphery, and the far periphery. The image quality corresponding to each region is decreased step by step until participants notice a perceptual difference. The encoding parameters obtained just before that point are modeled and then used as a guide for spatially reducing image quality without perceptual loss. It is shown that this approach could reduce loading time by about 90% compared to a conventional approach using uniform quality.
Over several decades, a large number of objective quality metrics have been proposed [17][18][19][20][21]. Some of these metrics take into account the foveation feature and are hereafter referred to as foveal quality metrics [20,21]. However, all of these metrics are specific to traditional content. So far, there has been no foveal quality metric for omnidirectional content.
In our previous study [22], a comparison of eight state-of-the-art quality metrics was conducted. Experimental results showed that PSNR is the most effective metric for quality assessment of omnidirectional videos. However, it is worth noting that the stimuli used in that study had uniform quality. As shown later in this paper, PSNR is actually not effective when quality is spatially variable. To the best of our knowledge, no extensive evaluation of objective quality metrics for omnidirectional images with non-uniform quality has been conducted in the literature.
In this study, our purposes related to user perception of omnidirectional content in VR systems are as follows:
• A subjective study on the impacts of retina-related zones on the quality perception of omnidirectional images.
• A performance evaluation of existing objective quality metrics, especially foveal quality metrics, for omnidirectional images with non-uniform quality.
To that end, our major contributions are as follows. First, we present a detailed description of a VR viewing geometry and the human retina. This description helps in designing subjective experiments and in calculating parameters used in foveal quality metrics. Second, we carry out an extensive subjective experiment with 256 stimuli of non-uniform quality. The quality zones of the stimuli are designed based on five regions of the human retina. Third, using a simple zone-weighted formulation, we quantify the impacts of different zones on the perceptual quality. It is quantitatively found that the zones corresponding to the fovea and parafovea of the human retina are extremely important for quality perception. Also, the impacts of the zones are strongly affected by content characteristics. Fourth, we evaluate the correlation of nineteen objective quality metrics with the subjective scores. Experimental results indicate that these metrics, even the foveal ones, are not very effective when the viewport quality is spatially variable.
The remainder of the paper is organized as follows. A description of a VR viewing geometry and the human retina is presented in Sect. 2. Sect. 3 presents the details of the subjective experiment. The analysis of perceptual behaviors using the experimental results is provided in Sect. 4. Then, an evaluation of quality metrics is presented in Sect. 5. Section 6 concludes the paper and provides an outlook on future work.

Overview
In this section, the viewing geometry in VR systems is first presented. Then, the regions of the human retina are described.

Viewing Geometry in VR Systems
Fig. 1 illustrates a typical viewing geometry in VR systems. Assume that VP is the displayed viewport; the lens in the HMD then produces a virtual viewport VP′, whose image is formed on the retina of the human eye. Eccentricity e (degrees) measures the angular distance from the central gaze direction to any point in the virtual viewport VP′.
Let F (units of length) be the focal length of the lens. S_0, S_1, and S_2 (units of length) respectively denote the distances from the lens to the displayed viewport VP, the virtual viewport VP′, and the eye. Based on the thin-lens equation, the distance from the lens to the virtual viewport is

$$S_1 = \frac{F \cdot S_0}{F - S_0}. \quad (1)$$

Then, the distance from the eye to the virtual viewport is

$$S = S_1 + S_2. \quad (2)$$

Let W_p × H_p (pixels) and W_l × H_l (units of length) respectively be the width and height of the displayed viewport VP in pixels and in units of length. The width of the virtual viewport VP′ (in pixels and units of length) is given by the following equations:

$$W'_p = W_p \quad \text{[pixels]} \quad (3)$$

and

$$W'_l = \frac{S_1}{S_0} \cdot W_l \quad \text{[units of length]}. \quad (4)$$

Figure 1: Typical viewing geometry in VR systems

Also, the height of the virtual viewport VP′ is calculated by

$$H'_p = H_p \quad \text{[pixels]} \quad (5)$$

and

$$H'_l = \frac{S_1}{S_0} \cdot H_l \quad \text{[units of length]}. \quad (6)$$

Assume that the foveation point is the center O = (x_O, y_O) of the virtual viewport VP′. Point O′ = (x_{O′}, y_{O′}) in the displayed viewport VP corresponding to point O is determined by

$$x_{O'} = \frac{W_p}{2} \quad \text{[pixels]} \quad (7)$$

and

$$y_{O'} = \frac{H_p}{2} \quad \text{[pixels]}. \quad (8)$$

Let M be a point at position (x_M, y_M) (pixels) in the displayed viewport VP. The position of the corresponding virtual point M′ = (x_{M′}, y_{M′}) is

$$x_{M'} = x_M \quad \text{[pixels]} \quad (9)$$

and

$$y_{M'} = y_M \quad \text{[pixels]}. \quad (10)$$

The distance from pixel M′ to the foveation point O is

$$d(M', O) = \frac{W'_l}{W'_p} \sqrt{(x_{M'} - x_O)^2 + (y_{M'} - y_O)^2} \quad \text{[units of length]}. \quad (11)$$

The eccentricity e of point M′ in the virtual viewport VP′ is then given by

$$e = \arctan\left(\frac{d(M', O)}{S}\right). \quad (12)$$

It should be noted that the parameters of a point on the virtual viewport are what is actually used in a foveal quality metric. Moreover, given the knowledge of the human visual system, points on the virtual viewport can be divided according to the regions of the retina.
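To make the geometry concrete, the following Python sketch computes the eccentricity of a viewport pixel using Equations (1)-(12). The function name and the default arguments (taken from the HMD parameters reported later in Sect. 3) are ours, and a centered foveation point is assumed.

```python
import math

def eccentricity_deg(x_m, y_m, w_p, h_p, w_l, h_l,
                     F=62.0, S0=25.0, S2=10.0):
    """Eccentricity (degrees) of pixel (x_m, y_m) in a displayed
    viewport of w_p x h_p pixels and w_l x h_l mm, assuming the
    foveation point is the viewport center."""
    S1 = F * S0 / (F - S0)         # Eq. (1): lens-to-virtual-viewport distance
    S = S1 + S2                    # Eq. (2): eye-to-virtual-viewport distance
    magnification = S1 / S0        # Eqs. (4), (6): size scaling of the virtual viewport
    mm_per_px = (w_l * magnification) / w_p    # pixel pitch on the virtual viewport
    dx = (x_m - w_p / 2.0) * mm_per_px         # Eqs. (7)-(11): offset from the
    dy = (y_m - h_p / 2.0) * mm_per_px         # foveation point, in mm
    d = math.hypot(dx, dy)
    return math.degrees(math.atan2(d, S))      # Eq. (12)
```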

Regions in Human Retina
In the human retina, there are two types of photoreceptors, namely rods and cones, each of which plays an important role in the human visual system. In particular, cones function most effectively in relatively bright light and are responsible for color vision and visual acuity. Meanwhile, rods have a higher sensitivity to light, and thus function mainly in dim light.
Fig. 2 shows the density of photoreceptors in the human retina. It can be seen that most cones are concentrated at the center of the retina, whereas rods are located away from the center. Visual information from the photoreceptors is then collected by the so-called ganglion cells. The optic disk is where the axons of the ganglion cells exit the retina and convey visual information to the brain.
Based on the ganglion cell layer, the retina of the human eye can be divided into two main parts, namely the macula and the periphery [24], as illustrated in Fig. 3. In particular, the ganglion cell layer in the macula is several cells thick, whereas in the periphery it is only one cell thick. The macula is further divided into three regions, called the fovea, parafovea, and perifovea. The periphery is in turn divided into two regions, namely the near periphery and the far periphery [24,25]. These five regions of the retina are briefly described below. It is worth noting that there is no standard definition of the boundaries between these regions so far [26]. In our research, the boundaries are determined based on [26][27][28][29].
The fovea is a small central region of the macula that covers 5 degrees of the central visual field, i.e., an eccentricity interval between 0 and 2.5 degrees. This region consists of densely packed cones. In addition, it has a ganglion cell layer that can be up to eight cells thick. Therefore, foveal vision has the highest sensitivity to fine details.
The fovea is surrounded by the parafovea belt, corresponding to an eccentricity interval between 2.5 and 4 degrees. In the parafovea, rods are more numerous, while the thickness of the ganglion cell layer decreases from eight to four cells at its outer edge [25].
The region next to the parafovea is the perifovea, with a corresponding eccentricity interval between 4 and 9 degrees. In this region, the density of rods is higher than that of cones. The thickness of the ganglion cell layer reduces to one cell at its peripheral edge [25].
In the periphery, the region corresponding to an eccentricity interval between 9 and 30 degrees is the near periphery, and the rest is the far periphery. The dividing line at an eccentricity of 30 degrees is selected based on several features of visual performance. In particular, letter visual acuity decreases linearly with eccentricity from 0 to 30 degrees; for eccentricities larger than 30 degrees, the decrease is much steeper [9].
Based on the above description of the viewing geometry and the retina, the stimuli used in the following subjective experiment are designed so that the zones in the virtual viewports correspond to the five regions of the retina.
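Under these boundaries, a pixel's retina-related zone follows directly from its eccentricity. A minimal sketch (the function name is ours; the boundary values are the intervals given above):

```python
def retina_zone(e_deg: float) -> int:
    """Map an eccentricity (degrees) to one of the five retina-related
    zones: 1 fovea, 2 parafovea, 3 perifovea, 4 near periphery,
    5 far periphery."""
    for zone, outer_edge in enumerate((2.5, 4.0, 9.0, 30.0), start=1):
        if e_deg < outer_edge:
            return zone
    return 5
```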
It is worth noting that, in this paper, we focus on the contributions (or weights) of different zones to the perceptual quality, rather than the quality-reduction trends studied in [7,11,12].

Experiment Description
For the experiment, we used eight omnidirectional images, denoted by I1∼I8, as shown in Fig. 4. Two images, I5 and I7, were obtained on Flickr under Creative Commons (CC) licenses. The other six images were selected from the SUN 360 Database [30,31]. The characteristics of these images are described in Table 1. It can be seen that the selected images cover various categories of capturing environment and human presence. All these images were downsampled to a resolution of 8192×4096. We asked 10 participants to freely observe the source images and then point out attractive objects. Based on the obtained results, we selected a foveation point, corresponding to a viewport, for each image.
In order to generate stimuli of non-uniform quality, each image was first spatially divided into five zones, denoted Z_1, Z_2, Z_3, Z_4, and Z_5. In particular, each zone represents an eccentricity interval as shown in Table 2. Zones Z_1, Z_2, Z_3, Z_4, and Z_5 respectively correspond to the fovea, parafovea, perifovea, near periphery, and far periphery of the retina. Fig. 5 illustrates the boundaries of the zones in the viewports used in our experiment.
As described in Sect. 1, we consider two basic scenarios of spatial quality change. In the first scenario (S#1), the center has higher quality than the periphery; in the second scenario (S#2), the center has lower quality than the periphery. For each scenario, we used four quality variation patterns, as shown in Table 3. In patterns P1, P2, P3, and P4, which belong to scenario S#1, the number of high quality zones gradually increases from 1 to 4. In the remaining patterns (i.e., P5, P6, P7, and P8), which belong to scenario S#2, the number of high quality zones gradually decreases from 4 to 1.
In this study, we used one high quality level, corresponding to the source images, and low quality levels generated by blurring. Different blur strengths were used for the two scenarios, because blurring in zones close to the foveation point is more easily perceived than in the others. The source and blurred images were then blended into stimuli of non-uniform quality. Specifically, the high quality zones in the stimuli consist of pixels of the source images, and the low quality zones are composed of pixels of the blurred images. Similar to [12], to prevent noticeable boundaries between low and high quality zones, belts with a width of 5 degrees between two adjacent zones with a quality switch were used as transition belts. The quality levels in these belts change smoothly following a linear function. In total, our database consists of 256 stimuli, which were rated in the tests described below.
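A stimulus of this kind can be assembled pixel by pixel from the source and blurred images, with a linear ramp inside each 5-degree transition belt. The sketch below is our reading of that procedure, assuming grayscale float arrays and a precomputed per-pixel eccentricity map; the experiment's actual blur settings are not reproduced here.

```python
import numpy as np

def blend_stimulus(src, blurred, ecc, hq_zones,
                   zone_edges=(2.5, 4.0, 9.0, 30.0), belt=5.0):
    """Blend source (high quality) and blurred (low quality) viewports.
    src, blurred: arrays of the same shape; ecc: per-pixel eccentricity
    map (degrees); hq_zones: set of zone indices (1..5) kept at high
    quality. A linear ramp of `belt` degrees is applied around each
    boundary where the quality switches."""
    src = np.asarray(src, dtype=float)
    blurred = np.asarray(blurred, dtype=float)
    edges = (0.0,) + tuple(zone_edges) + (np.inf,)
    # alpha = 0 keeps the source pixel, alpha = 1 keeps the blurred pixel.
    alpha = np.zeros(src.shape, dtype=float)
    for k in range(1, 6):
        in_zone = (ecc >= edges[k - 1]) & (ecc < edges[k])
        alpha[in_zone] = 0.0 if k in hq_zones else 1.0
    # Linear transition across each boundary with a quality switch.
    for k in range(1, 5):
        if (k in hq_zones) != ((k + 1) in hq_zones):
            b = edges[k]
            in_belt = (ecc >= b - belt / 2) & (ecc < b + belt / 2)
            t = (ecc[in_belt] - (b - belt / 2)) / belt   # 0 inner edge, 1 outer edge
            inner, outer = (0.0, 1.0) if k in hq_zones else (1.0, 0.0)
            alpha[in_belt] = inner + (outer - inner) * t
    return (1.0 - alpha) * src + alpha * blurred
```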
To display the stimuli, we used a Samsung Galaxy S6 smartphone and a Samsung Gear VR headset with a 96-degree field of view. The Samsung Galaxy S6 has a screen resolution of 2560×1440 and a display size of 5.1 inches. For the Samsung Gear VR headset, the focal length of the lens is F = 62 mm, and the distances from the lens to the displayed viewport and to the eye are approximately S_0 = 25 mm and S_2 = 10 mm, respectively.
In the tests, we used the Absolute Category Rating method [32], which was shown to be the best method in [12]. Before the actual tests, participants were trained to become accustomed to the devices and the rating procedure. In addition, they were instructed to adjust the devices appropriately to obtain the best experience. During the test process, the stimuli were displayed randomly, one at a time. Note that, for a given stimulus, the corresponding viewport displayed on the HMD was fixed during the test. Participants were asked to look straight ahead at each viewport displayed directly in front of them, keeping their focus on the center, where an attractive object such as a human face or a flower vase is located. After stabilizing their gaze direction, each participant verbally gave a score on a grade scale from 1 (bad) to 5 (excellent), which was recorded by an assistant.
For each stimulus, the viewing duration was decided by the participants themselves, to obtain more reliable rating scores. Typically, a participant spent about 5 seconds rating a stimulus and then took a break of 5 seconds. To avoid the negative impacts of fatigue and boredom, the tests were divided into 6 sessions conducted in different weeks. Each participant took part in only two sessions. The duration of each session was no more than 10 minutes. In total, there were 62 participants between the ages of 20 and 30. A screening analysis of the obtained results was performed following Recommendation ITU-T P.913 [32], and two participants were rejected. After discarding the scores of these two participants, each stimulus was scored by 20 valid participants. The mean opinion score (MOS) was then computed for each stimulus. The 95% confidence intervals of the MOS values are shown in Fig. 6. We can see that the scores fully cover the value range from 1 to nearly 5. Generally, the confidence intervals are smaller at the two ends of the grade scale. This is because the participants are more confident in rating stimuli of very high (or very low) quality.
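For reference, the MOS and a 95% confidence interval per stimulus can be computed as follows. The paper does not state the exact interval formula, so the Student-t interval below is an assumption.

```python
import numpy as np
from scipy import stats

def mos_with_ci(scores, confidence=0.95):
    """Mean opinion score and t-based confidence interval for one
    stimulus, given the valid participants' scores (1..5)."""
    scores = np.asarray(scores, dtype=float)
    n = scores.size
    mos = scores.mean()
    # Student-t half-width (assumed; P.913 screening is done beforehand).
    half = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1) \
           * scores.std(ddof=1) / np.sqrt(n)
    return mos, (mos - half, mos + half)
```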

Quantifying Impacts of Zones
In this part, we present a zone-weighted formulation that will be used to analyze the impacts of different zones on the perceptual quality of omnidirectional images. In general, the virtual viewport is divided into K zones {Z_k | 1 ≤ k ≤ K}, each consisting of N_k pixels with corresponding eccentricities e ∈ [e_{k−1}, e_k). Here, we use K = 5 as described in Sect. 3. Each zone Z_k is then assigned a weight {w_k | 1 ≤ k ≤ K} representing the impact of that zone on human quality perception. Note that $\sum_{k=1}^{K} w_k = 1$.
Let V(x_M, y_M) and G(x_M, y_M) respectively be the values of pixel M = (x_M, y_M) in the displayed viewports of the original and distorted images. The values of the corresponding pixel M′ = (x_{M′}, y_{M′}) in the virtual viewports of the original and distorted images are respectively given by

$$V'(x_{M'}, y_{M'}) = V(x_M, y_M) \quad (13)$$

and

$$G'(x_{M'}, y_{M'}) = G(x_M, y_M). \quad (14)$$

The mean squared error (MSE) of the pixels in zone Z_k is computed by

$$\mathrm{MSE}_k = \frac{1}{N_k} \sum_{M' \in Z_k} D(M'), \quad (15)$$

where

$$D(M') = \left[ V'(x_{M'}, y_{M'}) - G'(x_{M'}, y_{M'}) \right]^2. \quad (16)$$

The zone-weighted formulation, called ZWF, is then given by

$$\mathrm{ZWF} = 10 \log_{10} \frac{\mathrm{MAX}^2}{\sum_{k=1}^{K} w_k \cdot \mathrm{MSE}_k}, \quad (17)$$

where MAX is the maximum possible pixel value. Here we set MAX to 255, as the bit depth of the pixels in our experiment is 8 bits.
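As a reference point, the sketch below computes the ZWF score for a pair of viewports given a per-pixel eccentricity map. The function and variable names are ours; the zone boundaries are those of Sect. 2.2.

```python
import numpy as np

def zwf(ref, dist, ecc, weights,
        zone_edges=(2.5, 4.0, 9.0, 30.0), max_val=255.0):
    """Zone-weighted formulation (Eqs. (15)-(17)): PSNR computed from the
    weighted sum of per-zone MSEs. ref/dist: virtual-viewport pixel
    values; ecc: per-pixel eccentricity map (degrees); weights: w_1..w_5
    summing to 1."""
    edges = (0.0,) + tuple(zone_edges) + (np.inf,)
    err2 = (np.asarray(ref, float) - np.asarray(dist, float)) ** 2
    weighted_mse = 0.0
    for k in range(1, 6):
        in_zone = (ecc >= edges[k - 1]) & (ecc < edges[k])
        if in_zone.any():
            weighted_mse += weights[k - 1] * err2[in_zone].mean()  # w_k * MSE_k
    return 10.0 * np.log10(max_val ** 2 / weighted_mse)
```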
In some previous studies [22,33], it was shown that four-parameter and five-parameter logistic functions are good mappings between objective quality metrics and MOS. In this work, we used the following five-parameter logistic function to map the ZWF values to the MOS values in our database:

$$\mathrm{MOS}_p(x) = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + e^{\beta_2 (x - \beta_3)}} \right) + \beta_4 \cdot x + \beta_5, \quad (18)$$

where {β_i | i ∈ {1, 2, ..., 5}} are the parameters to be fitted. The values of the parameters β_i and the weights w_k were determined by means of least-squares fitting as in [34].
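A possible realization of this joint fit is sketched below using scipy's least_squares. The squared re-parameterization used to keep the weights non-negative and summing to one is our choice, not necessarily the scheme of [34].

```python
import numpy as np
from scipy.optimize import least_squares

def fit_zwf_weights(mse_zones, mos, K=5):
    """Jointly fit the logistic parameters beta_1..beta_5 and the zone
    weights w_1..w_K by least squares. mse_zones: (num_stimuli, K) array
    of per-zone MSEs; mos: corresponding MOS values."""
    mse_zones = np.asarray(mse_zones, dtype=float)
    mos = np.asarray(mos, dtype=float)

    def residuals(p):
        betas, raw_w = p[:5], p[5:]
        w = raw_w ** 2 / np.sum(raw_w ** 2)      # enforce w >= 0 and sum(w) = 1
        zwf = 10.0 * np.log10(255.0 ** 2 / (mse_zones @ w))   # Eq. (17)
        b1, b2, b3, b4, b5 = betas
        pred = b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (zwf - b3)))) \
               + b4 * zwf + b5                   # Eq. (18)
        return pred - mos

    # Rough initial guess: equal weights, mild logistic slope.
    p0 = np.concatenate([[np.ptp(mos), 0.1, 40.0, 0.0, mos.mean()],
                         np.full(K, 1.0 / K)])
    sol = least_squares(residuals, p0, max_nfev=20000)
    w = sol.x[5:] ** 2 / np.sum(sol.x[5:] ** 2)
    return sol.x[:5], w
```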

Discussion
To quantify the impact of each zone while taking into account the effects of content characteristics, the weights w_k are derived for each source image by fitting the above five-parameter logistic function to the stimuli of that image only. The obtained weights are shown in Fig. 7 and Table 4. The Pearson Correlation Coefficient (PCC) and Root Mean Square Error (RMSE), which quantify the performance of the fitting between the ZWF formulation and the MOS, are shown in Table 5. We can see that, for all the source images, the PCC values are very high and the RMSE values are very low. In particular, the lowest PCC value is 0.97, while the highest RMSE value is 0.27. This means that the fitting to obtain the weights is reliable.
From Table 4, it can be seen that, except for w_1 and w_2, all the weights are small (i.e., ≤ 0.095). This means that the zones outside an eccentricity of 4 degrees have little impact on the perceptual quality. Among the weights, w_1 is usually the highest, which is consistent with the fact that the fovea region of the retina has the highest cone density. Also, because w_1 ≥ w_2 ≥ w_3 ≥ w_4 ≥ w_5, distortions closer to the center have more significant effects on the perceptual quality than distortions far from the center.
Based on Fig. 7, it is interesting that the value of w_1 actually varies over a wide range. Also, for some images, the value of w_2 is insignificant. Usually, the higher the value of w_1, the lower the value of w_2. More specifically, for images I3 and I7, the values of w_1 are very high. This may be because the participants focus primarily on the small face at the center of the viewports. Such a phenomenon was also observed in [11], where it was found that a talking face strongly attracts human attention. In addition, in these viewports, there are no other interesting objects near the center. For images I1 and I4, the participants may also pay some attention to other objects near the center (e.g., another face in image I1), so the values of w_1 are lower than those of images I3 and I7. For images I5 and I8, the central object is either not very clear (small faces in image I8) or not very attractive (a house in image I5), resulting in lower values of w_1. Especially, for images I2 and I6, the values of w_2 are comparable to those of w_1. For these images, the participants may look at a large central area rather than zone Z_1 only. The reason is that, in image I2, the clock at the center is larger than zone Z_1; and in image I6, the object at the center does not stand out from the neighboring area.
From the above, we can see that the perceptual quality is affected by two key factors. The first is the sensitivity of the human eyes; in the considered context, zones Z_1 and Z_2 are much more important than the other zones. The second is content characteristics; in particular, the values of w_1 and w_2 vary widely according to 1) the attractiveness and 2) the size of the central object, as well as 3) the presence of neighboring objects.

Evaluation of Quality Metrics
In this part, using our database, we evaluate the performances of nineteen existing objective quality metrics (OQMs). The goal is to examine whether existing metrics, especially foveal quality metrics, are effective for quality assessment of omnidirectional images with non-uniform quality.

Table 6: Descriptions of the objective quality metrics tested in this study. PW: whether the metric differentiates pixels' contributions. FF: whether the metric takes into account the foveation feature.

Metric | PW | FF | Description
MSE | No | No | Mean Squared Error, calculated over the visible pixels of a viewport with equal weights
VPSNR | No | No | Viewport-PSNR, calculated over the visible pixels of a viewport with equal weights
SSIM [17] | No | No | Structural SIMilarity, based on the concept of structural similarity
MS-SSIM [18] | No | No | Multi-scale SSIM, based on similarity measures computed at multiple resolutions (scales) of a viewport
UQI [19] | No | No | Universal Image Quality index, modeling any distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion
VIFp [35] | No | No | Visual Information Fidelity in the pixel domain, based on the connections between image information and visual quality
VIF [35] | No | No | Visual Information Fidelity in the wavelet domain
NQM [36] | No | No | Noise Quality Measure, the signal-to-noise ratio of the restored distorted image with respect to the model restored image
IW-PSNR [37] | Yes | No | Information content Weighted PSNR, combining information content weighting with PSNR measures
IW-SSIM [37] | Yes | No | Information content Weighted SSIM, combining information content weighting with MS-SSIM measures
FSIM [38] | Yes | No | Feature SIMilarity, combining low-level feature weighting with local similarity measures
FSIMc [38] | Yes | No | Feature SIMilarity incorporating chromatic information
RFSIM [39] | Yes | No | Riesz-transform based Feature SIMilarity, combining low-level feature weighting based on Riesz transforms with local similarity measures
SR-SIM [40] | Yes | No | Spectral Residual based SIMilarity, based on a spectral residual visual saliency model
FWQI [41] | Yes | Yes | Foveated Wavelet image Quality Index, calculated from wavelet coefficients in the discrete wavelet transform domain, using a foveation-based error sensitivity model as the weighting function
WSNR [36] | Yes | Yes | Weighted Signal-to-Noise Ratio, the ratio of the average weighted signal power to the average weighted noise power, with the contrast sensitivity function as the weighting function
FWSNR [20] | Yes | Yes | Foveal Weighted Signal-to-Noise Ratio, combining per-pixel weighting by the local frequency at each pixel with WSNR measures
FPSNR [42] | Yes | Yes | Foveal Peak Signal-to-Noise Ratio, combining per-pixel weighting by the local frequency at each pixel with PSNR measures
F-SSIM [21] | Yes | Yes | Foveal SSIM, combining per-macroblock weighting based on the local frequency of pixels in each macroblock with SSIM measures

Description of metrics
Table 6 shows the notations and descriptions of the nineteen metrics considered in this study. In this table, the PW column indicates whether a metric differentiates the contributions of different pixels, and the FF column indicates whether a metric takes into account the foveation feature of the human eye. Because the implementations of the FWQI, FWSNR, FPSNR, and F-SSIM metrics are not publicly available, we implemented them based on the corresponding publications [20,21,41,42]. For the remaining metrics, we used the implementations provided by the original authors.
It is worth noting that all of these metrics were originally proposed to be calculated over all pixels of a traditional image. In this study, these metrics were calculated for the viewports only (i.e., the visible pixels) of the omnidirectional images, to reflect what is actually watched by viewers. To extract the viewports, we used the 360Lib software developed by the Joint Video Experts Team (JVET) [43]. In addition, the geometric parameters in these metrics were calculated based on the equations presented in Subsect. 2.1.
To evaluate the performances of the OQMs, we used two performance measures: the Pearson Correlation Coefficient (PCC) and the Root Mean Square Error (RMSE). Similar to [33], a nonlinear regression was applied to map the OQM values to the MOS values using the five-parameter logistic function (i.e., Equation (18)) mentioned in Subsect. 4.1.
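In code, evaluating one metric then amounts to fitting Equation (18) and measuring PCC and RMSE on the mapped values. A self-contained sketch (variable names are ours):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def logistic5(x, b1, b2, b3, b4, b5):
    # Five-parameter logistic mapping, Equation (18).
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def evaluate_metric(oqm, mos):
    """PCC and RMSE between MOS and logistic-mapped metric values."""
    oqm, mos = np.asarray(oqm, float), np.asarray(mos, float)
    p0 = [np.ptp(mos), 0.1, oqm.mean(), 0.0, mos.mean()]   # rough initial guess
    betas, _ = curve_fit(logistic5, oqm, mos, p0=p0, maxfev=20000)
    pred = logistic5(oqm, *betas)
    pcc = pearsonr(pred, mos)[0]
    rmse = float(np.sqrt(np.mean((pred - mos) ** 2)))
    return pcc, rmse
```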

Discussion
Fig. 8 shows the PCC and RMSE values of the OQMs when fitted over all the MOS values in our database. It can be seen that all the metrics have very low PCC values (i.e., PCC < 0.70) and very high RMSE values (i.e., RMSE > 0.80). Even the foveal quality metrics (namely FWQI, WSNR, FWSNR, FPSNR, and F-SSIM) have poor PCC values (i.e., from 0.08 to 0.59). This means that the investigated metrics are not effective for assessing the perceptual quality of omnidirectional images with non-uniform quality.
Similar to the previous analysis of the ZWF formulation, it is important to understand the performances of the metrics for each source image. Table 7 shows the performances of the metrics when fitted to the stimuli of each source image. It can be seen that, for all the metrics, the PCC and RMSE values vary drastically across the source images. The bold numbers indicate the metric with the highest performance for each source image.
Among the investigated metrics, FPSNR has the highest PCC values for five source images (i.e., I1, I2, I4, I6, and I8). In addition, its PCC values for six images are quite good (i.e., PCC > 0.80). Especially, for images I6 and I8, the PCC values are nearly perfect (0.99 and 0.98). However, for images I3 and I7, its PCC values are very low (i.e., PCC < 0.70), even lower than those of the MSE metric (i.e., 0.59 vs. 0.76 and 0.59 vs. 0.60), which is the simplest metric in practice.
As for the other quality metrics, their performances are mostly low. Even the other foveal quality metrics (i.e., all except FPSNR) have lower performances than the simple non-foveal metrics.
To understand the behaviors behind the low performances of the foveal quality metrics, Fig. 9 shows scatter plots of the values of these metrics versus the MOS values for image I7. In this figure, different markers are used to distinguish the stimuli of scenario S#1, where the center has higher quality, from the stimuli of scenario S#2, where the center has lower quality. By design, higher values of these metrics should correspond to higher MOS values and better perceptual quality. From Fig. 9, we can see that the MOS values in scenario S#1 are generally higher than those in scenario S#2. However, for the WSNR and F-SSIM metrics, most of the values in scenario S#1 are significantly lower than those in scenario S#2. For the remaining metrics, stimuli with the same MOS value have metric values spanning a wide range. These behaviors result in the low performances of the foveal quality metrics.
From the above analysis, we can see that the investigated metrics are not effective for evaluating omnidirectional images of non-uniform quality. Although the FPSNR metric (i.e., the most promising one) achieves very high performance on certain images, it performs even worse than the simple MSE on some others. Moreover, the performances of all the quality metrics are inconsistent across different images. This suggests that it is necessary to integrate content characteristics into these quality metrics.

Conclusions
In this paper, we have conducted subjective and objective quality assessments of omnidirectional images with non-uniform quality, focusing on the foveation feature of the human eyes. Based on the obtained results and discussions, our findings can be summarized as follows.
• The perceptual quality is affected by two key factors, namely the sensitivity of the human eyes and content characteristics.
• The zones of an image corresponding to the fovea and parafovea of human eyes are extremely important for the perceptual quality.
• Content characteristics, including the attractiveness and the size of the central object, as well as the presence of neighboring objects, affect quality perception.
• The nineteen objective quality metrics considered in this study (including the foveal quality metrics) are not effective for evaluating omnidirectional images with non-uniform quality.
• The performances of the investigated metrics vary drastically across different contents.
For future work, further investigations with more content types and quality variation patterns will be conducted to gain a better understanding of viewers' perceptual behaviors as well as the performances of existing metrics.

Figure 4: Eight omnidirectional images used in our experiment

Figure 5: Boundaries of zones in viewports used in our experiment

Figure 7: Weights of zones for each source image

Figure 8: Performances of objective quality metrics

Figure 9: Scatter plots of the values of the foveal quality metrics versus the MOS values for image I7

Table 1: Features of source images
I1: indoor scene, large conference room, containing human faces
I2: indoor scene, train station in Japan, containing human faces
I3: indoor scene, small kindergarten classroom, containing human faces
I4: indoor scene, meeting room, without human presence
I5: outdoor scene, natural landscape, daytime, without human presence
I6: outdoor scene, balcony, nighttime, without human presence
I7: outdoor scene, festival, daytime, containing human faces
I8: outdoor scene, outside a cathedral, at sunset, containing human faces

Table 2: Eccentricity intervals of zones
Z_1 (fovea): 0 to 2.5 degrees
Z_2 (parafovea): 2.5 to 4 degrees
Z_3 (perifovea): 4 to 9 degrees
Z_4 (near periphery): 9 to 30 degrees
Z_5 (far periphery): above 30 degrees

Table 3: Quality variation patterns (HQ: high quality, LQ: low quality)
P1 (S#1): Z_1 HQ; Z_2–Z_5 LQ
P2 (S#1): Z_1–Z_2 HQ; Z_3–Z_5 LQ
P3 (S#1): Z_1–Z_3 HQ; Z_4–Z_5 LQ
P4 (S#1): Z_1–Z_4 HQ; Z_5 LQ
P5 (S#2): Z_1 LQ; Z_2–Z_5 HQ
P6 (S#2): Z_1–Z_2 LQ; Z_3–Z_5 HQ
P7 (S#2): Z_1–Z_3 LQ; Z_4–Z_5 HQ
P8 (S#2): Z_1–Z_4 LQ; Z_5 HQ

Table 4: Weights of zones for each source image (columns: w_1, w_2, w_3, w_4, w_5)
I3: 0.905, 0.024, 0.024, 0.024, 0.024
I4: 0.759, 0.063, 0.063, 0.063, 0.052
I5: 0.650, 0.087, 0.087, 0.087, 0.087
I6: 0.404, 0.404, 0.064, 0.064, 0.064
I7: 0.941, 0.019, 0.019, 0.019, 0.003
I8: 0.545, 0.204, 0.095, 0.095, 0.061

Table 5: Performance of fitting between the ZWF formulation and MOS
PCC:  I1 0.99, I2 0.99, I3 0.99, I4 0.97, I5 0.98, I6 0.99, I7 0.98, I8 0.99
RMSE: I1 0.15, I2 0.13, I3 0.14, I4 0.27, I5 0.24, I6 0.10, I7 0.20, I8 0.12

Table 7: Performances of the metrics calculated with the stimuli of each source image. The bold numbers show the metric having the highest performance for each source image.