A Novel Chicken Meat Quality Evaluation Method Based on Color Card Localization and Color Correction

Among all the chicken meat quality evaluation metrics, color is one of the most significant factors: it is directly related to the freshness of the meat and to the consumer's desire to purchase. Biochemical tests for evaluating meat quality may contaminate or damage the test samples. Visual rating is subjective, inefficient, and difficult to apply to online detection. Colorimeters suffer from complicated, time-consuming operation, high technical requirements, and expensive instrumentation. This paper proposes a low-cost, contactless chicken meat quality evaluation method that examines the color image of chicken meat. Specifically, the meat image is acquired with a smartphone camera. To eliminate the chromatic aberration, a pre-defined color card is placed beside the meat and automatically localized to extract the captured color information for color correction. Finally, the corrected colors of all the experimental meat samples are analyzed by unsupervised clustering to obtain 3 different quality levels.


I. INTRODUCTION
Meat color provides an intuitive impression of freshness and ingredient composition, as well as of the presence or absence of falsification. Although aroma and taste also contribute to the final evaluation of meat quality, color is one of the most important metrics affecting the appearance of meat products and the inclination to buy. It is not only a comprehensive indicator reflecting differences in muscle biochemistry, physiology, and microbiology, but also an important factor influencing the shelf life of fresh meat.
There are strict procedures and testing requirements for chicken during breeding and marketing. When raw chicken enters the food and non-staple food supply chain, food safety departments and food processors pay close attention to the quality control of the meat. Moreover, the color of raw chicken meat also attracts the attention of chefs and leads to different choices while cooking. Identifying the color of raw chicken meat therefore provides a clear guide for its selection. From subjective judgement by the naked eye to evaluation through automatic measurement, meat color analysis is a practical problem in raw chicken meat quality evaluation.

A. MOTIVATION AND OVERVIEW
Computer vision systems (CVS) have broad application prospects in meat quality inspection and are considered a promising way to objectively evaluate meat quality based on fresh meat characteristics [1]. Existing CVS devices achieve fairly good results on fresh meat evaluation; however, they also have limitations such as expensive instruments, rigid requirements, and laboratory-only conditions. Considering that the color of raw chicken meat is related to its quality, this paper explores the relationship between meat color and quality evaluation, and proposes a novel low-cost raw chicken meat quality evaluation method based on color analysis.
Different from existing methods, the proposed method relaxes the constraints on devices and conditions when taking photos of raw chicken meat. Instead of professional cameras and rigid illumination conditions, our method only needs the camera of an arbitrary portable device (in our case, a smartphone). However, different devices capture the chicken meat with different hues; in particular, some smartphones tend toward a warm tone while others tend toward a cold one. Such color casts affect the visual appearance of the captured image and make color-based meat quality evaluation unreliable. To address this problem, this paper proposes to use a color card with pre-defined standard color blocks printed on it. The color card is placed beside the raw chicken meat, and an image covering both the chicken meat and the color card is captured by the smartphone camera. Then, a color card localization algorithm automatically finds the position of each color block and extracts its color information. Finally, the color information of each color block is compared with the pre-defined standard color information to compute the parameters of the color correction model, and the color of the whole image is corrected with this model. The corrected colors are thus largely independent of the camera device and can be used to evaluate the meat quality.

B. CONTRIBUTIONS
1) A NOVEL COLOR CARD LOCALIZATION METHOD
Template matching based algorithms [2] and SIFT feature matching algorithms [3] both have certain defects and cannot meet the requirements of color card localization in unconstrained environments. In this paper, a new color card localization method is proposed with three steps: global three-channel thresholding, contour extraction, and color block localization. The proposed method has a strong recognition ability and resists a variety of interferences. It locates the position of the color card and of each color block accurately without complicated computation, and it is a general method that provides reliable preconditions for color correction.

2) A NEW RAW CHICKEN MEAT COLOR GRADING SCHEME
In this paper, a new chicken meat color grading scheme is proposed. It combines color correction based on multivariate linear regression and unsupervised clustering with the color information extracted by the novel color card localization method, thereby realizing the color grading of raw chicken meat.

3) THE COLOR GRADING IN UNCONSTRAINED ENVIRONMENT
Existing meat color grading with computer vision methods must be carried out in a laboratory environment because it cannot resist the interferences encountered in real life. This paper introduces several image processing algorithms into the new meat color grading scheme, which resist various interferences and realize the color grading of raw chicken meat in an unconstrained environment.

C. ORGANIZATION
The rest of this paper is organized as follows. In Section II, some related works for meat quality testing methods are briefly introduced. The detail of our method is carefully presented in Section III. To validate the proposed method, the experimental results are shown and analyzed in Section IV. At last, Section V makes a brief conclusion for this paper.

II. RELATED WORKS
Traditional meat quality testing methods have defects such as high cost, strong destructiveness, and the inability to provide quantifiable results [4], [5]. Color analysis with modern machine vision systems has proved effective in evaluating the freshness of meat products [6]. Furthermore, the feasibility of the RGB and HSV models was demonstrated for testing traditional PSE pork [7], [8]. Girolami et al. [10] investigated the limits of the colorimeter and proposed an image analysis technique for evaluating the color of beef, pork, and chicken. A computer vision system with structured light was proposed to evaluate meat color and showed better precision and accuracy than the colorimeter [9]. Image analysis based methods were also used to predict the weight of live chickens [11] and the IMF content of fat meat [12]. Sun et al. [13] realized online color and marbling detection of pork loin. Lu et al. [14] predicted the color fraction of pork accurately. Shiranita et al. [15] proposed a meat marble texture detection method for fresh meat quality identification and grading. Although image analysis based methods, particularly texture characterization algorithms [4], become viable as computation speeds increase, fully automatic meat image segmentation remains a difficult problem in many instances of meat quality assessment. A meat quality classification model requires a substantial number of images for training and testing, as well as independent meat quality data for good model calibration and validation. Besides, most previous testing experiments rely heavily on strictly controlled environments and are difficult to apply outside the laboratory. To address this problem, this paper relaxes the constraints on capturing devices and environmental conditions when taking photos of raw chicken meat, and proposes a low-cost, contactless chicken meat quality evaluation method that examines the color image of chicken meat.

III. PROPOSED METHOD
A. NOTATIONS AND DEFINITIONS
A color image is denoted by I, with its R, G, and B channel images denoted by I_r, I_g, and I_b. A single-channel gray image is denoted by I_s. A patch of a single-channel image with size m × n is represented by P ∈ R^{m×n}; its i-th row is denoted by P_i ∈ R^{1×n}, and its j-th column is denoted by P^j ∈ R^{m×1}. The element in the i-th row and j-th column is represented as P_{i,j}. The patch with center coordinates (x, y) in the sample image is defined as P_{(x,y)}, and the patch with center coordinates (x, y) in the standard color card is defined as P̄_{(x,y)}. The Frobenius norm of P is denoted by ‖P‖_F.
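To make the notation concrete, the following sketch (using numpy, with a small illustrative single-channel image, not data from the paper) extracts a patch P_{(x,y)} and computes its Frobenius norm:

```python
import numpy as np

# Hypothetical 5x5 single-channel image; the values are illustrative only.
I_s = np.arange(25, dtype=float).reshape(5, 5)

def patch(img, x, y, m, n):
    """Extract the m x n patch P_(x,y) centered at row x, column y (odd m, n)."""
    return img[x - m // 2 : x + m // 2 + 1, y - n // 2 : y + n // 2 + 1]

P = patch(I_s, 2, 2, 3, 3)          # 3x3 patch centered at (2, 2)
fro = np.linalg.norm(P, ord="fro")  # Frobenius norm ||P||_F
print(P.shape, round(fro, 3))
```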

B. IMAGE ACQUISITION
In order to enable unconstrained data acquisition, this paper uses image processing methods to deal with several common problems during image acquisition, such as rotation, illumination, and occlusion. Considering that it is difficult to carry professional photographic equipment at all times, this paper proposes a set of portable data collection devices, namely a paper card with 12 printed color blocks (the color card) and a smartphone for image acquisition. The following experiments use images captured by a commercially available smartphone (Xiaomi MIX 2S). The color card is needed because colors captured by the smartphone must be corrected with a color correction model, and the information provided by the color card is used to build this model. The meat sample is raw chicken with the feathers removed. The image capturing steps are as follows: first, the color card is placed alongside the raw chicken meat; then, a photograph is taken with the smartphone camera, ensuring that both the color card and the raw chicken are clearly visible.
In order to ensure the accuracy of automated color card localization, many factors are taken into account, such as the size and shape of the color card and the size and number of color blocks. The color card is made up of 12 color blocks, as shown in Fig. 1. Each color block has a pre-defined standard color and a corresponding number denoting its relative position. The color blocks are arranged in six rows and two columns, so the positions of any two blocks can be used to predict and verify the positions of the other ten. The color card should be printed with professional printers and industrial grade materials complying with the ISO 12647-2:2013 standard. Fig. 1 shows the designed color card and its specification. The capturing distance is about 1.5 m. The captured image size is 3024 × 4032 pixels; in the images, the color card covers about 2150 × 1090 pixels, and each color block about 450 × 300 pixels. Under these conditions, the data collection can be completed, as shown in Fig. 2. Images of each raw chicken sample are taken from the moment of slaughter to the next 24 hours, at intervals of 6 hours. Image acquisition in an unconstrained environment may be affected by many external factors that make color analysis difficult. Several interferences that occur while collecting images in an unconstrained environment are explained as follows. Firstly, the color card may not be parallel to the orthogonal projection plane. As shown in Fig. 2, when the photographer fails to adjust the color card properly, the color card appears as an inclined quadrilateral in the captured image according to the principle of perspective. Traditional automatic detection methods that rely on shape become invalid in this situation. Secondly, as shown in Fig.
3(b) and (c), affected by the capturing view angle and uneven illumination, the color information of the color card can be spoiled by reflected light. This results in the loss of original color information and the mis-detection of the affected color block. Thirdly, the color card has obvious shape characteristics, such as its rectangle, edges, and corners, which many image processing techniques can use to localize it. However, some color blocks may be occluded partially or totally by the holding hand; in this case, shape characteristics are not reliable for detecting the color card automatically. As shown in Fig. 3(d), a small impurity attached to the surface of the color card also makes it difficult to extract the true color of that block. Fourthly, unlike a strictly controlled pure-color background, the background of the captured image is influenced by various types of noise in real-life situations. As shown in Fig. 2, besides the chicken and the color card, there are several irrelevant objects, such as experimental tools and plastic gloves. Lastly, the color card may be placed upside down. The color card has a positive direction denoted by a small white arrow, and each color block has a corresponding number representing its relative position. If the color card is placed upside down, the positions of the color blocks will be predicted incorrectly from the reversed reference information.

C. GLOBAL THREE-CHANNEL THRESHOLDING
In order to rapidly extract the color card area and eliminate the influence of reflection, partial occlusion, and complex background, we propose a global three-channel thresholding method to extract a clear contour of the color card. The main idea is that the whole color card area can be predicted once some of its color blocks have been detected. To find out which color block is the easiest to extract, image thresholding is applied to extract the 12 color blocks separately. The upper and lower bounds used for thresholding need to be validated and adjusted into a universally applicable thresholding model. Since the Otsu method, which dynamically adjusts the bounds, resulted in partially missing contours and interior hollows of the color blocks, as shown in Fig. 4, the bounds for thresholding are fixed after adjustment and validation. Considering that each color block has a different pre-defined standard color, a thresholding process using the investigated upper and lower bounds for each standard color is applied to binarize the whole image for each color block:

I_bin(x, y) = 255, if l_r ≤ I_r(x, y) ≤ u_r and l_g ≤ I_g(x, y) ≤ u_g and l_b ≤ I_b(x, y) ≤ u_b; 0, otherwise, (1)

in which I_c(x, y) denotes the pixel value at coordinates (x, y) of the corresponding single-channel image, and l_r and u_r denote the lower and upper bounds on the R-channel image (similarly for G and B). However, this leaves three lower bounds and three upper bounds to be optimized. To simplify the parameter optimization, this paper proposes a single threshold δ that defines all six bound parameters:

l_c = s_c − δ, u_c = s_c + δ, c ∈ {r, g, b}, (2)

in which s_r, s_g, and s_b denote the pre-defined standard color of a color block, and δ is the only remaining parameter, which needs to be optimized to be applicable to most image samples. An appropriate δ makes the specified color block clearly separable from the background. Different values of δ are tested at intervals of 10, as shown in Fig. 5. The statistical analysis results are as follows: when the threshold is set too small, color blocks cannot be segmented from the background.
When the threshold is set too large, the positions of the color blocks are not accurate enough. The optimal δ separates most color blocks clearly and minimizes the noise in the background, as shown in Fig. 6. Predicting the position of the color card from a single extracted color block is not reliable; instead, the 12 color block extraction results are merged to vote for the color card localization.
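The thresholding rule of Eqs. (1)-(2) can be sketched as follows; the toy colors and the value of δ are illustrative, not the tuned values from the paper:

```python
import numpy as np

def three_channel_threshold(img, standard_rgb, delta):
    """Binarize img (H x W x 3, RGB): a pixel becomes 255 iff every channel
    lies within [s_c - delta, s_c + delta] of the block's standard color."""
    s = np.asarray(standard_rgb, dtype=int)
    lower, upper = s - delta, s + delta          # Eq. (2): bounds from a single delta
    mask = np.all((img >= lower) & (img <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)         # Eq. (1): 255 inside, 0 outside

# Toy 2x2 image: two pixels close to the standard red, two far from it.
img = np.array([[[200, 30, 30], [10, 10, 10]],
                [[190, 40, 25], [255, 255, 255]]], dtype=int)
binary = three_channel_threshold(img, standard_rgb=(200, 35, 30), delta=20)
print(binary)
```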

D. COLOR CARD LOCALIZATION
The contours are extracted from the thresholding result image. However, this image contains noise of different shapes. To reduce the influence of noise and ensure the accuracy of localization, we first localize the two color blocks that have the best thresholding result and the least noise; the positions of the other color blocks can then be predicted from these two. Detecting a single color block can be expressed by the following formula:

arg min_{x,y} ‖P̄_{(x,y)} − P_{(x,y)}‖_F, (3)

in which P̄_{(x,y)} denotes the pre-defined color block patch with relative center position (x, y) in the standard color card image, and P_{(x,y)} denotes the patch with its center at (x, y) in the sample image. This model finds the position (x, y) in the sample image that best matches the color block patch of the standard color card. When Eq. (3) reaches its minimum, the best matching patch has been found, and its center position and size can be used to highlight the detection result.
Several constraints can be added to find the minimum faster. Firstly, the minimum bounding boxes of all the contours extracted by thresholding can be used as candidate regions for color block detection. By checking only these candidate regions, the best match can be found with less computation. Secondly, the candidate regions can be screened by the aspect ratio of the standard color block, which further eliminates irrelevant impurities in the background. Considering that the width and height of each color block are 3 cm and 2 cm, the aspect ratio of an acceptable bounding rectangle should satisfy the following constraint:

r(P_{(x,y)}) = 1, if |Ratio − 3/2| ≤ τ_r; +∞, otherwise, (4)

in which r(P_{(x,y)}) is the ratio constraint function, Ratio is the aspect ratio of the patch P_{(x,y)}, and τ_r is the allowed tolerance. Thirdly, the area and the rotation angle of the bounding rectangles can also be used to screen the candidates: if the rotation angle or the area of a bounding rectangle is too large, it is probably not a color block area. In this paper, bounding rectangles with a rotation angle greater than 45° are eliminated. The constraint on area proportion is expressed as follows:

s(P_{(x,y)}) = 1, if Area/S ≤ τ_s; +∞, otherwise, (5)

in which s(P_{(x,y)}) is the area constraint, Area is the area of the patch P_{(x,y)} in pixels, S is the pixel area of the whole image, and τ_s is the allowed proportion. Lastly, the color inside a color block is supposed to be similar to the standard color. A pure color proportion constraint is expressed as follows:

h(P_{(x,y)}) = φ(P_{(x,y)}), if φ(P_{(x,y)}) > threshold; +∞, otherwise, (6)

in which h(P_{(x,y)}) is the pure color proportion constraint, φ(P_{(x,y)}) is the proportion of pure color in the patch, and threshold is the parameter representing the least pure color proportion, which is set according to statistical analysis.
After adding the above constraints, the localization problem for any color block can be expressed as follows:

arg min_{x,y} r(P_{(x,y)}) · s(P_{(x,y)}) · h(P_{(x,y)}) · ‖P̄_{(x,y)} − P_{(x,y)}‖_F. (7)

After comparing the localization results of the 12 color blocks, the two color blocks with the smallest values of Eq. (7) are selected as the two most reliable blocks. Owing to the equal spacing between color blocks and the fixed relative positions of any two blocks, the two most reliable blocks can be used to predict the positions of the other ten. After localizing all the color blocks, the local outlier factor algorithm is used to repeatedly extract the color of each block and remove outliers until the extracted color result is stable. Then, the mean pixel value is calculated as the final color information of each color block.
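The constrained matching of Eqs. (4)-(7) can be sketched as follows. The candidate boxes, tolerance values, and purity test here are illustrative assumptions (the paper's thresholds come from statistical analysis, and its candidates come from contour extraction); a violated constraint simply skips the candidate, which is equivalent to an infinite score:

```python
import numpy as np

def locate_block(img, std_patch, candidates,
                 ratio_tol=0.3, area_frac_max=0.1, purity_min=0.5, delta=20):
    """Score candidate boxes (x, y, w, h) against a standard block patch."""
    S = img.shape[0] * img.shape[1]
    std_color = std_patch.reshape(-1, 3).mean(axis=0)
    best, best_score = None, np.inf
    for (x, y, w, h) in candidates:
        # r: aspect ratio should be close to 3/2 (3 cm x 2 cm block), Eq. (4)
        if abs(w / h - 1.5) > ratio_tol:
            continue
        # s: area proportion of the whole image must not be too large, Eq. (5)
        if (w * h) / S > area_frac_max:
            continue
        region = img[y:y + h, x:x + w].astype(float)
        # h: proportion phi of pixels close to the standard color, Eq. (6)
        phi = np.all(np.abs(region - std_color) <= delta, axis=-1).mean()
        if phi < purity_min:
            continue
        # Frobenius distance between candidate patch and standard patch, Eq. (3)
        score = np.linalg.norm(region - std_patch.astype(float))
        if score < best_score:
            best, best_score = (x, y, w, h), score
    return best

# Toy image: a red 6x4 block at (x=4, y=5) on a black background.
img = np.zeros((20, 20, 3), dtype=int)
img[5:9, 4:10] = (200, 35, 30)
std_patch = np.full((4, 6, 3), (200, 35, 30), dtype=int)
candidates = [(4, 5, 6, 4), (0, 0, 6, 4), (0, 0, 20, 20)]
print(locate_block(img, std_patch, candidates))
```

The second candidate passes the shape constraints but fails the purity test, and the third fails the aspect-ratio test, so only the true block survives.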

E. COLOR CORRECTION
Since the camera parameters of different smartphones may differ considerably and different illumination conditions also cause pixel value deviations, it is necessary to correct the color deviation of the image. This paper builds a multivariate linear regression model for color correction. By comparing the extracted color values of the 12 color blocks with the corresponding standard color values, the parameters of the model can be calculated. Let a_{11}, a_{12}, . . . , a_{1j} denote the color transform parameters of the R-channel; the corrected color can then be expressed as follows:

x_{1k} = a_{11} v_{1k} + a_{12} v_{2k} + · · · + a_{1j} v_{jk}, (8)

in which v_{1k}, v_{2k}, . . . , v_{jk} denote the known color channel values and their combinations. When k runs from 1 to the number of color blocks n, all the color blocks and their corresponding standard colors can be used to calculate the color transform parameters. Eq. (8) can be simplified into matrix form:

X = A V, (9)

in which X represents the color values after correction, A is the color transform parameter matrix, and V is the color observation collected from the color blocks. The vector V can be chosen from several polynomial combinations of the channel values; the one adopted here is

V_9 = (r, g, b, r·g, r·b, g·b, r², g², b²)ᵀ.

Too few elements in V cannot correct the color well, while too many make it difficult to calculate the transform parameters. After experiments on a set of sample images, this paper adopts V_9 to compute the polynomial regression matrix A. Once all the elements of A are obtained, every pixel in the sample image can be corrected by Eq. (8).
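A minimal sketch of the regression step, using the V_9 basis: the 12 observed block colors and a simulated camera distortion (a simple channel gain, purely hypothetical) stand in for real data, and the transposed layout X = V A is used here because it fits least squares directly:

```python
import numpy as np

def v9(rgb):
    """Build V_9 = (r, g, b, rg, rb, gb, r^2, g^2, b^2) for each color row."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * g, r * b, g * b, r ** 2, g ** 2, b ** 2], axis=1)

# Simulated data: 12 standard block colors; observations distorted by a channel
# gain, so the true correction lies in the span of the V_9 terms.
rng = np.random.default_rng(0)
standard = rng.uniform(0.1, 0.9, size=(12, 3))   # pre-defined card colors
observed = 0.8 * standard                        # hypothetical camera distortion
A, *_ = np.linalg.lstsq(v9(observed), standard, rcond=None)   # 9x3 transform
corrected = v9(observed) @ A                     # apply the fitted correction
err = np.abs(corrected - standard).max()
print(err < 1e-6)
```

With real images, the same fit is computed once per photo from the 12 extracted block colors and then applied to every pixel.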

F. UNSUPERVISED CLUSTERING
The pixel values of different meat samples are collected manually after automatic color correction. Since the raw chicken meat sample images are collected from the moment of slaughter to the next 24 hours at intervals of 6 hours, it is reasonable to set at most 5 different grades. In this paper, unsupervised clustering is applied to classify the meat samples into a specified number of clusters. The meat samples in the same cluster are supposed to be close in the data point distribution and share the same characteristics; therefore, each cluster is defined as a grade of meat freshness. To evaluate the clustering results, the Calinski-Harabasz index is calculated as follows:

CH(k) = [tr(B_k)/(k − 1)] / [tr(W_k)/(m − k)], (10)

in which m is the number of samples in the training set, k is the number of clusters, B_k is the between-cluster covariance matrix, W_k is the within-cluster covariance matrix, and tr(·) denotes the trace of a matrix. The larger CH is, the more compact the clusters are; therefore, the largest CH value indicates the best clustering result.
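The Calinski-Harabasz evaluation of Eq. (10) can be sketched on toy data; for brevity the clustering itself is replaced by fixed label assignments, a good split and a deliberately bad one:

```python
import numpy as np

def calinski_harabasz(X, labels):
    """CH = [tr(B_k)/(k-1)] / [tr(W_k)/(m-k)] for m samples in k clusters."""
    m, k = len(X), len(np.unique(labels))
    overall = X.mean(axis=0)
    tr_B = tr_W = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        centroid = Xc.mean(axis=0)
        tr_B += len(Xc) * np.sum((centroid - overall) ** 2)  # between-cluster
        tr_W += np.sum((Xc - centroid) ** 2)                 # within-cluster
    return (tr_B / (k - 1)) / (tr_W / (m - k))

# Two well-separated toy clusters: the correct split should score far higher
# than an alternating (wrong) split.
X = np.vstack([np.zeros((10, 3)), np.ones((10, 3)) * 10]) + \
    np.random.default_rng(1).normal(0, 0.1, (20, 3))
good = np.array([0] * 10 + [1] * 10)
bad = np.array([0, 1] * 10)
print(calinski_harabasz(X, good) > calinski_harabasz(X, bad))
```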

IV. RESULTS
We collected high-resolution images of 106 raw chicken meat samples. Applying the global three-channel thresholding to all the meat sample images shows that the color blocks numbered 2, 4, 6, 8, 10, and 12 can be extracted with clear contours, although many small contours of abnormal objects and noise regions remain. The color card localization method finds the two most reliable color blocks, numbered 2 and 8, and predicts the positions of the other color blocks from their relative positions.
Examples of the color block extraction results are shown in Fig. 7. To evaluate the proposed color card localization method, we also used a template matching based method to find the best match of the whole color card, and a SIFT feature matching based method to match the feature points of the pre-defined color card image with the sample images. As shown in Fig. 8, the proposed color card localization method obtains the best performance. Structured markers can indeed simplify the detection of the color card; however, the card is easily dirtied or occluded when photographed together with raw chicken meat. Once a part of the ArUco markers [39] is occluded by small objects such as chicken meat, fingers, or blood, the detection fails. With our color card, even if some color blocks are occluded, the information of the other blocks can still be used to predict the location of the whole card. After localizing the color card and all its color blocks, the local outlier factor algorithm is used to repeatedly extract the color of each block and remove outliers until the extracted color result is stable. Then, the mean pixel value of several stable sampling locations is calculated as the final color information of each color block. The pixel values extracted from all 12 color blocks are used to calculate the color transform matrix of the multivariate linear regression model for color correction. Comparing the deviation of color samples before and after correction, as shown in Table 1, demonstrates that the color correction reduces the deviation between the color of the captured image and the pre-defined standard color. The pixel values of different meat samples are collected manually after color correction. Since the images are not labeled with ground-truth quality levels, it is difficult to train models by supervised learning [40].
We used the k-means clustering algorithm to divide the pixel values of all meat samples into different clusters, as shown in Fig. 9. In Fig. 9, from left to right, the numbers of clusters are 2, 3, and 4, with CH scores of 1037.53, 1773.67, and 1401.00, respectively. The highest score is reached with 3 clusters, indicating that this clustering is the most compact and the differences between classes are the most obvious. Fig. 10 shows the colors of the cluster centroids when the cluster number is 3; their RGB values are (46, 52, 88), (66, 80, 131), and (105, 118, 155). According to the clustering results, a raw chicken meat clustering model is established, which divides meat images at different slaughtering times into three grades with obvious differences among them. With this clustering model, a new meat sample can be clearly assigned to one of the three grades to indicate its color-based quality evaluation result. Specifically, if the color of a new meat sample is closest to the pixel value of the first cluster centroid, the sample is identified as the first grade.
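The grading of a new sample thus reduces to a nearest-centroid assignment against the three reported centroids; a minimal sketch (the test colors are illustrative, not measured samples):

```python
import numpy as np

# The three corrected-color cluster centroids reported above.
centroids = np.array([[46, 52, 88], [66, 80, 131], [105, 118, 155]], dtype=float)

def grade(rgb):
    """Assign a corrected meat color to the nearest centroid (grade 1-3)."""
    d = np.linalg.norm(centroids - np.asarray(rgb, dtype=float), axis=1)
    return int(np.argmin(d)) + 1

print(grade((50, 55, 90)), grade((100, 115, 150)))  # -> 1 3
```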

V. CONCLUSION
This paper proposes a low-cost, contactless chicken meat quality evaluation method by examining the color image of raw chicken meat. Specifically, the meat image is acquired by the camera of a smartphone. To eliminate the chromatic aberration, a pre-defined color card is put beside meat and automatically localized to extract the captured color information.
The pixel values extracted from the 12 color blocks are used to calculate the color transform matrix of the multivariate linear regression model for color correction. Finally, the corrected colors of all the experimental meat samples are analyzed by unsupervised clustering to obtain 3 different quality levels. With the 3 cluster centroids of this clustering model, a new meat sample can be clearly assigned to one of the three grades to indicate its color-based quality evaluation result.
MENGBO YOU received the B.E. degree in computer science from Northwest A&F University, in 2012, and the M.E. and D.E. degrees from Iwate University, in 2015 and 2018, respectively. He is currently a Lecturer with the Computer Science Department, College of Information Engineering, Northwest A&F University. His research interests include machine learning, image processing, object detection, and deep learning.
JIAHAO LIU is currently pursuing the B.S. degree with Northwest A&F University, Yangling, China. His research interests include image processing, deep learning, and machine learning.
JIAN ZHANG is currently pursuing the B.S. degree with Northwest A&F University, Yangling, China. His research interests include image processing, deep learning, and data processing.
MINGDONG XV is currently pursuing the B.S. degree with Northwest A&F University, Yangling, China. His research interests include data processing, classification, and big data security.
DONGJIAN HE received the B.E., M.E., and D.E. degrees in agricultural engineering from Northwest A&F University, in 1982, 1985, and 1998, respectively. He was a Lecturer with the College of Mechanical and Electronic Engineering, Northwest A&F University, from 1987 to 1992, where he was an Associate Professor, from 1992 to 1999. He is currently a Professor with the College of Mechanical and Electronic Engineering, Northwest A&F University. His research interests include computer graphics, image analysis, and machine vision. He is a member of the China Computer Federation, the Chairman of the Shaanxi Society of Image and Graphics, the Vice Chairman of the Electrical Information and Automation Committee of CSAE, and a member of the Council of the Chinese Society for Agricultural Machinery.