Machine vision on the positioning accuracy evaluation of poultry viscera in the automatic evisceration robot system

ABSTRACT In poultry slaughtering, accurate viscera positioning is essential to reduce damage to internal organs. The introduction of machine vision can help locate the viscera and provides a new direction for poultry evisceration. After a midline abdominal incision, the internal organs are removed by a parallel robot from the poultry placed on the conveyor line. Based on machine vision, the recognition accuracy of the opened poultry viscera directly affects the level of visceral damage and residue caused by the gripping manipulator. However, visceral positioning is often influenced by various noise sources in the abdominal cavity, such as mucous membranes and blood stains. Thus, image segmentation of poultry viscera is a complex process, and removing the noise is challenging. In general, existing image segmentation methods can hardly segment the visceral regions well. To strengthen the anti-noise ability, we proposed an improved region-based active contour method with a level-set formulation. This method was combined with color space conversion and top-low-hat transformation operations, which extracted the viscera contour and precisely removed the noise. The results showed that the recognition accuracies of the heart-liver area and fat area were 98.98% and 99.75%, respectively, while that of the overall poultry viscera was 98.96%. These results suggested that the proposed image segmentation algorithm could achieve the accuracy required for poultry viscera detection. Thus, the proposed visceral contour recognition method could be applied in poultry processing, providing critical information to guide the robot for automated evisceration.


INTRODUCTION
With the increasing costs of poultry slaughter and the decreasing availability of skilled labor, the poultry processing industry using traditional labor-intensive methods has become unsustainable. [1] Automated slaughter of poultry is frequently used to increase the output of poultry meat and decrease production costs. Numerous studies have discussed the implementation of automatic poultry slaughter systems, which can significantly improve the productivity of poultry processing. [2][3][4] Poultry slaughter consists of poultry hanging, electric stunning, bloodletting, feather removal, evisceration, flushing, precooling and carcass segmentation. [5,6] As a critical link in poultry slaughter, evisceration is one of the most difficult steps to automate, so manual work is frequently used to perform the evisceration operation in China. [7][8][9] A possible explanation is that internal organs are easily damaged when automated machines perform the same evisceration motion for every bird. [10] Therefore, it is very important to develop an automatic system to perform evisceration for poultry of different sizes. Currently, the robotic system is a widely used form of automation, and numerous studies have applied robotic systems to automatic operations in agricultural product and food processing. [11][12][13][14][15] For instance, Xiong et al. [16] studied visual positioning for dynamic litchi clusters under natural environments, and a picking robot was driven to pick litchi. Liu et al. [17] revealed an accurate and intelligent automation solution for porcine abdomen cutting: while a pig was hung up by its rear legs, a 6-DOF robot could successfully cut the abdominal cavity open without haslet damage. Troels et al. [18] presented a robot system for performing pick-and-place operations with pork, and the strategy of using a structured light scanner to capture a point cloud of the grasped pork was shown to be valid.
From these studies, it was evident that the robotic system appeared to be equally suitable for poultry automatic evisceration.
Given this situation, while classical mechanical approaches have been reviewed in recent years, machine vision was probably the most suitable technique for processing poultry of different sizes. [19][20][21] Many researchers have used this technology to perform visual detection tasks in image classification and target recognition. [22][23][24][25][26][27] For instance, Xue et al. [28] developed an agricultural robot to navigate between rows in cornfields with a novel variable field-of-view machine vision method. Zhuang et al. [29] proposed a robust citrus fruit detection method based on a monocular vision system, which could accurately and reliably detect mature citrus in natural orchard environments for automatic fruit picking applications. Zhang et al. [30] presented a three-layer back propagation (BP) neural network method for cucumber fruit recognition in greenhouses based on machine vision. Different machine vision techniques result in different complexity and implementation accuracy of the algorithms involved in a target detection system. In this study, the viscera detection system of the robot was designed to accurately recognize and locate the viscera of different poultry, which would play a vital role in the automatic eviscerating system.
The overall objective of this study was to detect the visceral region under complex abdominal lumen conditions of poultry on the conveyor belt based on machine vision technology. This visual inspection method can help the manipulator on the parallel robot to accurately locate the viscera, thereby reducing damage to internal organs. In our previous study, the manipulator was inserted into the abdominal cavity through the anal incision of the poultry to perform eviscerating. Because the viscera could not be observed with that processing method, they might be damaged during processing. [7,8,21] In contrast, this study accurately identifies the position of the viscera based on machine vision technology, which will greatly help to reduce damage to the internal organs. In summary, this study provides a valuable reference for robotic poultry evisceration.

Experimental materials and mechanisms
Fresh poultry was purchased from farmers' markets. The poultry was slaughtered, defeathered and washed before being used in this experiment. According to species, the poultry was classified into a chicken group and a duck group. The poultry was placed on the conveyor after being cut and opened along the midline of the abdomen (Figure 1(a)).
In order to avoid infections by poultry diseases when people perform evisceration work, a parallel robot could be used to conduct automatic evisceration instead of manual removal (Figure 1(b)). The proposed parallel robot comprised a fixed main supporting frame and a moving platform linked by a rotating shaft and three parallel kinematic chains. [31,32] The servo motor and reducer were installed on the static platform, and the end-effector was installed on the moving platform. The servo motor was controlled by the servo drive and transmitted power to the drive arm through the reducer. The driven arm was connected by a pair of connecting rods through four ball hinges, and two pairs of springs were tensioned between each pair of connecting rods to prevent the connecting rods from falling. The rotating shaft had a telescopic function: one end was connected with the servo motor on the static platform through a cross universal joint, and the other end was connected with the end-effector on the moving platform through a cross universal joint.
The end-effector adopted a three-finger design, which mainly performed the grasping of poultry viscera with three fingers (Figure 1(c)). When the automated evisceration operation was performed, the industrial camera automatically took pictures of the poultry on the conveying line, and the image information was then transmitted to the computer. Finally, the image processing software in the computer automatically processed the images of poultry viscera and located their coordinates. When the poultry carcass reached the designated position at a constant speed, the conveyor stopped, then the parallel robot performed the grasping operation and loaded the viscera into a container.
The three fingers of the end-effector could rotate circumferentially to adapt to the size of the internal organs. In addition, with the bending of the fingers, an envelope space was formed, which could completely grasp the internal organs of the poultry. The circumferential movement of the three fingers changed their posture, and this movement was realized by a pair of externally meshing cylindrical spur gears under the base. Furthermore, the bending action of the three fingers was realized by a key rope passing through the inside of the knuckles. The rotation of the motor drove the contraction of the key rope to bend the fingers, and the restoring force of a torsion spring was used to maintain the initial angle of the finger knuckles.

Machine vision system
Accurate location of the poultry viscera was critical to maximize the integrity of the internal organs and minimize damage to fragile organs. Furthermore, high grasping precision could satisfy the acquisition of edible viscera, which depended on the reliability of the machine vision system. In this study, in order to detect the viscera on the conveyor, an industrial camera (MV-CA050-20UC) was placed 300 mm above the conveyor belt. The camera had a high dynamic range suitable for natural lighting conditions. A machine vision lens (MVL-KF1228M-12MP) was used to provide a field of view covering the width of the conveyor (500 mm). The industrial camera could minimize the stereo error for accurate locating of the viscera while still maintaining the required field of view. After the viscera images were imported into the PC (Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz, 8 GB memory), the image processing software (Matlab R2016a) automatically analyzed the features of the poultry images and extracted the viscera information.

Positioning algorithm for poultry evisceration
In order to avoid the influence of blood stains and mucous membranes in the abdominal cavity on the positional identification of poultry viscera, it was necessary to eliminate their interference. However, visceral region segmentation relying on a global threshold segmentation algorithm might produce unsatisfactory results because of the weak edge information. In addition, the whole viscera were divided into two parts: the dark part was the heart and liver area, and the light part was the fat area; the segmentation of the dark part was more susceptible to external interference. The flow chart of chicken viscera contour acquisition is shown in Figure 2. Therefore, the variational level set model based on the C-V model was used to construct the energy function for the target area in this study. [33,34] At the same time, the top-low-hat transform was introduced to detect image contours with weaker edges. This algorithm did not require a strict initial contour position and had good anti-interference performance, which could automatically detect the inner and outer contours of the viscera images. Assuming that the target image is I(x, y), the initial contour line is represented by the zero level set ϕ(x, y) = 0. The C-V energy function is:

E(a1, a2, ϕ) = μ · Length(ϕ = 0) + ν · Area(inside(ϕ)) + λ1 ∫_inside(ϕ) |I(x, y) − a1|² dxdy + λ2 ∫_outside(ϕ) |I(x, y) − a2|² dxdy

Among them, a1 and a2 represent the average brightness inside and outside the contour, respectively, and μ, ν, λ1, λ2 are positive weighting coefficients. inside(ϕ) corresponds to the image region where ϕ(x, y) > 0, and outside(ϕ) to the region where ϕ(x, y) < 0. The first two terms of the C-V energy function keep the boundary line of the heart and liver area smooth, and the latter two drive the initial contour toward the edge of the heart and liver area during evolution. When the evolution is completed, this energy function attains its minimum value. In order to solve the above energy functional with the variational method, the Heaviside function and Dirac function are introduced.
Using the smoothed Heaviside function H_ε and the Dirac function δ_ε = H_ε′, the level set formulation of the energy function is converted as follows:

C(a1, a2, ϕ) = μ ∫_Ω δ_ε(ϕ)|∇ϕ| dxdy + ν ∫_Ω H_ε(ϕ) dxdy + λ1 ∫_Ω |I − a1|² H_ε(ϕ) dxdy + λ2 ∫_Ω |I − a2|² (1 − H_ε(ϕ)) dxdy

Among them, Ω represents the entire image area. a1 and a2 are solved so that C(a1, a2, ϕ) attains its minimum value:

a1 = ∫_Ω I(x, y) H_ε(ϕ) dxdy / ∫_Ω H_ε(ϕ) dxdy,  a2 = ∫_Ω I(x, y)(1 − H_ε(ϕ)) dxdy / ∫_Ω (1 − H_ε(ϕ)) dxdy
With a1 and a2 held fixed, minimizing the energy function with respect to ϕ yields the evolution equation, where t represents time and n represents the outward normal vector of the region:

∂ϕ/∂t = δ_ε(ϕ) [ μ div(∇ϕ/|∇ϕ|) − ν − λ1 (I − a1)² + λ2 (I − a2)² ]

The level set algorithm started from a closed contour line and evolved according to this equation. Due to the relatively fixed position between the poultry and the camera during image acquisition, it was easy to ensure that the initial curve lay within the heart and liver area. If a slight deviation occurred, the function could be automatically reinitialized.
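The C-V evolution described above can be sketched numerically. The following is a minimal NumPy illustration of one explicit update step on a toy image; all parameter values (μ, ν, λ1, λ2, the time step dt and the smoothing width ε) are illustrative assumptions, not the settings used in this study, and the paper's own implementation was written in Matlab.

```python
import numpy as np

def chan_vese_step(phi, img, mu=0.2, nu=0.0, lam1=1.0, lam2=1.0,
                   dt=0.5, eps=1.0):
    """One explicit gradient-descent step of the C-V evolution equation."""
    # Smoothed Heaviside H_eps and Dirac delta_eps functions
    H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))
    delta = (eps / np.pi) / (eps**2 + phi**2)
    # Region averages a1 (inside, phi > 0) and a2 (outside, phi < 0)
    a1 = (img * H).sum() / (H.sum() + 1e-8)
    a2 = (img * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
    # Curvature term div(grad(phi) / |grad(phi)|)
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-8
    curvature = np.gradient(gx / norm)[1] + np.gradient(gy / norm)[0]
    # Update phi following the evolution equation
    dphi = delta * (mu * curvature - nu
                    - lam1 * (img - a1)**2 + lam2 * (img - a2)**2)
    return phi + dt * dphi

# Toy image: a bright disc (standing in for the heart-liver area)
yy, xx = np.mgrid[0:64, 0:64]
img = ((xx - 32)**2 + (yy - 32)**2 < 15**2).astype(float)
# Initial circular contour of radius 10 inside the target region
phi = 10.0 - np.sqrt((xx - 32)**2 + (yy - 32)**2)
for _ in range(200):
    phi = chan_vese_step(phi, img)
seg = phi > 0  # the evolved zero level set encloses the bright region
```

The contour expands from the initial circle toward the disc boundary, where the two data terms balance and the evolution stalls, mirroring how the initial curve placed inside the heart-liver area grows to its edge.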
Taking the initial circular contour as the zero level set curve, evolving from inside to outside was beneficial to avoid most external interference factors. Because the gray level of the target area was relatively uniform, the energy formula could achieve convergence quickly. However, in the abdominal cavity, deposited bloodstains often appeared at the borders of the heart and liver regions, and their similar colors would cause boundary leakage during the evolution process. Therefore, the images were transformed into the HSV color space, and the S component was extracted to segment the images. Then, a planar disc-shaped structuring element was set up to perform the top-low-hat transformation.
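The HSV conversion step can be sketched with the standard library; the pixel values below are toy examples standing in for the viscera image, not data from the study.

```python
import colorsys

# Toy 2x2 RGB image, channel values in [0, 1]
rgb = [[(0.8, 0.1, 0.1), (0.5, 0.5, 0.5)],
       [(0.6, 0.2, 0.2), (0.9, 0.9, 0.8)]]

# Convert each pixel to HSV and keep only the S (saturation) component
s_channel = [[colorsys.rgb_to_hsv(*px)[1] for px in row] for row in rgb]
```

Strongly colored pixels (such as reddish tissue) yield high S values, while gray pixels yield S near zero, which is why the S component separates the colored regions better than raw brightness.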
The top-low-hat transformation was a morphological operation based on dilation and erosion, and it was a good high-pass filter operator. [35] According to the difference between the opening and closing operations, it was divided into the top-hat transformation and the low-hat transformation. The top-hat operation could detect peaks in the image, and the low-hat operation could detect valleys. These two operations were very helpful for finding dark pixels in a brighter background and bright pixels in a darker background. The top-hat transform operator is defined as:

T_hat(f) = f − (f ∘ g)

The low-hat transform operator is defined as:

B_hat(f) = (f • g) − f

Among them, f was the regional image obtained by the level set algorithm, g was the structuring element, ∘ denotes the morphological opening and • denotes the morphological closing. After the top-low-hat transformation, some voids and uneven gray levels would appear in the internal area, while the external interference area was separated. To solve these problems, operations such as deburring, hole filling and reconstruction were performed to restore the image. In the image evolution process, when the zero level set curve was exactly the boundary of the heart-liver area, the energy function C(a1, a2, ϕ) was minimal, which gave the best segmentation by average brightness.
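The two transforms can be sketched as follows with a flat square structuring element (the paper uses a disc-shaped element; the square is an assumption made for brevity):

```python
import numpy as np

def erode(f, k=3):
    """Grayscale erosion: minimum over a flat k x k structuring element."""
    p = k // 2
    fp = np.pad(f, p, mode='edge')
    out = np.empty_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = fp[i:i + k, j:j + k].min()
    return out

def dilate(f, k=3):
    """Grayscale dilation: maximum over a flat k x k structuring element."""
    p = k // 2
    fp = np.pad(f, p, mode='edge')
    out = np.empty_like(f)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            out[i, j] = fp[i:i + k, j:j + k].max()
    return out

def top_hat(f, k=3):
    """f minus its opening: keeps bright peaks narrower than the element."""
    return f - dilate(erode(f, k), k)

def low_hat(f, k=3):
    """Closing of f minus f: keeps dark valleys narrower than the element."""
    return erode(dilate(f, k), k) - f

# A bright spike on a dark background is isolated by the top-hat transform
peak = np.zeros((7, 7)); peak[3, 3] = 5.0
# A dark pit in a bright background is isolated by the low-hat transform
pit = 5.0 * np.ones((7, 7)); pit[3, 3] = 0.0
```

Because the opening removes features smaller than the structuring element, subtracting it from the original leaves exactly those small bright features, which is what makes the transform useful for separating thin interference regions at the heart-liver boundary.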
Additionally, compared with the heart-liver area, the fat area was less disturbed in the abdominal cavity. Because the color of visceral fat was significantly different from the meat and blood in the abdominal cavity, the global threshold segmentation method could be used to obtain the fat area. After comparing multiple component features in the RGB, HSI and Lab color spaces of the viscera images over many experiments, the b-component image in the Lab color space showed the least interference, which was beneficial for extracting the fat area. Therefore, the original RGB image was converted into Lab space, and the threshold segmentation method was applied to the b-component image. Finally, the fat area was obtained by removing small interfering targets, filling holes and a series of other morphological operations.
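The global-threshold step can be sketched as below. Otsu's method is used here as one common way to pick a global threshold automatically; the synthetic array is a stand-in for the Lab b-component image, not data from the study.

```python
import numpy as np

def otsu_threshold(channel, bins=256):
    """Return the global threshold maximizing between-class variance."""
    hist, edges = np.histogram(channel, bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0  # class means
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

# Synthetic b-channel: the fat region is markedly more yellow (higher b)
rng = np.random.default_rng(0)
b = rng.normal(10, 2, (64, 64))                 # surrounding tissue
b[20:40, 20:40] = rng.normal(40, 2, (20, 20))   # fat area
mask = b > otsu_threshold(b)                    # binary fat mask
```

Because the fat and tissue values form two well-separated modes, the threshold lands between them and the mask recovers the fat region; in practice the small-target removal and hole filling mentioned above would follow.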

Image segmentation of heart-liver area for poultry viscera
Poultry viscera segmentation was a challenging process that aimed to separate the poultry viscera from other objects. In this study, the visceral region was divided into two parts for image segmentation, which were then merged. The difficult part was the image segmentation of the heart-liver region, for which the improved level set algorithm was used to separate it from the other objects. The image processing procedure is shown in Figure 3. Figure 3(a) and 3(f) were the grayscale images of the cherry valley duck and the three-yellow chicken, respectively, after the captured images were preprocessed. The circle on each image was the initial contour line before the curve evolved. In order to reduce the computation time of the algorithm, the position of the initial contour curve differed because the two types of poultry were largely different in body size: the initial circular outline of the cherry valley duck was slightly lower than that of the three-yellow chicken. Even if the initial contour lines were not fully in the target area, the algorithm could automatically identify the target contour. Figure 3(b) and 3(g) were the recognition results of the variational level set algorithm, and the red contour line represented the final evolved target contour. Due to the various interference factors in the abdominal cavity, some leaks existed in the heart-liver areas of the cherry valley duck and three-yellow chicken identified by the algorithm. The target region was automatically extracted from the abdominal cavity (Figure 3(c) and 3(h)), and it could be seen that the outline of the heart-liver area was not accurate enough. The segmentation results were put back into the original viscera images (Figure 3(d) and 3(i)). Because some interference regions were still retained in the segmented images, the top-low-hat transformation and some morphological operations were introduced into the algorithm to remove the interference. Figure 3(e) and Figure 3(j) were the final segmentation results of the algorithm used in this study.
Obviously, the segmentation algorithm based on the improved level set method could eliminate the interference caused by most of the factors in the abdominal cavity, and avoid the over-segmentation phenomenon caused by global algorithms such as threshold segmentation.

Image segmentation for whole viscera
As shown in Figure 4(a) and Figure 4(d), the fat area could be clearly segmented using the method proposed in this study. An algebraic addition operation was performed on the fat area and the heart-liver area to obtain the whole viscera area (Figure 4(b) and 4(e)). The center coordinates of the whole viscera were displayed to guide the positioning of the robot during the viscera grasping process. The outline of the whole viscera was displayed in the original image, which showed that the algorithms used in this study could accurately segment the whole viscera (Figure 4(c) and 4(f)).
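The merge-and-locate step can be sketched as follows: the whole-viscera mask is the union (algebraic addition) of the two binary masks, and its centroid gives the center coordinates passed to the robot. The masks and their positions here are toy assumptions, not measurements from the study.

```python
import numpy as np

# Assumed binary masks for the two segmented regions (toy rectangles)
heart_liver = np.zeros((100, 100), dtype=bool)
heart_liver[20:50, 30:70] = True
fat = np.zeros((100, 100), dtype=bool)
fat[50:80, 30:70] = True

# Algebraic addition (union) of the two masks gives the whole viscera
viscera = heart_liver | fat

# Centroid of the merged region: the grasp center in pixel coordinates
ys, xs = np.nonzero(viscera)
center = (xs.mean(), ys.mean())  # (x, y)
```

In the real system these pixel coordinates would still need the camera-to-robot calibration transform before they can drive the manipulator.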
The whole-viscera segmentation of the three-yellow chicken and cherry valley duck could accurately provide the evisceration position for the manipulator. From the image segmentation results, it could be seen that the proposed segmentation algorithm obviously reduced the influence of various interference factors in the abdominal cavity of the poultry and significantly improved the accuracy of image segmentation. Simultaneously, the heart-liver area and the fat area were segmented separately and then merged in a two-step method, which could effectively process the whole viscera of poultry.

Calculation of positioning accuracy
In the automated evisceration robot system, 100 visceral images of three-yellow chicken and cherry valley duck were randomly collected to locate the viscera. A variety of image segmentation methods were used to recognize the visceral contours, such as the OTSU multi-threshold segmentation method, the K-means algorithm and the improved level set algorithm. In order to evaluate the segmentation performance of the algorithms, manual segmentation of the poultry heart-liver area with Photoshop software was used as the ground truth of each detected image. Three performance indicators were used to quantify the difference between the segmentation results of the algorithms and the true segmentation results: the segmentation accuracy IOU, the over-segmentation rate OR and the under-segmentation rate UR. The three indicators were calculated as follows:

IOU = TP / (TP + FP + FN) × 100%
OR = FP / (TP + FP + FN) × 100%
UR = FN / (TP + FP + FN) × 100%
As shown in Figure 5, circle R and circle T represented the image area obtained by the segmentation algorithm and the real image area, respectively. FP represented the number of pixels that should not be included in the segmentation result, but were in the result after the image was segmented by the algorithm. FN represented the number of pixels that should be included in the segmentation result, but were actually not in the result. TP represented the number of correctly segmented pixels, that is, the number of pixels contained in both the image segmented by the algorithm and the real image. As shown in Table 1, the segmentation effects of the OTSU multi-threshold segmentation method and the improved level set algorithm were better than that of the K-means algorithm, while the improved level set algorithm clearly had the highest segmentation accuracy: its segmentation accuracy, over-segmentation rate and under-segmentation rate were 98.96%, 0.81% and 0.23%, respectively. The same number of chickens and ducks were tested using the proposed method, and the results are presented in Table 2. The recognition effect for chicken was better than for duck, possibly because of color differences between the viscera and meat of different poultry types. In addition, the segmentation accuracy of the fat area was higher than that of the heart-liver area. Thus, the improved level set algorithm could achieve the visceral positioning accuracy required for evisceration in the poultry evisceration robot system, which would help reduce poultry visceral damage.
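The three indicators can be computed directly from a predicted mask R and a ground-truth mask T. The sketch below follows the TP/FP/FN pixel counts described above; the shared normalization by TP + FP + FN is an assumption consistent with the reported values summing to 100%, and the masks are toy examples.

```python
import numpy as np

def segmentation_metrics(R, T):
    """Return (IOU, OR, UR) for predicted mask R vs ground-truth mask T."""
    TP = np.logical_and(R, T).sum()    # pixels in both R and T
    FP = np.logical_and(R, ~T).sum()   # pixels in R but not in T (over-seg)
    FN = np.logical_and(~R, T).sum()   # pixels in T missed by R (under-seg)
    total = TP + FP + FN
    return TP / total, FP / total, FN / total

# Toy ground truth and a prediction shifted down by one row
T = np.zeros((10, 10), dtype=bool); T[2:8, 2:8] = True
R = np.zeros((10, 10), dtype=bool); R[3:9, 2:8] = True
iou, over, under = segmentation_metrics(R, T)
```

With this normalization the three rates always sum to one, matching the reported 98.96% + 0.81% + 0.23% = 100%.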

CONCLUSION
This study developed a segmentation method for poultry viscera in the automated evisceration robot system. Several image segmentation algorithms could be applied to detect the poultry viscera, such as the OTSU multi-threshold segmentation method and the K-means algorithm. However, the segmentation effects of these algorithms were quite different, and they exhibited large over-segmentation and under-segmentation to a certain extent. The segmentation effect was quantified by three performance indicators: segmentation accuracy, over-segmentation rate and under-segmentation rate. The results showed that, among the compared algorithms, the improved level set algorithm proposed in this study was the most suitable for segmenting the poultry viscera and removing noise. Furthermore, this image processing algorithm could help the robot to accurately locate and take the internal organs out of the abdominal cavity of the poultry, thereby reducing damage to the internal organs. This study can provide a valuable reference for automatic poultry evisceration robots and the accurate positioning of manipulators, which is beneficial to increase the productivity of edible viscera and meet the needs of the poultry industry.

Funding
This work was supported by the National Natural Science Foundation of China (51905387).