A camera-based color calibration of tiled display systems under various illumination environments

ABSTRACT Tiled display systems are widely utilized for digital signage applications that exceed the size of a single flat-panel display. Smoothly varying spatial non-uniformity in luminance and color may cause visible differences at the boundary between two adjacent sub-displays, and such visible artifacts must be suppressed. Tiled display systems can be installed under various illumination environments and require routine color calibration to compensate for temporal changes in display characteristics. This paper presents a simple and cost-effective color calibration method for tiled display systems. A consumer-level digital camera is utilized as a measuring device after the proposed vignetting correction is applied. Experimental results indicate that the proposed color calibration method is quite effective in suppressing visible artifacts across two adjacent sub-displays.


Introduction
As digital signage applications grow, demand for large-format displays is increasing. Tiled display systems are widely utilized for digital signage applications that exceed the size of a single flat-panel display [1]. They consist of multiple flat-panel displays, or sub-displays, that are tiled together. Most flat-panel displays exhibit smoothly varying spatial non-uniformity in luminance and color. Such spatial non-uniformity within a single sub-display is controlled by the manufacturer's quality assurance process, so it is difficult to notice under normal operation.
However, when tiled together, changes in luminance and color across two adjacent sub-displays may not be smooth, so non-uniformities can be noticed at the boundaries of sub-displays [2]. Therefore, color calibration across sub-displays is quite important for tiled display systems [3]. The primary objective of color calibration for tiled display systems is to eliminate abrupt changes in luminance and color at the boundary of sub-displays. Color calibration consists of two procedures: device characterization and minimization of non-uniformity. Device characterization determines the transformation between input and output color coordinates. To this end, output color coordinates on the display are measured using an optical instrument such as a spectroradiometer for a number of RGB input combinations, and the device characterization model is developed from these measurements. It is common to describe the color transformation characteristics of a flat-panel display by a single, position-independent device characterization model [4][5][6]. However, this is not enough for a tiled display system because the degree of non-uniformity across the boundary of two adjacent sub-displays depends on pixel position. If the device characterization is performed for each pixel of a sub-display, the output color coordinates for any input color coordinates can be predicted at every pixel location. In other words, changes in luminance and color across the boundary of two adjacent sub-displays can be calculated at every pixel location. Once the device characterization is achieved, the rest of the color calibration focuses on the minimization of such non-uniformity across sub-displays.
Color calibration is required before the tiled display system is installed for the first time. In addition, periodic color calibration is needed because the luminance and color characteristics of the sub-displays change over time, and their rates of change may not be uniform. The initial color calibration can be carried out under a controlled illumination environment such as a darkroom. However, that is not the case for periodic color calibration, especially when the system is installed at an outdoor location; there, the color coordinate measurement procedure must be carried out under the real illumination environment. Obviously, output color coordinates are not the same when measured under different illumination environments, so a measurement method that excludes the effect of illumination on the output color coordinates is required.
In this paper, a practical color calibration method is proposed to satisfy the aforementioned requirements for tiled display systems. The proposed display device characterization method is illumination independent but position dependent. In other words, the device characterization model is constructed from output coordinate measurements taken under an arbitrary illumination environment, yet it can provide the same output coordinates as would be measured in a darkroom. In addition, position dependency implies that a different device characterization model is utilized for each position within a sub-display in order to account for spatially varying luminance and color.
Instead of an optical instrument such as a spectroradiometer, a consumer-level digital camera is utilized to measure color coordinates on the sub-displays. In addition to cost effectiveness, a digital camera can considerably shorten the time required for color coordinate measurement. This reduction of lead time reduces the effect of illumination variations in an outdoor environment during the measurement. In order to utilize a consumer-level camera as a measuring instrument, vignetting effects, which cause gradual darkening toward the image edges, must be compensated. A simple weight-based vignetting correction method is proposed in this paper. This paper is organized as follows. In Section 2, the proposed camera calibration and vignetting compensation methods are described. In Section 3, the proposed display characterization method, which is position dependent and illumination independent, is explained. In Section 4, minimization of non-uniformity across sub-displays is presented. In Section 5, results of performance evaluations are discussed. Finally, Section 6 concludes this paper.

Camera characterization method
In this paper, a consumer-level digital camera is utilized to measure output color coordinates on the sub-displays. The measured output color coordinates are used to construct the device characterization model. Unlike an optical instrument such as a spectroradiometer, a camera is cost effective. Furthermore, it can reduce the time required for color coordinate measurement. As a result, the effect of illumination variations in an outdoor environment during the measurement can be reduced.
An image obtained by a camera usually exhibits vignetting, or light falloff [7]. Vignetting is a phenomenon in which an image's brightness is reduced at its periphery compared to its center. In order to utilize a consumer-level digital camera instead of an optical instrument, it is important to compensate for the vignetting effect. In addition, the camera should yield the same output color coordinates as those obtained by an optical instrument.
These two requirements can be explained by the example illustrated in Figure 1. Figure 1(a) represents an ideal object whose brightness is constant. Assume that the object in Figure 1(a) is photographed by a camera; Figure 1(b) illustrates the captured image. In Figure 1(c), the solid line represents the luminance measured by an optical instrument along the horizontal line through the center of Figure 1(a), and the dotted line denotes the luminance values from the camera image along the same line in Figure 1(b). The vignetting effect can be noticed from the dotted line. The solid line can be utilized as ground truth data to be estimated or recovered from the dotted line.

It is assumed in this paper that the input and output color coordinates in the device characterization model are represented as RGB and XYZ coordinates, respectively. The proposed camera calibration method is illustrated in Figure 2. In practice, it is difficult to obtain an ideal object that yields constant XYZ coordinates. Alternatively, a reference image having a constant RGB value is displayed and measured by an optical instrument in a darkroom. The two-dimensional data of measured XYZ coordinates is not constant because of the spatial non-uniformity of the display, but it can serve as the ground truth coordinates to be estimated. In addition, the displayed reference image is photographed by a camera in the darkroom. The RGB coordinates from the camera are converted to XYZ coordinates by the RGB-to-XYZ transformation at the center of the camera, which is determined in advance. It can be assumed that the center position of the camera is free from the vignetting effect. The resulting XYZ coordinates reflect both the spatial non-uniformity of the display and the vignetting effect of the camera, whereas the XYZ coordinates measured by the optical instrument reflect the spatial non-uniformity of the display only.
Therefore, the vignetting correction method is designed based on these pairs of XYZ coordinates. Details of the proposed vignetting correction method are described next.

The proposed vignetting correction method
Camera vignetting compensation has been widely researched [8][9][10][11]. In [8], the location-specific ratio of intensity values is stored in a look-up table (LUT). The disadvantage of this method is its sensitivity to noise. To solve this problem, modeling the location-specific ratio of the camera's intensity values by a hyperbolic cosine function or a Gaussian quadric was proposed [9,10]. However, their performance is lower than that of the LUT method [8] because of modeling errors. A method based on wavelet denoising was proposed [11]; it improves the vignetting compensation performance and robustness against noise. However, it assumes uniform brightness of the reference patch, so when an ideal uniform reference is not available, its performance may be degraded.
In the proposed vignetting correction method, a reference image of constant white is displayed, and its XYZ coordinates are measured using a 2D spectroradiometer in a darkroom. Suppose that r × c points can be measured simultaneously by the 2D spectroradiometer. Usually, the number of pixels on the display is greater than the number of XYZ measurements, r × c. Thus, the measured XYZ color coordinates are interpolated to generate output XYZ color coordinates for each display pixel. Assume that the display has M × N pixels, and let the interpolated XYZ values at the (i,j)-th display pixel be X_w(i,j), Y_w(i,j), and Z_w(i,j), where i = 1, 2, . . . , M and j = 1, 2, . . . , N. They are utilized as the ground truth coordinates for the proposed vignetting correction method. The dotted line in Figure 3(a) illustrates the values of Y_w(i,j) along the horizontal line of the display. Unlike the ideal ground truth data illustrated as the solid line in Figure 1(c), the plot of Y_w(i,j) is not a straight line because of the spatial non-uniformity of the display.
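The upsampling step from the r × c measurement grid to the M × N display grid can be sketched as follows. This is a minimal illustration on a toy 3 × 3 luminance grid; separable linear interpolation is an assumption, since the paper does not specify the interpolation scheme.

```python
import numpy as np

def upsample_measurements(grid, out_shape):
    """Upsample an (r, c) grid of measured values to the (M, N) display
    resolution using separable linear interpolation, one channel at a time."""
    r, c = grid.shape
    M, N = out_shape
    # Positions of the coarse measurement points mapped onto display pixels.
    src_rows = np.linspace(0, M - 1, r)
    src_cols = np.linspace(0, N - 1, c)
    # Interpolate along columns first, then along rows.
    tmp = np.empty((r, N))
    for i in range(r):
        tmp[i] = np.interp(np.arange(N), src_cols, grid[i])
    out = np.empty((M, N))
    for j in range(N):
        out[:, j] = np.interp(np.arange(M), src_rows, tmp[:, j])
    return out

# Example: a 3x3 grid of Y_w measurements upsampled to a 5x5 "display".
Yw_coarse = np.array([[90., 100., 90.],
                      [95., 110., 95.],
                      [90., 100., 90.]])
Yw = upsample_measurements(Yw_coarse, (5, 5))
```

The same routine would be applied to the X and Z channels independently.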
In addition, the displayed reference image is photographed by a camera in the darkroom. As a result, RGB color coordinates are obtained for each camera pixel. Usually, the number of display pixels is not the same as the number of camera pixels. Therefore, the RGB coordinates of the camera pixels are interpolated to generate RGB coordinates for each display pixel. In order to compensate for the vignetting effect, the RGB coordinates must be converted into XYZ coordinates. This is accomplished by the RGB-to-XYZ transformation of the camera, which is determined in advance; the procedure to determine this transformation is explained in Section 2.2.
It can be assumed, without loss of generality, that the center of the camera is free of vignetting. In addition, suppose that the display and camera are carefully aligned so that there is no offset between the center pixels of the display and camera. When the RGB color coordinates at the center of the camera are converted to XYZ color coordinates by the matrix transformation specified at the center of the camera, the resulting XYZ coordinates are the same as X_w(i,j), Y_w(i,j), and Z_w(i,j) at the center of the display. However, when the RGB color coordinates of pixels far from the center of the camera are converted by the same matrix transformation, the resulting XYZ coordinates differ from X_w(i,j), Y_w(i,j), and Z_w(i,j) because of the vignetting effect.
Let X_w^cam(i,j), Y_w^cam(i,j), and Z_w^cam(i,j) denote the XYZ coordinates obtained by applying the RGB-to-XYZ conversion specified at the center of the camera. The solid line in Figure 3(a) illustrates the values of Y_w^cam(i,j) along the horizontal line of the display. It can be noticed in Figure 3(a) that Y_w^cam(i,j) coincides with the ground truth data at the center of the horizontal line, and that the difference between Y_w(i,j) and Y_w^cam(i,j) increases as the pixel location moves outward. This is due to the vignetting effect.
In this paper, ratios of the ground truth coordinates to the XYZ coordinates reflecting the vignetting effect are calculated by the following equations:

W_X(i,j) = X_w(i,j) / X_w^cam(i,j)   (1)
W_Y(i,j) = Y_w(i,j) / Y_w^cam(i,j)   (2)
W_Z(i,j) = Z_w(i,j) / Z_w^cam(i,j)   (3)

The weights calculated by Equations (1)-(3) are utilized to compensate for vignetting. Figure 3(b) shows the weight values W_X(i,j), W_Y(i,j), and W_Z(i,j) along the horizontal location. The ground truth coordinates can be estimated by multiplying the weights W_X(i,j), W_Y(i,j), and W_Z(i,j) with the corresponding XYZ coordinates of a photographed color sample, where α specifies the photographed color sample:

X̂_α(i,j) = W_X(i,j) · X_α^cam(i,j)
Ŷ_α(i,j) = W_Y(i,j) · Y_α^cam(i,j)   (4)
Ẑ_α(i,j) = W_Z(i,j) · Z_α^cam(i,j)

where X̂_α(i,j), Ŷ_α(i,j), and Ẑ_α(i,j) represent the XYZ values compensated for the vignetting of the camera.
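The ratio-based compensation amounts to a per-pixel elementwise division and multiplication. A toy sketch for the luminance channel only, with hypothetical numbers along one horizontal line:

```python
import numpy as np

# Ground-truth luminance measured by the 2D spectroradiometer (toy 1x5 line).
Y_w = np.array([[100., 102., 104., 102., 100.]])
# Luminance from the camera after the RGB-to-XYZ conversion at the center:
# correct at the center, attenuated toward the edges by vignetting.
Y_cam_w = np.array([[80., 95., 104., 95., 80.]])

# Per-pixel weight, as in Eq. (2): W_Y(i,j) = Y_w(i,j) / Y_cam_w(i,j).
W_Y = Y_w / Y_cam_w

# Compensating any captured sample: Y_hat = W_Y * Y_cam_alpha.
Y_cam_alpha = 0.5 * Y_cam_w      # a hypothetical 50%-gray capture
Y_hat_alpha = W_Y * Y_cam_alpha  # vignetting removed
```

The X and Z channels are treated identically with their own weight maps.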

The RGB-to-XYZ transformation at the camera center
A camera yields device-dependent RGB coordinates, whereas the ground truth color coordinates are defined in device-independent XYZ coordinates. Therefore, the RGB coordinates from the camera need to be converted to XYZ coordinates. The relationship between the RGB and XYZ coordinates of the camera is position dependent because of the vignetting effect, so the camera device characterization is carried out at the center of the camera plane, where vignetting does not occur. A set of color samples is generated: each of the R, G, B, and gray channels is represented by eight samples whose RGB coordinates are equally spaced, giving 8 × 4 = 32 color samples. Each color sample is made of 40 × 40 display pixels and is displayed at the center of the display in a darkroom. Its XYZ color coordinates are measured by a spectroradiometer. In addition, it is photographed by the camera and the resulting RGB outputs are averaged. It should be mentioned that the display and camera are carefully aligned so that there is no offset between the center pixels of the display and camera.
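The sample set can be sketched as follows; the concrete 8-bit levels are an assumption, since the paper only states that the eight levels per channel are equally spaced.

```python
import numpy as np

# Eight equally spaced 8-bit levels per channel (assumed spacing and depth).
levels = np.linspace(255 / 8, 255, 8).round().astype(int)

samples = []
for v in levels:
    samples.append((v, 0, 0))   # red ramp
for v in levels:
    samples.append((0, v, 0))   # green ramp
for v in levels:
    samples.append((0, 0, v))   # blue ramp
for v in levels:
    samples.append((v, v, v))   # gray ramp
# 8 levels x 4 channels = 32 characterization samples in total.
```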
Let the RGB value from the i-th color sample be denoted by a_i = [R_α, G_α, B_α]^T, where α specifies the color of the displayed sample. The corresponding XYZ value measured by the spectroradiometer is denoted by b_i = [X_α, Y_α, Z_α]^T. The estimated XYZ value is

b̂_i = H a_i   (5)

where H is a 3 × 3 matrix representing the RGB-to-XYZ transformation. It is determined by the total color difference minimization method [12] using the following criterion:

E = Σ_{i=1}^{n} E_i   (6)

where n = 32 and B̂ = {b̂_1 | b̂_2 | . . . | b̂_n} denotes the intermediate XYZ values calculated from H and A = {a_1 | a_2 | . . . | a_n}. E_i is the color difference between b_i and b̂_i calculated in L*a*b* space. The matrix H is determined by a downhill simplex algorithm [13] that minimizes the sum of the color differences over all color samples. With H, the X_α^cam(i,j), Y_α^cam(i,j), and Z_α^cam(i,j) in Equation (4) can be formulated as

[X_α^cam(i,j), Y_α^cam(i,j), Z_α^cam(i,j)]^T = H [R_α^cam(i,j), G_α^cam(i,j), B_α^cam(i,j)]^T   (7)

Therefore, Equation (4) can be rewritten as

[X̂_α(i,j), Ŷ_α(i,j), Ẑ_α(i,j)]^T = W(i,j) H [R_α^cam(i,j), G_α^cam(i,j), B_α^cam(i,j)]^T   (8)

where W(i,j) is a 3 × 3 diagonal matrix containing the weights W_X(i,j), W_Y(i,j), and W_Z(i,j) for vignetting compensation.
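The estimation of H can be sketched with synthetic data standing in for the 32 measured pairs, using SciPy's Nelder-Mead implementation of the downhill simplex algorithm; the objective is the CIE 1976 ΔE*ab summed over all samples. Everything here (the generating matrix, the white point, the random RGB values) is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def xyz_to_lab(xyz, white):
    """Convert XYZ to CIE L*a*b* relative to the given white point."""
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def total_delta_e(h_flat, A, B, white):
    """Sum of CIELAB color differences between measured XYZ (B) and H @ A."""
    H = h_flat.reshape(3, 3)
    B_hat = (H @ A.T).T
    B_hat = np.clip(B_hat, 1e-6, None)  # keep the cube roots well defined
    dE = np.linalg.norm(xyz_to_lab(B, white) - xyz_to_lab(B_hat, white), axis=1)
    return dE.sum()

# Synthetic camera RGB (A) and "measured" XYZ (B), generated from a known
# matrix so the fit can be sanity-checked.
rng = np.random.default_rng(0)
H_true = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
A = rng.uniform(0.05, 1.0, size=(32, 3))  # normalized camera RGB
B = (H_true @ A.T).T
white = np.array([0.9505, 1.0, 1.089])

res = minimize(total_delta_e, np.eye(3).ravel(), args=(A, B, white),
               method='Nelder-Mead',
               options={'maxiter': 20000, 'xatol': 1e-10, 'fatol': 1e-10})
H = res.x.reshape(3, 3)
```

With real measurements, the residual ΔE would reflect sensor noise and model error rather than converging toward zero.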

The proposed display characterization model
The proposed display characterization model is position dependent as well as illumination independent. Position dependent means that the model is a function of display pixel position, so that it represents the spatial non-uniformity within a sub-display. Illumination independent means that the model can be constructed from output coordinates measured under an arbitrary illumination environment. Unless measured in a darkroom, the light output from the display and the reflection of the illumination on the display are measured together. In the proposed method, the device characterization model provides the same output coordinates as would be measured in a darkroom by excluding the contribution of the reflected illumination.

The proposed method to construct the display characterization model is illustrated in Figure 4. For simplicity of explanation, it is assumed that all measurements are made under illumination. For the display characterization, the XYZ coordinates of the primary RGB colors and of a set of constant gray images are needed. Sample images of the primary RGB colors and constant grays are generated; each is displayed and photographed by the camera. The resulting RGB coordinates from the camera are denoted by R_{α,il}^cam(i,j), G_{α,il}^cam(i,j), and B_{α,il}^cam(i,j), where α specifies the color sample, α ∈ {R, G, B, Gray}. The XYZ coordinates after vignetting correction, X̂_{α,il}(i,j), Ŷ_{α,il}(i,j), and Ẑ_{α,il}(i,j), are calculated by Equation (8). They represent the light output from the display together with the reflection of the illumination on the display. Now suppose that the display is turned off and the aforementioned procedure is repeated, as illustrated in Figure 4. It can be assumed that the calculated XYZ coordinates for the turned-off display, X̂_{off,il}(i,j), Ŷ_{off,il}(i,j), and Ẑ_{off,il}(i,j), represent the reflection of the illumination on the display only.
Based on this assumption, the XYZ coordinates in the darkroom can be estimated by

X̂_α(i,j) = X̂_{α,il}(i,j) − X̂_{off,il}(i,j)
Ŷ_α(i,j) = Ŷ_{α,il}(i,j) − Ŷ_{off,il}(i,j)   (9)
Ẑ_α(i,j) = Ẑ_{α,il}(i,j) − Ẑ_{off,il}(i,j)

where X̂_{α,il}(i,j), Ŷ_{α,il}(i,j), and Ẑ_{α,il}(i,j) denote the XYZ coordinates representing the light output from the display plus the reflection of the illumination, and X̂_{off,il}(i,j), Ŷ_{off,il}(i,j), and Ẑ_{off,il}(i,j) represent the reflection of the illumination only.
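Equation (9) is a per-pixel black-frame subtraction: the capture of the turned-off display is subtracted channel by channel. A toy sketch for the luminance channel, with hypothetical values along one line:

```python
import numpy as np

# Luminance of a displayed white sample captured under illumination:
# display light output plus the illumination reflected off the screen.
Y_w_il = np.array([[130., 142., 145., 131.]])
# Same scene with the display turned off: reflected illumination only.
Y_off_il = np.array([[30., 32., 31., 30.]])

# Darkroom estimate: lit capture minus turned-off capture, per pixel
# (and likewise for the X and Z channels).
Y_w_dark = Y_w_il - Y_off_il
```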
The effect of the aforementioned method can be verified by the example illustrated in Figure 5. Figure 5(a,b) shows the camera and display placed in an outdoor environment. In Figure 5(a), the display is turned on, and one of the color samples is displayed and photographed; in Figure 5(b), the turned-off display is photographed. The solid line in Figure 5(c) represents the luminance of the white sample, Ŷ_{w,il}(i,j), along the horizontal line of the display, and the dotted line represents the luminance along the same line on the surface of the turned-off screen, Ŷ_{off,il}(i,j). In Figure 5(d), the dotted line represents the luminance of the same color sample measured by the spectroradiometer in a darkroom, and the solid line denotes the result of Equation (9). The similarity of the two plots in Figure 5(d) justifies the proposed method of excluding the effect of the illuminant during the measurement of color coordinates.

A 3 × 3 matrix T representing the relationship between the input and output color coordinates of the display can be formulated as

[X̂_α(i,j), Ŷ_α(i,j), Ẑ_α(i,j)]^T = T(i,j) [R_s(i,j), G_s(i,j), B_s(i,j)]^T   (10)

where R_s(i,j), G_s(i,j), and B_s(i,j) represent the scalar RGB of the display; for a display, the scalar RGB means the normalized output luminance corresponding to the input color coordinates. In addition, X̂_α(i,j), Ŷ_α(i,j), and Ẑ_α(i,j) represent the output XYZ values of the display. The transformation matrix T consists of the XYZ values after black subtraction [6]. The procedure to determine the scalar RGB values R_s(i,j), G_s(i,j), and B_s(i,j) is described next.
The scalar RGB is determined by the Electro-Optical Transfer Function (EOTF). The EOTF represents the relationship between the scalar RGB and the input RGB coordinates by means of an LUT. The scalar RGB can be calculated by the following equations:

R_s(i,j) = LUT_R[R_in(i,j)]   (11)
G_s(i,j) = LUT_G[G_in(i,j)]   (12)
B_s(i,j) = LUT_B[B_in(i,j)]   (13)

where R_in(i,j), G_in(i,j), and B_in(i,j) represent the input color coordinates, and LUT_R, LUT_G, and LUT_B denote the look-up tables relating the input coordinates to the normalized output luminance of each channel.
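The per-pixel LUT look-up and its inverse (needed later during calibration) can be sketched as follows; the 256-entry gamma-2.2 table is an assumed stand-in for a measured EOTF.

```python
import numpy as np

# A hypothetical per-pixel EOTF stored as an LUT: digital input level ->
# normalized output luminance. A gamma-2.2 curve sampled at 256 entries
# stands in for the measured table of one pixel and one channel.
lut_r = (np.arange(256) / 255.0) ** 2.2

def scalar_r(r_in):
    """Eq. (11): R_s = LUT_R[R_in], the scalar (linear) red value."""
    return lut_r[r_in]

def inverse_scalar_r(r_s):
    """Inverse look-up used during calibration: the input level whose
    LUT entry is closest to the desired scalar value."""
    return int(np.argmin(np.abs(lut_r - r_s)))
```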

Minimization of non-uniformity across sub-displays
In tiled display systems, changes in luminance and color across two adjacent sub-displays may not be smooth, so non-uniformities can be noticed at the boundaries of sub-displays [2]. Therefore, color calibration across sub-displays is quite important for minimizing non-uniformity. Camera-based calibration methods for tiled display systems have been reported in [14][15][16]. In [14,15], non-uniformity in luminance among sub-displays was compensated; however, color differences between the sub-displays were not considered, and all measurements were assumed to be made in a darkroom. In [16], a color calibration method for various illumination environments was proposed. Target XYZ coordinates of full white at the center of each sub-display were defined; the full-white image was displayed on each sub-display and photographed by a camera, and the resulting RGB coordinates at the center of the display were converted to XYZ coordinates. Ratios of the target XYZ to the XYZ from the camera image were calculated, and the colors of the backlight were modified based on these ratios so that the full white at the center of all sub-displays yields the same color coordinates. This method may not reduce color differences across sub-displays because it focuses on white calibration at the centers of the sub-displays; additionally, the non-uniformity in luminance and color within each sub-display remains uncompensated.

In this paper, a color calibration method is proposed to satisfy all the target requirements, such as white point, gamut, and gamma, imposed on the tiled display system. In order to compensate for the non-uniformity in luminance and color within and across the sub-displays, color calibration is performed on a pixel-by-pixel basis. Furthermore, color calibration is independently and identically applied to each of the sub-displays. The proposed color calibration method is illustrated in Figure 6.
First, the color calibration target should be determined in advance. In this paper, the color calibration target consists of the white point, specified by the maximum luminance and CCT (correlated color temperature), the gamut, and the gamma. The maximum luminance of the white point of the tiled system is taken as the minimum luminance observed when a full-white patch is displayed on all of the sub-displays. It is assumed that the gamut is determined by the three XYZ coordinates of the full R, G, and B inputs; the minimum gamut among all pixels in the tiled display system can serve as the target gamut. Once the color calibration target is fixed, the transformation from the input RGB values of an image to the desired XYZ coordinates that satisfy the color calibration target can be defined as

[X_α^tar, Y_α^tar, Z_α^tar]^T = T_tar [(R_in)^γ, (G_in)^γ, (B_in)^γ]^T   (14)

where X_α^tar, Y_α^tar, and Z_α^tar are the target XYZ values, α specifies the color of the input RGB coordinates (R_in, G_in, and B_in), γ represents the gamma specified by the color calibration target, and T_tar is the matrix of primary tristimulus values of the target gamut. The primary luminances Y_r^tar, Y_g^tar, and Y_b^tar are calculated from the white point and gamut of the color calibration target through the constraint that the three primaries sum to the white point:

X_w^tar = X_r^tar + X_g^tar + X_b^tar
Y_w^tar = Y_r^tar + Y_g^tar + Y_b^tar   (15)
Z_w^tar = Z_r^tar + Z_g^tar + Z_b^tar
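Determining target primary luminances from a white point and gamut chromaticities reduces to a 3 × 3 linear system: each primary contributes Y_k · (x_k/y_k, 1, (1 − x_k − y_k)/y_k), and the three contributions must sum to the white point. A sketch with illustrative sRGB-like primaries and a D65 white, which are stand-ins rather than the paper's actual target values:

```python
import numpy as np

# Illustrative target gamut: sRGB-like primary chromaticities (x, y).
prim_xy = {'r': (0.640, 0.330), 'g': (0.300, 0.600), 'b': (0.150, 0.060)}
# Illustrative target white point: D65, with Y normalized to 1.0.
white = np.array([0.9505, 1.0, 1.089])

# Column k of M is the tristimulus direction of primary k per unit luminance.
M = np.column_stack([
    np.array([x / y, 1.0, (1 - x - y) / y]) for x, y in prim_xy.values()
])
# Solve M @ [Y_r, Y_g, Y_b] = white for the target primary luminances.
Y_rgb = np.linalg.solve(M, white)
```

For these primaries and white the solution is close to the familiar luma weights, with green carrying most of the luminance.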
where X_w^tar, Y_w^tar, and Z_w^tar represent the white point. Using Equations (14) and (15), the desired XYZ values of an image can be calculated. The next step is to determine the RGB values that will yield the desired XYZ values. The transformation matrix T in Equation (10) and the EOTF in Equations (11)-(13) are utilized for this purpose. The desired XYZ values are first converted to normalized output luminance values of RGB by the inverse of Equation (10); in Equation (10), R_s(i,j), G_s(i,j), and B_s(i,j) are exactly the normalized output luminance values that express the desired XYZ values. Finally, R_s(i,j), G_s(i,j), and B_s(i,j) are converted to RGB values: the calibrated, or modified, RGB values are obtained through the pixel-specific inverse EOTF of the tiled display system.
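This per-pixel inverse mapping, the inverse of Equation (10) followed by an inverse EOTF, can be sketched for a single pixel; the matrix T and the gamma-2.2 EOTF used here are hypothetical stand-ins for measured per-pixel quantities.

```python
import numpy as np

# Toy display matrix T for one pixel (columns: primary XYZ contributions).
T = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])

def calibrate_pixel(xyz_target, T, gamma=2.2):
    """Return 8-bit RGB driving values that reproduce xyz_target."""
    rgb_s = np.linalg.solve(T, xyz_target)         # inverse of Eq. (10)
    rgb_s = np.clip(rgb_s, 0.0, 1.0)               # stay inside the gamut
    rgb_in = np.round(255 * rgb_s ** (1 / gamma))  # inverse EOTF (assumed)
    return rgb_in.astype(int)

# Ask the pixel to reproduce its own full-white output, T @ [1, 1, 1].
rgb = calibrate_pixel(T @ np.ones(3), T)
```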
The aforementioned procedure is applied independently to each sub-display, with the same color calibration target for all sub-displays. When the calibrated image is displayed on the tiled display system, every pixel should satisfy the color calibration target. Therefore, the non-uniformity in luminance and color across the sub-displays is minimized, and the non-uniformity within each sub-display is minimized as well.

Results of performance evaluations
For the performance evaluation of the proposed method, the 24 color patches of the Macbeth color checker illustrated in Figure 7 are utilized as testing color samples [17]. The camera utilized in this study is a Canon EOS Kiss Digital X3 with an 18 mm lens. The exposure settings are an aperture of f/22, ISO 200, and a shutter speed of 0.4 s. This setting was selected because the smaller aperture (higher f-number) reduces the vignetting effect, and the low ISO value makes the camera less sensitive to noise than a high ISO value would.

Accuracy of camera characterization at its center position
The transformation from device-dependent RGB coordinates to device-independent XYZ coordinates at the center of the camera is described in Section 2.2. The accuracy of this camera characterization is verified in a darkroom. Figure 8 illustrates the procedure for the verification experiments. The procedure in Figure 8 is applied to the 32 training samples described in Section 2.2 and the 24 testing samples of the Macbeth color checker. It should be mentioned that the training samples are utilized to estimate the transformation matrix H in Equation (8), whereas the testing samples are not used to construct the characterization model. A 46-inch LCD display with LED backlights is utilized in the experiments. The performance of the characterization model is evaluated by calculating the color difference between the ground truth data measured by the spectroradiometer and the values estimated by the proposed model. The color differences are calculated in the CIE L*a*b* space and listed in Table 1. The average color differences of the training and testing samples are 0.59 and 0.57, respectively. These figures indicate that the XYZ values converted from the camera RGB values are almost the same as the XYZ values measured by the spectroradiometer.
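The reported figures are CIE 1976 color differences, i.e. Euclidean distances in L*a*b*; values below roughly 1 are generally considered imperceptible. A minimal sketch with arbitrary Lab triples:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE 1976 color difference: Euclidean distance in L*a*b*."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

# Two nearby colors (arbitrary values); their difference is well under 1,
# the same regime as the averages 0.59 and 0.57 reported in Table 1.
dE = delta_e_ab([52.0, 10.0, -6.0], [52.3, 10.4, -6.3])
```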

Camera vignetting compensation performance
In this paper, camera vignetting effects are corrected by the two-dimensional weights determined by Equations (1)-(3); the vignetting correction itself is performed by Equation (4) or (8). Two 46-inch LCD displays with LED backlights are utilized in the experiments. One display is utilized to derive the two-dimensional weights. The other display, referred to here as the testing display, is utilized to verify the performance of the proposed vignetting compensation method. The performance of the proposed method is compared with the conventional wavelet-denoising LUT method [11].
The difference between the proposed method and the method in [11] can be summarized using Equation (8). Both utilize the same transformation matrix H obtained under the criterion in Equation (6). The proposed method utilizes the weight matrix W determined by Equations (1)-(3), whereas in [11] the weight matrix is obtained from an image of constant white captured by the camera under a normal indoor illumination environment. The proposed method and the wavelet-denoising LUT method [11] are applied to each of the 24 testing samples of the Macbeth color checker, and the performance is evaluated by two different measures. First, the luminance difference between the ground truth data, measured from the testing display using the spectroradiometer, and the luminance estimated by each vignetting correction method is calculated along the dotted line in Figure 9. Figure 10 illustrates four plots of luminance values along the horizontal positions. The solid line represents the luminance data measured by the spectroradiometer, which serves as the ground truth. The plot of Y_w^cam(i,j), specified by Equation (7), represents the luminance data before vignetting correction. The remaining two plots show the luminance data compensated by the proposed method and by the wavelet-denoising LUT method [11]. The luminance differences averaged over the 24 testing samples of the Macbeth color checker are listed in Table 2. The proposed vignetting compensation method yields an average luminance difference of 2.78, which indicates that it outperforms the wavelet-denoising LUT method [11].
Second, the performance of vignetting compensation is evaluated by the color differences at the nine points (P_1 – P_9) illustrated in Figure 9. The color differences are calculated in CIE L*a*b* for each of the 24 testing samples of the Macbeth color checker and then averaged. Figure 11 illustrates the color differences at the nine points. The proposed method yields smaller color differences at all nine points and does not exhibit significant positional variations. It can be noticed that the color difference at the center (P_5) is the smallest among the nine points because almost no vignetting is observed at the center position. Unlike the proposed method, the wavelet-denoising LUT method in [11] shows relatively wide variations in the color difference over the nine positions. The method in [11] utilizes plain white paper as the reference patch under normal indoor lighting, assuming spatial uniformity of the reflected luminance. However, this assumption is not valid for the ordinary copy paper utilized in this experiment, which can explain why the wavelet-denoising LUT method [11] exhibits positional performance variations.

Display color calibration performance in various illumination environments
The illuminance values used in this experiment are 0, 400, 1000, 10,000, and 45,000 lux. Note that 0 lux is the illuminance of a darkroom and 400 lux represents the illuminance at sunrise or sunset on a clear day; 1000 lux corresponds to an overcast day, 10,000 lux to full daylight (not direct sun), and 45,000 lux to direct sunlight. In this experiment, the color temperatures D50, D55, D65, and D75 are utilized [18]. The experiment at 45,000 lux is performed outdoors; the remaining illumination conditions are simulated using LED studio lights and metal halide lights. The two LCD displays illustrated in Figure 12 are utilized in this experiment. They exhibit different gamuts and maximum luminance values of 180 and 198 cd/m². Again, the 24 color patches of the Macbeth color checker illustrated in Figure 7 are utilized as testing color samples. A display device characterization model is constructed for each display according to the procedure described in Section 3. The color calibration target is listed in Table 3, and the color correction procedure explained in Section 4 is applied to both displays independently and identically. Factors that may affect the performance of the proposed color correction method are the illumination environment (illuminance and CCT), the displayed colors, and the pixel positions. Color correction performance is evaluated by the color differences calculated at the locations illustrated in Figure 13; the color differences are calculated in CIE L*a*b*. Figure 14 illustrates the color differences for different levels of illuminance and CCT, calculated at P_A in Figure 13 and averaged over the 24 testing colors. In Figure 14(a), the effect of the illuminance level on the color differences can be verified; the CCT for all five illuminance levels is D65.
The average color differences over the five illuminance levels for 'before calibration', the method in [14,15], the method in [16], and the proposed method are 22.89, 24.34, 5.86, and 2.34, respectively. In Figure 14(b), the results with four different CCTs are illustrated. In this case, the illuminance level is 1000 lux. The average color differences over the four CCTs for 'before calibration', the method in [14,15], the method in [16], and the proposed method are 22.89, 22.54, 5.73, and 2.28, respectively. The experimental results show that the proposed method is very effective in suppressing center-to-center color differences between displays. It is known that a color difference of less than 3 is hard to perceive. The method in [14,15] is applicable only in a dark room and therefore yields its smallest color difference at 0 lux. Even so, its color differences are much larger than those of the proposed method. In addition, because it considers only the brightness difference and not the color difference between the displays, its calibration performance is poor. The method in [16] reduces the color difference to around 5, but this figure is still greater than that of the proposed method. Figure 15 illustrates the color-specific calibration performance. The illumination condition for this experiment is D65 at 1000 lux. The horizontal axis represents the 24 color patches specified in Figure 7. The method in [16] focuses on center-to-center white correction, which can explain why the color difference is small for the gray patches (19-24). However, because this method does not consider the gamut variation between the two displays, the color differences for the chromatic color patches are larger. In contrast, the proposed method results in significantly smaller color differences for all 24 colors.
This may be due to the fact that the proposed method performs color and luminance corrections using the color calibration target, whereas the method in [16] is designed only to minimize the center-to-center white point difference.

Color correction performance at the boundary of sub-displays
Color calibration performance is evaluated at the four positions specified in Figure 13. The illumination condition for this experiment is D65 at 1000 lux. Table 4 lists the average color differences. The proposed method yields the smallest color difference, 2.22, at the center location. Unlike the method in [16], it shows little positional dependency in the color differences, because the proposed method performs position-dependent color correction.

Conclusion
Display color calibration is quite important for tiled display systems because undesirable artifacts can be perceived at the boundary of two adjacent sub-displays. Furthermore, routine color calibration under various illumination environments is often required. In this paper, a simple and cost-effective camera-based display color calibration method is proposed. The proposed display characterization model is position dependent and illumination independent. The proposed color correction method is applied independently to each sub-display. When the calibrated image is displayed on the tiled display system, every pixel should satisfy the color calibration target. Therefore, the proposed method can minimize the non-uniformity in luminance and color across sub-displays, and it can also reduce the smoothly varying spatial non-uniformity within each sub-display. To apply the proposed method to outdoor signage applications, the effect of the spatial distance and angle between the tiled display system and the camera position should be further examined.

Disclosure statement
No potential conflict of interest was reported by the authors.

Funding
This work was supported by the DMC R&D center, Samsung Electronics.