Near-real time forest change detection using PlanetScope imagery

ABSTRACT Monitoring forest disturbances at sub-annual scales is a key challenge in combating global deforestation. For this purpose, the new PlanetScope nano-satellite constellation is a game changer, with a revisit time of 1 day and a pixel size of 3 m. We present a near-real time forest disturbance alert system based on PlanetScope imagery: the Thresholding Rewards and Penances (TRP) algorithm. It produces a new forest change map as soon as a new PlanetScope image is acquired. To calibrate and validate TRP, a reference set was constructed as a complete census of five randomly selected study areas in Tuscany, Italy. We processed 572 PlanetScope images acquired between 1 May 2018 and 5 July 2019. TRP was used to construct forest change maps during the study period, for which the final user's accuracy was 86% and the final producer's accuracy was 92%. In addition, we estimated the forest change area using an unbiased stratified estimator that can be used with a small sample of reference data. The 95% confidence interval for the sample-based estimate of 56.89 ha included the census-based area estimate of 56.19 ha.


Introduction
Monitoring forest changes and areas of change at sub-annual scales using satellite imagery is an increasingly important part of initiatives aimed at reducing global deforestation, primarily because sub-annual results can support a fast response to illegal deforestation (Hamunyela et al., 2016). However, existing forest disturbance detection approaches suffer from several limitations. Most have aimed at producing only yearly products, most commonly an annual forest disturbance map. For example, Landsat time series analyzed with spectral trajectory systems were used to map yearly clearcuts in several regions including western Oregon (Schroeder et al., 2007), Canada (Hermosilla et al., 2015), Finland (White et al., 2018) and Italy. Another example is the Global Forest Change (GFC) map (Hansen et al., 2013), obtained using a machine learning approach to map tree cover extent, loss, and gain at the global scale, but only on an annual basis.
A limitation of many remote sensing algorithms is that the output products cannot be obtained immediately at the end of the study year. Examples include studies based on the Landtrendr approach (Kennedy et al., 2010), a set of algorithms commonly used to predict forest changes based on spectral trajectories analysed using multitemporal optical images, most commonly a Landsat Time Series (LTS). Landtrendr has been used to reconstruct historical forest cover change in the Lower Amazon Floodplains (Fragal et al., 2016) and to study the dynamics of vegetation disturbance and recovery in surface mining areas. Landtrendr classification performance was improved by adding a secondary classification using random forests (Cohen et al., 2018). However, because Landtrendr searches both for a decrease in photosynthetic activity due to the disturbance and for post-disturbance recovery, the accuracy of the method decreases drastically when the analysis is conducted for a recent year. Under these conditions, the post-disturbance spectral recovery trend cannot be easily interpreted for purposes of distinguishing between areas of permanent land use change and areas temporarily unstocked due to forest logging or forest fires (Kennedy et al., 2010).
To support the monitoring of illegal logging, forest changes must be detected as soon as possible, and land surface dynamics must be characterized with high temporal frequency at fine spatial scales (Zhu et al., 2016). This information is essential for forest monitoring activities in heterogeneous territories such as Italy, where clearcuts are often smaller than 1 ha (Giunti, 2011) and where illegal harvesting of very small forest patches can be concluded in only a few days.
To date, a comprehensive method for monitoring forest change in near real-time is not available, primarily because traditional optical satellite sensors do not provide images with sufficiently short revisit times and sufficient spatial resolution to predict small-area forest change (Gevaert & Javier Garcia-Haro, 2015a). Fine resolution optical images useful for mapping forest change are acquired by multiple satellite platforms with a variety of temporal frequencies: CBERS (26 days), SPOT (26 days), IRS (25 days), Landsat (16 days) and Sentinel-2 (2-5 days depending on latitude). Although these information sources can be used for efficient annual forest change mapping using change detection methods, none acquires images frequently enough for near real-time monitoring (Xin et al., 2013) or for supporting operational fast reaction measures, especially in cloudy regions (Hirschmugl et al., 2017).
Further, moderate resolution imagery such as MODIS has been demonstrated to be unsuitable for forest disturbance mapping (Hammer et al., 2014; Hansen & Loveland, 2012; Morton et al., 2005) because it misses as much as 50% of forest disturbances when compared to fine resolution satellite data. To take advantage of both MODIS's revisit time (1 day) and Landsat's 30 m spatial resolution, several data fusion methods have been tested in recent years (Feng Gao et al., 2006; Fu et al., 2013; Gevaert & Javier Garcia-Haro, 2015a; Huang & Zhang, 2014; Song & Huang, 2013; Wu et al., 2012; Zhu et al., 2010). However, because predicting forest change is based on comparison of data for the same area at different times (Lhermitte et al., 2011), and because a substantial proportion of the spectral signal of each moderate resolution pixel may be contaminated by areas not associated with forest change (Tan et al., 2006), we agree with Townshend et al. (2000) that a moderate resolution per-pixel comparison is problematic for monitoring forest change.
The number of valid observations can be increased using Synthetic Aperture Radar (SAR) images, which can penetrate clouds (De Sy et al., 2012). For example, Reiche et al. (2018) obtained an average of approximately four valid per-pixel observations per month by integrating Sentinel-1, Landsat and ALOS-2 PALSAR-2 imagery. They detected large forest harvests in Bolivia with a mean time lag (MTL) between event occurrence and event detection of 31 days ± 6.35 days and a user's accuracy of 88%. A direct relationship between MTL and accuracy was reported: specifically, for an MTL of 22 days ± 6.35 days, the user's accuracy decreased to 53.3%. Thus, accuracies associated with the shorter MTLs required for near-real time monitoring are not sufficient. Further, the efficiency of radar systems is known to be strongly affected by environmental and topographic conditions, particularly orographic obstacles that affect the quality of estimated radar products (Vulpiani et al., 2012).
In summary, automated methods based on active or passive sensors that can accurately predict forest change in near-real time over large heterogeneous landscapes still suffer from multiple shortcomings.
The new PlanetScope nano-satellite constellation represents a potential game changer in this field. The PlanetScope mission (https://www.planet.com/) was initiated by Planet Labs Inc., an Earth imaging company based in San Francisco, USA, using multiple launches of groups (flocks) of nano-satellites designated CubeSat 3U (Leach et al., 2019). These nano-satellites, called "Doves", measure approximately 10 cm × 10 cm × 30 cm and are equipped with a relatively simple multi-spectral camera that acquires data in four bands in the visible and NIR channels (between 455 and 860 nm). Starting from the first flock deployed 22 June 2016, the number of satellites has steadily increased to 149 on-orbit satellites as of September 2019, thereby offering a capacity for collecting data for 200 million km²/day. PlanetScope satellites are all in the same orbit at an altitude of 400 km, considerably lower than the Landsat (700 km) and Sentinel-2 (780 km) satellites. PlanetScope satellites offer an unprecedented combination of 3-m spatial resolution and 1-day temporal resolution and represent a crucial technological advance for developing near-real time remote sensing systems. Despite these potential advantages, a Scopus-based literature search revealed no applications of PlanetScope imagery for forest disturbance monitoring and mapping.
Currently, Italy has no system for recording forest logging at the national level, despite the urgent need for official statistics to support forest planning strategies, especially in the context of greenhouse gas assessments. The only current consistent source of information available at the national level is the National Forest Inventory (NFI) which, however, is regularly updated only every 10 years (Tedeschi & Lumicisi, 2006). The Italian office of statistics (ISTAT) produced forest logging statistics based on administrative information until 2017, but ended the service following criticism related to the underestimation of logging activities (Chirici et al., 2011;Cruciani, 2017).
In this study, we present a new Thresholding Rewards and Penances (TRP) algorithm using PlanetScope imagery for near-real time forest change detection. By means of a reinforcement learning concept, TRP outputs updated forest change maps in near-real time, i.e. each time a new PlanetScope image is acquired. We tested TRP in five randomly selected study areas in Tuscany, Italy. All forests were coppices harvested by clearcuts. One study area was used to develop and calibrate TRP and the remaining study areas were used as never-seen-before data to validate the algorithm with respect to both map accuracy and the precision of estimates of areas of change.
The primary aims of this work are twofold: (i) to assess the accuracy of the TRP maps and (ii) to construct confidence intervals for the clearcut area of the form μ ± 2·SE(μ), where μ is the estimate of area and SE(μ) is the standard error of the estimate. Thus, the focus of the second aim is μ and SE(μ).

Study area and reference data
The test was carried out in five randomly located quadrat study areas in Tuscany, for a total of about 4600 ha (Figure 1). The areas were constructed by randomly selecting five forest plots of the local forest inventory of Tuscany, which then served as quadrat centers. All forests within the study areas consist of broadleaved species managed as coppices and harvested by clearcuts with a maximum size, by law, of 20 ha (Regione Toscana, 2003/08/08, n.48\R).
In each area, we first masked out non-forest areas and temporarily unstocked areas that had been logged before the study period. The result was a total of 4450 ha, or equivalently 97% of the total area, covered by forests of which 3783 ha were undisturbed and considered for the current study.
A reference geodatabase was constructed using the method described herein. In cooperation with the local forest authority, we mapped all forest disturbances that occurred between 1 September 2018 and 5 July 2019, identifying forest loggings but not areas disturbed by forest fires or wind damage. We obtained a reference geodatabase of 42 vector polygons (88 ha): 10 clearcuts totaling 32 ha in area one, 6 clearcuts totaling 11 ha in area two, 6 clearcuts totaling 24 ha in area three, 15 clearcuts totaling 13 ha in area four and 5 clearcuts totaling 8 ha in area five (Figure 1). Reference mapping was conducted by photointerpretation of PlanetScope imagery, followed by a field verification campaign for all polygons mapped as logged. As a result, we obtained a census-based reference geodatabase considered free of both commission and omission errors. Among the five areas, we randomly selected one area to calibrate our algorithm (blue area in Figure 1) and the remaining four areas to validate the method and assess its performance.

Figure 1. Top, the locations of the five study areas; bottom, reference clearcuts mapped for each study area.

PlanetScope imagery
For photointerpretation work and development of our automated procedure, we used only PlanetScope images freely available on-line (Leach et al., 2019). The Planet Tile Services API (https://developers.planet.com/docs/basemaps/tile-services/) was used on a daily basis to access the 8-bit RGB visible spectrum bands. The original images have a 3-m resolution which, after post-processing geometric co-registration, was converted to a pixel size of 3.46 m. Images with percentages of valid pixels less than 70% (Wiering et al., 2011) were filtered out. We refer to the remaining images as Valid Images (VI). The filtering entailed constructing a mask for each image by identifying dense clouds, defined as pixels with digital numbers greater than a given threshold (184) for at least one band, and shadows, defined as pixels with values less than a given threshold (33) for at least one band. The result was a geodatabase of 572 GeoTIFF images (Figure 2) acquired between 1 May 2018 and 5 July 2019: 99 images for area 1, 124 images for area 2, 100 images for area 3, 77 images for area 4 and 172 images for area 5.
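The valid-image filtering described above can be sketched as follows. This is a minimal illustration assuming 8-bit RGB arrays; the band thresholds (184 for dense cloud, 33 for shadow) and the 70% validity cut-off are taken from the text, while the function and variable names are our own.

```python
import numpy as np

CLOUD_DN = 184          # pixels brighter than this in any band -> dense cloud
SHADOW_DN = 33          # pixels darker than this in any band  -> shadow
MIN_VALID_FRACTION = 0.70

def valid_pixel_mask(rgb):
    """rgb: (rows, cols, 3) uint8 array. True where the pixel is usable."""
    cloud = (rgb > CLOUD_DN).any(axis=-1)
    shadow = (rgb < SHADOW_DN).any(axis=-1)
    return ~(cloud | shadow)

def is_valid_image(rgb):
    """Keep the image only if at least 70% of its pixels are valid."""
    return valid_pixel_mask(rgb).mean() >= MIN_VALID_FRACTION
```

An acquisition pipeline would apply `is_valid_image` to each daily tile and retain only the VI for the stack.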
We used the open source JavaScript library Leaflet (Cheng et al., 2018) for all preprocessing steps. For purposes of using the image data to distinguish between forest change and undisturbed areas, we tested several indices that can be calculated from the visible band data at our disposal. The selection of the "best" index was performed by calculating several indices from the remote sensing Index Database (https://www.indexdatabase.de/) over the first area and by checking through photointerpretation which index made the clearcuts most evident. We selected the Hue index (https://www.indexdatabase.de/db/i-single.php?id=34) (eq. 1).
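Eq. 1 is not reproduced in this excerpt. As an illustration only, a common colorimetric hue formulation (close to that used by Escadafal et al., 1994) is sketched below; the function name is ours, and whether this matches the Index Database definition exactly should be verified against eq. 1.

```python
import numpy as np

def hue_index(r, g, b):
    """Per-pixel hue (radians) from the visible bands, given as float arrays.
    Standard colorimetric form: atan2(sqrt(3)*(G - B), 2R - G - B)."""
    return np.arctan2(np.sqrt(3.0) * (g - b), 2.0 * r - g - b)
```

For a pure-red pixel (R > G = B) this formulation returns a hue of 0 radians.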
The Hue index was used to study the degradation of arid natural environments in Tunisia (Escadafal et al., 1994) and, although calculated using a slightly different formula, to model soil organic matter content to support soil fertility management plans in Nepal (Mandal, 2016). However, as far as we know, it has never been used to map forest change, and we have no information on its sensitivity to spectral change due to non-clearcut causes such as burned areas or areas damaged by windstorms.
In contrast to the Normalized Burn Ratio (NBR) index (Key & Benson, 2006), the Hue index has values for clouds that are considerably less than values for harvests. This feature eliminates the need to mask out clouds from our images. Thus, additional errors due to a cloud removal algorithm were not introduced, and the number of valid observations was increased by using even pixels partially covered by moderately dense clouds.

Normalization
Because images can be acquired under different illumination conditions, normalization is a relevant issue when images acquired with optical sensors are used for time series analysis (Leach et al., 2019). We used a softmax normalization method by applying the softmax function (eq. 2) to each pixel, Hue_px, of each image, where the denominator term is the sum over all pixels in the image. The softmax function is a well-known normalization method for data mining applications (Torgo, 2016), but we were unable to find references to previous applications in remote sensing image normalization. The normalization process resulted in a stack for each area in which each pixel of each image has values ranging between 0 and 1. For future reference, we refer to this stack as the Softmax Normalized images Stack (SNS).
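One plausible reading of eq. 2, in which each pixel's exponential is divided by the sum of exponentials over the whole image, can be sketched as follows. The function name is ours and the exact form should be checked against eq. 2 in the paper.

```python
import numpy as np

def softmax_normalize(hue):
    """hue: 2-D float array of Hue index values for one image.
    Returns values in (0, 1) that sum to 1 over the image."""
    shifted = hue - hue.max()   # subtract the max for numerical stability
    e = np.exp(shifted)
    return e / e.sum()
```

Applying this to every image of an area yields the SNS stack referred to in the text.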

The classification algorithm
The Thresholding Rewards and Penances (TRP) concept is based on a group of algorithms characterized as reinforcement learning algorithms (Hammoudeh, 2018; Wiering et al., 2011). Each time a new PlanetScope image is acquired, pixels are analyzed independently and receive either (1) a "reward", indicating that a clearcut has probably been detected at that pixel, or (2) a "penance", indicating that the pixel did not register anything unusual. Rewards and penances do not depend on comparison with a reference dataset, as in supervised learning, but on a basic threshold applied to each image. More specifically, a pixel receives a reward if its change value in the SNS is greater than the threshold; otherwise it receives a penance. When the rewards a pixel has received reach a target, the pixel is finally classified as forest change. This kind of algorithm requires that the system has "memory", in our case a record of the history of each pixel, which we store in a specific layer called the Memory Layer (ML). Below, and in Figure 3, we detail the TRP workflow.
In the SNS images, pixels associated with forest changes have normalized Hue index values that are larger than values for undisturbed, stable forest. However, images for which clearcuts are covered by dense clouds are not useful. Furthermore, some commission errors occur when increased Hue index values are due to noise resulting from atmospheric conditions and sun angle changes during the year. We used a very dense time series typical of the PlanetScope imagery to increase the signal-to-noise ratio and to remove such errors using TRP, thereby producing a more stable forest change map each time a new PlanetScope image is available.
In the first step, TRP produces a new synthetic time series, ΔSNS, by calculating differences between the Hue index for the i-th image (SNS_i) in the SNS and the median, s, of the distribution of Hue indices for images acquired during the May-August 2018 period (eq. 3), the leaf-on season for the deciduous species targeted by the study. We considered s as the pre-disturbance condition because our reference clearcuts were conducted starting from September, and we decided to predict clearcuts starting from September.
In ΔSNS_i, pixels that are unchanged since the previous summer have small absolute values of SNS_i − s; pixels belonging to forest changes have large values, as long as the pixels are not in low light conditions or covered by clouds. TRP uses three input parameters: i) th, a threshold value between 0 and 1 applied to ΔSNS; ii) pn, the penance, with values between −1 and 0; and iii) tg, the target, whose values may be greater than 1. The threshold th is applied to each image in the ΔSNS time series. Pixels with values greater than th receive a reward (1 is added), while pixels with values less than th receive a penance (pn is subtracted). Rewards and penances are progressively calculated and recorded in a new raster image called the Memory Layer (ML) each time a new PlanetScope image is acquired and processed. Finally, only ML pixels with values of at least tg are classified as forest change. The classification algorithm is outlined in Figure 4.
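Under our reading of the workflow above, the per-image TRP update can be sketched as a single step; the names (`trp_update`, `ml`) are ours, and the default parameter values correspond to the calibrated values reported later in the paper.

```python
import numpy as np

def trp_update(ml, sns_i, s, th=0.30, pn=-0.35, tg=1.5):
    """One TRP step for a newly acquired image.
    ml:     Memory Layer, 2-D float array, updated in place
    sns_i:  softmax-normalized Hue image SNS_i
    s:      pre-disturbance median of the leaf-on SNS images
    Returns the current boolean forest-change map (ML >= tg)."""
    delta = sns_i - s                 # ΔSNS_i
    reward = delta > th
    ml[reward] += 1.0                 # reward: likely clearcut signal
    ml[~reward] += pn                 # penance (pn is negative)
    return ml >= tg
```

Looping this over the image stack re-emits an updated change map each time a new PlanetScope acquisition arrives, as described in the text.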

Optimization
Optimization of the algorithm entails searching for values of the th, pn and tg parameters that produce the greatest values of the Matthews Correlation Coefficient (MCC) (Matthews, 1975):

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives and FN is the number of false negatives.
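For reference, MCC can be computed directly from the confusion-matrix counts; this is a generic implementation, not the authors' code.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts.
    Returns 0.0 when the denominator is zero (degenerate matrix)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

MCC is 1 for perfect agreement, −1 for perfect disagreement, and near 0 for random predictions.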
To avoid overfitting and to ensure model generalizability, we heuristically searched for the optimal combination of the three parameters using the reference data for only the first of the five areas, whereas the other four areas were used for independent validation. We ran the algorithm with 100 different parameter combinations consisting of th values between 0.1 and 0.4 with a step of 0.1, pn values between −0.2 and −0.8 with a step of 0.15, and five values of tg between 1.5 and 6.5. The range of parameter values tested was based on our knowledge of the algorithm and on preliminary analyses.
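The heuristic search can be sketched as an exhaustive evaluation of the 100 combinations. The five tg values are assumed evenly spaced between 1.5 and 6.5 (the spacing is not stated in the text), and `run_trp_and_score` is a hypothetical stand-in for running TRP over the calibration area and computing MCC against the reference data.

```python
import itertools

def grid_search(run_trp_and_score):
    """Return the (th, pn, tg) combination with the highest score.
    run_trp_and_score(th, pn, tg) -> MCC on the calibration area."""
    ths = [0.1, 0.2, 0.3, 0.4]                  # step 0.1
    pns = [-0.2, -0.35, -0.5, -0.65, -0.8]      # step 0.15
    tgs = [1.5, 2.75, 4.0, 5.25, 6.5]           # assumed even spacing
    return max(itertools.product(ths, pns, tgs),
               key=lambda p: run_trp_and_score(*p))
```

With the 4 × 5 × 5 grid above, exactly 100 combinations are evaluated, matching the count reported in the text.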

TRP performance assessment
TRP performance was evaluated using the four validation areas, which comprise an independent dataset. Using a confusion matrix analysis, user's and producer's accuracies were estimated as shown in Table 1. Three values for each of the total clearcut (CC) and Forest areas were calculated and compared. First, the total CC and Forest areas were calculated using the reference data. Because we have a census-based reference dataset, and because it is assumed to be free of error, these CC and Forest areas serve as true values. Second, we calculated the sum of the areas of the map units classified as CC and Forest. These are intuitive estimates, but this estimator, sometimes characterized as pixel counting, is biased because it does not account for classification error. Third, we used the sample-based, unbiased stratified estimator of the CC class area formulated by Olofsson et al. (2014),

Â_1 = A_tot × (wt_1 × p̂_1 + wt_2 × p̂_2)    (5)

with standard error

SE(Â_1) = A_tot × √(wt_1² × p̂_1(1 − p̂_1)/(n_1 − 1) + wt_2² × p̂_2(1 − p̂_2)/(n_2 − 1))    (6)

where A_tot is the total area, wt_1 is the proportion of the total area in the CC map class, wt_2 is the proportion of the total area in the Forest map class, and p̂_1 and p̂_2 are the proportions of the reference CC observations in the CC and in the Forest map classes, respectively (Table 1). The estimator of the area of the Forest class is formulated analogously but with p̂_1 and p̂_2 calculated for the Forest map class rather than the CC map class. The third value was based on a stratified random sample consisting of 500 pixels randomly selected from mapped clearcut areas and 500 pixels randomly selected from undisturbed forest areas, i.e., n_1 = n_2 = 500 pixels in Table 1. Although we have a census-based reference dataset, with this approach we simulated having a reference dataset of just 1000 points sampled from the predicted clearcut map. A stratified sample was used because the combination of simple random or systematic sampling and a small CC map area would produce unacceptably small sample sizes for the CC map class. This stratified sample-based estimator is an efficient approach that can be used when acquisition of a census-based reference set is not feasible.

Figure 4. PlanetScope VI availability during the study period in the different areas and for the different months, where green denotes the average over the five areas. In the upper right corner is the histogram of VI available over all the months and all the study areas.
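A minimal sketch of the stratified estimator (Olofsson et al., 2014) described above follows; the variable names mirror the text (wt_h, p̂_h, n_h), but this is an illustration rather than the authors' implementation.

```python
import math

def stratified_area(a_tot, wt, p, n):
    """Stratified estimate of a class area and its standard error.
    a_tot: total area (e.g. ha)
    wt:    map-class area proportions, one per stratum (here CC and Forest)
    p:     proportions of reference observations of the target class
           within each map class
    n:     per-class reference sample sizes
    Returns (area_estimate, standard_error)."""
    area = a_tot * sum(w * ph for w, ph in zip(wt, p))
    var = sum(w**2 * ph * (1.0 - ph) / (nh - 1)
              for w, ph, nh in zip(wt, p, n))
    return area, a_tot * math.sqrt(var)
```

For example, if the CC map class covers 20% of the area and all CC reference observations fall inside it, the estimate reduces to 20% of the total area with zero standard error.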

PlanetScope image availability
The number of PlanetScope VI for the five areas ranged between 77 for area 4 and 172 for area 5. The revisit time is strictly related to cloudiness and the number of in-orbit PlanetScope satellites. During the study period, increases in the number of satellites did not substantially affect the number of VI available. We acquired an average of 8 VI per month, ranging between 1 and 20 images per month (Figure 4). As expected, meteorological conditions played a major role in the availability of valid observations. November and December, typically rainy months in Italy, have the smallest numbers of VI, with averages over the five areas of 3.6 and 4.6 images, respectively. May 2018 was also rainy and resulted in a limited number of VI, ranging between 1 and 8 over the five areas.

Calibration
The optimal TRP parameter combination was th = 0.30, pn = −0.35, and tg = 1.5. As expected, algorithm performance was sensitive to the th parameter. Accuracy calculated using MCC was always greater than 0.75 for th = 0.30, independently of the other parameters. When th increased, the number of commission errors decreased, but the number of omission errors increased. Conversely, for small values of th, MCC was consistently small but with almost no omission errors. The parameters pn and tg had substantial effects on the time lag between clearcuts and their detection with the TRP algorithm. However, because MCC was evaluated at the end of the time series, they did not consistently affect performance. In general, with greater parameter values, commission errors were fewer, but a longer image time series was needed to achieve small numbers of omission errors; with smaller parameter values, fewer forest disturbances were detected. With smaller numbers of images in the time series, numbers of omission errors were small, but the risk of commission errors was large.

Validation
The forest change maps obtained with parameters th = 0.30, pn = −0.35, and tg = 1.5 at the end of the study period in the four validation areas, and related omission and commission errors, are reported in Figure 5.
The validation analysis results are shown in Tables 2 and 3. As is apparent in Figure 5, omission errors due to false negatives, totaling 4.7 ha (see Table 2), were always located at the borders of the clearcuts, while commission errors due to false positives (8.6 ha) were small isolated patches. The similarity in results for the different areas in Table 2 suggests that our calibration for the first area is also appropriate for the other areas and that our model is potentially generalizable and applicable to different and larger areas.

(Table 1 notes: n_·1 = n_11 + n_21 and n_·2 = n_12 + n_22; producer's accuracies pa_1 = n_11/n_·1 and pa_2 = n_22/n_·2; N_h is the total number of map units in the h-th map class.)
In Table 3 we report comparisons between the map-based estimate, the sample-based estimate and the census-based measure. For the 3035 ha of forest area in the four validation areas, 56.19 ha (1.8% of the forest area) were classified as forest change on the basis of the reference data. Our approach produced a forest change map that depicted 60 ha of clearcut, an overestimation of 6.6%. The key results were that all three CC areas were similar, as were the census- and sample-based user's accuracies. Map and sample producer's accuracies were slightly different because the CC and Forest sample sizes are unbalanced. Indeed, using a random sample, errors in the CC class (FN) become extremely rare (the CC producer's accuracy increases), while Forest sample units correctly classified (TN) inevitably decrease more than misclassified samples (FP) (the Forest producer's accuracy decreases). The similarity in the census- and sample-based CC areas is as expected because the sample-based stratified estimator is unbiased. The similarity with the map-based estimate is attributed to the large map accuracies. Although the census area and accuracy estimates are without uncertainty, users may not wish to commit the time and effort to acquire a complete census of reference data. If not, the sample-based stratified approach described by Table 1 and Eqs. (5) and (6) is a viable alternative. The standard errors reported in Table 3 correspond to sample sizes of 500 for each map class (sampling intensity = 0.04%). Larger sample sizes would produce smaller standard errors, while smaller sample sizes would produce larger standard errors. Sample sizes should be large enough to produce at least 10-20 observations in each cell of Table 1.

Discussion
The TRP algorithm using PlanetScope imagery represents an efficient method for mapping forest changes in near real-time and for supporting forest police controls aimed at quickly identifying illegal activities. Because of its daily frequency and fine spatial resolution, the new PlanetScope mission represents a game changer for near-real time forest disturbance monitoring, and TRP makes the most of this new technological advancement. An advantage of our method is that it does not require a long time series but only a few days of imagery. In fact, on 5 July 2019 we were able to map clearcuts conducted since September 2018, in addition to some clearcuts conducted in early July 2019. The promptness with which we obtained the map and the fine resolution of our products complicate comparisons with results obtained using different approaches. For example, the GFC (Hansen et al., 2013) is not yet available for 2019 and does not have comparable spatial resolution.

TRP preprocessing
For imagery preprocessing, we found that the softmax normalization method (eq. 2) worked well for maximizing the spectral distance between harvested and undisturbed forests in each image. We also maximized the number of PlanetScope valid observations without a-priori cloud masking because the Hue index (eq. 1) detects forest change despite thin clouds. However, clearcuts covered by dense clouds in several subsequent images received too many penances from our algorithm and were temporarily classified as "undisturbed forest". Therefore, we had to exclude some images with too few valid observations (less than 30%), which resulted in discarding some PlanetScope images that may be cloud-free over a specific clearcut area.

TRP calibration
Our procedure worked well for all five areas, even though the algorithm was optimized for only one area and then applied to four independent validation areas. The calibration of TRP using just one area, together with the good results obtained in the remaining four, demonstrates that TRP was able to predict forest changes in areas different from the one used for calibration. Additional evidence of TRP generalizability is that the results obtained in the four independent validation areas were even slightly better than those obtained in the training area, demonstrating that the model was not overfitted.
That the model was not overfitted was expected because TRP is an unsupervised algorithm and TRP calibration consists of selecting just three hyperparameters. On the other hand, we stress that we selected the three parameters to maximise MCC in the training area, but different results could be obtained depending on the final purpose (e.g. minimizing the execution-detection time lag or minimizing omission or commission errors) or on the training area. For example, by calibrating the target and the ratio of rewards to penances, a user can decide to maximize the clearcut map accuracy or to minimize the execution-prediction time lag.

Map accuracy
Each time a new PlanetScope image was acquired, the TRP algorithm produced an updated map of forest disturbances, reaching, at the end of the study period, a user's accuracy of 86%, a producer's accuracy of 92% and an MCC of 0.88.
TRP accuracy tended to increase with each new image but for some areas did not stabilize at the maximum level by the end of the study period. This means that an even more accurate map could be obtained in August or September. The small false negative rate (8%) was satisfactory, particularly because omission errors were located only at clearcut boundaries, possibly a result of minor errors in the co-registration of different images or in the digitization of the reference clearcuts. However, all forest changes were detected at least in part.
The false positive rate (14%) was also satisfactory, although slightly greater. These commission errors can most probably be attributed to the need to capture forest changes as soon as possible and to the limited number of PlanetScope images available between forest change occurrence and prediction. However, if needed, commission errors can be decreased by adopting a different TRP calibration, as detailed in section 3.2, at the expense of a greater execution-prediction time lag.
In addition, many commission errors are scattered pixels or polygons generally smaller than actual forest changes, which can be excluded using a simple post-processing GIS step. Further, the rare commission error polygons had shapes very different from real forest changes and could easily be deleted in a second phase based on perimeter, compactness, shape index and fractal dimension (Hermosilla et al., 2015).
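The GIS post-processing step mentioned above could be as simple as removing small connected patches from the change map. The sketch below is hypothetical (pure Python/NumPy, 4-connectivity), and the minimum-size threshold is chosen for illustration only.

```python
from collections import deque
import numpy as np

def remove_small_patches(change, min_pixels=10):
    """change: 2-D boolean change map. Returns a copy in which connected
    patches (4-connectivity) smaller than min_pixels are removed."""
    rows, cols = change.shape
    seen = np.zeros_like(change, dtype=bool)
    out = np.zeros_like(change, dtype=bool)
    for r0 in range(rows):
        for c0 in range(cols):
            if change[r0, c0] and not seen[r0, c0]:
                # flood-fill the patch starting at (r0, c0)
                patch, queue = [], deque([(r0, c0)])
                seen[r0, c0] = True
                while queue:
                    r, c = queue.popleft()
                    patch.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and change[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            queue.append((rr, cc))
                if len(patch) >= min_pixels:   # keep only large patches
                    for r, c in patch:
                        out[r, c] = True
    return out
```

Shape-based criteria (perimeter, compactness, fractal dimension) could be applied in the same loop once each patch's pixels are collected.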

Clearcut area estimation
In this study we demonstrated a robust statistical method that can be used to estimate the harvested forest area and the corresponding standard error using a predicted forest change map and a stratified random sample of points selected from that map. Specifically, with a sampling intensity of 0.04% we obtained an estimate of the harvested area just 1% larger than the true value, with a standard error of 10%. Such forest removal statistics are required for forest policy reporting, both to evaluate the sustainability of forest management and to assess the production of ecosystem services.
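A standard stratified estimator of this kind can be sketched as follows; the strata layout and the numbers in the example are illustrative, not the study's data:

```python
import math

def stratified_area_estimate(strata):
    """Stratified estimator of change area and its standard error.

    strata: list of (A_h, n_h, y_h) tuples per map stratum, where A_h is
    the stratum area (ha), n_h the number of sampled points and y_h the
    number of those points photointerpreted as true change.
    """
    area = var = 0.0
    for A_h, n_h, y_h in strata:
        p_h = y_h / n_h                            # sample change proportion
        area += A_h * p_h                          # stratum area contribution
        var += A_h**2 * p_h * (1 - p_h) / (n_h - 1)  # stratum variance term
    return area, math.sqrt(var)

# Illustrative strata: (map-change stratum, map-no-change stratum).
area, se = stratified_area_estimate([(60.0, 250, 230), (1200.0, 250, 2)])
print(round(area, 1), round(se, 2))
```

Because the estimator weights the reference proportions by the stratum areas, it corrects for both commission and omission errors in the map, which is what makes it unbiased where the simple map-based pixel count is not.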
Forest changes are of considerable interest for greenhouse gas (GHG) inventories. In fact, Italy is required to report the forest harvested area to the Intergovernmental Panel on Climate Change (IPCC). As such, consideration should be given to the IPCC good practice guidelines for GHG inventories: (1) "neither over- nor underestimates so far as can be judged," and (2) "uncertainties are reduced as far as practicable" (Eggleston et al., 2006, Volume 1, Chapter 1, Section 1.2; GFOI 2016, p. 15). From a statistical perspective, satisfaction of these criteria requires the use of unbiased estimators and, at minimum, rigorous estimation of uncertainty. In particular, there can be little assurance that uncertainties are reduced until they are first rigorously estimated. Our sample-based, stratified estimator of the clearcut area complies with the first guideline, whereas the map-based estimator does not because it does not account for map classification error. In addition, the stratified estimator of the standard error is statistically rigorous and complies with the second guideline.

Enhancements
Despite the promising results achieved, some aspects merit consideration. First, because of the fine geometric resolution of PlanetScope images, our approach requires a very detailed, updated and accurate forest mask to exclude a priori areas where the spectral change is not due to a clearcut. For this study we revised a local forest mask that had been used for other studies, but the lack of such a forest mask can limit the potential application of this approach over large regions. Second, the Planet Tile Services API (https://developers.planet.com/docs/basemaps/tile-services/) is a useful, free method for daily access to PlanetScope RGB imagery, but the full set of four spectral bands, as well as the full image depth, is not available. For operational purposes, a specific budget should be reserved for the acquisition of raw imagery. On the other hand, at present, the exact cost of such imagery is not published and must be contracted with the data provider on a case-by-case basis. Therefore, it would be reasonable to test our method using Sentinel-2 images, which are free of charge and available with a revisit time of 2-3 days at mid-latitudes. In contrast to the PlanetScope mission, Sentinel-2 Bottom-Of-Atmosphere (BOA) data with the complete set of bands are free. Also, Sentinel-2 imagery is available on Google Earth Engine (GEE), a cloud platform that offers the opportunity to analyse huge amounts of geospatial data at planetary scale. Using Sentinel-2 images, an implementation of TRP on GEE should be considered.
Finally, although the study demonstrated the efficiency of our TRP algorithm for near-real time detection of clearcuts, a more in-depth evaluation of the effects of the TRP hyperparameters on the mean time lag and on accuracy would be beneficial. Such an effort would contribute to a greater understanding of the degree to which the procedure can be generalized to large regions or to different types of forest disturbances, such as forest fires or wind damage, which are increasingly frequent and disastrous.

Conclusions
Two primary conclusions were drawn from the study. First, starting from September 2018, our TRP algorithm detected new forest disturbances whenever a new PlanetScope image became available and by July 2019 reached a user's accuracy of 86% and a producer's accuracy of 92%. Second, the combination of the TRP-based map, the confusion matrix and the stratified sample-based estimator constitutes a statistically rigorous approach for estimating clearcut area that complies with the IPCC good practice guidelines for GHG inventories. Accordingly, using the TRP clearcut map and photointerpreting 500 pixels (a sampling intensity of 0.04%), we obtained an estimate of the harvested area just 1% larger than the census-based value, with a standard error of 10%.

Disclosure statement
No potential conflict of interest was reported by the authors.