An efficient method for detecting dead pine trees in forests damaged by pine wood nematode (Bursaphelenchus xylophilus) using unmanned aerial vehicles and deep-learning-based object detection

Abstract Pine wood nematode (Bursaphelenchus xylophilus) is an invasive pathogen in South Korea, where it causes pine wilt disease (PWD) with extremely high mortality of native pine species (Pinus densiflora, Pinus thunbergii, and Pinus koraiensis). Because the disease spreads via its vectors, the native pine sawyer beetles (Monochamus alternatus and Monochamus saltuarius), the cost of monitoring its expansion has been rising rapidly. Furthermore, eliminating new and isolated infections is even more costly, since unremoved infected trees become new sources of infection through the beetles' preferential oviposition on such trees. Combining unmanned aerial vehicles (UAVs) with deep-learning-based object detection offers an opportunity to solve these problems, as a UAV with an RGB camera can provide high-spatial-resolution aerial images and a digital surface model (DSM), which can be used for object detection with excellent results. In this study, we evaluated the performance of this method for detecting dead pine trees in PWD-damaged areas. In particular, to ensure a low omission error, we employed YOLOv3 for object detection, as its design focuses on minimizing omission error. We also modified the model so that tree positions and crown diameters could be estimated. Four detection models were trained using four different combinations of the aerial image channels (R, G, B) and the DSM. Among them, the model trained on RGB showed the highest performance (recall: 0.9909, precision: 0.8438) and was selected as the optimal model. Our results suggest that our method can contribute to low-cost and effective monitoring of dead pine trees while maintaining the low omission error that is critical for PWD management.


Introduction
Pine wood nematode (Bursaphelenchus xylophilus, PWN) is the pathogen responsible for pine wilt disease (PWD), which is fatal to pine trees (Bergdahl 1988; Kishi 1995). In recent decades, PWN has spread rapidly worldwide, causing massive waves of pine mortality, especially in East Asia and the Iberian Peninsula (CABI 2021). This has been a particular problem for the forestry industry in South Korea, where PWD has spread nationwide since it was first reported in 1988 (Yi et al. 1989). Two native pine sawyer beetles, Monochamus alternatus and Monochamus saltuarius, act as vectors of this pathogen, whose typical host tree species are Korean pine (Pinus koraiensis Siebold & Zucc.), Japanese red pine (Pinus densiflora Siebold & Zucc.), and Japanese black pine (Pinus thunbergii Parl.) (Kishi 1995; Han et al. 2008; Kim et al. 2009). PWD is especially lethal to P. densiflora and P. thunbergii, which die shortly after infection.
The deadly dispersal cycle of PWD starts with pine sawyers ovipositing on recently dead pine trees (Linit 1988). Given this positive feedback loop of PWN infection, it is crucial to remove infected dead pine trees before they act as new sources of infection, in order to prevent the further spread of PWD (Shibata 1986; Yoshimura et al. 1999; Kwon et al. 2011). For this reason, the Korea Forest Service (KFS) has put tremendous effort into finding and removing every dead pine tree in PWD-damaged areas (KFS 2020a). However, patrolling and monitoring by field crews are time-consuming and costly, and the probability of detection is limited by the accessibility and viewing position of the crews. As a result, some PWD-infected trees go undetected and become new sources of infection the following year.
Traditional remote sensing, based on satellite or aerial imagery, can be an alternative (Lee et al. 2007; Kim et al. 2010, 2015; Park et al. 2019), but achieving sufficient spatial and temporal resolution with these platforms is costly.
These limitations can be overcome with unmanned aerial vehicles (UAVs) carrying various sensors, which have been used successfully in ecological monitoring (Anderson and Gaston 2013). Weather permitting, a relatively low-cost UAV can be deployed quickly at low altitude to acquire images with a spatial resolution high enough to identify individual trees. Furthermore, photogrammetric techniques such as structure from motion (SfM) can use UAV data to construct orthoimages and digital surface models (DSMs) that further improve object detection, and many commercial and open-source photogrammetry programs have been automated for non-expert users (Iglhaut et al. 2019; Jiang et al. 2020). Previous studies have demonstrated that combining orthoimages, DSMs, and machine learning methods can significantly improve the accuracy of land cover classification (Kim and Choi 2017; Al-Najjar et al. 2019).
Based on UAV images, various methodologies have been implemented to detect dead pine trees in PWD-damaged areas, including direct visual interpretation, conventional machine learning, and deep-learning-based object detection (Tao et al. 2020). In particular, deep-learning-based object detection, a technology for detecting or tracking target objects in images or video, has developed rapidly with promising results, including shorter processing times and higher accuracy (Bochkovskiy et al. 2020; Kim et al. 2020). The technology has been successfully used to detect a variety of objects and structures in aerial and satellite images (Zhang et al. 2014; Cheng et al. 2016; Long et al. 2017; Van Etten 2018), including dead pine trees (Tao et al. 2020).
Deep-learning-based object detection models are usually evaluated using a combination of accuracy and omission error (Jensen et al. 2016; Deng et al. 2020; Liu et al. 2020). However, a detection model for PWD requires a very low omission error because the goal is to remove as many infected trees as possible to break the PWN death cycle. Previous research has shown that the spread of invasive organisms such as PWN is often accelerated by new sources of infection created by long-distance dispersal events, highlighting the importance of locating and eradicating them (Muirhead et al. 2006; Kwon et al. 2011). Unfortunately, such infections are generally isolated in their distribution, making them difficult to eliminate; it is therefore important to minimize the omission error of PWD detection models so that potential new sources of infection can be identified and removed. For this reason, a focus on recall (i.e. the ability to detect all actual dead trees) may be the most appropriate criterion for judging the effectiveness of a PWD detection model, as it directly reflects the omission error.
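The recall emphasized here and the omission error are simply complements of one another, which is why ranking models by recall controls the omission error. A minimal illustration (the counts below are hypothetical, not the study's data):

```python
def recall_and_omission(true_positives, false_negatives):
    """Recall = TP / (TP + FN); omission error is its complement (the miss rate)."""
    recall = true_positives / (true_positives + false_negatives)
    return recall, 1.0 - recall

# hypothetical survey: 108 dead trees detected, 1 missed
recall, omission = recall_and_omission(108, 1)
```

Maximizing recall and minimizing omission error are therefore the same objective, stated from opposite directions.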
As an alternative to field reconnaissance, a combination of UAV images, DSMs, and deep-learning-based object detection can be employed to locate new dead pine trees in PWD-damaged areas with far greater coverage and accuracy. In this study, we tested and evaluated a deep-learning-based detection model for dead pine trees that utilizes UAV images and DSM-based vertical information. In addition, we designed the detection model so that the position and crown diameter were automatically extracted from the detection results. We also focused on minimizing the omission error, given the importance of breaking the PWN death cycle, and used it as the criterion for selecting the optimal model.

Acquiring and processing UAV data
The study sites were Neujiri-oreum and Biyang-do in Hallim, Jeju Island, South Korea (Figure 1). Both sites include pastures and mixed forests consisting of coniferous and broadleaved trees. P. densiflora and P. thunbergii are the dominant conifer species in the area. PWD has severely damaged the forests of Jeju Island despite the prevention efforts of the Korea Forest Service (KFS) and the local government (Table 1) (KFS 2016, 2017, 2018, 2019, 2020b; Jeju 2020). We acquired UAV aerial photos in September 2017, before prevention activities.
A DJI Mavic Pro (SZ DJI Technology Co., Ltd, Shenzhen, China), a quadrotor UAV equipped with a 12.35-megapixel RGB camera, was used to acquire images with an overlap ratio of 90% at 100-150 m above ground level. Pix4Dmapper (Pix4D SA, Lausanne, Switzerland), an automated photogrammetry software, was employed to create high-resolution orthoimages and DSMs (Figure 2). The orthoimages and DSMs had a resolution of 10 cm, covering 17.5 ha in Neujiri-oreum and 15.8 ha in Biyang-do.

Object detection model and image preprocessing
We employed the You Only Look Once v3 (YOLOv3) algorithm for the dead pine tree detection model because it has been designed to minimize omission error by detecting multiple bounding boxes for each target (Redmon and Farhadi 2018). Because YOLOv3 is optimized for three-channel images, we created four different three-channel dataset combinations by replacing each color in the RGB channels with the DSM (i.e. RGB, RG + DSM, RB + DSM, and GB + DSM). We set the model input size to the maximum tested (608 pixels) to maximize accuracy and segmented the orthoimages into 500 × 500 pixel (50 × 50 m) windows to minimize information loss due to resampling (Redmon and Farhadi 2018), with a 50% overlap used to reduce training data loss at the boundaries (Figure 3) (Van Etten 2018).
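The segmentation step above amounts to a sliding window with a half-window stride. A minimal sketch of computing the window origins along one image axis; clamping the final window to the image edge is our assumption about how remainders are handled, not a detail reported in the paper:

```python
def tile_origins(size, window=500, overlap=0.5):
    """Top-left offsets of square windows along one image axis with the
    given fractional overlap; the last window is clamped to the image edge."""
    stride = int(window * (1 - overlap))
    origins = list(range(0, max(size - window, 0) + 1, stride))
    if size > window and origins[-1] != size - window:
        origins.append(size - window)  # cover the remainder at the edge
    return origins

# offsets for one axis of a hypothetical 2000-px orthoimage strip
xs = tile_origins(2000)  # stride of 250 px between 500-px windows
```

Applying the same function to both axes and taking the Cartesian product yields the full set of overlapping 500 × 500 windows.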
In addition, the segmented images were rotated by 90°, 180°, and 270° so that the model considered the potential rotation of the targets during training (Zhang et al. 2014; Cheng et al. 2016; Long et al. 2017; Van Etten 2018; Deng et al. 2020). As a result, 1,872 samples were generated for each dataset. We used 70% of each dataset as a training set, 20% as a test set, and the remaining 10% for validation. The training and test data were used in the training process, and the validation data were used to evaluate the performance of the detection model.
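Rotating a labeled sample requires rotating its bounding-box labels along with the image. A minimal sketch of a 90° counter-clockwise rotation in image coordinates (origin at the top-left), assuming axis-aligned boxes stored as (xmin, ymin, xmax, ymax); the representation is illustrative, not the paper's label format:

```python
def rotate_box_90(box, size):
    """Rotate an axis-aligned box (xmin, ymin, xmax, ymax) by 90 degrees
    counter-clockwise inside a square image of side `size` (origin top-left)."""
    xmin, ymin, xmax, ymax = box
    # a point (x, y) maps to (y, size - x); taking the extremes of the
    # mapped corners gives the rotated axis-aligned box
    return (ymin, size - xmax, ymax, size - xmin)

box = rotate_box_90((100, 50, 200, 150), 500)
```

Composing the function two or three times yields the 180° and 270° labels, and applying it four times returns the original box, a convenient sanity check for the augmentation.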
All dead pine trees in the study areas were identified on the high-resolution orthoimages. We labeled the various types of dead pine trees as a single class, although they differ visually depending on time since death: recently dead trees appeared greenish orange, while trees that had been dead longer were reddish brown (Figure 4).

Model training and evaluation
YOLOv3 was employed as the base model to detect the dead pine trees. Four variations of this model were trained using the four datasets (RGB, RG + DSM, RB + DSM, and GB + DSM). We trained each variation for up to 300,000 epochs with a learning rate of 10⁻³. The trained model weights were saved every 10,000 epochs. After training, we compared the model variations based on precision, recall, and average precision (AP) to select the optimal configuration. We considered recall the most important index because of its sensitivity to omission error; the results were therefore ranked by recall, while precision and AP were also checked to ensure that the overall performance of the detection model was acceptable.
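The selection rule described above — rank candidates by recall, then sanity-check the other indices — can be sketched as follows. All function and field names are illustrative; the RGB scores are from the paper, but the DSM-variant scores are placeholders, and breaking ties by precision is our assumption:

```python
def select_optimal(models):
    """Rank candidate models by recall (the omission-error-sensitive index),
    breaking ties with precision so overall performance stays acceptable."""
    return max(models, key=lambda m: (m["recall"], m["precision"]))

# illustrative scores: the RGB values are reported in the paper,
# the DSM-variant values are placeholders
candidates = [
    {"name": "RGB",    "recall": 0.9908, "precision": 0.8438},
    {"name": "RG+DSM", "recall": 0.9700, "precision": 0.8500},
]
best = select_optimal(candidates)  # the RGB entry wins on recall
```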
Process for the extraction of dead pine trees from UAV images

We applied the optimal model to extract information on the dead pine trees (Figure 5). The extraction process detected the dead pine trees using the optimal model and recorded the position (center coordinates) and crown diameter of each tree based on its detection box. Because the boundaries of the segmented images could lead to losses and duplicate detections, we designed the process to include overlap and buffers for all segmented images (Van Etten 2018).
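The buffer-based filtering can be sketched as follows. The 50-pixel buffer width and the rule of keeping only boxes that lie fully inside a tile's inner region are our assumptions about one plausible implementation, not details reported in the paper; with a 50% tile overlap, a box discarded at one tile's edge falls in the interior of a neighbouring tile:

```python
def keep_detection(box, window=500, buffer=50):
    """Keep a detection only if its box (xmin, ymin, xmax, ymax) lies fully
    inside the tile's inner region; boxes touching the buffer zone are left
    to the neighbouring, overlapping tile instead, avoiding duplicates."""
    xmin, ymin, xmax, ymax = box
    return (xmin >= buffer and ymin >= buffer
            and xmax <= window - buffer and ymax <= window - buffer)

def to_global(box, origin):
    """Shift tile-local pixel coordinates to orthoimage coordinates."""
    ox, oy = origin
    xmin, ymin, xmax, ymax = box
    return (xmin + ox, ymin + oy, xmax + ox, ymax + oy)
```

Kept boxes are shifted to orthoimage coordinates with `to_global`, from which the center position and crown diameter can be read off directly.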
We evaluated the information from the extraction process against the labeled data as a reference to confirm the performance of the optimal model. Precision and recall were used to evaluate the detection accuracy, and both the intersection over union (IoU) and the root mean square error (RMSE) were used to evaluate the accuracy of the extracted location and crown diameter. The IoU is the ratio of the intersection to the union of the mapping results and the reference data.
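Both evaluation measures have standard definitions. A minimal sketch for axis-aligned boxes stored as (xmin, ymin, xmax, ymax) and for paired scalar measurements such as crown diameters:

```python
import math

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def rmse(predicted, reference):
    """Root mean square error between paired measurements (e.g. diameters in m)."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)
```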

Model weights and optimal model selection
We selected the model weights for each channel combination that showed the highest recall in validation (Figure 6). All versions of the detection model performed satisfactorily (Table 2), with the RGB-based model outperforming the others. The recall and precision of the optimal detection model were very high (0.9908 and 0.8438, respectively), with a low omission error (0.0092). We had anticipated that incorporating the DSM would improve the detection of individual dead pine trees compared with the RGB-based model, but the results suggested otherwise: including the DSM (i.e. the RG + DSM, RB + DSM, and GB + DSM datasets) led to lower accuracy on all indices (Table 2).

Extraction of dead pine trees from the UAV images
We employed the optimal model to detect the dead pine trees at both study sites (Figure 7) and extracted their positions and crown diameters. A total of 190 trees in Biyang-do and 140 trees in Neujiri-oreum were detected as dead pine trees. The recall and precision of the extracted information were high (0.9969 and 0.9787, respectively) despite the addition of functions to prevent losses and duplicate detections (Table 3). Only one dead pine tree remained undetected, while seven objects, including patches of bare land and some deciduous species, were wrongly classified as dead pine trees. This high accuracy confirmed that our extraction process effectively prevented the losses and duplicate detections that can arise at the boundaries of the segmented images.
We also evaluated the accuracy of the location and size of the detected dead pine trees using the RMSE and IoU (Table 4). The model produced a very low RMSE for both the location and the crown diameter (0.3362 and 0.6105 m, respectively), while the average IoU was 0.8372, representing an agreement of around 84% between the mapping results and the reference data.

Figure 5. Flowchart of the extraction process used to detect dead pine trees in the study sites from orthoimages. The dashed line represents the boundary of the segmented images. Yellow boxes denote detection results using the optimal model. Losses and duplicate detections (red boxes) are filtered using a buffer (the shaded area). A clear detection (blue box) records the position and crown diameter based on the detection box.

Discussion
We detected dead pine trees in PWD-damaged areas using a deep-learning-based computer vision method with UAV orthoimages as input. We selected the optimal model configuration from four options using different image combinations and successfully extracted the dead pine trees, their locations, and their crown diameters from the UAV images.
The model in the present study improves on previous work, in terms of omission error, that reported the detection of PWD over large areas using object detection algorithms based on UAV orthoimages (Tao et al. 2020). These studies evaluated model performance using AP or precision, but detection models are also prone to omission error. As previously discussed, preventing new sources of PWD is crucial to its control, so our approach ensured that the optimal model had a low omission error. We employed YOLOv3 to guarantee this lower omission error, which produced an optimal detection model with excellent performance (recall 0.9908, omission error 0.92%, and precision 0.8438).
Automated photogrammetry software and UAVs provide a convenient, low-cost way to acquire orthoimages and DSMs for PWD monitoring without expert skills, and field crews can apply the method with minimal training. For convenience, the accuracy of the position and height (DSM) information depended on the UAV's global navigation satellite system and was not evaluated or corrected, as doing so would require expert equipment and skills. Such evaluation demands additional fieldwork for each flight and specialized equipment and expertise, such as ground control points or real-time kinematic positioning for position, and ground-based or airborne light detection and ranging (LiDAR) for DSMs. Although the accuracy of DSMs derived from orthoimages is lower than that of LiDAR-derived DSMs (Lisein et al. 2013), they can still improve detection capabilities (Al-Najjar et al. 2019).
UAVs are limited to local-scale monitoring because their short flight duration and low altitude restrict image acquisition to smaller areas than traditional satellite- or aerial-based remote sensing platforms can cover. Traditional remote sensing remains more efficient at regional or larger scales. Managers and researchers should therefore consider the goals and spatial scale of monitoring when selecting a remote sensing platform.
Although we expected that the use of the DSM would improve the detection capabilities of our model because individual dead pine trees could be recognized in the DSM images, the version of the model using only the RGB channels had a higher accuracy than the versions with the DSM. We believe that the lack of impact of the DSM on model performance was due to the complex terrain in the study area (Figure 2): because most of the variation in the DSM reflected the rugged topography, its contribution to the detection of trees was limited. Several previous studies have attempted to obtain more meaningful information from DSMs by combining them with digital terrain models (DTMs); by subtracting the DTM from the DSM, a canopy height model (CHM) can be generated, which may be useful in detecting the individual canopies of dead pine trees (Lisein et al. 2013). However, DTMs require detailed measurements of the ground surface from sophisticated and expensive equipment, such as airborne LiDAR, which can penetrate the canopy to reach the ground (Lisein et al. 2013; Mielcarek et al. 2018). Multispectral and hyperspectral sensors can also provide more precise information (Yu et al. 2021), but at much greater cost. We believe that our method provides a practical solution based on low-cost, user-friendly UAVs that produces accurate results for the detection of dead pine trees.
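The CHM construction mentioned above is an element-wise difference of the two surfaces. A minimal sketch on small illustrative grids (the elevation values are made up; clamping negative differences to zero is a common convention, not a detail from the cited work):

```python
def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM, clamped at zero: height of the surface above
    the ground. dsm and dtm are equally sized 2-D grids (lists of rows)."""
    return [[max(s - t, 0.0) for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

dsm = [[105.0, 112.5], [104.0, 103.8]]   # surface elevation (m)
dtm = [[100.0, 100.5], [104.5, 103.8]]   # ground elevation (m)
chm = canopy_height_model(dsm, dtm)      # canopy heights above terrain
```

In a CHM, the rugged topography cancels out, leaving only the vegetation height signal that could help separate tree canopies from the ground.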

Conclusion
In this study, we successfully used a UAV and deep learning to detect dead pine trees in PWD-damaged areas. UAV data coupled with a deep-learning-based model allowed highly accurate and rapid detection at low cost. In particular, the YOLOv3 algorithm produced a very low omission error (0.92%), suggesting that it can effectively detect new, isolated cases of dead pine trees, which are very difficult and expensive to find using conventional field-based methods. The RGB-based detection model outperformed the versions that included the DSM, underscoring the practicality of our approach in both operation and processing costs. While the use of a CHM might further improve the results, our optimal model provided the efficiency and accuracy required to monitor PWD. In addition, with the extracted positions and crown diameters of the dead pine trees, managers can easily estimate the labor required and evaluate other operational needs for PWD control and management.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
This study was carried out with the support of the "R&D Program for Forest Science Technology" (Project No. 2019150B10-2123-).