Exploring unmanned aerial systems operations in wildfire management: data types, processing algorithms and navigation

ABSTRACT Wildfire, also known as forest fire, is a common natural or man-made disaster that has caused devastation to both man-made structures and natural ecosystems throughout the world. Unmanned Aerial Systems (UAS)-based remote sensing (RS) provides valuable support for wildfire management efforts, enhancing both spatial and temporal resolution. The aim of this paper is to summarize the current applications of UAS operations for combating wildfire worldwide. RS applications of UAS have been explored across the three stages of wildfire (Pre-fire, Active-fire, and Post-fire), with particular emphasis on the types of information collected and the data processing methods used. The pre-fire section assesses fire potential, the active-fire stage focuses on fire surveillance and propagation, and the post-fire stage covers studies assessing wildfire impact using UAS. In addition, the review provides a comprehensive overview of UAS navigation techniques adapted to fire surveillance. The literature review was conducted using three bibliographic databases (ScienceDirect, IEEE Xplore, and Scopus), and 186 articles relevant to UAS applications in wildfires were manually gathered. The review encompasses 119 articles focused on RS, 40 articles related to navigation, and the remaining articles covering reviews, concept proposals, UAS designs, and solutions addressing technical limitations of UASs in the context of wildfire management. This review offers a comprehensive overview of recent UAS operations for future researchers aiming to utilize UAS technologies for efficient wildfire management. The findings highlight the need for further investigation into the onboard computational capacity required by Artificial Intelligence (AI) algorithms, precise fuel load estimation that considers individual vegetation, and field experiments to evaluate and validate navigation algorithms.


Introduction
Wildfires (forest fires or bushfires) are uncontrollable fires that occur in wildland vegetation, leading to widespread devastation in different regions of the world every year (Stoof and Kettridge 2022). Wildfires can severely damage both man-made structures and natural habitats and release massive quantities of pyrolysis gases and particulate matter (PM) into the atmosphere (Chunitiphisan et al. 2018; Herzog et al. 2022). In the recent past, the world has experienced catastrophic wildfires that have caused extensive damage, including the Australian 'Black Summer' bushfires (2019-2020), massive wildfires in California (Safford et al. 2022), and the Amazon rainforest wildfires in 2019 (Ma et al. 2022). However, it is not only these highly publicized wildfires that cause damage; every year millions of acres of wildlands are reported burned all over the world. In the European and Mediterranean regions, over 1.35 million acres of land were burned in 2021 (San-Miguel-Ayanz et al. 2022). There was also significant wildfire activity in Canada and the United States in 2022, with over 4 million acres and 7.6 million acres of burned land, respectively (CIFFC 2022; NICC 2022). To prevent the loss of life and property during such catastrophic events, planning and taking preventive action are imperative. The use of remote sensing (RS) techniques has increased in recent years to gather information allowing for analysis to better estimate wildfire impact, risk, and occurrence. RS platforms including satellites, manned aircraft, and Unmanned Aerial Systems (UAS) can provide environmental measurements in wildlands. UAS possess distinct advantages, including the ability to perform diverse tasks, adapt to changing conditions and spatial/temporal scales, and be deployed quickly and on-demand with hyperspatial resolution.
UAS provide a safe, cost-effective, and flexible approach for studying wildfires while avoiding the physical presence of ground crews in dangerous forests (Mo, Peters, and Lei 2021; Saadat and Husen 2018). These tools have been utilized in the three stages of wildfire (Pre-fire, Active-fire, and Post-fire) (Szpakowski and Jensen 2019), which involve employing UAS before, during, and after the fire, respectively.
In recent years, there has been increasing interest in wildfire- and UAS-related studies, as evidenced by the growing number of published articles on the subject. Among them, review articles by Yandouzi et al. (2022), Akhloufi et al. (2021) and Alexandrov et al. (2019) have specifically investigated active-fire applications. Akhloufi et al. (2021) examine previous work related to sensors, algorithms, and coordination strategies used for fire monitoring. Yandouzi et al. (2022) focus on Deep Learning (DL) algorithms used for wildfire detection through UAS aerial images. Similarly, Alexandrov et al. (2019) compare the use of Machine Learning (ML) for fire and smoke detection. Jeyavel et al. (2021) presented a literature review on challenges to be addressed in UAS-based fire extinguishing. These studies focus solely on a few aspects of the active stage of wildfires and are limited in addressing broader perspectives of wildfire applications, including the pre-fire and post-fire phases. Szpakowski and Jensen (2019) cover RS applications across the three stages of wildfire, including fire risk and fuel mapping, fire surveillance, estimation of burned areas, determination of burn severity, and monitoring of vegetation recovery following a fire. However, very limited attention has been paid to UAS-based RS. This review places more emphasis on UAS-based techniques, with a particular focus on the potential of this technology for wildfire management.
In addition to highlighting UAS-based RS applications under the three distinct stages of wildfire, this review also highlights path planning algorithms designed for wildfire surveillance, considering individual UAS as well as UAS swarms. The outline of this paper is described in Figure 1. The objectives of this review study are to:
• Provide a comprehensive overview of UAS-based RS applications across the three stages of wildfire: Pre-fire, Active-fire, and Post-fire.
• Present the UAS platforms, data types, and processing algorithms used in wildfire applications.
• Outline the UAS navigation algorithms that have been specifically adapted for fire surveillance.
• Describe ongoing trends and future directions for the application of innovative methods in UAS-based wildfire studies.

Bibliographic analysis
The literature review was conducted using three databases: Scopus, ScienceDirect and IEEE Xplore, with the following keywords: 'Bushfire', 'wildfire', 'forest fire', 'drone', 'UAV' and 'UAS'. A total of 415 research articles were obtained for the search statement over the last five years. Figure 2 illustrates statistics taken from the Scopus database concerning the number and countries of publications analysing wildfire studies using UAS technologies. However, a focused selection of the most relevant papers was undertaken, with particular emphasis on UAS-based wildfire RS applications and navigation techniques. Unrelated papers were excluded, and we gathered 186 highly relevant articles. The rest of the paper is organized as follows: Section 3 presents the UAS platforms and UAS-driven data for wildfire applications. Section 4 presents various RS applications of UAS across the three stages of wildfire. Section 5 presents studies targeting UAS navigation in the context of wildfire monitoring and evaluation, and possible future applications and conclusions are presented in Sections 6 and 7.

UAS for wildfire data collection
The potential of UAS for data collection has been utilized in many countries around the world. In this review, out of 186 studies, 83 conducted UAS field operations. The distribution of countries engaged in UAS field operations for navigation studies and RS across the three stages of wildfire is depicted in Figure 3. Overall, most RS studies were conducted in Spain, with the highest number (10) in the post-fire stage. China had the highest number of active-fire studies (8), while the United States had the highest number of pre-fire studies (5). In comparison, the percentage of RS studies conducted in the field (44.62%) was significantly higher than the percentage of navigation studies tested in the field (10%). The extensive global utilization of UAS platforms highlights their effectiveness in achieving efficient and economical acquisition of hyperspatial imagery. This section discusses the role of UAS in wildfire RS, the platforms, and the data collected in recent years for wildfire management.

Spatial and temporal scale for wildfire remote sensing
The spatial and temporal scales required for RS tasks vary among the three distinct stages of a wildfire. In the pre-fire stage, RS plays a crucial role in examining wildlands to identify potential factors that may contribute to fire ignition and spread (Keane, Burgan, and van Wagtendonk 2001). In the post-fire stage, RS is used to study the impact of the fire (Arkin et al. 2019; McKenna et al. 2017). During these stages, the spatial requirement is to cover large areas while maintaining fine resolutions. RS requirements for pre-fire and post-fire studies generally demand less temporal resolution than the active-fire stage. Pre-fire and post-fire analyses often focus on long-term trends and changes, whereas the active-fire stage necessitates frequent updates and real-time monitoring to provide information on fire behaviour, extent, and intensity. All stages benefit from higher spatial resolution to capture detailed information about the landscape. However, during the active-fire stage, there is an additional requirement for higher temporal resolution to monitor rapidly changing fire behaviour in real time.

Strengths and limitations of UAS-based remote sensing
UAS possess strengths and limitations in serving the required temporal and spatial scales for wildfire monitoring compared to other RS methods, such as satellites and manned aircraft. Like UAS, satellite-based RS has been used in all stages of wildfire (Szpakowski and Jensen 2019). However, the development of UASs has provided unprecedented spatial and temporal resolution for the acquisition of forest data (Shamaoma et al. 2022). The temporal and spatial resolution of satellite RS are interdependent (Bussy-Virat, Ruf, and Ridley 2019). UAS offer a trade-off solution for RS, providing remarkably high spatial resolution while covering a smaller area than methods such as satellites. UAS, with their rapid deployment and flexibility in flying at various altitudes and angles with diverse sensing payloads, enable real-time or near-real-time monitoring of wildland, facilitating an immediate response to changing situations. The majority of active fire products are derived from satellites like MODIS and VIIRS due to their exceptionally short revisit time (<2 days), facilitating rapid updates of active fire data (Szpakowski and Jensen 2019). However, this temporal resolution comes at the expense of spatial resolution. Zubieta, Ccanchi and Liza (2023) indicated that the heat spots provided by MODIS or VIIRS are not suitable for effectively monitoring small and short-lived fire events due to their limited spatial resolution and the brief duration of burns. High spatial resolution necessitates longer revisit times, as more time is required to capture detailed imagery (Bussy-Virat, Ruf, and Ridley 2019). The Sentinel-2 satellites offer an appropriate spatial resolution of 10-60 m ground sample distance (GSD); however, their less frequent revisits, occurring every 5 to 16 days, make them suboptimal (Szpakowski and Jensen 2019).
Commercial earth observation satellite constellations like SPOT-6/-7 provide a daily revisit capability to any location worldwide, offering improved spatial resolution sensors (Hafner et al. 2022). This capability enables more detailed observations of wildlands on a large scale. However, satellites still face limitations in delivering on-demand applications primarily due to challenges related to cloud cover and cost implications (Scheffler and Frantz 2022). These factors can affect the availability and accessibility of timely satellite imagery, making it less suitable for certain real-time monitoring and emergency response scenarios.
Satellites are a promising option for pre- and post-fire RS in terms of long-term application over large areas. Fuel mapping and post-fire RS do not require frequent updates, so satellites are suitable for mapping large areas with appropriate spatial and temporal resolution (Szpakowski and Jensen 2019). This may reduce the necessity for UASs in collecting spectral images, which can be both time-consuming and relatively expensive for large-scale observations. However, it is important to acknowledge that satellites do have limitations. Satellite data from ICESat-2 and GEDI are freely available for wildfire studies; however, ICESat-2 can be unreliable with low photon return rates, and GEDI data is unavailable in certain boreal forest areas. In scenarios where more precise geometrical measurements and detailed structural information are crucial, UAS Light Detection and Ranging (LiDAR) technology shines. UAS-LiDAR's ability to penetrate dense canopies and capture multiple viewpoints allows for superior accuracy in field data acquisition compared to satellites (Viedma, Almeida, and Moreno 2020). When dealing with burns within or near Wildland Urban Interfaces (WUI) or areas managed for agricultural or timber production, high spatial resolution close-range RS is essential (Arkin et al. 2019). Both satellite and UAS-LiDAR technologies serve complementary roles, with satellites providing broad coverage, while UAS-LiDAR excels in offering detailed and precise information for wildfire studies.
Manned aircraft, on the other hand, are restricted by flight regulations and operational limitations, particularly when flying at low altitudes, compared to UAS. They involve high operating costs, including fuel, crew salaries, and maintenance expenses. UAS offer safer operations than manned aircraft: with no human pilot onboard, the risk to human life is eliminated in case of accidents or hazardous situations. This enhances the safety of UAS, making them suitable for missions in challenging environments or in areas with potential risks. Considering all stages of wildfire, UAS offer higher spatial resolution, quick deployment, cost-effectiveness, and enhanced safety, but they are limited in spatial coverage and endurance. Despite these limitations, the benefits of UAS make them adaptable to various wildfire applications, providing flexibility in data acquisition with different sensors at a convenient time (Shamaoma et al. 2022).

UAS platform
A UAS is composed of several key components, including an unmanned aerial vehicle (UAV), a ground control station (GCS), sensing payloads, and communication and navigation systems, which enable the operation of the UAS and allow it to perform a wide range of RS applications (Amarasingam et al. 2022; Szpakowski and Jensen 2019; Villa et al. 2016). UASs are available in a variety of sizes and configurations, with varying degrees of autonomy and power sources. Some UAS are also capable of coordinating with unmanned ground vehicles to perform firefighting activities (Pasini, Jiang, and Jolly 2022).

UAV types
The two main types of UAV are fixed-wing (FW) and rotary-wing (RW), each with its own advantages and disadvantages. Pádua et al. (2017) give a detailed description of both UAS types and their characteristics. FW is preferable for large land coverage, while RW is more suitable for the acquisition of high spatial resolution data (Tang and Shao 2015). FWs have higher cruising speed, higher flight altitude, higher flight efficiency, and longer endurance and range. The gliding ability of FWs reduces the battery power needed to keep them aloft. However, FWs typically require a runway to reach take-off speed. Take-off can instead be achieved by catapult or hand launch, and landing by releasing a parachute or using a recovery net (Kiril 2017), allowing UAS operators to launch without being concerned with the quality of runways. RWs, on the other hand, are more flexible, since they can take off and land vertically in several types of environments and can hover at a single point. However, RWs are limited in flight time due to the power-hungry motors generating the thrust that balances their weight. The literature surveyed for this review indicates that 12 studies employed FWs, while 63 studies employed RWs in field studies. One study also used a vertical take-off and landing (VTOL) platform, which combines the advantages of both FW and RW, for data collection (Briechle et al. 2018). Both commercial UAS products and self-developed UASs have been used to gather information in forest studies (Baha Bilgilioglu et al. 2019; Shin et al. 2019). The flexibility of consumer-grade UASs for data collection makes them an attractive option for wildfire-related studies. User-friendly interfaces and advanced controls, including obstacle avoidance, height maintenance, and autonomous landing and take-off, provide more comfortable drone operations for RS.
Among those used in recent research, the Shenzhen DJI Sciences and Technologies Ltd (DJI) Phantom series of quadcopters, the DJI Matrice 600 PRO hexacopter, the Atyges FV8 octocopter and the FW SenseFly eBee X are the most popular. Moreover, UASs are custom designed with embedded flight controllers, providing flexible flight control for specialized research solutions (Al-Kaff et al. 2020;Shin et al. 2019;Wardihani et al. 2018). Customizing and designing UAS to enhance endurance and land coverage is an ongoing research topic (Anuar, Akbar, and Herisiswanto 2019;Liu, Yang, and Hao 2022;Martin et al. 2022;Yfantis 2019;Zheng et al. 2021).

Sensing payloads
UAS deployed for wildfire studies have used a variety of sensors depending on the specific application and the data required. Onboard sensors, including a barometer, an inertial measurement unit (IMU), a magnetometer, and a Global Positioning System (GPS) receiver for navigation, enable a UAS to fly safely and precisely to its desired location. Apart from these integrated sensors, UAS can carry several types of payloads for sensing and data collection in wildfire studies. The UAS payload consists of cameras for capturing different spectral bands (RGB, multispectral (MS), hyperspectral) at various resolutions, as well as sensors for monitoring atmospheric conditions such as temperature, humidity, and PM. These payloads are mounted with consideration of factors such as payload weight, size, and power requirements. Research addressing technical concerns, such as mounting multisensory systems within a limited gimbal load, expands the potential uses of UAS in wildfire research (Novak et al. 2020). Table 1 lists the sensing payloads that have been deployed in recent wildfire studies.

Mission planning
UAS are operated by remote controllers or programmed to fly autonomously from the ground. For take-off and landing during RS tasks, remote-controlled navigation is preferred. However, for the extensive data collection needed in both pre- and post-fire applications, autonomous missions with pre-defined flight paths are necessary. Flight controllers with onboard sensors run advanced navigation algorithms to make UASs follow a precise pre-planned path (Murali, Lokesha, and George 2022; Shaffer, Carrillo, and Xu 2018). GCS desktop applications such as QGroundControl and Mission Planner, and mobile applications such as DJI GO, PIX4Dcapture, and DroneDeploy, enable easier UAS operation by planning autonomous flight missions through gridded waypoints for both FW and RW types (Saadat and Husen 2018). A mission can be organized with parameters chosen to achieve the necessary spatial and temporal resolution for wildfire applications, including altitude, cruising speed, ground sampling distance (GSD), and frontal and lateral overlap, all aligned with data collection needs.
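The interaction between these mission parameters can be sketched numerically with the standard photogrammetric relation GSD = pixel pitch × altitude / focal length. The sketch below is illustrative only; the camera values (5 µm pixels, 10 mm lens) are hypothetical and not tied to any specific platform:

```python
def ground_sampling_distance(pixel_pitch_um: float,
                             focal_length_mm: float,
                             altitude_m: float) -> float:
    """GSD in cm/pixel: ground distance covered by a single sensor pixel."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3) * 100.0

def waypoint_spacing_m(gsd_cm: float,
                       image_height_px: int,
                       frontal_overlap: float) -> float:
    """Along-track distance between image triggers for a given frontal overlap."""
    footprint_m = gsd_cm / 100.0 * image_height_px  # ground footprint of one image
    return footprint_m * (1.0 - frontal_overlap)

# Hypothetical camera: 5 um pixels, 10 mm lens, flown at 100 m altitude
gsd = ground_sampling_distance(5.0, 10.0, 100.0)  # 5.0 cm/pixel
spacing = waypoint_spacing_m(gsd, 4000, 0.8)      # 40.0 m between triggers
```

Lowering the altitude improves GSD but shrinks each image footprint, which multiplies the number of flight lines and lengthens the mission, so these parameters are traded off against endurance.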

UAS-driven data
UASs are commonly utilized to collect three main types of data: spectral, structural, and atmospheric. Atmospheric data collection involves measuring temperature, humidity, particulate matter, and gases at certain altitudes. UASs acquire spectral data using imaging cameras; RGB, thermal, MS, and hyperspectral cameras have enabled UASs to acquire data at high spectral and spatial resolutions (Hendel and Ross 2020; Jakubowksi et al. 2013; Jha et al. 2019). UASs can also record structural details as point clouds in three-dimensional space, used to take geometrical measurements of trees, shrubs, and man-made structures in wildland. The sections below describe the three types of data collected in wildfire studies.

Atmospheric data
This type of data collection is widely used in active-fire studies for fire monitoring (Aurell et al. 2021; Chunitiphisan et al. 2018; Li et al. 2021; Simms et al. 2021; Wardihani et al. 2018). UAS-mounted sensors look for changes in the atmosphere and provide an alert if there are any signs of fire. Sensors including temperature, humidity, PM, and gas sensors are used to monitor the fire. However, these sensors usually have a limited sensing range and need to be flown within that range to take measurements (Ofelia 2018). Therefore, close monitoring through low-altitude flight may be more appropriate for capturing atmospheric data. However, conducting low-altitude flight missions in a wildfire region may increase the risk of UAS crashes (Aggarwal, Soderlund, and Kumar 2021), making this a less attractive choice for fire detection.
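A minimal sketch of the kind of onboard alerting rule described above is a threshold check over the sensed values. The thresholds below are illustrative assumptions, not calibrated fire-detection criteria:

```python
def fire_alert(temp_c: float, humidity_pct: float, pm25_ugm3: float,
               temp_thr: float = 60.0,
               humidity_thr: float = 20.0,
               pm_thr: float = 150.0) -> bool:
    """Flag a possible fire when temperature and PM2.5 rise while humidity drops.

    The thresholds are placeholders; a real system would calibrate them per
    sensor and fuse several consecutive readings to reject transient spikes.
    """
    return temp_c > temp_thr and humidity_pct < humidity_thr and pm25_ugm3 > pm_thr

fire_alert(80.0, 10.0, 300.0)  # True: hot, dry air with heavy smoke particles
fire_alert(25.0, 55.0, 12.0)   # False: normal ambient conditions
```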

Spectral data
The cameras capture the radiation from the material or object being observed at different wavelengths and frequencies, which enables the analysis of its spectral properties. The intensity of the radiation detected by the camera in each frequency band is recorded as a value for each pixel, resulting in a spectral image. Thermal and infrared (IR) cameras are widely used in fire detection due to their ability to detect escalating temperatures. MS cameras reveal details that are not visible to RGB cameras or the human eye; they use sensors and filters to capture additional spectral bands beyond the visible range, such as the near-IR (NIR), red-edge (RE) and shortwave IR (SWIR) bands. In post- and pre-fire studies, MS cameras are widely used, the most common being the Parrot Sequoia and MicaSense RedEdge cameras. UAS images contain pixel-wise spectral values in multiple channels, with each channel representing a specific spectral band. Several spectral indices (SI) (Narmilan et al. 2022) are used to quantify various physio-chemical and environmental variables based on the pixel values of different spectral bands. These data have been used to determine many valuable wildfire-related metrics, including vegetation health, burn severity, and the rate of recovery after a wildfire. Table 2 provides the details of SI used in recent studies across various stages of wildfire.
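Two widely used members of this index family are the Normalized Difference Vegetation Index (NDVI, vegetation health, typically pre-fire) and the Normalized Burn Ratio (NBR, burn severity, typically post-fire). A minimal per-pixel sketch, assuming surface reflectance values in the 0-1 range:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

def nbr(nir: float, swir: float) -> float:
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir) if (nir + swir) != 0 else 0.0

# Healthy vegetation reflects strongly in NIR and weakly in red:
ndvi(0.6, 0.2)  # 0.5
# Burned surfaces reflect strongly in SWIR, so NBR drops sharply after a fire;
# the difference of pre- and post-fire NBR (dNBR) is a common severity metric.
nbr(0.5, 0.1)
```

In practice these computations are applied band-wise over whole orthomosaics (e.g. with NumPy arrays) rather than pixel by pixel.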

Structural data
The virtual environment formed by 3D point clouds generated from UAS sensing tools presents a valuable dataset for conducting measurements of existing structures in the field of forestry. It is often used in a variety of forestry applications, including fuel load (FL) estimation and vegetation detection. Two methods are commonly used for creating point clouds: Structure from Motion (SfM) photogrammetry and LiDAR scanning. Photogrammetric point clouds require capturing overlapping aerial images of the same point from various locations using a UAS equipped with RGB or MS sensors. LiDAR scanning uses light pulses to measure the distance between the sensor and objects based on time-of-flight (TOF). The LiDAR sensor mounted on a UAS rotates and sweeps the area to be scanned, collecting millions of points that can be used to create a 3D model of the environment (Reutebuch, Andersen, and McGaughey 2005). Each point in the point cloud represents the distance between the LiDAR sensor and the surface of the object at that location. Photogrammetry software provides advanced tools to create 3D models of physical structures present in the forest. These software applications are used to generate several 3D models, including the Digital Terrain Model (DTM), Digital Surface Model (DSM), and Canopy Height Model (CHM), which are important for understanding the topography and vegetation of the forest terrain.
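The relationship between these three models is simple: the CHM is the per-cell difference between the DSM (top of canopy) and the DTM (bare ground). A minimal sketch, assuming the two rasters are already co-registered grids of elevations in metres:

```python
def canopy_height_model(dsm, dtm):
    """CHM = DSM - DTM per cell; small negative residuals are clamped to zero."""
    return [[max(s - t, 0.0) for s, t in zip(s_row, t_row)]
            for s_row, t_row in zip(dsm, dtm)]

# Toy 2x2 rasters (elevations in metres above sea level):
dsm = [[12.0, 14.5],
       [ 3.1,  9.0]]
dtm = [[ 2.0,  2.5],
       [ 3.3,  3.0]]
canopy_height_model(dsm, dtm)
# Cells with vegetation yield tree height; bare-ground cells come out near zero.
```

Real workflows operate on georeferenced rasters (e.g. GeoTIFFs via `rasterio` or GDAL), but the arithmetic is exactly this cell-wise subtraction.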

Data processing
Since data are gathered from various locations, all data types must undergo preprocessing to align with their locations and provide an overall view of the data collection. Atmospheric data are relatively easy to handle, as they can be recorded with time, GPS location, and altitude at each sampling point. However, images taken from UAS often contain overlapping data. In the pre-processing stage of spectral data, images are stitched together using commercial software tools to create orthomosaic maps (Evita et al. 2021). Researchers are working on developing low-power image stitching frameworks to enhance the effectiveness of disaster data gathering (Yim et al. 2018). The use of data fusion techniques combining several types of data generates valuable information in the fields of forestry (Sankey et al. 2021) and vegetation mapping (Rodríguez-Puerta et al. 2020). It involves the integration of aerial images taken at the same location but with different spectral bands and timestamps (Kanand et al. 2020; Lewicki et al.). ML and DL are the most common approaches to understanding patterns in UAS-driven data. These AI-based approaches extract patterns and features for the detection and classification of objects and vegetation types in aerial images (Mukhiddinov, Abdusalomov, and Cho 2022; Yandouzi et al. 2022). ML and DL approaches can be used to segment, classify, and detect fire in dense forest or to quantify vegetation and its FL for potential wildfire (da Costa et al. 2021). Coelho Eugenio et al. (2021) present details of the most common ML algorithms applied in different wildfire applications, including Random Forest (RF), Neural Network (NN), Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR) and K-Nearest Neighbour (k-NN). Many software libraries are now available for DL that make it easier to build and train models, including TensorFlow, PyTorch, and Keras.
In addition, researchers have access to open-source frameworks that facilitate the process of training and building models (Huang et al. 2017;Kinaneva et al. 2019a).
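As an illustration of the simplest of the ML algorithms listed above, a k-NN classifier assigns each pixel the majority label of its k closest training samples in feature space. The sketch below uses hypothetical two-band reflectance features and class labels; real pipelines would rely on a library implementation (e.g. scikit-learn) applied over full orthomosaics:

```python
def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training samples closest to feature vector x."""
    by_dist = sorted(range(len(train_X)),
                     key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = [train_y[i] for i in by_dist[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical (red, NIR) reflectance samples with hand-labelled classes:
X = [(0.80, 0.10), (0.90, 0.20), (0.10, 0.90), (0.20, 0.80)]
y = ["burning", "burning", "vegetation", "vegetation"]
knn_predict(X, y, (0.85, 0.15))  # classified with the nearby "burning" samples
```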
LiDAR processing differs significantly from photogrammetry in terms of data acquisition and processing. LiDAR point clouds often contain noise due to atmospheric conditions and sensor limitations (Su et al. 2023). Preprocessing involves filtering and registering scanned points to create a clean and precise 3D point cloud. Combining point clouds from various sources requires merging them into a common coordinate system (Hendawitharana et al. 2021), thus generating comprehensive 3D models of landscapes and vegetation structures. A typical process for extracting vegetation measurements involves denoising, subsampling, extracting ground points, normalizing, and segmenting vegetation (Reilly et al. 2021; Shin et al. 2018). These refined vegetation point clouds are then utilized to derive measurements such as Diameter at Breast Height (DBH), various height percentiles, and canopy cover. These metrics are employed in statistical analysis or machine learning algorithms for diverse applications, including fuel load estimation (da Costa et al. 2021) and severity assessment (Viedma, Almeida, and Moreno 2020). The 2D projection of a point cloud-generated CHM, containing variations in canopy height, can also undergo photogrammetry-based processing for vegetation detection (Fernández-Álvarez, Armesto, and Picos 2019).
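Once the cloud is height-normalized, the metrics named above reduce to simple statistics over per-point heights. A minimal sketch on a toy list of normalized heights in metres (the 2 m canopy threshold is a common convention, used here as an assumption):

```python
def height_percentile(heights, q):
    """Linear-interpolated q-th percentile of normalized point heights."""
    hs = sorted(heights)
    pos = (len(hs) - 1) * q / 100.0
    lo = int(pos)
    hi = min(lo + 1, len(hs) - 1)
    return hs[lo] + (hs[hi] - hs[lo]) * (pos - lo)

def canopy_cover(heights, threshold_m=2.0):
    """Fraction of returns above the canopy threshold."""
    return sum(1 for h in heights if h > threshold_m) / len(heights)

heights = [0.2, 1.5, 3.0, 8.0, 12.0]  # toy normalized point heights (m)
height_percentile(heights, 95)         # 11.2 -> proxy for dominant tree height
canopy_cover(heights)                  # 0.6  -> 60% of returns above 2 m
```

These per-plot numbers are exactly the kind of predictors fed into the regression and machine learning models used for fuel load and severity estimation.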

UAS remote sensing in wildfire stages
In this review, UAS applications are divided based on the three stages of wildfire (Figure 4). In the pre-fire stage, UASs are used to assess wildfire risk by detecting fire-prone areas and measuring the FLs of wildlands. Active-stage applications involve employing UAS for fire detection, fire propagation prediction, and confirming fire alerts produced by various wireless sensor networks (WSN). In the final, post-fire stage, UAS are employed to assess the damage and the recovery of vegetation following the fire. Figure 5 shows the UAS-based RS studies in the three stages of wildfire.

Pre-fire
Topographic surveys over large tracts of wildland provide terrain attributes such as aspect, slope, and elevation, together with spectral information, which are essential for the assessment of wildfire risk (Marić, Šiljeg, and Domazetović 2021). Several pre-fire studies have been conducted in recent years to estimate the amount of wildland FL and to identify potential risk objects that may contribute to wildfires.

Fuel load estimation
Forest structure can vary significantly between forest types, such as tropical, temperate, and boreal rainforests, Mediterranean shrubland, and savanna. Diverse classification systems for multiple fuel types have been introduced on a global scale (Abdollahi and Yebra 2023). However, these systems come with limitations; they are often site-specific, which means that each fuel type classification model is only applicable in regions with similar geographic characteristics and cannot be extrapolated to other areas (Fogarty et al. 1998). Generally, forest fuel can be classified into five layers in tropical and temperate rainforests: the emergent layer, canopy layer, understory layer, shrub layer, and forest floor. The emergent and canopy layers typically consist of taller vegetation, while the others include smaller plants growing under the canopy (Nyandwi 2008). The FL in forestry is the quantity of biomass available as fuel for a potential wildfire. Advances in UAS-LiDAR and photogrammetric systems continue to enhance the ability to measure structural attributes of vegetation, including canopy height, canopy cover, and tree density, for accurate estimation of overstory and understory FL (Table 3). However, in recent studies, the usefulness of RS for characterizing forest surface fuel remains to be adequately demonstrated.
The total volume of forest vegetation can be estimated by taking the difference between the DSM and the DTM (Carvajal-Ramírez et al. 2019; Singh and Kushwaha 2021). Objects and vegetation must be removed from the DSM by additional processing to generate the DTM. Carvajal-Ramírez et al. (2019) evaluated four methods of obtaining a DTM from a point cloud generated in a Mediterranean forest. It is widespread practice to extract ground points and normalize the DTM to a flat or planar surface before taking tree measurements (Reilly et al. 2021; Sagang et al. 2022; Shin et al. 2018). In this way, slope and terrain variations do not influence the tree measurements, which instead reflect only the actual height of the trees. Point cloud-driven forest inventory measurements, along with field data collected from sample plots, have been used to estimate biomass using biomass allometric equations (BAE) (Weiser et al. 2022). LiDAR-based point cloud estimation is more accurate than SfM-based biomass estimation (Shin et al. 2018). In dense forests, FLs are often estimated by dividing the forest into cells or plots. Da Costa et al. (2021) assessed the capability of high-density LiDAR to estimate total above-ground biomass in the Brazilian Savanna (forest, savanna, and grassland) at a 30 m plot scale using linear regression. Results showed acceptable RMSE (25.99%) in forest, but errors were higher for savannas and grasslands (around 44%). Simulated GEDI data from the same LiDAR sensor at the same plot scale showed higher accuracy (R² = 0.88) for woody fuel (DBH > 10 cm) but lower accuracy (R² = 0.17) for surface and herbaceous components using a random forest algorithm. Obtaining fuel information in sparse vegetation ecosystems like savannas and grasslands is more challenging than in dense vegetation. However, one study achieved comparatively higher accuracy in understory measurements (R² = 0.579) by slicing the point cloud to a range of 0-6 m.
They rasterized the data to a 0.5 m resolution and used a simple linear regression model with understory metrics at a 1 m plot scale. It is evident that a finer plot scale has the potential to enhance the accuracy of biomass estimations in understory vegetation.
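As a minimal sketch of this workflow, the snippet below derives a canopy height model (CHM) as the DSM − DTM difference and aggregates it into plot-scale height metrics; the toy grids, the 2 m plot size, and the clipping of negative residuals are illustrative assumptions, not the pipeline of any cited study.

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Compute a canopy height model as the difference DSM - DTM.

    dsm, dtm: 2-D elevation arrays on the same grid (m). Small negative
    differences (noise, interpolation artifacts) are clipped to zero.
    """
    return np.clip(dsm - dtm, 0.0, None)

def plot_metric(chm, plot_size):
    """Aggregate a CHM into square plots; return the mean height per plot."""
    rows = chm.shape[0] // plot_size * plot_size
    cols = chm.shape[1] // plot_size * plot_size
    blocks = chm[:rows, :cols].reshape(rows // plot_size, plot_size,
                                       cols // plot_size, plot_size)
    return blocks.mean(axis=(1, 3))

# Toy example: a 4 x 4 m scene at 1 m resolution, aggregated to 2 m plots.
dsm = np.array([[12., 14., 10., 10.],
                [13., 15., 10., 10.],
                [11., 11., 18., 20.],
                [11., 11., 19., 21.]])
dtm = np.full((4, 4), 10.0)       # flat terrain reference
chm = canopy_height_model(dsm, dtm)
means = plot_metric(chm, 2)       # 2 x 2 grid of mean canopy heights
```

In a real pipeline, such plot-level metrics would be regressed against field-measured biomass (as in the linear models described above); the regression step is omitted here.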

Vegetation detection and classification
UASs have been employed for detecting trees and shrubs to estimate the density of vegetation that contributes to the initiation of forest wildfires. Accurate identification and classification of trees and shrubs, and of their health status, in fire-prone areas are crucial for proper generation of wildfire models (Carbonell-Rivera et al. 2022; Cessna et al. 2021). Using individual tree identification to estimate vegetation density for a specific type can enhance the value of vegetation-specific fuel load estimations. Table 4 provides a list of recent studies focusing on vegetation targets that pose a risk of fire.
Studies have used a variety of ML and DL algorithms to detect and classify vegetation (Bennett et al. 2022) and its health status (Cessna et al. 2021). DL methods applied to higher spatial resolution RGB images (2-7 cm GSD) classified vegetation with considerable accuracy (F1 score > 71%) in boreal forest and Mediterranean shrubland (Bennett et al. 2022; Trenčanová, Proença, and Bernardino 2022). MS images, combined with various ML and DL methods, classified tree and shrub species with similar results in Mediterranean forest (Carbonell-Rivera et al. 2022). Accuracy of detection and classification may vary based on the specific site, learning parameters, and the quality and quantity of available data. However, the advantage of MS cameras lies in their ability to provide additional spectral information, which can enhance prediction accuracy even at lower resolutions.
While structural data provide valuable information on the vertical profile of trees, most studies have used 2D projections of point clouds, often represented as CHM models, for tree classification. Point cloud-based canopy models, which leverage canopy variations, have been utilized for vegetation identification among Mediterranean vegetation (Fernández-Álvarez, Armesto, and Picos 2019). However, they demonstrated lower accuracy (62-64%) when compared to studies that utilized high-resolution aerial images.

Risk assessment assistance in WUI
UASs have been used to assess the quality of the surrounding environment for potential wildfire hazards. Fernández-Álvarez, Armesto, and Picos (2019) established biomass management strips, according to the geometric requirements specified by the Galician wildfire prevention law, through UAS-LiDAR-driven vegetation measurements. The vegetation pruning height (distance from the ground to the first branch), an important variable in biomass strip mapping, was estimated proportionally from tree height. Tree height is estimated from the CHM, which carries no information under the canopy. The authors found that this assumption led to low performance (51%) in a site with young plants, which typically have their first branch lower than mature trees. Hendawitharana et al. (2021) combined ground-based LiDAR and UAS-LiDAR point cloud data to reconstruct a 3D model of a building at a WUI. They then applied Computational Fluid Dynamics (CFD) heat transfer models to the building structures to provide site-specific solutions for bushfire scenarios. This shows the potential of UAS-LiDAR for reconstructing Computer-Aided Design (CAD) models of surrounding objects for taking accurate measurements in close-range RS. However, in this study, the surrounding vegetation was not considered, and the fire was simulated as a grass fire. To enhance the reliability of risk assessments, a more precise estimation of vegetation structure is necessary, and UASs have the potential to fulfil this role.

Active-fire
UASs combined with advanced sensing have been employed at the active-fire stage in a wide range of applications. UAS advancements in fire detection, fire spread analysis, and support for WSNs are highly discussed and trending topics in the literature, aiming to build reliable fire monitoring systems. This section provides a comprehensive overview of different active-fire applications, highlighting the diverse ways in which UAS technology can be utilized in wildfire management and prevention during the fire.

Fire detection
Studies have made use of a diverse range of sensing payloads to detect wildfire, mainly using two types of data: atmospheric and spectral. Sensing payloads measure variables such as temperature, humidity, dust, volatile organic compounds, and flame- and smoke-like colours. Cameras provide spectral information at different spatial scales. UASs as mobile sensor nodes are primarily employed to obtain atmospheric data, while vision-based detection utilizes spectral data for fire detection. The next sub-sections discuss these topics.

Mobile sensor node.
Data from atmospheric sensors typically consist of numerical measurements of environmental variables, such as temperature, humidity, or gas concentration, which can be relatively simple to process and analyse using statistical techniques and AI. UAS sensors can monitor environmental conditions and process data on board low-performance computers without a wireless connection to a central control system. Several flying topologies have been considered for the effective implementation of mobile sensor nodes for monitoring wildfires in human settlement and fire-prone areas (Chowdary et al. 2022). Table 5 outlines research studies conducted for wildfire detection solely using onboard sensors. However, some studies also showed that detections can be inaccurate due to external disturbances. A non-contact IR sensor mounted to detect ground surface temperature was able to detect 9 out of 10 hotspots over multiple flights (Wardihani et al. 2018). The incomplete detection was attributed to the impact of the data transmission rate on detection and the drone's deviation from its intended flight path caused by wind. Chunitiphisan et al. (2018) evaluated a Plantower PMS 3003 PM sensor against an instrument approved by the pollution control department. The study showed low reliability of the PM measurement (R² = 0.5-0.6), and the measured value changed with the direction of measurement. A windshield or wind speed-based calibration is needed to increase the accuracy. One potential approach is to combine sensor-based fire detection systems with cameras. IR sensor readings improved the accuracy of vision-based flame detection in areas with high smoke levels (Kasyap et al. 2022).
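As a hedged sketch of such onboard, sensor-only detection, the rule below raises an alert when at least two environmental variables cross their thresholds, damping single-sensor false positives; the chosen variables, threshold values, and two-vote rule are illustrative assumptions, not the logic of any cited system.

```python
def fire_alert(temp_c, humidity_pct, gas_ppm,
               temp_thresh=60.0, humidity_thresh=20.0, gas_thresh=800.0):
    """Toy onboard rule for a mobile sensor node: alert only when at least
    two of the three variables are anomalous, so a single noisy reading
    (e.g. a sun-heated surface) does not trigger a false alarm.
    Threshold values are illustrative placeholders, not calibrated."""
    votes = [temp_c > temp_thresh,      # unusually hot air
             humidity_pct < humidity_thresh,  # unusually dry air
             gas_ppm > gas_thresh]      # elevated combustion gases
    return sum(votes) >= 2
```

For example, `fire_alert(75, 12, 950)` raises an alert, while `fire_alert(75, 40, 400)` does not, since only one variable is anomalous in the latter case.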

Vision-based detection. UAS enables the acquisition of high-quality images
with a variety of cameras, offering flexibility in terms of spatial and temporal scales. This capability provides visual verification of minor fires that might not activate sensors, and it also complements the coarser resolution of satellites, empowering emergency responders to address critical situations swiftly and accurately. This capability is particularly valuable in complex environments, where identifying the exact location of a fire is challenging. Studies have evaluated cameras for early wildfire detection considering cost efficiency, thermal accuracy, and spatial accuracy, and several types of cameras have been suggested for different flight altitudes and speeds (Hendel and Ross 2020). A variety of advanced algorithms are used at the pre-processing stage to restore UAS images that have been degraded by environmental disturbances, such as wind and vibration (Qiao, Zhang, and Qu 2020). The images collected from cameras have been processed using a variety of techniques to detect fires. Smoke detection is considered more appropriate for early detection since fires in dense forests may not be immediately visible, particularly in their initial stages (Rahman et al. 2021; Zhan et al. 2021). Conversely, flame detection may be more effective in detecting larger fires (Chen et al. 2022; Fouda et al. 2022; Wang et al. 2022). However, recent studies have also focused on simultaneously finding both smoke and flame to overcome the limitations associated with targeting only one indicator (Barmpoutis et al. 2020). Wildfire detection using computer vision typically involves two approaches that may be combined: an image can be analysed using rule-based processing, artificial intelligence, or a combination of the two to identify signs of wildfire.

Rule-based detection. Rule-based processing involves converting colour spaces and creating a set of predefined rules and thresholds that define what constitutes a fire.
For example, if an image contains a certain combination of colours or patterns indicative of flames or smoke, a rule-based system can confirm the fire. These systems are relatively simple and straightforward, and they can be implemented quickly with minimal computational resources. Applying a threshold-based filter on the pixel values of thermal and IR images is a simple method to confirm a fire (Nithesh et al. 2022; Sherstjuk, Zharikova, and Sokol 2018; Yang et al. 2019). Due to the flame temperature, IR and thermal images show pixels with higher brightness when fire is present. However, fire detection cannot be based solely on threshold segmentation using temperature details. High-temperature objects and living beings in the wild can trigger false alarms due to low thresholds and changing ambient conditions. Amanatiadis et al. (2018) used a size filter based on GSD to consider human- and fire-sized objects after segmenting the high-brightness pixels with a blob detection algorithm and Gaussian blurring on thermal images. Classification rules based on RGB images have been applied in RGB and other colour space systems (Anh et al. 2022; Dang-Ngoc and Nguyen-Trung 2019; Sharma, Singh, and Kumar 2020). These rules align the pixel values of the image channels of colour spaces with characteristics typical of fire flames. Confirmation of fire is obtained if the pixel arrangements satisfy certain conditions. In addition to colour-based features, texture-based (Feng et al. 2018; Sherstjuk, Zharikova, and Sokol 2018) and wavelet energy-based (Feng et al. 2018) classification methods have also been used to identify fire patterns and surface characteristics of fire. For example, Sherstjuk et al. (2018) used colour-diffusion evaluation in addition to a texture-based classification method to differentiate between smoke and non-smoke areas. However, static rules may not be able to handle complex scenarios where the rules and thresholds need frequent updates.
For example, the system's performance can be impacted by variations in lighting conditions, which occur more often during UAS missions. The system may need to adjust the rules and thresholds to maintain the accuracy of the detection. Updating rules in changing environments can significantly improve the efficacy and reliability of fire detection systems over static rules. One study, for instance, employed a smoke detection and segmentation scheme based on fuzzy logic as well as an extended Kalman filter (EKF)-based intelligent regulation rule, updating the fuzzy rules via the EKF based on the colour differences between the RGB components and the HSI intensity component.
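A minimal sketch of such a colour-rule detector is shown below; the inequalities (red channel dominant and above a brightness threshold) follow the commonly cited R ≥ G ≥ B pattern for flame pixels, while the threshold values and the toy image are illustrative assumptions that would need scene-specific tuning.

```python
import numpy as np

def flame_mask(rgb, r_min=180):
    """Classic RGB rule for flame-like pixels: the red channel exceeds an
    absolute brightness threshold and the channels are ordered R >= G >= B,
    matching the warm colour signature of flames."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (r > r_min) & (r >= g) & (g >= b)

# Toy 2x2 image: flame-like orange, sky blue, grey, and a dark pixel.
img = np.array([[[250, 160, 40], [100, 150, 250]],
                [[120, 120, 120], [30, 20, 10]]], dtype=np.uint8)
mask = flame_mask(img)   # only the orange pixel satisfies the rule
```

In practice such masks are post-processed (morphology, blob size filters, as in the GSD-based size filter described above) to reject small false positives.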

Artificial intelligence-based detection. Detecting wildfires in computer vision
with artificial intelligence generally involves two primary approaches: ML and DL. Fire detection systems use algorithms that can learn from data and improve their performance over time. These algorithms can handle complex scenarios and can adapt to changes in the environment. However, they require a large amount of training data and computational resources, and they may not always yield accurate predictions. In such cases, manual inspection remains essential to ensure accuracy and reliability in the decision-making process. In terms of performance, DL models can often achieve higher accuracy than ML models, particularly in complex fire detection tasks (Khan et al. 2022). However, ML demands lower computational power than DL due to less complex calculations. Studies tend to favour ML models for real-time implementation due to their lower complexity, but this choice often results in a trade-off with accuracy (Khan et al. 2022).
ML techniques aim to learn the feature patterns of fire images and use statistical techniques, training models such as DT, LR, and SVM, for making predictions. Yang et al. (2022) created a one-class flame detection model that relies only on fire samples and does not require non-fire data during training. It demonstrated better performance in fire detection, both in terms of speed and the amount of supervision required for training, compared to SVM. However, detecting smoke presents more significant challenges than flame detection, mainly due to complex scenarios like low-altitude cloud cover and haze (Mukhiddinov, Abdusalomov, and Cho 2022). One study trained two four-layer fully connected backpropagation NNs on IR images using static features, such as colour and shape, and dynamic features, including the changing of edges and shapes over time. The first network was used for recognizing smoke, and the second for recognizing flames. This approach performed better for flame detection than for smoke. The Local Binary Pattern (LBP) feature appears to be a promising predictor for smoke detection. Chen et al. (2019) used an SVM classifier with LBP feature extraction for smoke detection in RGB images. They achieved 99.81% accuracy on test images from the internet but suspected possible bias due to sample similarity between training and validation sets. Hossain et al. (2019) used the grey level co-occurrence matrix (GLCM) texture feature extraction technique and developed a single NN model to detect both flame and smoke. Their approach achieved higher accuracy in flame detection than smoke. Then, Hossain et al. (2020) added LBP into the NN, significantly improving smoke detection (recall: 88%) compared to flame (recall: 80%). Their approach outperformed other classifiers and detectors (SVM, Bayesian classifiers, RF, and YOLOv3), making it a potential real-time solution for UAS-based wildfire detection.
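To make the LBP feature concrete, the sketch below computes a basic 8-neighbour LBP code image and its normalized 256-bin histogram in plain NumPy; in the pipelines described above, such histograms would then feed an SVM or NN classifier, which is omitted here. This is an illustrative minimal variant, not the exact descriptor of any cited study.

```python
import numpy as np

def lbp_histogram(gray):
    """Minimal 8-neighbour Local Binary Pattern: each interior pixel is
    encoded by thresholding its 8 neighbours against its own value (one
    bit per neighbour), and the image is summarized as a normalized
    256-bin histogram usable as a texture feature."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                       # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy: g.shape[0] - 1 + dy,
               1 + dx: g.shape[1] - 1 + dx]  # neighbours shifted by (dy, dx)
        code |= (nb >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# A uniform (textureless) patch yields the all-ones code 255 everywhere.
flat = lbp_histogram(np.full((5, 5), 7.0))
```

Smoke regions tend to produce LBP histograms distinct from vegetation or sky texture, which is what makes the feature discriminative in the studies cited above.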
Several advanced DL models have been used in recent studies and have achieved higher accuracy than ML models in detecting wildfire (Khan et al. 2022). ML algorithms need manual feature extraction, while DL models learn features directly from raw images. In the literature, authors collected various datasets to train their DL models (Certin 2015; CVPR 2012; Jeong et al. 2019; Shamsoshoara et al. 2021; SKLF 2004; UCSD 2020; UNISA 2015). These datasets vary in size, composition, and quality, but they all provide valuable resources for training and evaluating DL models for fire detection. Table 6 presents a summary of recent DL algorithms employed for wildfire detection. Because studies used different datasets containing different background variations, it is not appropriate to compare the accuracy of algorithms across studies. However, studies detecting both flame and smoke that included UAS imagery with fog and low cloud reported low performance in smoke detection, primarily attributable to false-positive occurrences (Barmpoutis et al. 2020). Several challenges arise when implementing DL algorithms on edge devices for real-time applications due to their limited computing resources and memory compared to cloud servers, and therefore the models need to run at an acceptable frame rate (Osco et al. 2021). Fouda et al. (2022) proposed a method capable of being implemented on edge computing devices which dynamically switches from a simple ML-based model to a DL-based CNN model based on the complexity of the captured image. Lightweight DL models such as NanoDet and YOLOv3-tiny (Jiao et al. 2019) are used in studies due to their lower computational resource requirements. Studies also made use of hardware accelerators that can be integrated into UAS onboard computers to improve FPS (frames per second) (Xian and Nugroho 2022).
Further, DL methods have been modified to reduce parameter size and increase processing speed for implementation on edge computing devices. This involves eliminating redundant channels (pruning), compressing weight parameters (Xiong et al. 2021), and weight sharing. Table 7 provides information on model performance on embedded onboard devices.

Fire propagation
The UAS-based RS utilized for fire propagation involves obtaining two key pieces of information: the location of the fire front and the wind speed above the fire. Atmospheric, spectral, and structural data have been collected to provide these two pieces of information, which are important for forecasting the direction in which the fire is likely to spread. UASs equipped with sensors are deployed to gather information about the wind speed (Lü et al. 2019), and the images captured by UAS cameras and temperature sensors have been analysed using different algorithms to detect the fire front's location and movement (De Vivo, Battipede, and Johnson 2021; El Tin, Sharf, and Nahon 2022; Islam et al. 2019; Lin, Liu, and Wotton 2019; Sherstjuk and Zharikova 2019b). In addition to collecting atmospheric and spectral data, UAS images are used to reconstruct 3D models of fire fronts, which provide geometrical characteristics of the fire, such as fire base width, height, and surface, for understanding fire behaviour during propagation (Ciullo et al. 2018; Sherstjuk, Zharikova, and Dorovskaja 2020). UAS provides excellent temporal resolution with real-time monitoring for measuring wind speed over the fire. Wind speed can be measured directly by an anemometer mounted on the UAS. Xing et al. (2019) and Lü et al. (2019) estimated the wind speed from the hovering inclination angle of the UAS. Xing et al. (2019) designed a Kalman filter for wind speed estimation and validated it in a simulation environment, while Lü et al. (2019) carried out a field experiment. However, they noted that the UAS's short operational duration limited its effectiveness in monitoring wildfires.
The timely detection of the fire front is critical for effectively tracking ongoing fires. A common method involves using fire colour or temperature thresholding and edge detection in images. De Vivo, Battipede, and Johnson (2021) proposed a time-efficient, mono-dimensional, noise-resistant edge detection algorithm that outperformed popular methods such as Canny and the contour method. However, Wang, Huang, et al. (2021) noted that edge detection on wildfires is only suitable for small-scale fires and can be disrupted by factors like smoke, fire tornadoes, and wind. El Tin, Sharf, and Nahon (2022) highlighted a limitation of using UAS for wildfire detection: flying at low altitudes exposes the platform to extreme heat, which dissipates rapidly with altitude, a particular concern for FW models. Moreover, because their cruising airspeed is higher than the fire spread rate, FW models can transition quickly from the fire region to the no-fire region. Conversely, flying at higher altitudes may solve this issue, but it reduces spatial resolution and can affect detection accuracy. In addition to wind speed and fire boundary, fire propagation relies on many other factors, such as fuel, topography, and weather. Even merging fire fronts can impact the rate of spread (Filkov, Cirulis, and Penman 2019). Fire propagation analysis has been supported by several fire spread simulation software tools, which consider the arrangement of the physical features of wildland and weather conditions together with UAS-driven data. Several studies have used wildfire simulation software such as the WRF-Fire simulator (El Tin, Sharf, and Nahon 2022), DEVS-FIRE (Hu, Bent, and Sun 2019; Jia et al. 2022; Towhidul Islam and Hu 2021), and FARSITE (Lin, Liu, and Wotton 2019; Shrestha, La, and Yoon 2022; Zhang, Zheng, and Yu 2019). These incorporate atmospheric conditions, terrain slope, and fuel characteristics to provide information about how a wildfire is likely to behave and spread over time.
It has been noted that some authors are reluctant to use such software, as it may not provide the same level of realism as physical experiments. De Vivo et al. (2018) noted that these wildfire spread simulators can produce inaccurate simulations over the long term because they are based on empirical models developed and tuned on laboratory and historical data. They proposed real-time, UAS-based image segmentation for fire front tracking based on Partial Differential Equations (PDE) and validated it with a prescribed fire. Islam et al. (2019) proposed a probabilistic model of fire propagation that takes fuel and wind speed into account. Apart from visual images, Lin et al. (2019) proposed a method using a convergent Kalman filter for estimating wildfire propagation with UAS-based temperature measurements. Real-time data can also be integrated into a fire spread model using GIS tools within a straightforward gridded GIS space (Mangiameli, Mussumeci, and Cappello 2021). This is a valuable role for a UAS, as it involves providing live data to larger models and assisting in estimating fire propagation, which is not easily achievable through satellite RS methods.
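As a toy illustration of threshold-based front extraction, the snippet below marks grid cells that exceed a temperature threshold and border at least one cooler cell; the threshold value and grid are invented for demonstration and are far simpler than the edge detection and PDE-based trackers discussed above.

```python
import numpy as np

def fire_front(temp, thresh=500.0):
    """Locate the fire front in a gridded temperature field: cells at or
    above the threshold that touch at least one sub-threshold 4-neighbour.
    The 500 degC threshold is illustrative; a real front would require
    radiometric calibration of the thermal imagery."""
    hot = temp >= thresh
    padded = np.pad(hot, 1, constant_values=False)
    # A hot cell lies on the front if any 4-neighbour is not hot.
    cool_nb = (~padded[:-2, 1:-1]) | (~padded[2:, 1:-1]) | \
              (~padded[1:-1, :-2]) | (~padded[1:-1, 2:])
    return hot & cool_nb

# Toy 5x5 field with a 3x3 burning block in the middle.
temp = np.zeros((5, 5))
temp[1:4, 1:4] = 600.0
front = fire_front(temp)   # the 8 perimeter cells of the block
```

Tracking the front mask over successive frames yields the displacement of the fire boundary, which is the quantity the propagation models above consume.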

Wireless sensor network and UAS collaboration
Sensor nodes deployed in dense forests are extremely effective when supported by UAS. A WSN consists of battery-powered, low-cost devices known as 'sensor nodes' that are distributed over a geographical area to monitor the environment. Sensor nodes transmit data wirelessly to a central location or to other nodes in the network by sensing and collecting data from the environment. WSNs are strategically placed in areas that are susceptible to fires and operated continuously using harvested energy, usually through solar panels. UASs have been used to support WSNs in remote areas in different ways. For instance, a UAS can collect data from nodes situated in areas with limited communication connectivity, such as mountainous forests, where poor line of sight frequently hinders communication (Bharany et al. 2022; Li 2019; Zhang, Hu, and Yan 2021). The UASs fly over the forest and use wireless transmission modules to retrieve data from the sensor nodes. UASs can also provide early warning and quantification beyond fire towers and confirm the occurrence and extent of a fire (Al-Kaff et al. 2020; Sharma et al. 2019; Sharma, Singh, and Kumar 2020). In case of a fire alert induced by the sensor nodes, UASs are used to physically visit the spot and verify the fire events using a live first-person view (FPV) stream. The dense forest environment can limit the communication range, which can make it challenging to collect data in real time and transmit it directly to the base stations. Cell networks, radio data towers, or Iridium networks may be unavailable in remote areas or expensive in some cases; as such, some studies focus on improving video streaming quality for better wildfire surveillance. Nihei et al. (2022) proposed a multipath video streaming method using two mobile operators simultaneously and minimized streaming delays.
Improvements in UAS wireless communications have allowed forest management officials to view clear, bird's-eye footage of fires (Nihei et al. 2022).

Post-fire
It is important to assess the damage caused by wildfires in forests and wildland-urban interfaces to understand how forests react to wildfires under changing environmental conditions. In post-fire studies, UASs have proven to be useful tools for facilitating research in this field. Besides fire control and prevention measures, post-fire measures are increasingly in demand. The accuracy of this assessment is particularly important for determining the extent of the damage caused by fires and for reforestation efforts, as well as for mitigating the impact of future fires through the implementation of control measures. UASs can offer the flexibility of multitemporal mapping at high spatial resolutions after the fire has subsided. Their applications include the estimation of vegetation standing after a wildfire, the assessment of fire severity, and the delineation of the burned area perimeter.

Burned area estimation
An accurate assessment of the fire-affected area's extent is essential for effective planning, management, and post-fire rehabilitation efforts. Both UAS-based high-resolution MS and RGB images have been used in burned area detection. AI-based algorithms utilizing RGB images, and SI derived from both RGB and MS camera products, have been applied to segment the burned area in the affected region. Tran et al. (2020) found that a UNet-based DL method showed poor performance in estimation (sensitivity score: 0.2-0.4), likely due to a limited dataset of only 43 images, leading to unreliable results. Bo et al. (2022) introduced a salient object detection (SOD) model named BASNet, which demonstrated superior performance compared to state-of-the-art segmentation models: BASNet achieved an F1 score of 63.5%, surpassing U-Net (35.51%), PSPNet (57.03%), and DANet (53.92%). In this study, a large dataset from Chongli District and Andong City was used to train the model. However, the model had some failures with complex distractions, especially in cases involving images visually similar to burned land. Meanwhile, the application of SI derived from aerial images, especially the NDVI, which is crucial in RS for assessing vegetation health, effectively demonstrated transitions between different severity classes, even over a short extent (Lazzeri et al. 2021; Talucci et al. 2020). A qualitative assessment of the results appears acceptable when compared to AI algorithms based on RGB images. A summary of recent studies targeting burned area estimation is presented in Table 8.
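The index-based route above can be sketched in a few lines: NDVI is thresholded to a burned mask and evaluated against a reference mask with the pixel-wise F1 score used by the studies cited; the reflectance values and the zero NDVI threshold are illustrative assumptions.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - Red) / (NIR + Red): healthy vegetation scores high,
    burned or bare surfaces low."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def f1_score(pred, truth):
    """Pixel-wise F1 between predicted and reference burned-area masks."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return 2 * tp / (2 * tp + fp + fn)

# Toy scene: left half burned (low NIR), right half vegetated.
nir = np.array([[0.1, 0.1, 0.6, 0.6]] * 2)
red = np.array([[0.2, 0.2, 0.1, 0.1]] * 2)
burned_pred = ndvi(nir, red) < 0.0
burned_true = np.array([[True, True, False, False]] * 2)
score = f1_score(burned_pred, burned_true)
```

On real imagery the threshold would be chosen per scene (or replaced by a trained classifier), since soil, shadow, and water can also depress NDVI.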

Severity assessment
In the RS community, fire severity is commonly defined as the loss of aboveground and belowground biomass (Keeley 2009). Recent studies have categorized severity using UAS-derived spectral and structural data into several severity classes considering changes in standing vegetation (Shin et al. 2019; Viedma, Almeida, and Moreno 2020; Woo et al. 2021), soil characteristics (Beltrán-Marcos et al. 2021), and ash characteristics (Brook et al. 2022). Table 9 outlines the studies targeting severity assessments. Several methods, including ML algorithms, are being applied for analysing spectral and structural information.
UAS plays a valuable role in prescribed fire studies, providing essential multitemporal pre- and post-fire data collection at precise timing and offering high spatial resolution. This contribution has proven to be of significant support in severity assessment research. Pérez-Rodríguez et al. (2019) analysed the relationship between six SI and a field-based burn severity assessment of a prescribed fire using the analysis of variance (ANOVA) method. The authors demonstrated that it is possible to statistically distinguish three severity levels of vegetation burns and two severity levels of soil burns using SI in Mediterranean shrubland. The most common SI used for UAS-based severity assessment is the NDVI. In the study of Carvajal-Ramírez et al. (2019), a comparison of SI led to the conclusion that NDVI serves as the most effective predictor for severity assessment. However, most satellite-based RS studies used the Relativized Burn Ratio (RBR) for the assessment (Qarallah et al. 2021; Viedma, Almeida, and Moreno 2020). Research has shown that incorporating the SWIR band in SI enhances the detection of vegetation biomass, surpassing NDVI in biomass estimation (Koppe et al. 2012). Satellite-based studies use the RBR, calculated from the SWIR and NIR bands, even when the NDVI is available, because of the superior performance of SWIR in capturing burn severity information (De Simone et al. 2020). However, SWIR-range sensors have been infrequently employed with UAS due to their high cost and weight (Jenal et al. 2019).
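For reference, the indices mentioned above can be written in a few lines; the formulas follow the standard definitions in the fire RS literature (NBR from NIR and SWIR, and RBR = dNBR / (NBR_pre + 1.001)), while the reflectance values in the example are invented for illustration.

```python
def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir)

def rbr(nbr_pre, nbr_post):
    """Relativized Burn Ratio: the pre/post NBR change (dNBR) relativized
    by pre-fire NBR, so sparsely and densely vegetated sites yield
    comparable severity values. The 1.001 offset avoids division by zero."""
    dnbr = nbr_pre - nbr_post
    return dnbr / (nbr_pre + 1.001)

# Illustrative reflectances: green vegetation before the fire, char after.
pre = nbr(nir=0.60, swir=0.20)    # 0.5
post = nbr(nir=0.20, swir=0.50)   # about -0.43
severity = rbr(pre, post)         # about 0.62, a high-severity value
```

Higher RBR indicates greater change and hence greater burn severity; class breaks are usually calibrated against field-based composite burn indices.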
UASs have been combined with satellite images to estimate severity by providing structural data with higher spatial resolution (Rossi, Fritz, and Becker 2018;Viedma, Almeida, and Moreno 2020). The ability of satellites to penetrate the canopy down to the ground is limited and varies depending on the severity of the fire. Rossi, Fritz, and Becker (2018) created UAS-SfM-based CHM with 50 cm GSD for adjusted canopy cover index (ACCI)-based severity assessment. However, the threshold-based height classifications led to underestimated tree heights with an RMSE ranging from 2.8 to 8.3 m. In another study, Viedma, Almeida, and Moreno (2020) collected UAS-LiDAR point cloud data with 10 cm vertical accuracy and 2 cm horizontal accuracy. Canopy modelling from photogrammetry is a complex process, influenced by a wide range of input variables (Denter et al. 2022). When compared to LiDAR, SfM-based canopy reconstruction exhibits poorer performance in tree height modelling (Winsen and Hamilton 2023). However, when considering the payload and cost, LiDAR is very limited in scale and extent. Additionally, it takes a long time to process and interpret the data.
UASs have also been deployed to assess fire severity based on changes in soils. Beltrán-Marcos et al. (2021) assessed soil burn severity one month after the fire using field-based Composite Burn Soil Index (CBSI) measurements. The authors used a pool of five individual visual indicators (ash depth, ash cover, fine debris cover, coarse debris cover, and unstructured soil depth) for easy interpretation. In their study, the NDWI showed the best performance in modelling CBSI out of six indices. However, their work did not examine the indices both prior to and after rainfall and considered mostly a single type of soil. In 2023, the same authors used a Gram-Schmidt image sharpening technique to fuse the same UAS MS images with Sentinel-2 satellite data (10-20 m GSD), overcoming limited UAS spectral resolution and generating improved spectral images (Beltrán-Marcos et al. 2023). They compared the NDVI and NDWI using SVM model estimation accuracy, which showed higher performance on the fused images compared to using satellite and UAS images separately. Additionally, they investigated the estimation of two soil properties, moisture content (SMC) and organic carbon (SOC), which exhibited notable differences with severity. Fusing UAS and Sentinel-2 images improved SOC estimation accuracy but resulted in low performance for SMC. The authors suspected this may be due to the specific distribution of soil field samples collected within the limited data acquisition period after the fire. In addition to spectral analysis, structure-based assessment has also been conducted in soil observations for crack detection and volume estimation. A combination of UAS-SfM and terrestrial LiDAR point clouds was used for crack and scarp detection (Deligiannakis et al. 2021). TLS achieved good results when the line of sight was perpendicular to the slope.
However, crack detection using DSMs derived from UAS-SfM showed better results than terrestrial LiDAR scanning at most study sites. Dense vegetation and the distance from the examined slope led to a low-resolution point cloud, making the TLS unsuitable. Salesa et al. (2020) estimated the soil erosion rate by calculating the missing soil volume using a cross-section field survey method, a UAS-based method, and a smartphone-based method. The authors concluded that despite yielding similar results, the application of the UAS method was challenging due to cost, training requirements, and possible inconvenience; moreover, it remains a time-consuming task.

Vegetation recovery
Wildfire plays a key role in maintaining ecosystem functions by facilitating the regeneration of vegetation (Rachels et al. 2016). UAS holds the potential to provide high-resolution information on the subsequent recovery of burnt ecosystems at a local scale (McKenna et al. 2017). Observing changes in ecology and vegetation structure after wildfires is vital to gain a deeper understanding of how forests respond to a wildfire and the long-term impacts on the ecosystem.
Most studies evaluated pine trees for estimating vegetation recovery. MS images have demonstrated higher detection accuracy than RGB, even at lower spatial resolution. White et al. (2018) conducted jack pine sapling detection using an RF classifier with MS images (5 cm GSD). The evaluation of different band combinations (RGB+RE+NIR, RGB+NIR, RGB+RE, RGB+NIR−R) showed that the highest accuracy (88%) was achieved with RGB values combined with the difference of the NIR and R bands. Fernández-Guisuraga, Calvo, and Suárez-Seoane (2022) examined three sites to detect neighbourhood competition of pine saplings with MS (11.31 cm GSD) and RGB (3.29 cm GSD) images. The authors performed a multi-resolution segmentation (MRS) (Baatz and Schäpe 2000) followed by an SVM-based classification using pixel values, shape, and texture features. In this study, higher accuracy was achieved on MS images (83.67%) than on RGB images (74.33%). Larrinaga and Brotons (2019) derived four RGB camera-based SI (2-4 cm GSD), an SfM-based CHM (8-16 cm GSD), and field-sampled DBH to estimate tree diameter using simple linear regression analysis, establishing a relationship with an R² of 60%. Overall, RGB-based vegetation detection is a cost-effective alternative to MS, but it exhibits poorer detection accuracy.
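As a minimal sketch of the band-combination idea used in these classification studies, the snippet below stacks a NIR-minus-Red difference band onto an RGB image to form a per-pixel feature array; the array shapes and values are illustrative, not data from the cited work:

```python
import numpy as np

def stack_rgb_nir_minus_r(rgb, nir):
    """Append a NIR-minus-Red difference band to an RGB image.

    rgb: (H, W, 3) float array with bands ordered R, G, B.
    nir: (H, W) float array, co-registered near-infrared band.
    Returns an (H, W, 4) feature image for a pixel-wise classifier.
    """
    diff = nir - rgb[..., 0]          # NIR - R difference band
    return np.dstack([rgb, diff])

# Tiny synthetic example: vegetation reflects strongly in NIR,
# so the difference band helps separate saplings from bare ground.
rgb = np.zeros((2, 2, 3))
nir = np.array([[0.8, 0.1], [0.7, 0.05]])
features = stack_rgb_nir_minus_r(rgb, nir)
print(features.shape)       # (2, 2, 4)
print(features[0, 0, 3])    # 0.8 (strong NIR response)
```

The stacked feature image can then be fed to any pixel-wise classifier (e.g. an RF or SVM) in place of raw RGB values.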
Besides pine trees, Qarallah et al. (2021) estimated total vegetation growth using UAS-SfM-based height measurements and observed a slight underestimation of tree height, by less than 9%. This underestimation is likely attributable to the incompleteness of the canopy model, which is often seen in SfM-based point clouds (Fletcher and Mather 2020).

UAS navigation for wildfire
Navigation studies for UASs are critical to ensuring that they can operate effectively and safely in challenging wildfire environments, given their energy constraints and limited operation time. These limitations are especially crucial in active-fire applications, where rapid deployment and quick decision-making are necessary. UAS navigation involves developing algorithms and systems to coordinate a single UAS or multiple UASs (Figure 6). These studies ensure that UASs cover the survey areas and avoid colliding with each other, manned aircraft, helicopters, and other obstacles. Figure 7 illustrates the navigation studies reviewed in this article. Studies on single-UAS and multi-UAS navigation are summarized in the first and second parts of this section, respectively.

Single UAS navigation
UAS paths in the context of wildfires are typically optimized for quick deployment, efficient use of limited flight time, and safety. The requirement for high spatial resolution and close-range sensing necessitates low-altitude flight missions, demanding improved obstacle handling during fire surveillance and data collection, while limited endurance requires careful planning and optimization of flight paths within each platform's capabilities. Ozkan and Kilic (2022) implemented a mathematical model based on the Local Search (LS) algorithm to estimate the fire surveillance path, considering fire probabilities within regions. This approach avoids unnecessary visits to non-fireable locations, which conserves battery charge and time and increases the temporal resolution over fire-prone regions. However, the model has not yet incorporated real-time uncertainty factors, such as wind, humidity, and temperature.
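The probability-aware surveillance idea can be sketched as follows, assuming a greedy tour over fire-prone cells followed by a 2-opt local-search pass; the threshold, base location, and cell coordinates are illustrative and not taken from the cited study:

```python
import itertools
import math

def tour_length(route):
    """Total Euclidean length of an open tour."""
    return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

def surveillance_route(cells, probs, threshold=0.3, base=(0.0, 0.0)):
    """Plan a surveillance tour over fire-prone cells only.

    Cells below `threshold` are skipped entirely, saving battery and
    flight time. A greedy nearest-neighbour tour is built first, then
    a 2-opt local-search pass reverses segments while that shortens it.
    """
    targets = [c for c, p in zip(cells, probs) if p >= threshold]
    route, rest = [base], list(targets)
    while rest:                                  # greedy construction
        nxt = min(rest, key=lambda c: math.dist(route[-1], c))
        route.append(nxt)
        rest.remove(nxt)
    improved = True                              # 2-opt improvement
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(route)), 2):
            cand = route[:i] + route[i:j + 1][::-1] + route[j + 1:]
            if tour_length(cand) < tour_length(route) - 1e-9:
                route, improved = cand, True
    return route

cells = [(0, 1), (5, 5), (1, 0), (6, 6)]
probs = [0.9, 0.8, 0.1, 0.7]        # (1, 0) is non-fireable
route = surveillance_route(cells, probs)
print(route)                        # base first; (1, 0) is skipped
```

Real implementations would additionally weight the objective by fire probability and battery state rather than path length alone.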
The process of attending a fire spot for confirmation and extinguishing differs significantly from fire surveillance: the UAS already knows the fire spot's location, and the main objective is to reach it swiftly for prompt action. Table 10 summarizes path planning studies conducted for rapidly reaching fire spots in simulation environments. However, no studies have incorporated the effect of atmospheric conditions, such as wind, in planning the path. Wind disturbance can cause deviations from the intended path, leading to incomplete detection of fire spots (Wardihani et al. 2018). In addition, these studies have not considered the individual characteristics of different UAS models, such as FWs and RWs.
Once the UAS reaches the fire spot, monitoring the fire front becomes more challenging for FWs than for RWs due to the kinematic constraints associated with on-the-spot turns (Sundar, Sanjeevi, and Montez 2022). Circling the fire front is a simple approach to monitoring the perimeter when the fire spreads slowly, but it has limitations in tracking fire fronts with a high rate of spread (Towhidul Islam and Hu 2021). Towhidul Islam and Hu (2021) calculated the difference between the fire front estimated on each circuit and corrected the trajectory accordingly, demonstrating successful tracking in several simulated wildfire scenarios. El Tin, Sharf, and Nahon (2022) proposed a non-linear guidance algorithm, known as L1, capable of closely tracking the wildfire perimeter; simulation results showed the UAS flying tightly around the fire perimeter. The authors expect that the rate of fire spread will not be a problem for tracking in practical implementation, provided the fixed-wing platform can withstand wildfire wind speeds.
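The circle-and-correct principle can be illustrated with a deliberately simplified toy model: a circular fire that grows at a constant rate while the UAS completes each orbit, with the orbit radius corrected after every circuit. All numbers (metres, m/s) are illustrative and this is not the model from the cited studies:

```python
import math

def track_perimeter(r_fire0, spread_rate, uas_speed, circuits=5, margin=5.0):
    """Circle-and-correct perimeter tracking for a circular fire (toy model).

    After each circuit the UAS compares the fire radius it observes with
    the radius it just orbited, and corrects the next orbit. The per-circuit
    lag therefore equals the spread accumulated during one orbit.
    Returns the tracking error after each circuit.
    """
    orbit_r, t = r_fire0 + margin, 0.0
    errors = []
    for _ in range(circuits):
        t += 2 * math.pi * orbit_r / uas_speed   # time to fly one circuit
        r_fire = r_fire0 + spread_rate * t       # fire grew meanwhile
        errors.append(r_fire + margin - orbit_r) # how far behind we are
        orbit_r = r_fire + margin                # correct the next orbit
    return errors

errors = track_perimeter(r_fire0=100, spread_rate=0.2, uas_speed=15)
print([round(e, 2) for e in errors])
```

With a high rate of spread (or a slow platform), the per-circuit lag grows with the orbit circumference, which is exactly the limitation of naive circling that the cited correction schemes address.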
The UAS path can also be optimized to collect data from short-range WSN sensor nodes. Zhang et al. (2021) proposed a bi-level hybridization-based metaheuristic algorithm (BLHMA) considering FW kinematic constraints and the short communication range of sensor nodes. In the simulation, the FW flies circularly over multiple sensor nodes to collect atmospheric data, demonstrating shorter data collection times than Variable Neighbourhood Search (VNS) and the Memetic Algorithm (MA). Path planning for short-range WSN sensor nodes and for multiple fire spots can be treated as similar tasks, since both involve efficiently covering multiple locations. These algorithms can be tested for both wildfire applications, but their practical implementation with real UAS models has not been thoroughly investigated yet.

Multi-UAS navigation
The use of multiple UASs in wildfire management can effectively enhance spatial scales with maximized coverage, while increasing system redundancy. By coordinating their movements and sharing information, the UASs can avoid revisiting the same areas unnecessarily, reducing the overall energy and time required for the mission. UASs have been utilized in multi-UAS routing for fire surveillance for two main reasons: the first is to improve the search mechanism, and the second is to establish a perimeter around the fire (cordoning) after its detection. Table 11 summarizes recent studies that have investigated multi-UAS navigation for wildfire monitoring. Collaboration between different types of UASs has also been considered for fire monitoring. With a lower payload weight, a UAS can fly longer and cover a larger area than the same UAS carrying a heavier payload sensor. Atanassov et al. (2021) developed a Generalized Net (GN) model for forest terrain monitoring with a pair of UASs. The first, a so-called reconnaissance UAS, is smaller and swifter and is deployed first to patrol the fire-prone zone. The second, a so-called specialized UAS, is more substantial and can accommodate advanced cameras and sensors for accurate environmental parameter evaluation and fire detection. Kinaneva et al. (2019) proposed a method, still in development, for detecting wildfires using VTOL UASs at medium altitudes and RWs at low altitudes; it is expected to minimize the likelihood of false alarms reported by the VTOL UAS due to limited visibility at medium altitudes.
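As a minimal illustration of coordinated coverage without revisits, the sketch below uses a generic strip-decomposition scheme (not one of the cited methods): the survey area is split into one strip per UAS, and each strip is swept with boustrophedon (lawnmower) waypoints. All dimensions are arbitrary:

```python
def partition_strips(width, n_uas):
    """Split a rectangular survey area into one vertical strip per UAS.

    Each UAS sweeps only its own strip, so no area is revisited and
    the UASs never enter each other's airspace.
    Returns (x_min, x_max) bounds per UAS.
    """
    strip = width / n_uas
    return [(i * strip, (i + 1) * strip) for i in range(n_uas)]

def strip_waypoints(x0, x1, height, spacing):
    """Boustrophedon (lawnmower) waypoints covering one strip."""
    pts, y, left_to_right = [], 0.0, True
    while y <= height:
        xs = (x0, x1) if left_to_right else (x1, x0)
        pts += [(xs[0], y), (xs[1], y)]
        y += spacing
        left_to_right = not left_to_right
    return pts

bounds = partition_strips(width=900, n_uas=3)
pts = strip_waypoints(*bounds[0], height=300, spacing=150)
print(bounds)     # three equal strips
print(pts[:2])    # first sweep line of UAS 1
```

Published coordination strategies typically go further, e.g. rebalancing strips when a UAS drops out, which connects to the team-joining problem discussed next.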
UASs may be required to abort a mission due to a technical failure or power depletion, which can necessitate substituting the disabled UAS with a new one from the base. Zhang, Mu, et al. (2018) provided a solution and validated the concept with a numerical simulation that guides the new UAS to join the swarm and continue the patrol using a game tree decision algorithm and a tailored scoring function. Zhang, Zhao, et al. (2022) developed a strategy based on the Star-Minimax algorithm that considers kinematic motion uncertainties when joining the UAS team. In addition to simulations, a real flight test using three quadcopter UASs was conducted in Concordia University's Networked Autonomous Vehicles (NAV) Lab and validated the real-time performance of the proposed strategy. UAS paths have been optimized for firefighting purposes as well (Khachumov and Khachumov 2022; Shaji 2022). It is necessary to launch a team of UASs equipped with firefighting payloads, such as water and fire-extinguishing tools, as soon as the fire spot is identified. A three-tiered hierarchical framework for firefighting missions was proposed by Zhang, Hu, and Yan (2021), taking into consideration obstacles and simultaneous arrivals: an RRT algorithm is used in the first step, followed by an auction mechanism to assign tasks to the UASs, and then a modified cooperative particle swarm optimization algorithm (Yan et al. 2019) generates paths for the UASs in a coalition. Shaji (2022) proposed an Ant Colony Optimization method to determine the path based on fire intensities and the available fire-extinguisher payload. Khachumov and Khachumov (2022) used a modified Hungarian method (Turpin et al. 2014) to navigate UASs in the presence of wind disturbance, laying optimal routes to reach fire spots, and graph-model-based heuristic rules to fly around them.

Future perspectives

Pre-fire
Many studies have focused on FL estimation at the plot level, limited to a specific spatial scale. These studies consider diverse types of vegetation communities within the plot and apply allometric equations. This approach may prove effective for achieving large spatial coverage; however, for small-scale applications, such as risk assessments in WUI areas, higher spatial resolution is required for estimating the surrounding fuel. Shifting the focus from the estimated biomass of plots to individual trees is essential. Reconstructed CAD models created from segmented vegetation point clouds can serve to evaluate wildfire-related infrastructure risks through heat transfer models. Several branch reconstruction algorithms are available for constructing tree skeletons from point clouds (Wang, Peethambaran, and Chen 2018). Some shortest-path algorithms are available as software, such as AdTree (Du et al. 2019) and treeQSM (Raumonen 2017), which find the shortest path to all points from the root. Topology optimization (Lowe and Pinskier 2023) has demonstrated the capability of voxelizing forest wood materials instead of segmenting individual trees for reconstruction. This reconstructed vegetation information, which considers individual characteristics, can be used to enhance the realism of the estimation process.
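The shortest-path step underlying such skeletonisation tools can be sketched with Dijkstra's algorithm over a radius-neighbour graph of a toy 3D point list; the radius value and points are illustrative, and production tools like AdTree use far more elaborate graph construction and refinement:

```python
import heapq
import math

def shortest_paths_from_root(points, root=0, radius=1.5):
    """Dijkstra from the root point over a radius-neighbour graph.

    Returns (distance, parent) per point; following each point's parent
    chain back to the root traces a crude tree skeleton, the core idea
    behind shortest-path branch reconstruction.
    """
    n = len(points)
    dist = [math.inf] * n
    parent = [-1] * n
    dist[root] = 0.0
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # stale heap entry
        for v in range(n):                 # neighbours within radius
            w = math.dist(points[u], points[v])
            if v != u and w <= radius and d + w < dist[v]:
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, parent

# Toy "trunk with one branch": points roughly along two line segments.
pts = [(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 2.5), (0, 2, 3)]
dist, parent = shortest_paths_from_root(pts)
print(parent)   # each point chains back towards the root (index 0)
```

For real point clouds with millions of points, the all-pairs neighbour scan would be replaced by a spatial index (e.g. a k-d tree).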
Future research should explore more precise methods for estimating FLs in sparse vegetation ecosystems such as shrublands and savanna. The surface components are challenging to estimate at lower spatial resolution (da Costa et al. 2021), but accuracy can be enhanced by using a finer spatial scale (Shrestha, Broadbent, and Vogel 2021). The quality of UAS-SfM-based point clouds has been insufficient to capture understory structure due to data incompleteness, whereas LiDAR-based measurements showed stronger correlation with field measurements than UAS-SfM (Reilly et al. 2021). To achieve greater spatial accuracy and completeness of vegetation structure, surveys will need to combine ground-based LiDAR point clouds with UAS-LiDAR data, leading to clearer estimations in low vegetation. While this approach may not be scalable to larger areas, it can be highly valuable for small-area FL estimation.

Active fire
Future research on fire detection should prioritize improving smoke detection, as smoke's resemblance to clouds, haze, and fog can lead to false detections in DL studies. Accurate smoke detection is crucial for early fire suppression. UASs with multitemporal and higher-spatial-resolution capabilities should carry algorithms developed from diverse datasets containing wildfire smoke and visually similar phenomena. This integration will help achieve higher detection accuracy in wildland areas for improved fire management and early suppression.
Real-time data processing requires low latency and high processing speed for rapid alerting of management and control resources. Real-time fire detection and fire propagation estimation can be implemented using edge computing devices or cloud-based processing. UASs that transmit captured image data wirelessly to a high-performance server can achieve fire detection in a shorter time (Mukhiddinov, Abdusalomov, and Cho 2022; Zhan et al. 2022). However, sending images to the cloud from remote areas or beyond urban boundaries may be challenging due to the limited availability of cell towers or bandwidth (Jiao et al. 2019), resulting in slow processing times, increased latency, and additional costs for data transmission and storage. To address these transmission challenges, data compression can be employed. Liang et al. (2023) introduced an image splicing compression algorithm based on the extended Kalman filter, achieving a compression ratio of 25:1 with only a marginal 6.5% reduction in structural similarity (SSIM). Incorporating such data compression approaches will help mitigate the slow processing of DL models caused by handling substantial amounts of data.
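To make the compression-ratio figure concrete, the snippet below compresses a synthetic byte buffer with generic lossless compression and reports the raw-to-compressed ratio; this is not the splicing algorithm from the cited study, only an illustration of how a ratio such as 25:1 is computed:

```python
import zlib

def compression_ratio(raw: bytes, level: int = 9):
    """Compress a byte buffer and report raw_size / compressed_size."""
    packed = zlib.compress(raw, level)
    return len(raw) / len(packed), packed

# A synthetic 'frame' with large uniform regions compresses very well,
# as is typical of smoke-free sky in fire-surveillance footage.
frame = bytes([0]) * 9000 + bytes(range(256)) * 4
ratio, packed = compression_ratio(frame)
assert zlib.decompress(packed) == frame      # lossless round trip
print(f"{ratio:.1f}:1")
```

Lossy image codecs trade some fidelity (measured, e.g., by SSIM) for much higher ratios, which is the trade-off the cited 6.5% SSIM reduction quantifies.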
An alternative strategy for enhancing processing speed is executing AI algorithms on onboard embedded devices, which reduces transmission delays. However, several studies attempting to implement such algorithms have encountered challenges in maintaining comparable fire detection performance (e.g. Shamsoshoara et al. 2021). Further research is imperative to explore methods for compressing the parameters of AI algorithms without compromising their performance. Multi-access Edge Computing (MEC) represents another emerging technological solution, in which communication service providers bring compute capacity directly to the users (Xiaohui and Zhang 2023). Such advancements hold potential for improving the overall performance of AI-based systems on onboard embedded devices.
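One common family of parameter-compression techniques is unstructured magnitude pruning, sketched below on a random weight matrix; this is a generic illustration, not a method from the cited studies, and the sparsity level is arbitrary:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude weights (unstructured pruning).

    Pruned models can be stored and executed sparsely on embedded
    hardware. `sparsity` is the fraction of weights removed; the
    remaining weights are kept unchanged.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned, mask = magnitude_prune(w, sparsity=0.8)
print(f"kept {mask.mean():.0%} of weights")
```

In practice, pruning is followed by fine-tuning to recover detection accuracy, which is precisely the performance-retention challenge the onboard studies report.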

Post-fire
Future research on quantifying wildfire impact in severity assessment and vegetation recovery faces significant challenges due to limited spatial coverage. To address this, integrating multiple data sources, such as satellites, can compensate for spatial gaps and offer a more comprehensive, multi-perspective view of wildlands. Conducting surveys with adaptive altitude in specific, limited regions can be of immense value where satellite images face obstructions such as cloud cover or smoke. UASs can provide high-resolution data from these regions, offering more detailed ground truth information (Lazzeri et al. 2021) and generating spatially improved images (Beltrán-Marcos et al. 2023). In addition, UAS and satellite image fusion can help overcome the cost and payload limitations of UASs, enabling the acquisition of a wider range of spectral bands, including SWIR. Both UAS and satellite RS technologies are expected to continue to be used in conjunction, lowering operational costs and addressing the limitations of each technology.
Compared to commonly UAS-mounted cameras (RGB, MS, thermal), hyperspectral cameras are less frequently used by researchers. The trade-off in hyperspectral imaging is lower spatial resolution but higher spectral resolution. Adopting spectral data is valuable in post-fire studies, as combining specific spectral bands, even at lower resolutions, has been shown to improve accuracy (De Simone et al. 2020). Hyperspectral sensors offer enhanced analyses across diverse spectral bands, and their utilization in vegetation detection and UAS-based high-resolution RBR severity estimation studies is expected to grow significantly.

UAS navigations for wildfire
The primary challenge in employing UASs for long-term wildfire combat is their limited endurance (Sousa and Gamboa 2022). UASs are routed so that they cover a large area or reach a target point in a shorter time with limited battery power. Interest in implementing various algorithms for single-UAS path planning and multi-UAS coordination strategies is also increasing; however, few studies have demonstrated performance in field experiments. Only four studies validated their proposed UAS navigation methods in field experiments. The others were validated on simulation platforms, which make several assumptions, such as flying at a safe altitude, continuously broadcasting location, and having access to communications for safe drone operation in airspace (Harikumar, Senthilnath, and Sundaram 2019; Saffre et al. 2022). In the future, these algorithms need to be tested in dynamic real-world environments.

Conclusion
UASs have risen in popularity in recent years, providing valuable information to aid wildfire understanding and management. While there is a trade-off in reduced endurance and coverage, UASs provide greater safety and flexibility, as well as improved spatial and temporal resolution, compared to satellites and manned aircraft. This review provides an overview of recent UAS operations for researchers seeking to utilize these technologies for efficient wildfire management. In this review, 186 articles related to wildfires and UASs were gathered from bibliographic databases, including Science Direct, IEEE Xplore, and Scopus. The review examines the use of UASs for RS at three stages of wildfire (pre-fire, active-fire, and post-fire), how algorithms are applied to the collected data, and how well they perform in different applications. Additionally, several UAS navigation and coordination techniques for fire surveillance and firefighting are presented. Of the RS studies, 64% were conducted through field operations, 30% used existing UAS datasets for wildfire detection, and the remainder were based on simulations. UASs mounted with diverse sensing payloads collect atmospheric, spectral, and structural data, which are subsequently analysed using AI algorithms throughout all stages of wildfire. Forest structural measurements have been made more effectively with UAS-LiDAR point clouds than with other RS methods. Research findings, however, highlight the need for continued investigation of the following:
(1) Improvement of smoke detection by incorporating diverse datasets containing wildfire smoke and similar phenomena such as low-altitude clouds, haze, and fog.
(2) Assessment of precise wildland-urban interface fire risk by estimating fuel loads through 3D reconstruction of individual tree segments.
(3) Improvement of FL estimation in sparse, low-vegetation ecosystems such as shrublands and savanna.
(4) Implementation of AI-based real-time wildfire detection by enhancing communication range and onboard processing capabilities, or by building lightweight algorithms for areas where connectivity is limited or poor, or where the local datalink may be subject to communication dropouts (e.g. mountainous terrain).
(5) Application of hyperspectral cameras to provide more detailed analyses across a wide range of spectral bands.
(6) Integration of UAS and satellite RS technologies to reduce operational costs and improve estimation accuracy, especially in pre- and post-fire applications.
(7) Evaluation and validation of simulation-based navigation studies through field experiments.
These investigations, along with advancements in UAS technology, and AI, will continue to make significant progress towards combating wildfires, preserving property, and minimizing their devastating effects.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
The work was supported by the Australian Research Council [DP 220103233].