Unmanned aerial remote sensing of coastal vegetation: A review

ABSTRACT Coastal wetlands contribute greatly to our coasts economically and ecologically. The utility of coastal wetland vegetation, along with the multitude of dynamic forces it encounters, suggests the need for regular monitoring for sustainable management. While traditional in situ survey methods and remote sensing from space and manned platforms have provided means to monitor and study the coastal zone thus far, recent developments in small unmanned aerial systems (sUAS) fill a void between traditional in situ survey methods and the high spatial resolution of manned aircraft imagery. As an on-demand personal remote sensing device, an sUAS can be deployed over coastal regions at low cost, capturing imagery with very fine spatial resolution (i.e. 1–10 cm) and corresponding spatial accuracy. Though an sUAS provides many benefits, recent literature documents several shortcomings and limitations to using them for coastal wetland vegetation research, including changing tides, lighting conditions, and legal restrictions on flying. This study reviewed all coastal wetland vegetation-related studies that included an sUAS as a mapping tool to document the current state of the field. Current practices, successes, and limitations are described, and future directions for the field are discussed. Coastal managers and researchers alike will be able to use this comprehensive review to determine how best to approach future studies of diverse coastal vegetation.


Introduction
Coastal wetlands are highly productive environments with a multitude of ecological and economic benefits (Mehvar et al. 2018; Peterson and Turner 1994). For example, coastal wetlands serve as buffers to storms and floods, and vegetation plays a large role in protecting people and infrastructure (Narayan et al. 2017). Furthermore, coastal wetlands also sequester carbon and filter pollutants carried by runoff and upstream water (Loomis and Craft 2010; Ballard, Pezda, and Spencer 2016). Economically, commercial fishing industries rely heavily on coastal wetlands as they provide critical nursery and breeding grounds for many harvested species. Furthermore, those seeking to enjoy the unique beauty these wetlands provide are growing the recreation and tourism sector (Purcell et al. 2020). Occupying the space between land and sea, the constant pressure exerted by a variety of forces (including aeolian and fluvial processes, flood hazards, and climate change) makes coastal wetlands some of the most dynamic environments in the world. As a result, efforts to monitor and map these areas have taken precedence for many stakeholders. The critical ecological role that coastal wetlands play, coupled with the dynamic environment in which they exist, necessitates frequent monitoring and study.
Traditional field methods of in situ monitoring in coastal wetlands are effective but often prove difficult due to accessibility and resource requirements. Remote sensing methods can be used to alleviate these concerns, though two challenges remain. The first involves capturing imagery at an appropriate spatial resolution, enabling the phenomenon to be appropriately mapped and visibly interpreted from the imagery. The second challenge is acquiring imagery with adequate temporal resolution to properly describe the phenomenon of interest, while avoiding unwanted environmental conditions that can negatively affect the imagery. Current satellite and manned aircraft missions can provide moderate-to-high spatial resolution imagery at fairly high temporal resolutions. Satellite platforms with moderately fine spatial resolutions (e.g. 1 m) are on set temporal orbital paths, typically with one overpass in the late morning and revisits every two to three days (Liu and Hodgson 2016). Cloud cover, a challenge specifically related to temporal resolution, can be problematic and obscure coastal environments when capturing imagery using optical satellites. Commercial high-resolution imagery is also too expensive for long-term monitoring of coastal wetlands.
Recent developments in small Unmanned Aerial Systems (sUAS) permit on-demand data acquisition and help fill the spatial and temporal resolution voids that exist when using in situ data for calibrating satellite or manned-aircraft remote sensing imagery. These newer remote sensing systems can be cost efficient, deployed when circumstances require or allow, and capture very fine (<10 cm) spatial resolutions. However, they also come with many limitations, such as limited battery capacity, as well as flight regulations from the Federal Aviation Administration (FAA), state and local authorities (Hodgson and Sella-Villa 2021).
In this study, we review the opportunities for sUAS use as an emerging remote sensing approach to assessing and monitoring coastal wetland vegetation. Other recent review studies have highlighted specific uses of sUAS in meeting operational requirements for coastal wetlands of the Great Lakes, wetland restoration, fluvial remote sensing, and monitoring the spread of water hyacinth (White et al. 2020; Ridge and Johnston 2020; Rhee et al. 2017; Datta et al. 2021). We are aware of one other review of the opportunities for sUAS-based remote sensing of coasts and wetlands, published in 2015 (Klemas 2015). Our review focuses on literature published more recently, as the larger body of peer-reviewed literature related to sUAS and coastal wetland vegetation did not exist until after 2015.
Through an in-depth review of the most recent sUAS and coastal vegetation-related literature, we present the current state of sUAS use for coastal vegetation remote sensing, the challenges encountered in each study, and a way forward for improving research methods when using these promising new tools. Section two describes the methods used to assemble the literature base and general trends in the research. Section three presents specific lessons and strategies for using sUAS for coastal wetland research. Section four discusses future directions for research, while Section five summarizes and concludes with questions to consider moving forward.

Collection of literature
To assess the uses of sUAS in coastal vegetation research, a Web of Science search was conducted using the terms 'unmanned aerial', 'coastal', and 'wetland vegetation' (15 articles identified). A second search used the terms 'unmanned aerial', 'coastal', and 'marsh vegetation' to enhance the investigated literature base (12 additional articles identified). For the purposes of this study, the search term 'unmanned aerial' represented any use of an sUAS in an article, including cases where the sUAS only provided supporting data for the study. The word 'coastal' was critically important and represented the coast of any body of water (i.e. fresh or salt water), including a lake, sea, or ocean. Since wetlands are found all over the world in a variety of environments, it was important to clarify our geographic extent of interest. The terms 'wetland vegetation' and 'marsh vegetation' reflected our focus on mapping and monitoring vegetation rather than topography or another phenomenon. Together, the two searches produced a total of 25 distinct articles. We examined each abstract to verify the article related to our goals. Conference proceedings and other review articles were not included in this review. Of note, the few conference proceedings discovered in our search were either precursors to work that was later published and included in our review, or they were too brief to be included.
Of the original 25 studies identified using Web of Science, a few were review papers describing the application of sUAS in a coastal wetland environment that did not pertain to remote sensing of vegetation characteristics. Some articles included a short discussion of using sUAS in a coastal wetland but then focused on other topics. For example, some used sUAS to investigate hydrology in a coastal environment, monitoring tidal channel surface velocities or determining water budgets (Pinton, Canestrelli, and Fantuzzi 2020; García-López et al. 2018). Other studies described methods (e.g. structure from motion, or SfM) for mapping terrain under dense vegetation from the point cloud (Meng et al. 2017). Although vegetation was addressed, it was not the focal point of these studies, so they were also removed from our literature base. Another study's objective was identifying nesting waterbirds, and it was therefore removed (Barr et al. 2018). After our thorough examination of all 25 articles, 20 remained; these focused on coastal wetland vegetation per the qualifications above. We reviewed each article thoroughly to ascertain detailed information on where and how each study was performed; platform and sensor requirements; challenges faced; results; accuracy; and general characteristics of the products derived from the sUAS imagery.
Vegetation-related goals of each study fell into three general objectives: mapping the spatial distribution of vegetation or species, monitoring vegetation health (e.g. estimating biomass or mapping photosynthetically active vegetation), and quantifying vegetation structural characteristics. The number of articles focused on the use of sUAS for coastal vegetation mapping and monitoring has increased steadily over time (Figure 1). These articles were published in a variety of journals, with Remote Sensing (n = 6) and the International Journal of Remote Sensing (n = 2) publishing the highest number of related articles. The geographic study areas described in the publications were widely dispersed around the world, although a very high percentage (80%) of the research was conducted in the United States of America. Based on these results, there is interdisciplinary and international interest in the use of sUAS for mapping and monitoring coastal vegetation. While only three studies that fit our criteria had been published in 2021 (as of the writing of this article), many more are expected. Figure 1 does not include Klemas (2015), a review article concerning the beginnings of sUAS use for coastal vegetation.
Many different sUAS configurations (i.e. platform types and sensor types), calibration techniques, products, and methods of analysis were used. The most common platforms can be assigned to one of two categories: fixed wing (airplane-like) or multicopter (helicopter-like). A variety of sensor types were also used, including the nearly ubiquitous Red-Green-Blue (RGB) cameras, multispectral cameras, hyperspectral sensors, spectroradiometers, and LiDAR sensors. Two software packages, Pix4D Mapper (n = 5) and Agisoft Photoscan/Metashape (n = 11), were most often used for transforming the images collected by sUAS into standard products, including orthomosaics, digital elevation models (DEMs), and vegetation index maps.

Benefits of sUAS remote sensing
Several benefits of employing sUAS for coastal vegetation mapping were evident in the literature base used for this review. The cost effectiveness of using an sUAS for coastal vegetation research and monitoring was well documented (Marcaccio, Markle, and Chow-Fraser 2016; Johnson, Manby, and Devine 2020; Haskins et al. 2021; Taddia et al. 2021). While initial set-up costs can be expensive, the reusability of the Global Navigation Satellite System (GNSS) equipment, sUAS, and ground control point (GCP) targets suggests that future sUAS mission costs would be limited, or that costs would decrease over time. The commonly used image processing software can be expensive, although freeware options are available (Johnson, Manby, and Devine 2020). The use of sUAS can also significantly reduce time spent in the field. For the study area described in (Marcaccio, Markle, and Chow-Fraser 2016), traditional in situ field work for a wetland would take two researchers six to eight days, whereas acquiring the aerial images using sUAS took only 6 to 24 hours, depending on the type of sUAS. Another study shared a similar sentiment, offering that sUAS are more efficient than traditional field surveys for tracking wetland restoration progress: the sUAS surveys took less than one hour to complete, while the field survey required five days with comparable accuracy. Time spent in the field gathering data using sUAS methods and in situ methods will vary based on the area, number of sites, and workload. Nevertheless, the literature base has shown that financial and time-related costs can be significantly reduced when monitoring coastal wetland vegetation with an sUAS.
Using sUAS as an on-demand remote sensing platform can help overcome many logistical problems related to the dynamics of a coastal wetland environment (Farris, Defne, and Ganju 2019; Doughty et al. 2021). For example, tidal cycles make planning and operating missions difficult, especially when weather on coastlines can be so variable. Many manned aerial imagery programmes, such as the National Agriculture Imagery Program (NAIP), capture imagery during leaf-on conditions but do not consider the tidal cycle and other environmental conditions when determining flight times. Other impactful natural elements, like cloud cover, precipitation, and previous rainfall events, can be avoided with sUAS mission flexibility. The ability to collect repeat datasets over the same area makes sUAS an appropriate tool for monitoring dynamic ecosystems, invasive species, and short-term events (Abeysinghe et al. 2019; Dai et al. 2020). sUAS are an on-demand remote sensing option, capable of flying over small coastal areas when the elements align to collect the best possible imagery.
There were frequently mentioned benefits to using sUAS for each type of coastal wetland environment as well. For example, when mapping submerged aquatic vegetation (SAV), sUAS can provide a footprint larger than other monitoring methods (i.e. in situ methods from a boat), allowing these larger areas to be mapped more efficiently (Brooks et al. 2019). sUAS-borne LiDAR sensors are able to gather more ground returns and penetrate the vegetation more effectively than manned airborne LiDAR missions due to their lower flight altitudes. High spatial resolution imagery provided by sUAS can be used for generating training samples in many environments where non-invasive and non-destructive methods are required (Haskins et al. 2021). Finally, sUAS-derived datasets can serve as important supplementary information for effective decision-making and modelling (Zhou, Yang, and Chen 2018; Broussard, Visser, and Brooks 2020). Multi-scale remote sensing is used to understand ecological processes at different scales: satellite (global, regional, national) and sUAS (local). The two complement each other to provide a bigger picture.

sUAS platforms for coastal wetland vegetation research
The sUAS platform chosen in each study is important because factors such as payload capacity, flight time, stabilization, cost, and maintenance are a function of the platform itself. Across the studies included in our review, the two largest categories of sUAS platforms were fixed wing and multicopter; seven studies used fixed wing aircraft, eleven used multicopter aircraft, and two used both. Fixed wing aircraft, known for their ability to sustain flight for an extended period of time, have been used for a multitude of applications in coastal environments (Barr et al. 2018). In a study comparing fixed wing and multicopter aircraft for environmental mapping applications, (Boon, Drijfhout, and Tesfamichael 2017) found that fixed wing aircraft were more cost efficient, required less maintenance, and allowed increased flight time. Of the nine studies that used fixed wing aircraft, five used a version of a Sensefly eBee aircraft (Ebee x fixed wing mapping and surveying drone 2021). Fixed wing aircraft can be beneficial for monitoring large areas of coastal wetland vegetation due to their increased flight times. Our review showed that the flight areas of the fixed wing aircraft used in this particular environment ranged from smaller areas of about 0.292 km² (Dale et al. 2020) to larger areas of up to 2.85 km² (Marcaccio, Markle, and Chow-Fraser 2016). Figure 2 shows the relationship between flight altitude (above ground level) and the corresponding mapped area for fixed wing aircraft and multicopters.
The authors of (Boon, Drijfhout, and Tesfamichael 2017) suggested, based on their analysis, that multicopters could carry a heavier payload and offer better stabilization. In addition, multicopters can control speed much more effectively than fixed wing sUAS, offering the ability to fly sufficiently slowly to capture imagery with more complex sensors, such as hyperspectral sensors, which is especially useful when vegetation is submerged. Thirteen studies used a multicopter aircraft, a category that includes octocopters, hexacopters, and quadcopters. The most common multicopter brand (8 articles) was Da-Jiang Innovations (DJI) (DJI 2021). Multicopter sUAS are excellent for capturing imagery of smaller areas, and most of the multicopter studies were conducted over similarly small study areas, below 100 ha or 1 km² (Figure 2). Multicopters are also beneficial in areas like coastal salt marshes, where it can be difficult to find adequate space to launch and recover a fixed wing aircraft (Broussard, Visser, and Brooks 2020). The typical payload for each sUAS was a single sensor; however, a Bergen hexacopter was used by (Brooks et al. 2019) to carry a heavier payload of two sensors.
In a study comparing the capabilities of an eBee fixed wing aircraft and a DJI quadcopter for capturing aerial imagery in a coastal marsh environment, (Marcaccio, Markle, and Chow-Fraser 2016) found that the multicopter could capture about 16 hectares per flight, while the fixed wing aircraft could capture 94 hectares per flight at the same altitude and under the same environmental conditions. Although the area captured in one flight is a function of altitude, battery capacity, and camera lens, this study's results suggest that fixed wing aircraft can cover roughly six times the area of a multicopter. Battery capacity was certainly a contributor, although the manual operation of the multicopter sUAS versus the autopilot operation of the fixed wing eBee may also have played a role. Nevertheless, Figure 2 does support the notion that fixed wing aircraft should be considered for study areas with greater extents, as authors have been doing. Multicopter aircraft are favoured for image quality and stability, while fixed wing aircraft are ideal for larger flight areas. In summary, both multicopter and fixed wing sUAS serve as useful remote sensing platforms in coastal wetland sUAS research, depending on the application, study area size, and payload required.

sUAS sensors for coastal wetland vegetation research
A diverse group of payloads was used in the articles included in this review (Figure 3). Fifteen of the 20 studies included at least one RGB camera for vegetation observation and analysis (Lishawa et al. 2017; Rupasinghe et al. 2018). Many studies (n = 10) incorporated a camera with a near-infrared (NIR) band, with the band ranging in wavelength (depending on the sensor) between 770 and 850 nm (Abeysinghe et al. 2019; Broussard, Visser, and Brooks 2020). The red edge (RE) band was included on some of the sensor systems, with wavelengths between 707 and 727 nm or 730 and 740 nm, though it was not commonly used in analysis (Farris, Defne, and Ganju 2019). (Broussard, Visser, and Brooks 2020) used modified RGB cameras with filters to capture false colour imagery. Both RGB and multispectral cameras offer relatively inexpensive opportunities to map and monitor coastal wetland vegetation. Even the least expensive option, RGB camera-derived imagery, was used to go beyond visual interpretation and provide RGB-based vegetation indices to improve classifications and mapping (Johnson, Manby, and Devine 2020). The authors of (Haskins et al. 2021) recommended the collection of NIR wavelengths in addition to RGB imagery because of the added benefits for classifying vegetation.
Other, more unique sensors were also used. In (Cao et al. 2018), a Cubert UHD 185 hyperspectral sensor was used to distinguish between different mangrove species. This sensor can capture wavelengths from 450 nm to 998 nm in up to 138 spectral bands. The authors discovered that using very high spatial resolution hyperspectral data, along with DEMs, provided the best mangrove classifications, with an overall accuracy (OA) of 89.55%. In (Brooks et al. 2019), a variety of spectroradiometers were attached to an sUAS to detect SAV. The spectroradiometers captured wavelengths from 190 nm to either 800, 1000, or 1100 nm, with a 1.5 nm spectral resolution. These sensors were successfully used to derive vegetation spectral curves, even with the vegetation submerged, though the radiance values were not as strong as signals collected from a boat or from vegetation out of the water (Brooks et al. 2019). The authors also included a Tetracam multispectral camera with six spectral bands at 490, 530, 550, 600, 680, and 720 nm (Brooks et al. 2019) to enhance data collection.
Pinton and others conducted a study in a salt marsh in Georgia, USA using an sUAS-borne LiDAR sensor, an SfM-derived point cloud from RGB images, and a combined dataset of LiDAR and SfM-derived points to map coastal marsh vegetation structural characteristics. The authors discovered that the sUAS-borne LiDAR data outperformed both the RGB-derived data and the combined dataset. Furthermore, sUAS-borne LiDAR was shown to overcome some shortcomings found in traditional higher-altitude manned-aircraft LiDAR for coastal wetlands. Applying less commonly used sensors with practical applications, such as LiDAR and hyperspectral sensors, remains an important area of future study for coastal wetlands.

Flight parameters for coastal wetland vegetation research
Ideal flight parameters for sUAS remote sensing of coastal wetlands depend upon the type of vegetation being mapped. For example, remote sensing of salt marsh grasses requires different parameters than a mangrove forest. Examples in the literature base gathered for this review offer insights into best approaches. One study evaluated the impacts of flight altitude, image overlap, and lighting conditions on various sUAS-imagery derived products (e.g. point clouds, orthomosaics, and Digital Terrain Models, or DTMs). The authors found that flight altitude was the most impactful parameter, while image overlap also contributed to a small degree. Flight altitude, as shown in Figure 2, varied widely across studies in coastal wetlands but was generally between 70 m above ground level (AGL) and 120 m AGL (note: 120 m is the non-waivered FAA legal limit for sUAS operations in the USA). Increased image overlap impacted the author-designated level 2 products (canopy height models, or CHMs, and DTMs) the most. For wetland cordgrass height modelling, lower altitude flights (around 70 m) showed less error than higher altitude flights, while higher altitude flights (around 119 m) reduced vertical error for mangrove canopy modelling. It is of note, however, that in one author's experience, lower altitude flights can make it more difficult for photogrammetric processing software to find key points in overlapping imagery; higher altitude images stitch together into orthomosaics more readily.
Another study (Haskins et al. 2021) investigated ideal parameters for using sUAS to monitor marsh restoration projects. While a large portion of the study focused on investigating topographic changes in restored marshes, one objective was to determine the flight altitude required for identifying vegetation of different horizontal sizes. Of the three flight altitudes (10 m, 30 m, and 60 m) flown over a 1 ha marsh area, the authors found 30 m to provide the most accurate classifications when the NIR band was included (Nash-Sutcliffe Efficiency (NSE) coefficient = 0.74). The 10 m (NSE = 0.71) and 60 m (NSE = 0.63) flights were not as accurate, though they only required the use of the RGB bands to reach those accuracies. Flights at 10 m were able to identify high percentages of mid-size and large plants (>80%) but only 45% of vegetation classified as 'small'. Identifying vegetation was more difficult with the imagery provided by flights at 30 m (0.76 cm spatial resolution) and 60 m (about 1.5 cm spatial resolution). The authors suggested the use of 30 m for vegetation recognition because of the amount of area that can be imaged while still providing high spatial resolutions.
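As a reference for interpreting those accuracy figures, the Nash-Sutcliffe Efficiency compares model residuals against the variance of the observations. A minimal sketch (the function name is ours, not from the reviewed study):

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    A value of 1.0 is a perfect fit; values <= 0 mean the model
    performs no better than predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

An NSE of 0.74, as reported for the 30 m flights, thus indicates that the classification explains roughly three quarters of the observed variance.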
Other studies suggest lighting conditions and tidal effects must be considered when planning missions. For example, (Farris, Defne, and Ganju 2019) suggested collecting imagery during solar noon to reduce shadows, particularly when the data will be used for salt marsh shoreline recognition. Shadows were shown to impact DTM accuracy by restricting the creation of ground points in the generated point cloud. However, sun glint on water in tidal areas is more prominent during this time and can degrade imagery quality if the imagery is used for visual and spectral analysis.
Flight altitude, overlap, and required spatial resolution were determined to be the most influential flight parameters for generating successful imagery products and eventual interpretation (Rupasinghe et al. 2018). When using visual interpretation, it is also important to reduce shadows during data collection. Water was found to dampen reflectance and impact various products, including a biomass estimation model, so it is important to consider the tidal cycle to limit the amount of water in the imagery (Doughty and Cavanaugh 2019). Collecting multi-temporal images over longer periods of time during similar tidal stages was also suggested.
The spatial resolution of collected imagery is a function of the flight altitude and the focal length and sensor width of the camera. A discussion of the focal lengths and sensor widths of different cameras is beyond the scope of this study, but it is important to note that many cameras used for sUAS remote sensing share similar characteristics. The required spatial resolution of a mission should depend upon the minimum mapping unit (MMU), which is driven by the research question. The MMU is determined by the smallest phenomenon of interest and requires expert knowledge of that particular phenomenon. Surprisingly, the concept of an MMU was not mentioned in any study. The spatial resolution ranged from 0.8 cm to 32 cm across the reviewed articles. The authors of (Rupasinghe et al. 2018) flew at an altitude of 121.92 m and used a Canon RGB camera to obtain the outlier 32 cm spatial resolution. The authors do not explain why a spatial resolution of 32 cm was used, though the sUAS imagery served as a higher spatial resolution alternative for comparison with 1 m and 2 m data. Finer spatial resolutions can be gathered with those parameters, and more detail from the authors is needed to understand how and why such a coarse resolution was collected. Beyond this outlier, the coarsest spatial resolution collected in any study was 13.9 cm. The median spatial resolution, including the outlier, was 4.23 cm, which suggests a preference among researchers for mapping coastal wetland vegetation at spatial resolutions in the 1 cm to 5 cm range.
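The relationship between these camera parameters and spatial resolution can be made concrete with the standard ground sample distance (GSD) calculation; the camera values below are illustrative only, not drawn from any reviewed study:

```python
def gsd_cm(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sample distance (cm/pixel) across the image width:
    GSD = (altitude * sensor width) / (focal length * image width)."""
    return (altitude_m * sensor_width_mm * 100.0) / (focal_length_mm * image_width_px)

# Illustrative camera: 13.2 mm sensor width, 8.8 mm focal length, 5472 px wide
print(round(gsd_cm(100.0, 13.2, 8.8, 5472), 2))  # 2.74 cm/pixel at 100 m AGL
```

Working backwards from a required MMU (say, the smallest plant patch of interest) through this formula gives the maximum flight altitude for a given camera.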

sUAS data correction and calibration for coastal wetland vegetation research
Image correction (e.g. radiometric or geometric corrections) is essential for ensuring accuracy of the reflectance value and geolocation of remotely sensed imagery. Any errors introduced in the collection of the images will be propagated through to the imagery products and eventual results. Several studies reviewed and applied various georeferencing and radiometric correction techniques to normalize the sUAS data.
Ground control points (GCPs) were deemed essential for ensuring proper georegistration of the collected sUAS data. Even though more expensive sUAS may include high accuracy GNSS, such as receivers augmented with RTK or PPK, collecting and using an appropriate number of GCPs remains essential for achieving the map product accuracy needed for the application. In our review, many articles did not discuss the number of GCPs used, particularly the earlier studies. Of the 11 articles published before 2020, only 3 (27%) mentioned that GCPs were used for georeferencing. In contrast, 8 of the 9 articles published in 2020 and 2021 (89%) mentioned the use of GCPs. The number of GCPs and the size of the study area did not show any correlation, indicating a lack of consensus on how many GCPs should be used for different study area sizes in this type of environment (Figure 5). The most GCPs collected in any study was 30, while the fewest was six: in a single study using an eBee Plus RTK sUAS, researchers collected six GCPs over their 29 ha area (Dale et al. 2020).
The authors of (Santos Santana et al. 2021) experimented with sUAS flight altitude and ground control points in georeferencing and discovered that accuracy does not improve much after including more than 6 to 8 GCPs per hectare. However, the authors concluded that the number of GCPs and the altitude of the aircraft are crucial components to incorporate into mission planning for a successful mission and usable product. Interestingly, (Haskins et al. 2021) suggested that two GCPs per hectare is sufficient for high-resolution mapping and that accuracy did not improve much beyond that threshold. We suggest the number of GCPs per hectare should also be selected based on the flight altitude, study area size, and complexity of the topography.
Placing GCPs in a wetland environment can be difficult due to the inaccessibility of marshes and the height and complexity of the canopy. Solutions to this issue were to place GCPs strategically around the vegetation canopy or to use an elevated GCP on a platform rising above the canopy (Broussard, Visser, and Brooks 2020). While elevated GCPs remove some of the studied phenomenon from the images, they are a non-destructive method that protects the underlying vegetation.
The radiometric correction conducted across numerous studies transformed the raw digital numbers (DNs) collected by the multispectral or hyperspectral cameras into radiance and percent reflectance. Accurate radiometric correction ensures that image data can be compared across collection times and minimizes seamline inconsistencies across image scenes. Changing atmospheric conditions can cause the raw DNs of the same objects to change dramatically across scenes, even over just a few minutes. The most common calibration method used a radiometric calibration target provided by the camera manufacturer, such as a MAPIR Calibration Target V2 or a MicaSense red edge panel (Johnson, Manby, and Devine 2020; Doughty and Cavanaugh 2019). This involved placing the calibration target on the ground and using the sUAS camera to capture an image of the target before and after the programmed flight. The collected images were then used in a software package (i.e. Pix4D Mapper) to pre-process and calculate reflectance values from the raw digital numbers. This can be done with consumer grade RGB digital cameras as well.
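The panel-based conversion can be sketched as a one-point empirical-line correction through the origin; this is a simplification of what processing packages such as Pix4D Mapper perform internally, and the function name and values are illustrative, not from any reviewed study:

```python
def dn_to_reflectance(dn, panel_dn, panel_reflectance):
    """One-point empirical-line correction through the origin:
    scale a raw digital number by the gain implied by imaging a
    calibration panel of known reflectance under the same lighting."""
    gain = panel_reflectance / panel_dn
    return dn * gain

# Illustrative values: a 50%-reflectance panel imaged at DN 32768
print(dn_to_reflectance(16384, 32768, 0.5))  # 0.25
```

Because the gain is re-estimated from panel images taken before and after each flight, scenes collected under different illumination become comparable in reflectance units.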
In one study, the authors used an upward-facing sensor to capture the downwelling irradiance of the sun simultaneously with the ground surface reflectance captured by the camera. Another technique paired image panels of known reflectance with the downwelling sensor to calibrate the DN values to percent reflectance. For example, some authors used reference measurements from a white board and dark measurements from the lens cap to calibrate the values (Doughty and Cavanaugh 2019). Using ground reflectance targets is the most widely adopted method and can be considered a best practice for radiometric correction in coastal wetlands using sUAS.

Useful products of sUAS imagery for coastal wetland vegetation research
The products derived from sUAS images in each study were dependent upon the goals of each experiment. Digital elevation models (e.g. DEMs, digital terrain models or DTMs, and digital surface models or DSMs) were constructed from the SfM-derived point clouds or LiDAR returns in 17 of the 20 studies. Elevation information was often used for determining structural characteristics of vegetation. For example, elevation maps were subsequently used as model inputs for delineating wetland boundaries/shorelines or describing the heights of various species (Farris, Defne, and Ganju 2019; Dai et al. 2020). Elevation data were also used as model inputs for classifying species. It was demonstrated in (Samiappan et al. 2016) that including a DSM can improve classification accuracy. The authors discovered that including the DSM produced one of the lowest omission errors (11.3%) when mapping invasive Phragmites australis.
Another common product derived from the spectral bands of sUAS imagery was vegetation indices. Vegetation indices were also used as input for species classifications. Table 1 displays the vegetation indices used in the 20 coastal wetland articles. Multiple indices were used as inputs into biomass models. The most commonly used index was NDVI, which was a successful indicator of coastal wetland vegetation biomass (Doughty and Cavanaugh 2019). In (Brooks et al. 2019), the authors tested the capabilities of four indices for discriminating different submerged aquatic vegetation (SAV) species. While some of the indices were originally developed for terrestrial vegetation (i.e. Modified NDVI), they were adapted for detecting submerged aquatic vegetation in the water column. The authors found that the modified NDVI was important for separating Eurasian watermilfoil (EWM) from other vegetation types.
Vegetation indices based on RGB bands were widely used since many commercial sUAS come with RGB cameras. A few popular indices include the Triangular Greenness Index (TGI), Visible Atmospherically Resistant Index (VARI) and Excess Green Index (ExGI), among others (Johnson, Manby, and Devine 2020; Dale et al. 2020). The authors of (Johnson, Manby, and Devine 2020) found that TGI was more effective than VARI in separating vegetation from water. The authors of (Dale et al. 2020) demonstrated the utility of RGB vegetation indices for mapping differences in vegetation cover in a coastal wetland and suggested their use for cost-effective monitoring with off-the-shelf sUAS cameras. For example, ExGI slightly outperformed other indices in discriminating vegetation cover (Dale et al. 2020). The authors of (Zhou, Yang, and Chen 2018) used the Difference Vegetation Index (DVI), based on the NIR and red bands, to effectively model Spartina alterniflora biomass.
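The indices named above are simple band arithmetic over reflectance values. The following sketch implements the common formulations; the nominal band-centre wavelengths used in TGI are an assumption here, and inputs are per-pixel reflectances:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def dvi(nir, red):
    # Difference Vegetation Index (as used by Zhou, Yang, and Chen 2018)
    return nir - red

def vari(red, green, blue):
    # Visible Atmospherically Resistant Index
    return (green - red) / (green + red - blue)

def exgi(red, green, blue):
    # Excess Green Index
    return 2.0 * green - red - blue

def tgi(red, green, blue, lam_r=670.0, lam_g=550.0, lam_b=480.0):
    # Triangular Greenness Index: area of the triangle spanned by the red,
    # green and blue reflectances at assumed band-centre wavelengths (nm).
    return -0.5 * ((lam_r - lam_b) * (red - green)
                   - (lam_r - lam_g) * (red - blue))
```

Applied to NumPy arrays instead of scalars, the same functions produce whole-scene index rasters in one call.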
Orthomosaics derived from sUAS RGB images were used in several studies for visual analysis and manual digitizing of vegetation boundaries (Taddia et al. 2021; Farris, Defne, and Ganju 2019; Dai et al. 2020). The high-spatial-resolution RGB orthomosaics were also used to generate random sample points for validation and accuracy assessment (Rupasinghe and Chow-Fraser 2019). Not all studies required processing in photogrammetric software before the sUAS data were usable. In (Brooks et al. 2019), the authors performed radiometric calibrations and generated vegetation indices with each individual image, and spectral signatures were also derived. The spectroradiometers provided significant spectral detail and were able to deliver detailed spectral signatures for the SAV.

Methods of analysis of sUAS imagery for coastal wetland vegetation research
A variety of analytical methods were effective in coastal wetlands. Many studies in the coastal wetland environment had the goal of mapping the spatial distribution of species and other surrounding phenomena. In a comparison of pixel-based and object-based machine learning classifiers and the traditional maximum likelihood classifier (MLC), (Abeysinghe et al. 2019) found that a Support Vector Machine (SVM) classifier outperformed all others with an overall accuracy (OA; see section 3.8) of 90%. (Johnson, Manby, and Devine 2020) used an SVM classifier to effectively separate mangroves from other primary vegetation classes. Another study also used an SVM classifier to map 17 species in a coastal wetland with an OA of 68.7%. When comparing the original high-resolution sUAS orthomosaic to resampled, lower spatial resolution orthomosaics for classification, the high-resolution imagery also produced a higher classification accuracy.
Other classification comparisons were also performed. The authors of (Cao et al. 2018) performed a number of experiments with varying feature combinations for classifying different mangrove species with K-nearest neighbours (K-NN) and SVM classifiers. They determined that for the object-based classifiers, segmentation characteristics best represented the underlying vegetation with a spatial resolution of 0.15 m, a segmentation scale of 100, and a compactness of 0.7. However, the spatial resolution, segmentation scale and compactness will require adjustment based on research objectives. Both classifiers performed well, but the SVM classifier performed the best with an 89.55% OA. The authors of (Broussard, Visser, and Brooks 2020) used object-based image analysis to first classify the orthomosaics into vegetation and water, and then into more in-depth classes. In (Rupasinghe et al. 2018), the authors tested unsupervised classifiers, an MLC and an SVM classifier on sUAS imagery and then compared the accuracies to manned-airborne hyperspectral imagery. The unsupervised classifications did not work well with sUAS data and yielded an OA of only 28.8%. The authors suggested this is because the additional spectral information from a NIR band is important for classifications, and the sUAS only collected RGB images. The SVM supervised classifier (OA = 82.4%) outperformed the MLC (OA = 79.3%), again suggesting that SVM classifiers perform the best. However, the manned aircraft hyperspectral imagery classifications (OA = 85.5%) outperformed even the most accurate sUAS classifications, suggesting that spectral information may be more important than high spatial resolution. Based on the reviewed literature, there is a consensus that SVM classifiers are the most accurate for coastal wetland vegetation classification.
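As a deliberately simplified illustration of the pixel-based SVM approach, the sketch below trains scikit-learn's SVC on synthetic two-class "pixel" features standing in for spectral bands plus a DSM height layer; the class distributions are invented and do not reproduce any of the reviewed studies:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 600

# Synthetic stand-ins for per-pixel features (green, red, NIR reflectance
# plus a DSM height); real workflows would sample these from the
# orthomosaic and elevation model at training polygons.
water = np.column_stack([rng.normal(0.05, 0.01, n), rng.normal(0.04, 0.01, n),
                         rng.normal(0.02, 0.01, n), rng.normal(0.0, 0.05, n)])
marsh = np.column_stack([rng.normal(0.10, 0.02, n), rng.normal(0.07, 0.02, n),
                         rng.normal(0.45, 0.05, n), rng.normal(0.8, 0.2, n)])
X = np.vstack([water, marsh])
y = np.repeat([0, 1], n)  # 0 = water, 1 = marsh vegetation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
oa = accuracy_score(y_te, clf.predict(X_te))  # overall accuracy on held-out pixels
```

Real classifications would add the kernel and regularization tuning, feature scaling, and spatially independent validation samples that the reviewed studies describe.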
Models for estimating wetland vegetation biomass were developed using linear statistical models. Using sUAS data for accuracy assessment, (Zhou, Yang, and Chen 2018) discovered that their biomass model resulted in an R² of 0.89 and root mean squared error (RMSE; see section 3.8) of 0.415 kg m⁻². The model developed using an NDVI layer derived from a multispectral camera on an sUAS in (Doughty and Cavanaugh 2019) resulted in an R² of 0.67 and RMSE of 0.344 kg m⁻².
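A minimal version of such an index-to-biomass linear model, showing how the reported R² and RMSE are derived, can be written with NumPy alone. The plot-level NDVI and biomass values below are entirely hypothetical:

```python
import numpy as np

def fit_biomass_model(index_vals, biomass):
    """Ordinary least-squares fit biomass = a*index + b, returning the
    coefficients plus R^2 and RMSE of the fit."""
    a, b = np.polyfit(index_vals, biomass, 1)
    pred = a * index_vals + b
    resid = biomass - pred
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((biomass - biomass.mean()) ** 2)
    rmse = np.sqrt(np.mean(resid ** 2))  # same units as biomass (kg m^-2)
    return a, b, r2, rmse

# Hypothetical plot-level samples: NDVI vs. measured biomass (kg m^-2).
ndvi_obs = np.array([0.20, 0.35, 0.40, 0.55, 0.60, 0.70, 0.80])
biomass_obs = np.array([0.30, 0.60, 0.70, 1.10, 1.20, 1.50, 1.80])
a, b, r2, rmse = fit_biomass_model(ndvi_obs, biomass_obs)
```

In practice the R² and RMSE reported in the reviewed studies were computed against an independent validation set rather than the training fit shown here.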
A variety of statistical tests (i.e. ANOVA, Tukey HSD, and Tukey-Kruskal) were used in (Brooks et al. 2019) to test for differences between direct field measurements (e.g. boatside and out-of-water) and data collected using an sUAS. No significant differences were found between the spectral index values when comparing direct field and sUAS-collected data. Kolmogorov-Smirnov (K-S) tests were also performed on the spectral signatures of different vegetation species. Differences were found between EWM, the vegetation of interest, and several other species.
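The two-sample K-S test used for comparing spectral signatures is available in SciPy. This sketch applies `ks_2samp` to synthetic index samples for two hypothetical classes; the distributions are invented for illustration only:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical per-pixel index samples for two classes; the EWM sample is
# given a deliberately different distribution from other vegetation.
ewm_sample = rng.normal(0.35, 0.05, 200)
other_sample = rng.normal(0.55, 0.05, 200)

stat, p = ks_2samp(ewm_sample, other_sample)  # small p -> distributions differ
```

The K-S test is attractive here because it compares entire distributions without assuming normality, which suits spectral samples drawn from heterogeneous canopies.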

Validation and accuracy assessment of sUAS imagery for coastal wetland vegetation research
Most accuracy assessments performed in the literature were conducted using validation data collected just before or after sUAS flights (Zhou, Yang, and Chen 2018; Doughty and Cavanaugh 2019). Validation data were also created from manual image interpretation of the high-resolution sUAS imagery. In this case, high-spatial-resolution sUAS data were used as if the researchers were there in person. As an example, (Zhou, Yang, and Chen 2018) used the imagery to generate validation data for fractional vegetation cover in a satellite image-derived model. Similarly, the authors of (Rupasinghe et al. 2018) used a stratified random sampling method to generate random points from sUAS imagery-based land cover maps for accuracy assessment. In vegetation structure-focused studies, in situ GNSS surveys and traditional vegetation surveys were used for accuracy assessment (Broussard, Visser, and Brooks 2020).
Traditional validation and accuracy assessment methods were effective for evaluating the performance of classifiers in the coastal wetland environment. For example, (Abeysinghe et al. 2019) effectively used a three-fold cross-validation approach to assess the many classifiers in their experiment for mapping phragmites. Other studies used in situ data and random sampling methods to determine a method's accuracy from a confusion matrix (Marcaccio, Markle, and Chow-Fraser 2016; Samiappan et al. 2016). Overall accuracy (OA) was the most commonly computed accuracy assessment metric. OA was computed using equation 1:

OA = Σ x_ii / N (1)

where x_ii represents a pixel classified correctly, N is the total number of pixels being assessed, and OA is the overall accuracy. The OA served as the baseline accuracy metric. For a similar type of assessment, (Haskins et al. 2021) used a weighted Nash-Sutcliffe Efficiency (NSE) coefficient, calculated using equation 2:

NSE = 1 − [Σ W_i (E_i − M_i)²] / [Σ W_i (M_i − M̄)²], with sums over the n plots (2)

where n is the number of sampled plots, E is the estimated cover from classified imagery, M is the measured cover from the field survey, and W is the number of intercept sampling points (weight). The NSE metric indicates agreement between the classified results and field results (Haskins et al. 2021).
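Equations 1 and 2 translate directly into code. The sketch below computes OA from a hypothetical confusion matrix; the weighted NSE function implements one plausible reading of the weighted form (applying W to each plot's squared term), which is an assumption rather than a reproduction of Haskins et al.'s exact formulation:

```python
import numpy as np

def overall_accuracy(cm):
    """Equation 1: the sum of correctly classified pixels (the diagonal
    terms x_ii) divided by the total number of assessed pixels N."""
    cm = np.asarray(cm)
    return np.trace(cm) / cm.sum()

def weighted_nse(estimated, measured, weights):
    """Weighted Nash-Sutcliffe Efficiency; 1.0 indicates perfect agreement.
    Applies the plot weights W to each squared term (an assumed form)."""
    e, m, w = (np.asarray(v, dtype=float) for v in (estimated, measured, weights))
    m_bar = np.average(m, weights=w)
    return 1.0 - np.sum(w * (e - m) ** 2) / np.sum(w * (m - m_bar) ** 2)

cm = np.array([[50, 5],
               [10, 35]])          # hypothetical 2-class confusion matrix
oa = overall_accuracy(cm)          # (50 + 35) / 100
```

From the same confusion matrix, per-class producer's and user's accuracies follow by dividing the diagonal by column and row sums respectively.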
In studies estimating vegetation biomass, root mean squared error (RMSE; equation 3) was used to describe the differences between the modelled values and validation dataset values:

RMSE = √[(1/N) Σ (x̂_i − x_i)²] (3)

where N is the number of data points, x_i is the ground-measured biomass, and x̂_i is the estimated biomass (Doughty et al. 2021; Zhou, Yang, and Chen 2018; Doughty and Cavanaugh 2019). Lower RMSE values indicated more accurate estimations of biomass.

Challenges for sUAS imagery for coastal wetland vegetation research
A variety of challenges exist for using sUAS in coastal wetlands. Some challenges are common across all sUAS remote sensing applications, while some are specific to the coastal wetland environment. For example, vegetated areas with standing water and changing tide levels can pose a problem when using vegetation indices to classify the vegetation (Lishawa et al. 2017). NDVI values varied substantially between in situ measurements and the sUAS measurements for these areas within the study site due to the presence of water. In (Doughty and Cavanaugh 2019), the authors suggested that environmental conditions, such as cloud cover and tidal stage, can adversely affect biomass estimations. To combat this, the authors suggest collecting imagery under consistent atmospheric conditions as close to solar noon as possible. This technique would not be effective with standing water, however. The presence of water can affect reflectance values and potentially cause an underestimation of NDVI, so imagery should be collected at a low tidal stage. Ideal research conditions should be chosen based on research objectives. More generally, sensor noise can also contribute to radiometric variability causing inconsistencies between data collections. Other shortcomings described by (Rupasinghe et al. 2018) included misclassifications caused by shadows and shaded areas. While special processing techniques were used in an attempt to remove these artefacts, they still had an effect on the classification results. Finally, coastal wetland vegetation phenology can impact how vegetation indices and biomass correlate and can influence vegetation classification accuracy. Collecting data during peak biomass for best sUAS-derived biomass estimations was recommended (Lishawa et al. 2017). When planning to compare imagery datasets across time, it is important to collect them in a similar temporal window (e.g. season) so that vegetation phenology is approximately the same.
A general sUAS-related challenge described in the literature is the presence of artefacts in the imagery, created during orthomosaic generation, which can make classifications difficult (Samiappan et al. 2016). 'Artefacts' can be described as blurred objects or discontinuities in the orthomosaicked images. For example, a boardwalk in a section of marsh may be blurred out or slightly misaligned during the mosaicking process. The misalignment or blurring of the object can cause misclassifications. Discontinuities can differ across datasets from the same area, which can make change detection difficult.
Legal restrictions from the Federal Aviation Administration (FAA) require unmanned aerial pilots-in-command flying under Part 107 rules to be 'licensed' (i.e. hold the remote pilot certificate) and follow certain regulations that can potentially limit flight times and study area extent. One of these restrictions, maintaining constant visual line-of-sight, can restrict the study area to the distance at which the remote pilot can visually recognize the sUAS (Broussard, Visser, and Brooks 2020). The requirement of FAA certification can limit the number of individuals who operate an sUAS for research purposes. Many states also place additional restrictions on launching, landing, or operating from certain locations (Hodgson and Sella-Villa 2021). As a more dramatic example, North Carolina requires a separate state UAS permit in addition to the FAA remote pilot certificate. Remote pilots need to investigate and follow local rules and regulations wherever an sUAS is operated.
Computing power is also a well-documented challenge. sUAS missions with a high percentage of overlap (e.g. 80%) can collect large numbers of images to be processed. Storing and processing the images requires large amounts of computer random access memory (RAM), capable graphics processing units (GPUs) and extensive storage (Broussard, Visser, and Brooks 2020). The millions of points generated in a point cloud can be difficult to process and analyse as well. Another technological limitation comes in the form of current battery capacities and their link to flight times. Depending on the size of the drone and payload, battery power limits flight duration to less than one hour for most fixed-wing and multicopter sUAS. Multiple batteries are required to fly over larger areas. As technology improves, our ability to collect and analyse datasets in these environments will also improve.

Sensors
Both RGB and multispectral cameras have been used extensively to map and classify vegetation in three of the four general objectives observed in this review (Figure 3). Future coastal vegetation research should more extensively examine other sensors and instruments, such as thermal infrared sensors, sUAS-borne LiDAR, and hyperspectral sensors. LiDAR and hyperspectral sensors are expensive investments and therefore were used in only two studies. However, data collected from these sensors and others can contribute significant knowledge on the biophysical characterization of coastal vegetation using an sUAS platform. For example, thermal sensors have been used along with other multispectral sensors to detect chlorophyll content and water stress from an sUAS for crops (Berni et al. 2009). Applying similar techniques to a coastal environment may provide greater insight into variables affecting water stress in coastal vegetation. For example, (Gao et al. 2014) utilized Landsat TM and ETM+ imagery, including the thermal band, for coastal region drought monitoring. However, at such coarse spatial scales, it is difficult to discriminate coastal wetland vegetation from other types of upland vegetation. Higher spatial resolutions can highlight stresses affecting coastal vegetation in specific regions, and even for specific species. There are many applications for these sensors in mapping coastal vegetation, particularly in submerged aquatic vegetation, algae blooms, and mapping structural characteristics of vegetation (Brooks et al. 2019; Kislik, Dronova, and Kelly 2018). As the technology becomes less expensive and these sensors become more widely available, an increasing number of applications will contribute to a growing knowledge base.

Incorporating multiple scales
Many articles reviewed in this study utilized sUAS imagery as ancillary data or validation data in the study. For example, (Zhou, Yang, and Chen 2018) performed their main biomass model with satellite imagery, and then used the sUAS data to validate the satellite-derived model. In another study, (Doughty et al. 2021) compared the use of sUAS for coastal wetland biomass modelling to coarse resolution satellite-based biomass modelling and found the sUAS models performed better. Conversely, studies like (Marcaccio, Markle, and Chow-Fraser 2016), (Dale et al. 2020), and (Doughty and Cavanaugh 2019), among many others, focused on the sUAS as the sole remote sensing platform.
There is merit in identifying other opportunities for data integration from all platforms and scales. sUAS can provide excellent model input and guidance for extracting information from satellite data. By including sUAS data in models of multiple scales, a high-resolution imagery gap can be filled between traditional satellite and aerial-based imagery and corresponding in situ data. sUAS imagery can be used to detect the many fine-scale spatial patterns and changes across a study area, and in some instances, has outperformed broader-scale products developed using Landsat imagery (Doughty et al. 2021). sUAS do an excellent job improving detection of heterogeneous spatial patterns, and pairing these data with coarser resolution data can enhance larger-scale analysis. Although current legal restrictions and battery limitations may curb study area size and sUAS applicability, there are increasing opportunities for sUAS to be used regularly by environmental researchers and coastal managers. Battery performance is improving, and flight time capabilities have steadily increased with new iterations of fixed-wing and multicopter sUAS (Figure 6). Future research can take advantage of increased flight capabilities to investigate how sUAS can serve as a reliable independent remote sensing platform, without the need for manned-airborne or satellite datasets, to make a meaningful contribution.

Standardization of procedures
As previously mentioned, the coastal environment is very dynamic and can be a challenging environment for remote-sensing data collection. The ever-changing weather, cloud cover, wind and tides necessitate the development of general guidelines regarding capturing imagery in these environments. Other variables that can change among sUAS missions include GCP and validation survey data accuracy, mission planning, and flight parameters. Coastal wetland vegetation research could benefit from a set of field-tested general guidelines. Because sUAS are new and developing rapidly, other scholarly research has called for a standardization of procedures and data management (Wyngaard et al. 2019; Poley and McDermid 2020). This work, particularly that of standardizing sUAS data management, has engaged the public and led to a discussion around several posed challenges. With regard to sUAS in coastal wetland vegetation research, much progress has been made in understanding the impact of flight parameters on imagery products, and the effects of varying spatial resolutions have also been assessed (Haskins et al. 2021).
However, additional mission planning research is required to complete a set of flight standards specific to coastal wetland vegetation. Future research should explore the appropriate number of GCPs for a given study area size in coastal wetland environments. Topography and vegetation characteristics differ between coastal wetland environments, and therefore may need different georeferencing requirements. While (Haskins et al. 2021) determined a minimum of 2 GCPs per hectare for their study area, (Santos Santana et al. 2021) found a different threshold for their study area. The GCP requirement will be dependent upon the project accuracy requirements. Further investigation is needed in a variety of coastal wetland environments to come to a consensus. Until recently, GCPs were not frequently used or mentioned in sUAS coastal vegetation studies. A consensus for lower limits of flight parameters and mission planning (e.g. overlap or altitude and spatial resolution required to map a particular vegetation species) was not reached in our literature review. In order to correctly compare and build upon prior research, standards should be developed as soon as possible. We call on remote sensing experts and coastal wetland vegetation experts to work together on standardizing procedures for sUAS missions in vegetated coastal wetland environments.
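Mission-planning arithmetic of this kind is easy to script. The sketch below pairs the standard photogrammetric ground sample distance (GSD) formula with a GCP-count helper based on a per-hectare density such as the 2 GCPs/ha threshold reported by Haskins et al. (2021); the camera parameters and the minimum-floor value are hypothetical, not recommendations:

```python
import math

def ground_sample_distance_cm(altitude_m, focal_mm, pixel_um):
    """Standard photogrammetric GSD: the ground footprint of one pixel,
    in centimetres, for a nadir-pointing camera."""
    return (pixel_um * 1e-6 * altitude_m) / (focal_mm * 1e-3) * 100.0

def min_gcps(area_ha, gcps_per_ha=2.0, floor=5):
    """GCP count from an assumed per-hectare density, with an assumed
    minimum floor so small sites still get enough control."""
    return max(floor, math.ceil(area_ha * gcps_per_ha))

# Hypothetical mission: a 12.3 ha marsh flown at 60 m with an 8.8 mm lens
# and 2.41 um pixels (illustrative camera parameters).
n_gcps = min_gcps(12.3)
gsd = ground_sample_distance_cm(altitude_m=60, focal_mm=8.8, pixel_um=2.41)
```

Extending this with overlap, image footprint, and battery-duration terms would give a first-pass check on whether a site can be covered in one flight.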

Conclusion
Coastal environments are significant for many reasons, including their economic, ecological, recreational, and hazard-reduction benefits. Coastal wetland vegetation plays a critical role in each of these benefits and warrants regular monitoring due to the extremely dynamic nature of the coastal environment. This study reviewed and synthesized 20 articles pertaining to sUAS use for remote sensing of coastal wetland vegetation. Since 2016 there has been an increase in literature regarding the use of sUAS in coastal vegetation research. Limitations were related to using ground control points; balancing altitude, image overlap, and battery capacity; legal restrictions; environmental limitations; and computational requirements of the data. Overall, sUAS platforms are increasingly used for mapping and monitoring coastal vegetation, and the benefits outweigh the limitations for a variety of applications. As sUAS platforms continue to be upgraded and new and improved sensors become more readily available and cost-effective, the applications and capabilities of sUAS for remote sensing of coastal vegetation will continue to grow.
To summarize, future research should address the following questions:
• How should sUAS missions be planned to optimally acquire vegetation characteristics in each vegetated coastal environment?
• How can sensors implemented at other scales (e.g. satellite and manned aircraft) be integrated with sUAS remote sensor data to provide meaningful information about wetland vegetation characteristics?
• How can we develop better methods of geometrically correcting sUAS imagery that meet accuracy requirements?
• In what applications can sUAS serve as principal remote sensing instruments, and in what ways do they best serve as ancillary or complementary data sets?
As these questions are being addressed, interdisciplinary research will become more common and the future of sUAS research in coastal environments and corresponding best practices will improve.