A review of fusion frameworks using optical sensors and Synthetic Aperture Radar imagery to detect and map land degradation and sustainable land management in semi-arid regions

Abstract This paper examines a feature-level fusion framework for detecting and mapping land degradation (LD) and enabling sustainable land management (SLM) in semi-arid areas using optical sensors and Synthetic Aperture Radar (SAR) satellite data. The objectives of this review were to (i) determine the trends and geographical location of land degradation mapping publications, (ii) identify and report current challenges pertaining to mapping LD using multiscale remote sensing data, and (iii) recommend a way forward for monitoring LD using multiscale remote sensing data. The study reviewed 78 peer-reviewed research articles published over the past 24 years (1998–2022). Image fusion has the potential to be more useful in various remote sensing applications than individual sensor image data, making the interpretation process more informative and valuable. In addition, this review discusses the importance of SAR and optical image fusion, pixel-level techniques, applications, and the major classes of quality metrics for objectively assessing fusion performance. The literature reviewed indicates that SAR and optical image fusion for detecting and mapping land degradation and enabling sustainable land management has not been fully explored. Advanced techniques such as the fusion of SAR and optical satellite imagery need to be incorporated for the detection and mapping of LD, as well as for the promotion of SLM in halting LD in South African drylands and around the world. We conclude that there is scope for further research on the fusion of SAR and optical images, as new microwave and optical sensors with higher resolution are introduced on a regular basis. The results of this review contribute to a better understanding of the applications of SAR and optical image fusion in future research in the severely degraded drylands of southern Africa.

KEY RESEARCH GAPS

The fusion of SAR and optical data remains an open challenge, and the future of many remote sensing applications lies in this kind of fusion. Land degradation is one of the greatest environmental challenges in South Africa, causing a reduction in the capacity of the land to perform ecosystem functions and services that support society and development. Yet, in South Africa, no studies have widely investigated the potential of fusing SAR and optical data to detect and map land degradation and SLM practices. This paper establishes a baseline for understanding the application of fused SAR and optical data as rapid tools for mapping, monitoring, and evaluating LD, as well as the impacts of SLM practices, in South Africa's degraded drylands.


Introduction
Land degradation (LD) is currently one of the greatest environmental concerns. Globally, it is estimated that about 25% of the land surface is highly degraded, while 36% is moderately degraded (UNCCD 2017b). In South Africa, approximately 0.7 million hectares of land are considered to be degraded (DEA-NAP 2018). LD is defined as a persistent decrease in the capacity of an arid or semi-arid ecosystem to supply a range of services, including (but not restricted to) forage, fuel, timber, crops, fresh water, wild-harvested foods, biodiversity habitats, and tourism opportunities. Land degradation is a complex term due to its interdisciplinary nature, incorporating geographical, ecological, climatic, and socio-economic perspectives. Such complexity arises partly from an ongoing discussion on how land degradation should be defined and measured (Nkonya et al. 2013; Lal 2015; Mirzabaev et al. 2015; Dubovyk 2017).
There is a range of definitions of land degradation. UNCCD (2017b) refers to land degradation as the 'reduction or loss of biological or economic productivity, and complexity of rain-fed cropland, irrigated cropland, or rangeland, pasture, forest and woodlands resulting from land uses or from a process or combination of processes arising from human activities and habitation patterns' (Dubovyk 2017; Sanz et al. 2017; Thomas et al. 2017). In this review, land degradation is regarded as the reduction in vegetation cover, and the proliferation of bare soil patches is considered a potential land degradation risk. Generally, land degradation is mainly caused by anthropogenic factors such as overgrazing and deforestation, which leave the ground exposed to wind and surface runoff erosion (Reed et al. 2011; Traoré et al. 2015; Alemu 2016; Wairiu 2017; UNCCD 2017a; Thomas et al. 2018).
Unsustainable land management practices such as cultivation on steep slopes and continued conventional soil tillage reduce soil quality, thereby accelerating land degradation (Taddese 2001; Lal 2015). Rapid land cover changes and climate variations also threaten the productivity of rangelands, leading to bush encroachment and habitat fragmentation and causing degradation (Mussa et al. 2017; Ramoelo et al. 2018). Other biophysical factors that influence land degradation include seasonal rainfall variations and steep terrain, which accelerate the wearing away of the topsoil, especially in areas with low vegetation cover (Meadows and Hoffman 2002; Ochoa et al. 2016). Land degradation has a negative impact on people's livelihoods through the decrease in rangeland productivity and the loss of fertile topsoil (Bedunah and Angerer 2012).
The concept of SLM serves as a unifying theme for global efforts to combat desertification, drought and land degradation, climate change, and loss of biodiversity (World Bank 2008; Thomas et al. 2018). SLM combines technologies, policies, and activities aimed at integrating socioeconomic principles with environmental concerns in order to maintain or enhance production, increase the resilience of ecosystem services, and be economically viable and socially acceptable (Stringer and Reed 2007; Reed et al. 2011; Stringer and Harris 2014; Reed et al. 2015; Webb et al. 2017; Thomas et al. 2018; Smith et al. 2019). The SLM concept is relatively new in South Africa; hence there is an urgent need to further develop SLM approaches and technologies by increasing the number and range of stakeholders, including small farmers, communities, and other key groups (Bunning et al. 2016; Nigussie et al. 2017; Gonzalez-Roglich et al. 2019; Liniger et al. 2019; von Maltitz et al. 2019; Nzuza et al. 2021).
To date, mapping and monitoring land degradation have relied on four main approaches: expert opinion, biophysical modelling, the conventional field-based approach, and satellite observations (Belenguer-Plomer et al. 2019; Nzuza et al. 2021). The expert knowledge approach is based on the nature, extent, degree, and causes of soil degradation within a mapping entity and typically necessitates extensive knowledge and experience in soil conservation (Bindraban et al. 2012; Cherubin et al. 2016). The main disadvantages of expert knowledge, however, include subjectivity, inconsistency, and difficulties in accountability (Nzuza et al. 2021). The conventional field-based approach includes techniques such as local sampling and observational surveys, including flowcharts and volumetric surveys. The benefit of this technique is that it can provide detailed, objective information on soil degradation at the plot level. However, this method is frequently criticized for being time-consuming, labour-intensive, expensive, and applicable only to small areas.
LD and SLM detection and mapping are essential for countries worldwide, and remote sensing makes them possible across temporal and spatial scales, especially for semi-arid regions. Remote sensing data, whether from active or passive sensors, provide valuable information on LD and on the effectiveness of SLM practices compared with conventional field-based studies, and land cover mapping using remote sensing methods is widely used worldwide. The objectives of this paper are to (i) determine the trends and geographical location of land degradation mapping publications, (ii) identify and report current challenges pertaining to mapping LD using multiscale remote sensing data, and (iii) recommend a way forward for monitoring LD using multiscale remote sensing data.

Systematic reviews
A systematic literature review is a summary and assessment of the state of knowledge on a particular subject or research question, organized to present current information succinctly. The flowchart below (Figure 1) presents the methodology used to conduct this literature review. The first step was to choose the keywords carefully and systematically for the literature search using available internet literature resources. The precise keywords for this review were: remote sensing in South Africa; fusing Synthetic Aperture Radar (SAR) and optical imagery in South Africa; land degradation detection and mapping using image fusion of optical and SAR in South Africa; assessment of the effectiveness of sustainable land management interventions using fused SAR and optical imagery in South Africa; land-use and land-cover (LULC) classification using fused SAR and optical imagery in South Africa; vegetation cover mapping using fused SAR and optical imagery in South Africa; spatial analysis of land degradation in South Africa; alien invasive plant detection and mapping using satellite images in South Africa; biomass assessment using satellite imagery in South Africa; and so on.

Paper selection and exclusion
Downloaded papers from various publications were organized into folders. These papers were chosen for screening based on their titles and abstracts, while a small number of publications were chosen for screening based on the methods section. A final selection of papers was then examined thoroughly, and data were retrieved. The retrieved data were handled and thoroughly analysed in a Microsoft Excel (MS Excel) database. Maps, graphs, and charts were generated from the analysis using MS Excel and R software. The studies were divided into geographical areas (local, regional, and national). Based on a global baseline dataset for deep learning in SAR-optical data fusion for mapping LD, the studies under these areas were described. In order to compare the SAR-optical data fusion reported in different studies, the SAR-optical data fusion for mapping LD in each district/province under study was retrieved. The purpose of this review was to compare and discuss statistics reported between 1998 and 2022. From 1998 to 2022, we found 78 articles that presented various methodologies to map LD and assess its prediction using remotely sensed data and imagery. Each study covers the application of SAR and optical data fusion, and the LD reported in the different studies was compared. For managing, citing, and referencing published papers in the manuscript, we used Mendeley Reference Manager, a freely available reference management tool.

Synthetic Aperture Radar (SAR) imagery
Synthetic aperture radar is an active microwave sensor; owing to its longer wavelength, its radiation can, with the exception of heavy rain, pass through cloud cover, haze, dust, and other meteorological conditions (Kulkarni and Rege 2020). This capability makes SAR data accessible in all weather and environmental circumstances (Kulkarni and Rege 2020). An antenna collects some of the energy that is backscattered from various objects in the area that the radar illuminates at a right angle to the motion of the sensor platform. The amount of backscattered energy from distinct objects depends on their surface roughness, moisture content, and dielectric characteristics (Robertson et al. 2019). As a result, SAR can distinguish between different objects in the image based on their surface characteristics and produces an image that is rich in spatial information; it primarily characterizes the structural aspects of various objects on the surface (Zhang J et al. 2010). Radar signals are polarized. SAR data frequency and polarization, which refers to the orientation of the transmitted and received signal, are crucial for capturing vegetation structure. HH, HV, VH, and/or VV are the transmission and reception modes for multipolarized SAR systems, where H stands for horizontal wave orientation and V for vertical wave orientation (Figure 2). This enables the extraction of additional structural information and a more thorough assessment of the scattering characteristics of ground objects. The cross-polarized channels (HV and VH) are more closely related to the structure of the tree canopy because of its volumetric water content (Hughes et al. 2020; Meraner et al. 2020; Torres et al. 2016; Zhang J et al. 2010).
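As a minimal illustration of how these polarization channels are handled in practice (the function names and sample values here are our own, not drawn from any reviewed study), backscatter intensities are commonly converted from linear power to decibels, and the cross- to co-polarization ratio can serve as a simple proxy for canopy volume scattering:

```python
import numpy as np

def to_db(linear):
    """Convert linear backscatter intensity to decibels."""
    return 10.0 * np.log10(np.maximum(linear, 1e-10))

def cross_pol_ratio(hv, hh):
    """HV/HH ratio; higher values are commonly associated with volume
    scattering from vegetation canopies."""
    return hv / np.maximum(hh, 1e-10)

# Hypothetical 2x2 backscatter patches in linear power units
hh = np.array([[0.10, 0.20], [0.05, 0.08]])
hv = np.array([[0.02, 0.05], [0.01, 0.02]])

hh_db = to_db(hh)                # e.g. 0.10 -> -10 dB
ratio = cross_pol_ratio(hv, hh)
```

The small clamping constants guard against taking the logarithm of, or dividing by, zero-valued pixels.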

Optical imagery
Optical sensors are passive instruments that gather solar radiation reflected off objects on the Earth's surface. These sensors cover the ultraviolet, visible, and infrared wavelengths of the electromagnetic spectrum. Optical sensors are categorized as panchromatic, multispectral, or hyperspectral based on their spectral resolution. Panchromatic sensors detect a broad range of wavelengths in the visible and near-infrared regions of the electromagnetic spectrum; they have a high spatial resolution but a poor spectral resolution. Due to their wavelength specificity, multispectral sensors respond differently to the electromagnetic spectrum's various wavelength bands, which range from visible to near-infrared (Figure 3). By separating targets in the scanned area depending on their multispectral reflectance, the spectral resolution of these images is increased (Mishra and Susaki 2014; Garcia et al. 2015; Togliatti et al. 2019; Hughes et al. 2020; Kulkarni and Rege 2020; Hillmer et al. 2021; Cui et al. 2022). Because multispectral sensors are passive, they require a wide instantaneous field of view to receive enough energy; as a result, multispectral images have a low spatial resolution. The spatial resolution of multispectral images is also constrained by the sensor platform's restricted storage capacity and the earth station's limited bandwidth. Higher spectral resolution improves the interpretability of multispectral images. Hyperspectral sensors are characterized by high spectral resolution and sensitivity. These sensors can offer nearly continuous measurements of the visible and near-infrared portions of the electromagnetic spectrum and can have hundreds of spectral bands (Joshi et al. 2016; Kulkarni and Rege 2020; Zhang R et al. 2020).

Significance of fusion of SAR and optical imagery
SAR images are rich in spatial information and available in all weather conditions and at all times, yet they lack the spectral data that is essential for many remote sensing applications. SAR imagery is also difficult to interpret and is tainted with speckle noise (i.e. granular noise present in radar imagery). Optical/multispectral images are produced using sunlight reflected from terrestrial objects. Two objects with distinct structures may look the same in optical images but can be recognized in SAR images, which are sensitive to structure rather than spectra. To alleviate these constraints, datasets gathered by remote sensors that operate on different fundamental physical principles can be combined to produce synergistic data, in particular to improve land-use dynamics detection (Rani et al. 2017; Kulkarni and Rege 2020). In this way, SAR and optical images offer complementary data about the area being imaged, and combining these images creates a composite image with a richness of spectral and spatial data. Analysis of the combined images aids a better comprehension and interpretation of the imaged area (Reiche et al. 2013; Kulkarni and Rege 2020; Baydogan and Sarp 2022; Park et al. 2022; Tufail et al. 2022; Zhang C et al. 2022; Zhao et al. 2022).

Pre-processing for SAR and optical image fusion
To improve pixel-level fusion performance, SAR and optical images must be pre-processed. This pre-processing comprises two steps: registration of the images to be merged and speckle reduction in the SAR image. The coherent processing of the backscattered signal taints SAR images with multiplicative speckle noise, which makes the image look grainy and makes SAR images challenging to comprehend visually (Heydari and Mountrakis 2018; Kulkarni and Rege 2020). Therefore, reducing speckle noise is a crucial pre-processing step before merging SAR and optical images. The primary objectives of speckle reduction filters are (1) effective noise suppression in homogeneous areas, (2) preservation and enhancement of image edges, and (3) visual appearance enhancement.
Compromises are necessary since it is difficult to accomplish all of these goals at once. Researchers have developed a variety of methods for reducing speckle noise in SAR images. Speckle reduction techniques fall into two basic types: spatial domain methods and multi-resolution domain methods. Some of the most popular spatial domain techniques include the Lee filter, Frost filter, Extended Lee filter, Extended Frost filter, and Gamma MAP filter (Deepthy Mary Alex et al. 2020; Yin et al. 2020; Karimi and Taban 2021; Khare and Kaushik 2021; Li et al. 2021; Jin et al. 2022; Wang F et al. 2022; Zeng et al. 2022). Wavelet shrinkage algorithms have been used by some researchers to minimize speckle noise in SAR images (Leal and Paiva 2019; Zhang W and Xu 2019; Khare and Kaushik 2021). In order to improve the quality of the images produced by PPSDH, Jeong et al. (2019) propose a speckle noise reduction approach based on interpolation. No single algorithm, however, can guarantee optimal speckle reduction for all kinds of SAR images. Recent developments in speckle reduction include non-local filtering-based methods (Hua et al. 2022; Kumar et al. 2021; Nakano et al. 2011; Varadarajan et al. 2022; Zhang G et al. 2021). Some researchers have applied deep learning approaches for filtering SAR images (Hughes et al. 2020; Yin et al. 2020; Zheng et al. 2021; Karaoğlu et al. 2022). Numerous speckle reduction techniques exist; however, traditional methods are still successfully used for the integration of SAR and multispectral data.
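To make the spatial-domain family concrete, the classic Lee filter can be sketched in a few lines. This is a simplified single-pass version in pure NumPy, with a crude global noise-variance estimate, not a production despeckling routine:

```python
import numpy as np

def box_mean(img, size):
    """Local mean over a size x size window (size should be odd),
    computed with 2-D cumulative sums; edges are handled by padding."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column for clean window sums
    h, w = img.shape
    s = (c[size:size + h, size:size + w] - c[:h, size:size + w]
         - c[size:size + h, :w] + c[:h, :w])
    return s / (size * size)

def lee_filter(img, size=7, noise_var=None):
    """Basic Lee speckle filter: blend the local mean and the observed pixel,
    weighted by how much local variance exceeds the estimated noise variance."""
    mean = box_mean(img, size)
    sq_mean = box_mean(img * img, size)
    var = np.maximum(sq_mean - mean * mean, 0.0)
    if noise_var is None:
        noise_var = var.mean()  # crude global estimate of the speckle variance
    weight = var / (var + noise_var + 1e-12)
    return mean + weight * (img - mean)
```

In homogeneous areas the weight approaches zero and the filter averages (goal 1); near edges the local variance is large, the weight approaches one, and the original pixel is preserved (goal 2).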

Pixel-level fusion methods
The purpose of fusing optical images at the pixel level is mainly to improve spatial resolution and structural and geometrical detail while maintaining the spectral fidelity of the original MS data. Using the same or distinct sensors, multitemporal pixel-level fusion is used to emphasize informative changes between different times. Fusion approaches are increasingly being used for change detection and analysis.

Spatial component substitution methods
Several pixel-level fusion techniques have been reported in the existing literature, including Principal Component Analysis (PCA), Gram-Schmidt (GS) Orthogonalization, the Intensity-Hue-Saturation (IHS) transform, the Brovey Transform (BT), High Pass Filtering (HPF), and Ehlers fusion (Liedtke and Growe 2001; Zhang J et al. 2010; Salentinig and Gamba 2016; Li 2017; Rani et al. 2017; Abdikan 2018; Ahmed, Rabus and Beg 2020). These techniques necessitate a high correlation between the images to be fused in order to minimize distortion in the fused image. However, some of the techniques in this group require histogram matching between the components to be fused (Rani et al. 2017; Kulkarni and Rege 2020).

• Intensity-Hue-Saturation (IHS) Transform
One of the most frequently employed image fusion techniques for combining complementary multi-sensor datasets is the intensity-hue-saturation (IHS) transform. The IHS technique was initially used on multispectral images to separate spectral and spatial content (Abdikan et al. 2014; Abdikan 2018). The benefit of this technique is that it lessens spectral aberrations in the combined image. In SAR-optical fusion, the intensity component is substituted with a high-resolution SAR image whose histogram has been matched to the statistical features of the intensity component (Abdikan et al. 2014). To create the fused multispectral images, the modified intensity, hue, and saturation components are transformed back to the original domain (Kulkarni and Rege 2020). The IHS technique was improved by the Ehlers method.
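A minimal sketch of this substitution, using the common simplification of a linear intensity component (the mean of the multispectral bands) and mean/standard-deviation matching in place of full histogram matching; the function names are ours:

```python
import numpy as np

def match_mean_std(src, ref):
    """Simplified histogram matching: adjust mean and standard deviation only."""
    return (src - src.mean()) / (src.std() + 1e-12) * ref.std() + ref.mean()

def ihs_fusion(ms, pan):
    """Fast IHS-style substitution: the intensity (band mean) of the
    multispectral stack is replaced by the matched high-resolution band.
    ms: (3, H, W) co-registered bands; pan: (H, W) SAR or panchromatic band."""
    intensity = ms.mean(axis=0)
    pan_matched = match_mean_std(pan, intensity)
    return ms + (pan_matched - intensity)  # inject the detail into every band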

• Ehlers Fusion
In the Ehlers fusion method, the multispectral image is first transformed into an intensity component using the IHS transform; the intensity component and the high-resolution SAR image are then filtered in the Fourier domain. Low-frequency content is extracted from the intensity component, and high-frequency content is extracted from the SAR image (Ehlers et al. 2010; Abdikan et al. 2014).
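The core low/high-frequency split can be sketched with an ideal circular Fourier-domain filter. The actual Ehlers method uses adaptive filter design, so the following is only an illustration of the principle, with names of our own choosing:

```python
import numpy as np

def fft_filter(img, cutoff, keep_low=True):
    """Ideal circular filter in the Fourier domain: keep frequencies below
    (or above) `cutoff`, expressed as a fraction of the sampling frequency."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mask = np.sqrt(fx * fx + fy * fy) <= cutoff
    if not keep_low:
        mask = ~mask
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

def ehlers_like_fusion(intensity, sar, cutoff=0.15):
    """Combine the low-frequency content of the intensity component with the
    high-frequency content of the SAR band."""
    return fft_filter(intensity, cutoff, True) + fft_filter(sar, cutoff, False)
```

Since the two masks are complementary, applying both to the same image and summing reproduces the original, which makes the spectral behaviour of the split easy to verify.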

• Principal Component Analysis (PCA)
PCA is a statistical technique used to transform correlated variables in data into uncorrelated ones. Most of the spatial features in the multispectral bands, which are used as input for PCA, are included in the first principal component. The remaining components map spectral data that is particular to the various multispectral bands. After the high-resolution image's histogram is matched with the first principal component, the high spatial-resolution image takes the place of the first principal component. The merged multispectral images are then created by an inverse PCA transformation (Chen C et al. 2020; Gambardella et al. 2021).
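A compact sketch of PCA component substitution, assuming co-registered inputs and again using mean/standard-deviation matching as a stand-in for histogram matching:

```python
import numpy as np

def pca_fusion(ms, hires):
    """PCA component substitution: PC1 of the multispectral bands is replaced
    by the matched high-resolution band, then the transform is inverted.
    ms: (bands, H, W); hires: (H, W), co-registered with ms."""
    b, h, w = ms.shape
    X = ms.reshape(b, -1).T                     # pixels x bands
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)              # bands x bands covariance
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]      # reorder so PC1 comes first
    pcs = Xc @ vecs
    p = hires.ravel()
    pc1 = pcs[:, 0]
    pcs[:, 0] = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    return (pcs @ vecs.T + mean).T.reshape(b, h, w)
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, so the columns must be reordered before the first component can be substituted.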

• Gram-Schmidt (GS) Method
This methodology is a generalization of the PCA-based fusion strategy. Contrary to PCA, this approach allows the first component to be selected at will, and the remaining components are calculated to be orthogonal to it. Initially, a simulated low-resolution SAR band is computed for the integration of high spatial resolution SAR and multispectral data. This band serves as the first band of the new orthogonal basis. The first band (the simulated low-resolution SAR) and the multispectral bands are subjected to GS processing. The high spatial resolution SAR band then replaces the first Gram-Schmidt band, and finally, the inverse GS transformation is employed to create the fused multispectral bands (Ghoniemy et al. 2023).
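These steps can be sketched as follows; the simulated low-resolution band is taken as the mean of the multispectral bands (a common simplification), and the high-resolution band is mean/std-matched before substitution. The function name is ours:

```python
import numpy as np

def gs_fusion(ms, hires):
    """Gram-Schmidt component-substitution fusion (basic sketch).
    ms: (bands, H, W); hires: (H, W), co-registered high-resolution band."""
    b, h, w = ms.shape
    sim = ms.mean(axis=0)                    # simulated low-resolution band
    bands = [sim.ravel()] + [ms[k].ravel() for k in range(b)]
    means = [x.mean() for x in bands]
    centered = [x - m for x, m in zip(bands, means)]

    gs, coef = [], {}
    for k, bk in enumerate(centered):        # forward GS transform
        g = bk.copy()
        for j in range(k):
            c = (bk @ gs[j]) / (gs[j] @ gs[j])   # projection coefficient
            coef[(k, j)] = c
            g = g - c * gs[j]
        gs.append(g)

    # Replace the first GS band with the matched high-resolution band
    p = hires.ravel()
    g0 = gs[0]
    gs[0] = (p - p.mean()) / (p.std() + 1e-12) * g0.std() + g0.mean()

    fused = []
    for k in range(1, b + 1):                # invert the GS transform
        bk = gs[k] + sum(coef[(k, j)] * gs[j] for j in range(k))
        fused.append(bk + means[k])
    return np.array(fused).reshape(b, h, w)
```

If the high-resolution input equals the simulated band, the inversion reproduces the original multispectral bands, which is a convenient sanity check on the forward/inverse pair.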

• Brovey Transform (BT) and High Pass Filtering (HPF)
The Brovey transform enhances the multispectral image with the appropriate spatial information by normalizing the multispectral bands to be fused and multiplying the normalized bands by a high-resolution band. In the HPF method, a high-resolution image is filtered with a high pass filter in the frequency domain. Low-resolution multispectral bands are combined with this HPF-filtered high-resolution image to increase spatial resolution (Chen R 2015; Taxak and Singhal 2019).
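The Brovey transform is simple enough to state in one expression; each fused band is the corresponding band, normalized by the band sum, scaled by the high-resolution band (a minimal sketch, with a small constant added to avoid division by zero):

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Brovey transform: normalize each band by the band sum, then scale by
    the high-resolution band. ms: (bands, H, W); pan: (H, W)."""
    total = ms.sum(axis=0) + 1e-12   # guard against zero-valued pixels
    return ms / total * pan
```

By construction, the fused bands sum (almost exactly) to the high-resolution band at every pixel, which is why the method preserves spatial detail but can distort the overall radiometry of the multispectral input.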

Discussion
As shown in Figure 4, this paper reviewed 78 studies on the fusion of optical and SAR data for land degradation detection and mapping and land use and land cover assessment over the past 24 years (1998-2022) worldwide. In contrast to our expectations, just 50 studies focused exclusively on land use/land cover classification, and five focused on invasive alien species detection and mapping, whereas the majority of studies addressed forest/vegetation monitoring. The potential advantages of fusion for land use analysis were assessed in 32 studies, and the vast majority (28 studies) indicated that fusion enhanced outcomes compared with single data sources. A number of studies have also focused on a variety of issues related to vegetation cover, change detection, and mapping, but the majority of these studies used data fusion to answer only some of their target research questions (e.g. using fusion to classify land cover but only using optical or radar data, without fusion, to identify specific land uses). As a result, our study reports on the merged articles' components and goals. Furthermore, the purpose of this review is to identify the advancements and benefits of data fusion applied in particular to land degradation detection and mapping (including sustainable land management), which is generally difficult to perform using single data sources.
All reviewed studies were categorized into three spatial scales based on their geographic locations: national, regional, and local levels, progressing from smaller to larger spatial scales. Local-scale studies were defined as those performed in South Africa; regional-scale studies were defined as those conducted in Sub-Saharan Africa; and national studies were defined as those undertaken on a global scale, e.g. Europe, the United States, Asia, and so on (Figure 5). About 80% of the studies were conducted at the local level, while about 16% were conducted at the regional level. Of the remaining studies, only three were conducted at the national level. The number of studies from 2009 was very small, and they were mostly carried out at the local level, with only one carried out at the regional level. From 2013, and especially in the last few years from 2018, there was a boom in research publications on forest assessment using remote sensing. Five local-scale, two regional-scale, and one national-scale study had already been published in 2020, showing an increasing trend in studies that include spatial forest assessments.
Due to improvements in data processing methodologies, research in the field of SAR and optical image fusion is currently moving in a number of directions (Kulkarni and Rege 2020). Deep learning may be used to combine remote sensing data such as SAR and optical images because of its strong capabilities in feature extraction and data encoding. Ghoniemy et al. (2023) propose a hybrid pixel-level image fusion method for integrating panchromatic (PAN), multispectral (MS), and SAR images.
The multi-stage guided filter (MGF) for optical image pansharpening is used to achieve high spatial detail preservation, and the nested Gram-Schmidt (GS) and Curvelet-Transform (CVT) methods for SAR and optical images are used to improve the quality of the final fused image and take advantage of the SAR image properties. Deep learning techniques need a lot of training data, which is difficult to gather for remote sensing applications, especially for SAR imaging. Zhang R et al. (2020) propose a novel feature-level fusion framework, in which Landsat Operational Land Imager (OLI) images with different cloud covers and a fully polarized Advanced Land Observing Satellite-2 (ALOS-2) image are selected to conduct LULC classification experiments. Meraner et al. (2020) propose a deep residual neural network that fuses SAR and optical data to remove clouds from multispectral Sentinel-2 imagery. Schmitt, Hughes, and Zhu (2018) have published a large training dataset, SEN1-2, to promote research in SAR-optical image fusion using deep learning approaches.
This dataset consists of pairs of SAR and optical image patches, collected across the globe and throughout all seasons (Figure 7.2). Zhang P et al. (2021) investigated continuous learning of U-Net by exploiting both Sentinel-1 SAR and Sentinel-2 MSI time series to increase the frequency and accuracy of wildfire progression mapping. Zhao et al. (2020) proposed a novel deep-learning strategy to learn the relationship between optical and SAR time series based on the sequence of contextual information. Zhao et al. (2022) propose a new deep-learning model, the Deep-CroP framework, to improve the alignment between satellite and ground observations of crop phenology. The findings of this experiment on selected ground sites demonstrate that the proposed Deep-CroP is able to accurately identify crop phenology and narrow the discrepancies from more than 30 days to only several days. Adeli et al. (2021) investigated the capability of L-band simulated NISAR data for wetland mapping in Yucatan Lake, Louisiana, using two object-based machine-learning approaches: support vector machine (SVM) and random forest (RF). L-band Unmanned Aerial Vehicle SAR (UAVSAR) data were exploited as a proxy for NISAR data. This helps in improving the interpretability of SAR imagery, which is not facilitated by conventional SAR-optical fusion algorithms. Apart from that, other researchers have presented feature- and decision-level fusion algorithms based on deep learning (Reiche et al. 2013; Rajah et al. 2018; Zhang R et al. 2020).

Applications of remote sensing
The use of remotely sensed imagery in LD and SLM detection and mapping has attracted much attention over recent decades, and could significantly contribute to reliable and accurate information relating to the detection and mapping of LD and the effectiveness of SLM practices (Figure 6). Remote sensing technology offers a unique opportunity to detect and map LD and SLM practices in semi-arid areas. However, few studies have examined optical and SAR data for detecting and mapping land degradation and the impacts of SLM practices (Nzuza et al. 2021). Medium-resolution satellite imagery, such as Landsat, has long been used to map tropical forest disturbances. However, the spatial resolution of Landsat (30 m) has frequently been regarded as too coarse for reliably mapping small-scale selective logging (Zhang S et al. 2019).
Much of the current literature has shown the advantages of using a range of satellite imagery for the detection and mapping of Land Use and Land Cover Change (LULCC) with reasonable accuracy (Jabbar and Chen 2008; Faour 2014; Mitri et al. 2014; Al Saleh et al. 2019; Selvaraj and Nagarajan 2021). Satellite specifications have always affected the production of trustworthy and precise spatial distribution maps. According to Rajah (2018), researchers must choose between spatial extent (swath width), spectral resolution (i.e. number of bands), spatial resolution (pixel size), and temporal resolution (re-visit time) when using remotely sensed data.
Earth Observation (EO) technology currently offers a wide range of airborne and spaceborne sensors that provide a huge variety of remotely sensed data (Schmidt et al. 2018). The abundance of and advancements in remote sensing satellite technology have the potential to improve detection and mapping accuracy (Rajah 2018). For instance, the European Union, through its first Earth Observation (EO) programme Copernicus, launched the Sentinel-1 (S1) and Sentinel-2 (S2) satellites (Rajah et al. 2018). Sentinel-1 (S1) is a SAR sensor with an unprecedented 250 km swath in single and dual polarization, while Sentinel-2 (S2) is a multispectral optical sensor with a 290 km swath and 13 optical bands (Ramoelo et al. 2015; Rajah 2018). According to several studies (Ramoelo et al. 2015; Rajah 2018; Reiche et al. 2018; Schmidt et al. 2018; Das et al. 2019; Nazarova et al. 2020; Ritse et al. 2020), the tandem operation and unique characteristics of these two satellites have established a new paradigm for remote sensing applications. Rajah (2018), Schmidt et al. (2018), and Sharma et al. (2021) also argue that the timely launch of S1 and S2 may increase the applicability of remote sensing-based approaches for practical environmental monitoring and mapping tasks.
The synergistic potential of multi-source remotely sensed imagery (for example, Sentinel-1 SAR and Landsat-8 OLI optical imagery) could improve the analysis of LD and the accuracies associated with mapping its spatial distribution (Dimov et al. 2016; Rajah 2018; Kulkarni and Rege 2020; Zhang R et al. 2020). SAR sensors operate at longer wavelengths and provide complementary information relating to shape, moisture, and roughness that is not provided by optical imagery alone (Zhou et al. 2019). Since multispectral optical imagery records surface information regarding reflectance and emissivity characteristics, while SAR imagery captures the structure and dielectric properties of earth surface materials (Wang L et al. 2020; Euillades et al. 2021), studies have suggested that the synergy and complementarity of optical and SAR imagery can cost-effectively improve vegetation classification accuracies (Schmidtlein et al. 2010). While the technique has been successfully applied in the computer vision, medical imaging, and defence security realms (Du et al. 2016; Leal and Paiva 2019; Arnal and Mayzel 2020; Li et al. 2021), studies on the fusion of optical and SAR imagery for the detection and mapping of LD and SLM practices are limited. Remote sensing image fusion seeks to combine information from multiple sources to achieve inferences that are not feasible from a single sensor or source; the approach integrates different data in order to obtain more information than can be derived from any single sensor alone. Multi-sensor image fusion is widely recognized as an efficient tool for improving overall performance in image-based applications. Abdikan et al. (2014), Rajah et al. (2018) and Zhang R et al. (2020) consider image fusion the best option for integrating information collected from different imaging sensors at varying spectral, spatial, and temporal resolutions. Sentinel-1 and Sentinel-2 imagery provides a unique opportunity to investigate the synergistic potential of new-age optical imagery fused with SAR imagery for invasive alien species detection and mapping. The freely available nature of S1 and S2 imagery, coupled with their large swath widths, short revisit times, and unprecedented spectral and spatial resolutions, offers valuable cost-effective data for invasive alien species detection at both local and regional spatial extents. This paper examines and discusses the need for a fusion framework of optical sensors and Synthetic Aperture Radar (SAR) imagery to detect and map land degradation in drylands.
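To make the idea of feature-level fusion concrete, the sketch below stacks co-registered SAR backscatter layers and optical bands into per-pixel feature vectors, the form typically fed to a classifier. This is a minimal illustration using synthetic arrays, not the workflow of any specific study reviewed here.

```python
import numpy as np

# Minimal sketch of feature-level fusion: stack co-registered SAR and
# optical layers into one per-pixel feature matrix. All arrays are
# synthetic stand-ins for real Sentinel-1/Sentinel-2 data.
rng = np.random.default_rng(0)
rows, cols = 100, 100

# Sentinel-1-like SAR backscatter (VV and VH polarizations), in dB.
sar_vv = rng.uniform(-25, 0, (rows, cols))
sar_vh = rng.uniform(-30, -5, (rows, cols))

# Sentinel-2-like optical reflectance bands (red and near-infrared).
red = rng.uniform(0, 1, (rows, cols))
nir = rng.uniform(0, 1, (rows, cols))

# A derived spectral index (NDVI) is often appended as an extra feature.
ndvi = (nir - red) / (nir + red + 1e-9)

# Feature-level fusion: stack all layers into a (rows, cols, n_features)
# cube, then flatten to (n_pixels, n_features) for a pixel-wise classifier.
fused = np.stack([sar_vv, sar_vh, red, nir, ndvi], axis=-1)
features = fused.reshape(-1, fused.shape[-1])

print(features.shape)  # (10000, 5): one 5-dimensional vector per pixel
```

In practice the layers would first be radiometrically calibrated, speckle-filtered, and resampled to a common grid; the stacking step itself is as simple as shown.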

Land use/land cover classification
Human-induced land use and land cover change (LULC) is a significant contributor to global environmental change (Joshi et al. 2016; Dourado et al. 2019; Chetia et al. 2020). Understanding LULC processes is critical for more sustainable land management and will help global initiatives such as reducing emissions from deforestation and forest degradation (REDD+) (Joshi et al. 2016). Remote sensing data is critical for classifying LULC information from various sensors with diverse spectral, spatial, and temporal resolutions (Zhang R et al. 2020). To date, several studies have investigated the fusion of multi-sensor remote sensing data, such as SAR data with visible, infrared, and optical data, which improves classification accuracy and is useful for distinguishing classes that are indistinguishable in optical data alone owing to the similar absorption spectra of land features (Kulkarni and Rege 2020; Zhang R et al. 2020).

Forest/vegetation monitoring
Forest ecosystems provide critical and diverse services and values to human society. Human exploitation of the environment and natural disasters such as fires have led to significant deforestation (Kulkarni and Rege 2020). Detecting and monitoring changes in forest cover and its drivers has become an essential component of international forest management, as it aids decision-making and policy development (Rotich and Ojwang 2021). Cloud cover remains a problem in the tropics, as does quantifying forest degradation (Joshi et al. 2015). Data fusion is one of the best remote sensing technologies for monitoring forests, since it combines the complementary information from optical and SAR sensors (Kulkarni and Rege 2020). Recent studies have shown the advantages of fusing SAR and optical data for diverse applications in forest monitoring (Lehmann et al. 2012; van Beijma et al. 2014; Barrett et al. 2016; Lindquist and D'Annunzio 2016; Abdikan 2018; Reiche et al. 2013, 2018; Hermosilla et al. 2019; Koyama et al. 2019; Mercier et al. 2019; Nazarova et al. 2020; Rotich and Ojwang 2021; Zimbres et al. 2021).

Alien invasive detection and mapping
Understanding the spatial distribution patterns and processes of biological invasions is important, since invasions are thought to be a major cause of grassland ecosystem and biodiversity degradation. The traditional field-based survey method of monitoring, detecting, and collating the spatial distributions of invasive alien plant species is time-consuming and expensive. An accurate assessment approach is therefore required, and remote sensing provides numerous opportunities in this regard. Mapping alien invasions was originally based on aerial photographs; however, this is expensive, has small coverage, and relies on interpretation, which leads to problems of repeatability (Rajah et al. 2018; Masemola et al. 2019). With recent developments in the spatial and spectral properties of sensors and improvements in classification algorithms, remote sensing data is increasingly being used to develop maps of biological invasions (Rajah et al. 2018). It has been demonstrated experimentally that multi-source remotely sensed imagery, such as S1 SAR and L8 optical, has the potential to improve the accuracies associated with mapping alien invasive spatial distributions (Rajah 2018; Masemola et al. 2019). However, studies on the fusion of optical and SAR imagery for the detection and mapping of invasive alien species' spatial distribution are limited.
For instance, in South Africa's Free State Province, a synergistic combination of Synthetic Aperture Radar (Sentinel-1) and optical (Sentinel-2) Earth observation data was used to map the distribution of the Slangbos shrub (Seriphium plumosum). The model results confirm field observations that Slangbos encroachment has the greatest impact on pastures. Spatial cross-validation (SpCV) was used to estimate model accuracy, yielding approximately 80% classification accuracy for the Slangbos class within each time step (Urban et al. 2021).
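The motivation behind spatial cross-validation is that neighbouring pixels are spatially autocorrelated, so random train/test splits inflate accuracy estimates; instead, whole spatial blocks are held out. The sketch below illustrates the block-assignment idea with synthetic coordinates and an arbitrary block size; it is a conceptual illustration, not the SpCV implementation used by Urban et al. (2021).

```python
import numpy as np

# Illustrative sketch of spatial block cross-validation: each sample is
# assigned to a fold based on the spatial block it falls in, so that
# spatial neighbours never appear in both training and test sets.
# Coordinates and block size here are synthetic/arbitrary.

def spatial_block_folds(x, y, block_size):
    """Return a fold index per sample, one fold per occupied spatial block."""
    bx = (x // block_size).astype(int)
    by = (y // block_size).astype(int)
    block_id = bx * 10_000 + by               # unique id per block
    _, fold = np.unique(block_id, return_inverse=True)
    return fold

rng = np.random.default_rng(0)
x = rng.uniform(0, 40, 200)   # sample easting (arbitrary units)
y = rng.uniform(0, 40, 200)   # sample northing

folds = spatial_block_folds(x, y, block_size=20.0)  # 2x2 grid -> 4 folds

for f in np.unique(folds):
    test = folds == f
    # train on all other blocks, evaluate on the held-out block
    print(f"fold {f}: {test.sum()} test / {(~test).sum()} train samples")
```

Accuracy averaged over such block-wise folds gives a more honest estimate of how a classifier generalizes to unsampled areas.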
Using the hyperspectral reflectance of the species across the complete range (400–2500 nm) at leaf and canopy levels, Masemola et al. (2019) compared the spectral distinctiveness of Acacia mearnsii to contemporary native plant species in the South African landscape using a time series analysis. The nonparametric Random Forest discriminant classifier (RF-DA) and the parametric interval extended canonical variate discriminant analysis (iECVA-DA) classification models were compared. The ability to discriminate between A. mearnsii and the sampled species was shown to depend heavily on phenology. The results demonstrated that, in comparison to the iECVA-DA, the RF classifier detected A. mearnsii with marginally greater accuracy, at 92% to 100%.
Spectral analysis of seasonal profiles, image inputs at various resolutions, spectral indices, and auxiliary data was used to create image classification products in order to track the spatial dynamics of Prosopis in the Northern Cape over the past 30 years. Vector analysis and statistical data were used to measure the change in the distribution, density, and spatial dynamics of Prosopis since 1974 using multitemporal Landsat imagery and a 500 × 500 m point grid. A combined cover density class and the calculation of the areal density per unit (ha) for each biome were used to quantify the fragmentation of and change in natural vegetation. According to the findings, the Northern Cape Province's Prosopis cover reached 1.473 million ha, or 4% of the total land area, in 2007. The analysis achieved an overall accuracy of 72% (Abella et al. 2013).

Change detection
Change detection (CD) in satellite imagery is important for a variety of applications, including geohazard monitoring and building damage assessment (Zhang C et al. 2022). It involves integrating remote sensing images acquired by multiple sensors at different times to detect changes in a phenomenon across time. Multi-temporal and multi-sensor images obtained for a certain area of the Earth provide complementary information, and combining such data is always advantageous in determining changes in that area. When employing fused images for change detection, it is vital to ensure that the images under consideration for fusion are captured as close together in time as possible. Several researchers have worked on change detection using SAR and optical image fusion (Helman et al. 2014; Mishra and Susaki 2014; Hermosilla et al. 2015, 2019; Dourado et al. 2019; Chirakkal et al. 2021).
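At its simplest, pixel-wise change detection amounts to differencing two co-registered images (or index layers) from different dates and thresholding the magnitude of the difference. The sketch below demonstrates this with a synthetic NDVI pair and an arbitrary threshold; real workflows add radiometric normalization and more robust thresholding.

```python
import numpy as np

# Minimal sketch of pixel-wise change detection: difference two
# co-registered index images from different dates and threshold the
# magnitude. Images and threshold are synthetic/illustrative.
rng = np.random.default_rng(42)

ndvi_t1 = rng.uniform(0.2, 0.8, (50, 50))   # index image at time 1
ndvi_t2 = ndvi_t1.copy()
ndvi_t2[10:20, 10:20] -= 0.4                # simulate vegetation loss patch

diff = ndvi_t2 - ndvi_t1
change_mask = np.abs(diff) > 0.2            # flag pixels with large change

print(int(change_mask.sum()))  # 100 changed pixels (the 10x10 patch)
```

The same differencing logic applies to SAR backscatter ratios or to fused feature stacks, which is where SAR-optical fusion helps: changes invisible in one domain (e.g. under cloud) can still register in the other.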
For instance, in southern France, extreme storm-related damage from heavy surface runoff was mapped using a repeatable change detection approach based on optical and SAR remote sensing. Trained on a dedicated sample from the October 2018 Aude floods, the approach achieved overall detection accuracies of more than 85% on independent validation samples for all three events (Cerbelaud et al. 2021). By comparing pixel-wise and object-based classification, Osio et al. (2020) confirmed the correlation between vegetation indices derived from optical sensors and backscatter indices from S1 SAR images of the same land cover classes. The study reported a highest overall accuracy of 94% and a kappa coefficient of 0.90 (Osio et al. 2020).
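The two accuracy measures quoted throughout this section, overall accuracy and Cohen's kappa coefficient, are both derived from a confusion matrix. The sketch below computes them for a hypothetical 3-class matrix (the values are made up for illustration).

```python
import numpy as np

# Overall accuracy and Cohen's kappa from a confusion matrix
# (rows: reference labels, columns: predicted labels).

def overall_accuracy(cm):
    """Fraction of correctly classified samples (trace / total)."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Agreement corrected for the agreement expected by chance."""
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical confusion matrix for three land cover classes.
cm = np.array([[50,  2,  3],
               [ 4, 40,  1],
               [ 2,  3, 45]])

print(round(overall_accuracy(cm), 3))  # 0.9
print(round(cohens_kappa(cm), 3))      # 0.849
```

Kappa is always at most the overall accuracy; the gap between them grows as class proportions become more unbalanced, which is why both are commonly reported together.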
Landsat and Advanced Land Observing Satellite Phased Array type L-band Synthetic Aperture Radar (ALOS PALSAR) data were previously used to assess the accuracy of mapping the savannah land cover types of forested vegetation, grassland, cropland, and bare land across roughly 44,000 km² of savannah in southern Africa. The results showed a number of models with good and striking similarities. The model that used optical and SAR data from both the dry and wet seasons had an overall accuracy of 91.1%, an improvement of more than 10% over using Landsat data alone from the dry season (81.7 ± 2.3%) (Symeonakis et al. 2018). SAR models effectively mapped woody cover and achieved equivalent or lower omission and commission errors than optical models, but detected other classes with less accuracy. Combining multiple sensors and seasons improves results and should be the preferred methodological approach for reliable savanna land cover mapping, especially now that Sentinel-1 and Sentinel-2 data are available. This applies to the monitoring of land cover in savannas in general, and in the savannas of southern Africa in particular, where a number of land cover change processes are associated with the observed land degradation.

Conclusion
This paper assesses the current knowledge on a feature-level fusion framework for detecting and mapping land degradation (LD) and enabling sustainable land management (SLM) in semi-arid areas using optical sensors and SAR satellite data. It establishes that the reviewed articles cover five thematic areas, namely (1) land use/land cover classification, (2) forest/vegetation monitoring, (3) alien invasive detection and mapping, (4) change detection, and (5) general mapping.
Land degradation will continue to be a challenge if urgent and coordinated action is not taken, and it is likely to increase in the future due to continued population growth, unprecedented consumption, an increasingly globalized economy, and climate change (IPBES 2018). New and improved tools and approaches suitable for assessing and monitoring land degradation under different SLM interventions will be essential to guide sustainable land use and management decisions. This paper examines a feature-level fusion framework using optical sensors with SAR imagery to detect and map land degradation and promote sustainable land management in semi-arid regions. In addition to various fusion methods, this paper discusses the importance of SAR and optical image fusion, pixel-level techniques, and major classes of quality metrics for objectively assessing fusion performance. SAR and multispectral sensors orbiting the planet acquire data in different regions of the electromagnetic spectrum and provide complementary information about the area being imaged. For this reason, SAR-optical image fusion is a widely discussed topic in remote sensing research.
The main challenges in this area are spatial/spectral distortions and misregistration. Despite these challenges, and given the need for SAR and optical image fusion discussed in previous sections, this type of fusion is the way of the future for various remote sensing applications. Fusion techniques such as component substitution methods and hybrid approaches are discussed under the recent trends section. Fusion approaches that combine directional multiscale decomposition and component substitution methods produce a better-fused product than single methods. A review of the existing literature reveals that various approaches to combining SAR and optical images have been developed for a range of remote sensing applications. Due to differences in sensor geometry, polarization, frequency, and resolution, image fusion methods are dataset specific and require fine-tuning of the fusion algorithm parameters. In addition, there are numerous difficult problems in SAR and optical image fusion, including image registration across multiple sensors, noise in source images, and computational complexity. With the launch of various microwave and optical remote sensing satellites that offer higher resolutions, the fusion of SAR and optical images remains an active area that will be useful for a variety of remote sensing applications. The findings of this review show that the use of SAR and optical image fusion approaches to detect and map land degradation and promote sustainable land management in semi-arid regions has not been fully explored. The findings of this review will contribute to a better understanding of the applications of SAR-optical image fusion in southern Africa's severely degraded drylands.

Figure 1. Methodological flowchart of the review.

Figure 2. Matched pair of different parameters of SAR images.

Figure 4. The geographical locations of the 78 studies reviewed (1998–2022), laid across the globe, on the use of SAR and optical imagery to assess land use/land cover change (LULCC) and land degradation.

Figure 6. Percentage of the application of remote sensing for the 78 reviewed studies.