An Algorithmic Approach towards Remote Sensing Imagery Data Restoration Using Guided Filters in Real-Time Applications

Abstract The images captured from SAR sensors are inherently degraded by speckle noise. The SAR image processing community has targeted this problem with many feature-based filters. Since SAR images are low-contrast images, edge retention is the most crucial aspect to consider, as it enables efficient retrieval of information. This paper provides a two-step edge-preserving homomorphic SAR image despeckling technique that implements a guided filter as the first step, followed by a modified method noise thresholding scheme using the bivariate shrinkage rule and the Canny edge operator in the Discrete Orthonormal Stockwell Transform (DOST) domain as the second step. The Canny edge operator improves overall edge preservation after despeckling, while the method noise thresholding delivers a high level of speckle reduction in the DOST domain. The detected edges are added to the residual part obtained after removing the noise to produce more informative content. The suggested approach is compared to several recent despeckling methods using qualitative and quantitative criteria. The execution time of the proposed method is around 7.2679 seconds. The qualitative and quantitative analysis shows that the proposed method surpasses all the compared despeckling methods.


Introduction
Radars are classified as real aperture radar (RAR) and SAR according to their antenna size. The noncoherent radars whose resolution is governed by the antenna's physical length are known as RAR (Zhu et al. 2013). The antenna of an active radar system transmits high-frequency radar waves toward a specific region of the terrain for image capture and processing. According to Lapini et al. (2014) and Singh and Shree (2020a), a longer antenna yields a more detailed picture and hence more accurate data. A high-resolution photograph cannot be obtained with a small physical antenna, and it is very difficult, if not impossible, to fit huge antennas on satellites and aircraft. Engineers and scientists devised the synthetic aperture to solve this problem: by moving a small antenna along the platform's path and combining its returns, the small antenna is made to behave like a much larger one, yielding ever-increasing data resolution. SAR is a coherent radar that is mounted on satellites and aircraft. It gives high-resolution images of a large area of the Earth's surface. The physical antenna size on all satellites and planes is fixed, but the aperture is synthetic: as the platform moves forward, the SAR antenna also moves and keeps transmitting high-frequency radar waves onto the Earth's surface. Upon striking the target, the high-frequency radar waves bounce back to the source, and the SAR receives and processes the reflected radiation (Singh et al. 2021). SAR data processing takes a long time since it is a computationally demanding task. The constant constructive and destructive interference of the transmitted high-frequency radar waves with targets on the Earth's surface results in high-resolution SAR pictures (Iqbal et al. 2013).
During the image capture stage, this constructive and destructive interaction results in significant information loss and quality deterioration, which may be seen in the SAR image as a scattering phenomenon. Speckle noise is an outcome of this scattering process, so SAR images carry speckle noise by default. The SAR image's visual quality is severely diminished by speckle noise (Singh and Shree 2017a). The pattern of the speckle noise is granular and resembles badly distorted salt and pepper noise (Yommy et al. 2015). Before executing any type of segmentation procedure on SAR images, the images must first be pre-processed. SAR image despeckling constitutes this pre-processing (Singh, Diwakar, et al. 2021): it is the act of eliminating speckle noise from SAR images. Any kind of SAR image processing must include this pre-processing step since it enhances image quality and facilitates subsequent processing (Singh and Shree 2016). Compared to other types of noise patterns, the impact of speckle noise is much worse (Mv and Mn 2015), the fundamental reason being its multiplicative character. Speckle noise follows a gamma distribution-like pattern (Singh and Shree 2020a). Because speckle is multiplicative, its impact is worse than that of additive noise: the external noise components multiply the reference SAR data to produce a noisy SAR image (Shree et al. 2020).
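The multiplicative model described above can be simulated directly. Below is a minimal NumPy sketch, assuming unit-mean gamma-distributed fading with `looks` looks; the `add_speckle` helper and its parameters are illustrative, not part of the paper's MATLAB implementation:

```python
import numpy as np

def add_speckle(reference, looks=1, seed=0):
    """Multiplicative speckle model: noisy = reference * F, where the fading
    term F follows a unit-mean Gamma(looks, 1/looks) distribution."""
    rng = np.random.default_rng(seed)
    fading = rng.gamma(shape=looks, scale=1.0 / looks, size=reference.shape)
    return reference * fading

# Toy example: a homogeneous 64x64 patch of true intensity 100
clean = np.full((64, 64), 100.0)
noisy = add_speckle(clean, looks=4)  # mean stays close to 100 (unit-mean fading)
```

Because the fading term has unit mean, the patch mean is preserved while the local variance grows, which is exactly why homogeneous areas look grainy in SAR imagery.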
There are several conventional and unconventional SAR image despeckling techniques, which may be classified into Bayesian and non-Bayesian approaches. Traditional SAR image despeckling approaches are mostly based on spatial Bayesian algorithms, while nontraditional approaches rely on non-Bayesian methods and on Bayesian approaches in the transform domain. In the pre-processing step, it is important to solve the major problem of speckle noise in SAR images; when working with SAR images, this step is recognized as essential. As an outcome, there has been constant progress in this area, and numerous studies have been conducted with excellent outcomes in a variety of fields. Although hybrid, nontraditional approaches are more successful in this field, conventional ones are still useful. The outcomes of nontraditional approaches are more clearly defined than those of conventional ones.

Real-time urban remote sensing applications
The use of SAR data is becoming more important in geospatial research. SAR may be used regardless of lighting conditions or cloud cover, unlike many other observational techniques. Recent years have seen a dramatic rise in data quality and availability owing to an ever-increasing number of orbital SAR devices, with more on the horizon. This has prompted the development of new processing tools. Therefore, analytical processes based on SAR data may now be automated and executed at scale to address issues in fields as diverse as natural and man-made disaster response, urban planning and land use, agriculture, change detection, and ocean and coastal monitoring. The SAR image obtained after speckle reduction can be used for urban object analysis and identification, urban disaster monitoring and change analysis, urban climate change and variation, urban data synthesis and analysis, urban visualization and virtual reality applications, and urban applications of high-resolution optical sensors.

Related work
The authors in Baraha et al. (2022) give an insight into some of the most important current standard speckle filtering algorithms. The various SAR multiplicative noise models and their features are described. It also provides an overview of nonlocal means (NLMs) and variational models (VMs). The various algorithms are described in detail, including information on how they operate as well as their benefits and drawbacks. SAR-DRDNet was suggested by the authors in Wu et al. (2022) as a complete SAR image despeckling model. To highlight the use of global information in images, a non-local block was added, and the multi-scale impact of the picture was explored. With SAR-DRDNet, despeckling is achieved while still preserving fine details. A new single-image despeckling approach based on a combination of resemblance-based block matching and a deep learning network using noise as a reference is presented in Wang et al. (2022). This approach uses a convolutional neural network encoder-decoder to remove noise from small picture patches. After creating many noisy pairs from one or more noisy SAR images, this approach then uses these noisy pairs as training input to create a huge number of new pairs (Wang et al. 2022). With two parameter-shared branches, the approach then trains the network in a Siamese fashion. The authors of Perera et al. (2022a) describe a transformer-based network for despeckling SAR images. The proposed method includes a transformer-based encoder that enables the network to discover global relationships between various picture areas, improving despeckling. Using a composite loss function, the network is trained entirely on artificially produced speckled pictures.
By narrowing the receptive field, the authors in Perera et al. (2022a) use an overcomplete convolutional neural network (CNN) architecture to concentrate on learning low-level characteristics. An overcomplete branch concentrates on local patterns, whereas an undercomplete branch concentrates on global patterns in the proposed model. The authors in Singh et al. (2022) proposed wavelet thresholding-based SAR image despeckling techniques using the 2D Discrete Wavelet Transform (DWT). Here, a speckled SAR image is first pre-processed using an iterative inverse variance-based non-homomorphic filter. The low-frequency components are directed to a bilateral filter, and the high-frequency components are directed to modified Bayesian thresholding followed by inter-level method noise thresholding. The intra-level method noise thresholding is applied as a post-processing operation to get the final despeckled image (Singh et al. 2022). Despeckling of SAR images using dictionary learning and sparse coding is proposed in Liu et al. (2022) to address the issue of speckle noise. The proposed model is built using quick and efficient solution stages that conduct orthogonal dictionary learning, weight parameter updating, sparse coding, and picture reconstruction concurrently. A despeckling approach that relies on a sparse model is presented in Li et al. (2022) to minimize the speckle noise in SAR images. It employs homomorphic filtering to convert the multiplicative noise into additive noise. After that, the log-intensity image is segmented using a straightforward linear iterative clustering method. Based on deep learning methods, a modified CNN algorithm is presented for speckle noise reduction in Mohanakrishnan et al. (2022). This technique employs a CNN with 12 layers to reduce speckle noise. The LReLU activation function, batch normalization (BN), and a residual learning technique are all part of the network's architecture, which additionally employs dilated convolution to broaden the receptive field. To prevent picture quality deterioration, a skip connection is implemented. Two cutting-edge techniques for removing speckles have been combined in Wang et al. (2022) to overcome their weaknesses. Clustering and Gray Level Co-Occurrence Matrices (GLCM) are used for image classification and weighting, respectively, to find the optimal filter output for each region in the image. Due to their continuous structural information, optical images, which are substantially cleaner than SAR imagery, are used for clustering and GLCM.
According to the authors of Dalsasso et al. (2022), three self-supervised techniques are compared in terms of data preparation and performance; SAR image despeckling using deep neural networks is discussed in this paper. The authors of Farhadiani et al. (2022) carried out the speckle reduction procedure in the complex wavelet domain. Based on a pre-trained CNN model, the multichannel logarithm with the Gaussian denoising technique is used to first despeckle the approximate complex wavelet coefficients. The averaged version of the maximum a posteriori estimator is then used to despeckle the log-transformed features of the complex wavelet coefficients. The technique suggested in Perera et al. (2022a) uses a Markov chain to periodically add random noise to clean pictures until they acquire white Gaussian noise. By utilizing a noise predictor that is conditioned by the speckled image, a reverse process that repeatedly predicts the new noise yields the despeckled image. To enhance the despeckling performance, a novel inference approach based on cycle spinning is also suggested. In the shearlet domain, another work suggests a unique statistical methodology based on Gaussian copula modeling. Two essential parts make up the proposed multi-dimensional minimum mean square error processor (Morteza and Amirmazlaghani 2022). First, the marginal distribution of the shearlet coefficients is calculated using the biparameter Cauchy Gaussian Mixture Model. Second, joint-prior distribution modeling is created to model the dependence of the target coefficient on its neighbors. This is based on the Gaussian copula proposed by Morteza and Amirmazlaghani (2022), Nabil et al. (2023), and Baraha and Sahoo (2020). In Kamath et al. (2021), a despeckling technique employing the shrinkage of 2D-DOST coefficients in the transform domain coupled with a shock filter is presented. An effort was also made, as a post-processing step, to maintain the borders and other characteristics while eradicating the speckle. The suggested technique comprises splitting the SAR image into low- and high-frequency components and processing them individually. In Kamath et al. (2021), the speckle noise in an image is eliminated with the use of the shock filter and 2D-DOST. Edge enhancement algorithms are used to improve the sharpness of edges and their surrounding detail. This results in a smooth SAR image over the homogeneous areas, while the heterogeneous areas maintain their original detail. Improved SAR image despeckling was suggested in Yu and Shin (2022) using a self-supervised method. The heart of the suggested method's design is the transformer block and the residual block. The loss function was the mean squared error with regularization. Both synthetic and actual SAR datasets have been used in the experiments. The findings demonstrate that, in comparison to other despeckling approaches, the suggested method can better maintain detail and reduce over-smoothing.
This study (Baraha et al. 2022) provides an overview of some of the most important state-of-the-art speckle filtering techniques and methodologies. The many multiplicative noise models used in SAR images are described, along with their features. Brief explanations are provided for NLMs and VMs, and different algorithms are described in detail, together with their advantages and disadvantages (Baraha et al. 2022). Another article tries to explain the fundamentals of SAR imaging with the least amount of mathematics possible (Bamler 2000). It is stressed that understanding the unique characteristics of SAR pictures is necessary before interpreting these data (Bamler 2000). In another paper, the authors propose using an FPGA-based accelerator to train CNNs in an energy-efficient manner. They used improvements, including quantization, a standard method for compressing models, to speed up the CNN training procedure. In addition, a gradient accumulation buffer is used to guarantee optimal performance while keeping the learning algorithm's gradient descent intact. Another paper centers on the implementation of a watermark encryption and decryption technique utilizing the DWT through MATLAB simulation (Tufa et al. 2022). The objective of that study is to guarantee the retrievability of embedded data while refraining from limiting access to the primary image. The authors of Barman et al. (2023) put forward a structure for parallel single-image super-resolution that is based on edge-preserving dictionary learning and sparse representations. This framework is aimed at recovering edge information from low-resolution images and is implemented on compute-unified-device-architecture-enabled graphics processing units. To restore edges, a set of interconnected dictionaries is acquired, specifically the scale-invariant feature transform keypoint and non-keypoint patch-based dictionaries. Li et al. (2023) present a novel guidance-aided triple-adaptive Frost filter that exhibits potential for utilization in real-time processing platforms. The proposed approach utilizes a scale-adaptive sliding window sizing technique to establish the neighborhood ranges for each point within the image and incorporates an adaptive calculation of the tuning factor within the Frost filter. Finally, the feature information extracted from the initial image is utilized to facilitate automatic edge recovery, thereby ensuring the effective preservation of features.

Major contributions and significance of the proposed method
With the motivation of Singh et al. (2022), this research paper proposes a two-step edge-preserving hybrid SAR image despeckling technique that implements a guided filter as the first step; the second step includes modified method noise thresholding using the bivariate shrinkage rule and the Canny edge operator in the DOST domain. Two noteworthy contributions of this study are emphasized below:

• The outcome of the traditional method noise concept is improved by adding a Canny edge operator. Traditional method noise processing works specifically on residual components that remain unfiltered; incorporating the Canny edge operator enhances its performance in terms of edge retention and uniformity in uniform areas.

• The concept of method noise thresholding is employed in the proposed method to get better outcomes in terms of edge retention for those regions of the despeckled SAR image that are not processed or filtered. The despeckled SAR image is subtracted from the speckled SAR image, and a Canny edge detector is applied to locate the image's edges. To obtain a more detailed residual component, the identified edges are added to the subtracted part. The DOST decomposes this more detailed residual portion into an approximate component and a detailed component, and the detailed component is directed to the bivariate shrinkage algorithm to achieve the required level of filtering. The significance of the proposed method is therefore verified by testing the outcomes both with and without method noise in the proposed methodology.
The remainder of this paper is organized as follows. Section "Introduction" introduces the SAR image and its various perspectives, including speckle noise; it also surveys some of the latest nontraditional research in the same field, and the contribution of the paper is mentioned there as well. Section "Background" gives a brief introduction to the techniques used in the proposed method. Section "Proposed methodology" explains the proposed work using an algorithm and a flowchart, and its significance is also discussed there. The numerical outcomes with visual analysis are discussed in Section "Experimental outcomes and discussion". The merits, demerits, and future aspects of this work are discussed in Section "Merits, demerits, and future perspectives". Section "Conclusion" concludes the paper with a future perspective.

Guided filter
Guided image filters are edge-preserving smoothing filters that are typically used for edge-aware processing. The filter is built on a local linear model, which makes it a model without bias (He et al. 2013). Let b be the output image and a the input image; then filtering with the guidance image e is represented as

b_c = Σ_d M_cd(e) · a_d,   (1)

where M is the filter weight, which depends only on the guidance image, and c, d are pixel indexes (Hiremath 2021).
The quality and effectiveness of guided image filters are excellent across many use cases, including noise reduction and detail smoothing. In addition, the guided image filter does not suffer from gradient reversal, so it produces clean borders. Compared to bilateral filters, guided image filters also perform very well in terms of computational time.
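The box-filter formulation behind He et al.'s guided filter, which explains its favorable computational time, can be sketched as follows. This is a simplified single-channel NumPy illustration, not the authors' MATLAB implementation; the window radius `r` and regularization `eps` are illustrative defaults:

```python
import numpy as np

def box(img, r):
    # Mean filter over a (2r+1)x(2r+1) window via a summed-area table,
    # with edge padding so the output has the same shape as the input.
    pad = np.pad(img, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))          # leading zero row/column
    H, W = img.shape
    s = 2 * r + 1
    return (c[s:s+H, s:s+W] - c[:H, s:s+W]
            - c[s:s+H, :W] + c[:H, :W]) / s**2

def guided_filter(guide, src, r=4, eps=1e-2):
    """Local linear model: per window, output = A*guide + B, where
    A = cov(guide, src)/(var(guide)+eps) and B = mean(src) - A*mean(guide)."""
    m_g, m_s = box(guide, r), box(src, r)
    var_g = box(guide * guide, r) - m_g * m_g
    cov_gs = box(guide * src, r) - m_g * m_s
    A = cov_gs / (var_g + eps)
    B = m_s - A * m_g
    return box(A, r) * guide + box(B, r)     # averaged coefficients

# A flat image passes through unchanged; a noisy one is smoothed.
flat = np.full((16, 16), 5.0)
same = guided_filter(flat, flat, r=2, eps=0.1)
rng = np.random.default_rng(1)
noisy = 5.0 + 0.5 * rng.standard_normal((32, 32))
smooth = guided_filter(noisy, noisy, r=4, eps=1.0)
```

Every step is a box filter, so the cost is O(N) in the number of pixels regardless of window radius, which is the source of the speed advantage over bilateral filtering mentioned above.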

Canny edge detection
The Canny algorithm for edge detection is applied to the despeckled image to detect the edges (Zhou et al. 2011). The major motivation for applying this algorithm is its ability to deliver efficient detection, efficient localization, and exactly one response per edge; its error rate is also low. The detection process is applied in five steps. The first step is to remove the noise. The second step is to find the intensity gradients, and the third step is to apply gradient magnitude thresholding. The fourth step is to determine potential edges, and the last step is to validate the detection of edges by suppressing weak and unconnected edges (Wu et al. 2022).

Since a Gaussian filter is present, image noise may be eliminated. An improved signal-to-noise ratio may be achieved with the use of non-maxima suppression, which produces ridges one pixel wide. Thresholding is used to find edges even when they are obscured by noise, and parameters allow for fine-tuning the level of efficacy. The algorithm provides accurate localization, fast responses, and robustness to background noise.
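The five steps above can be sketched as follows. This simplified NumPy version omits non-maxima suppression and full hysteresis tracking, keeping only Gaussian smoothing, Sobel gradients, double thresholding, and 8-neighborhood edge linking; all parameter values are illustrative:

```python
import numpy as np

def conv2(img, k):
    # 'same'-size 2-D filtering with edge padding (odd-sized kernels only)
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def canny_sketch(img, lo=0.1, hi=0.3):
    # Step 1: denoise with a small Gaussian kernel
    g = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0
    s = conv2(img, g)
    # Step 2: intensity gradients (Sobel)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = conv2(s, kx), conv2(s, kx.T)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    # Step 3: gradient-magnitude (double) thresholding
    strong = mag >= hi
    weak = (mag >= lo) & (mag < hi)
    # Steps 4-5: keep weak edges only if a strong edge is in the 8-neighborhood
    pad = np.pad(strong, 1)
    neigh = np.zeros_like(strong)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            neigh |= pad[dy:dy + strong.shape[0], dx:dx + strong.shape[1]]
    return strong | (weak & neigh)

# Vertical step edge: detections should cluster at the intensity transition.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
edges = canny_sketch(img)
```

The `lo`/`hi` pair plays the role of the tunable parameters mentioned above: raising `hi` suppresses noisy responses, while `lo` controls how aggressively weak edges are linked to strong ones.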

Discrete Orthonormal Stockwell Transform

R.G. Stockwell established the Stockwell Transform (ST) idea in 1996 (Stockwell et al. 1996). In the time-frequency domain, ST is a transform that may represent a signal in a variety of ways. It provides referenced phase information in addition to progressive resolution. However, because it produces a redundant representation of the time-frequency plane, ST cannot be applied practically to high-resolution images.
For the sake of efficiency, a non-redundant variant of ST, the DOST, has been developed. DOST is an energy-preserving transform since it is orthonormal and conjugate symmetric (Katunin 2021), and it gathers local orientation data. The basis function of DOST with parameters a, b, and X can be written as

D_[a,b,X](kT) = (e^(−iπX) / √b) · Σ_{f = a − b/2}^{a + b/2 − 1} exp(i2πfk/N) · exp(−i2πfX/b),   (2)

where a is the center of each frequency band (voice), b is the width of the band, and X is the location in time. The spectral partitioning of Equation (2) may be reversed to get the DOST inverse by reconstructing the Fourier spectrum from the DOST frequency bands (Hong 2021). The inverse matrix of a time series transformed into DOST form is equal to the complex conjugate transpose of the orthogonal transformation matrix. Because this is an orthonormal transformation, the vector norm is retained; as an outcome, the DOST norm is the same as the time series norm (Mejjaoli 2021). DOST is the only method that includes both progressive resolution and referenced phase information about a signal. It is common practice in the wavelet transform to translate the phase reference point in parallel with the wavelet; however, this is not the case with DOST, since all phase reference information is always referred to zero time. As an outcome, DOST has been found to be a highly valuable tool in many image processing applications, such as encoding, denoising, despeckling, and compressing images (Esam El-Dine Atta et al. 2021).
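The dyadic band-partitioning idea behind the DOST can be illustrated for a 1-D signal. The sketch below drops the phase-reference factor of the basis function and keeps only the orthonormal FFT-band splitting, so it demonstrates the norm-preservation property discussed above; it is a simplified illustration, not a full DOST implementation:

```python
import numpy as np

def dost_1d(x):
    """Simplified dyadic DOST of a length-2^p signal: the unitary FFT
    spectrum is split into dyadic bands of widths 1, 1, 2, 4, ..., N/4
    (mirrored for the negative-frequency half), and each band is locally
    inverse-transformed to give one time sample per unit of bandwidth."""
    N = len(x)                       # N must be a power of two
    X = np.fft.fft(x, norm='ortho')  # unitary FFT keeps the vector norm
    sizes = [1, 1]
    b = 2
    while b <= N // 4:
        sizes.append(b)
        b *= 2
    sizes += sizes[::-1]             # mirror for the negative frequencies
    bands, i = [], 0
    for s in sizes:
        # unitary local inverse FFT: a local time axis for each band
        bands.append(np.fft.ifft(X[i:i + s], norm='ortho'))
        i += s
    return bands

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
bands = dost_1d(x)
energy = sum(float(np.sum(np.abs(b) ** 2)) for b in bands)  # equals ||x||^2
```

Since every step (the FFT and each per-band inverse FFT) is unitary, the total coefficient energy equals the signal energy, mirroring the statement that the DOST norm is the same as the time series norm.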
The concept of method noise thresholding is applied to get better outcomes in terms of edge retention for those parts of despeckled SAR images that are unprocessed or unfiltered. The despeckled SAR image (B) is subtracted from the speckled SAR image (A). A Canny edge detector is applied to detect the edges of the despeckled image. The detected edges (C) are added to the subtracted part (A − B) to get a more detailed residual part. This more detailed residual part ((A − B) + C) is decomposed using Equation (2) of the DOST, which splits it into approximate (AC) and detailed (DC) components. The detailed component (DC) is directed to the bivariate shrinkage rule, using the equations below, for filtering purposes.
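The residual-processing steps above can be sketched at a high level. In this illustration the DOST split and the bivariate shrinkage are replaced by a trivial mean/high-pass split and scalar soft thresholding purely for clarity; all names and the threshold `t` are hypothetical, not the paper's actual operators:

```python
import numpy as np

def soft_threshold(x, t):
    # generic shrinkage used as a stand-in for the bivariate rule
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def method_noise_step(A, B, C, t=0.05):
    """A: speckled image, B: despeckled image (e.g. guided-filter output),
    C: edge map of B (e.g. from a Canny detector)."""
    R = (A - B) + C                        # edge-enriched residual (A - B) + C
    approx = R.mean()                      # stand-in for the approximate part
    detail = soft_threshold(R - approx, t) # shrink the detailed part
    return B + approx + detail             # final despeckled estimate

# Toy check: with a constant residual and no edges, the input is recovered.
B = np.full((8, 8), 10.0)   # pretend guided-filter output
A = B + 0.5                 # pretend speckled input (constant residual)
C = np.zeros_like(B)        # pretend edge map
out = method_noise_step(A, B, C)
```

The point of the construction is that detail lost by the first-stage filter survives in the residual R, where the shrinkage can separate it from the remaining speckle before it is added back.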
The detailed components (DC) are processed using the bivariate shrinkage rule, which can be represented as

DĈ_k = ( (√(DC_k² + DC_pk²) − λ_k)₊ / √(DC_k² + DC_pk²) ) · DC_k,   (3)

where DC_pk denotes the parent coefficient of DC_k at the next coarser scale. The function (z)₊ is well defined as (z)₊ = z if z > 0, and 0 otherwise. Here λ_k is a threshold value that can be estimated by

λ_k = (√3 · σ_n²) / σ_k.   (4)

To compute the speckle variance, i.e., the noise variance σ_n², from the speckled detailed components, a robust median estimator is applied to the finest-scale detailed components:

σ_n = median(|DC_1k|) / 0.6745,   (5)

where DC_1k denotes the speckled detailed components at the finest scale. The marginal standard deviation σ_k of the speckled detailed components can then be calculated as

σ_k = √( (σ²_DC1k − σ_n²)₊ ),   (6)

where σ²_DC1k is the marginal variance of the speckled detailed components, estimated over a local neighborhood N(k) as

σ²_DC1k = (1 / |N(k)|) · Σ_{DC_1j ∈ N(k)} DC_1j²,   (7)

where |N(k)| is the block size (Zhu et al. 2013; Singh and Shree 2020a; Singh, Diwakar, et al. 2021). Finally, since the processing is performed homomorphically, the filtered logarithmic-domain result G is mapped back to the intensity domain by exponentiation, H = exp(G), from which the final despeckled SAR image is obtained.
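The shrinkage rule and the robust noise estimator above can be sketched in a few lines. This is a minimal NumPy illustration (not the authors' MATLAB code; the array names are hypothetical):

```python
import numpy as np

def bivariate_shrink(w1, w2, sigma_n, sigma_k):
    """Bivariate shrinkage: shrink coefficient w1 using its parent-scale
    coefficient w2, with threshold lambda = sqrt(3) * sigma_n^2 / sigma_k."""
    r = np.sqrt(w1 ** 2 + w2 ** 2)
    lam = np.sqrt(3.0) * sigma_n ** 2 / max(sigma_k, 1e-12)
    gain = np.maximum(r - lam, 0.0) / np.maximum(r, 1e-12)  # (.)+ operator
    return gain * w1

def noise_sigma(finest_detail):
    # robust median estimator: sigma_n = median(|DC_1k|) / 0.6745
    return np.median(np.abs(finest_detail)) / 0.6745

# Large coefficients are attenuated slightly; small ones are set to zero.
big = bivariate_shrink(np.array(10.0), np.array(0.0), 1.0, 1.0)
small = bivariate_shrink(np.array(0.5), np.array(0.0), 1.0, 1.0)
rng = np.random.default_rng(2)
s_hat = noise_sigma(rng.standard_normal(20000))  # close to 1 for N(0, 1) data
```

The 0.6745 constant is the median of the absolute value of a standard normal variable, which is what makes the estimator consistent for Gaussian-like detail coefficients while staying robust to strong edge outliers.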

Proposed methodology
Figure 1 below shows the full workflow of the proposed framework for SAR image despeckling using a guided filter and DOST-based method noise thresholding with the bivariate shrinkage rule.

Experimental outcomes and discussion
All experimental evaluations have been performed on a computer with an Intel Core i5 processor and 8 gigabytes of memory. The performance of the proposed method is evaluated in MATLAB R2020a. The experiments have been carried out extensively, both with real datasets of speckled images and with artificially generated speckled image datasets. To validate the urban remote sensing application of the proposed method, the tested real speckled SAR images are true urban remote sensing images. Reducing speckle noise while retaining the image's fine details is the most challenging aspect of SAR image despeckling. The lack of a recognized ground truth makes the problem of identifying speckle-free reflectivity the field's most pressing one. The connection between despeckled SAR image quality and reliability is another important issue. Analyzing the deterioration in the homogeneous parts, i.e., the decrease of speckle noise, together with the retention of fine features in heterogeneous areas, allows one to assess the other despeckling qualities, i.e., the fidelity and reliability of the despeckled SAR images. The simulated image dataset and the urban remote sensing image dataset (real speckled) are available in open public access databases (DATASET OF STANDARD, Anonymous 2023; Test Images, Anonymous 2014).
Another approach for determining the image's quality is to look at it without any reference image. This is the case with despeckled SAR images. It enables the recognition of the primary traits visible to the naked human eye that best characterize the despeckling processes. The inability to preserve edges and points on targets, as well as blurriness and structural and blocky artifacts, are all part of this category. Visual evaluation is limited in that it cannot accurately quantify the filter's bias or compare the relative effectiveness of alternative despeckling techniques. Several other performance metrics have been developed for the evaluation of despeckling procedures to get over the limitations of visual inspection. These numerical metrics can be broken down into two classes: with a reference image and without a reference image. These metrics are used to measure the effectiveness of the proposed strategies.
In the presence of a reference image, the image denoising and despeckling literature is practically unlimited. In this instance, the researcher has full knowledge of the image. With a with-reference index, image researchers can quickly and simply evaluate their despeckling methods by comparing them against a set of reference images. Edge retention, texture retention, and uniformity maintenance in both homogeneous and heterogeneous regions can all be measured in this context with a variety of different metrics. For comparative analysis, some recent popular methods are used, such as Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), and Baraha and Sahoo (2022). Table 1 shows the proposed method's average outcomes (115 images) between despeckled and reference images. Here the performance metrics are measured as shown in Table 1. From this table, it can be seen that the proposed method with method noise improves the outcomes in terms of performance metrics like the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM), and the Universal Image Quality Index (UIQI). The PSNR is a metric used to quantify the ratio between the maximum potential value of a signal and the power of any distorting noise that may impact the quality of its representation (Mahajan 2023). The SSIM is a technique utilized to forecast the perceived quality of digital television and cinematic images, along with various other forms of digital images and videos; it evaluates the degree of similarity between a pair of images (Wang et al. 2004). The UIQI metric is straightforward to compute and can be utilized in diverse image-processing contexts. The model is formulated to represent image distortion as a composite of three distinct factors, namely loss of correlation, luminance distortion, and contrast distortion (Wang and Bovik 2002).

Without-reference indexes
Visually evaluating the quality of a despeckled SAR image can be done in a variety of ways, considering artifacts, edge retention, appearance of low contrast features, texture retention, uniformity in homogeneous parts, and fine detail retention in heterogeneous regions. Figure 2 depicts the comparison of the suggested method to the existing methods. Figure 2a is a speckled SAR image for outcome analysis. Figure 2b-i shows the findings of Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), Baraha and Sahoo (2022), and the proposed technique. The findings of Wang et al. (2022) visually provide good outcomes in terms of the appearance of low contrast features, texture retention, uniformity in homogeneous parts, and retention of tiny details in heterogeneous regions. The outcome of Perera et al. (2022a) is also excellent in terms of texture retention, uniformity in homogeneous regions, and retention of fine features in heterogeneous regions; however, low contrast features are not properly identified. The outcome of Liu et al. (2022) is good in terms of texture appearance, but uniformity in homogeneous regions and fine detail retention in heterogeneous regions are not successfully identified. The outcomes of Perera et al. (2023) are excellent in terms of texture retention and low contrast features; however, uniformity in homogeneous regions and fine detail retention in heterogeneous regions are not properly identified. The outcome of Wu et al. (2022) is good in terms of texture retention, but low contrast features, uniformity in homogeneous regions, and retention of fine features in heterogeneous regions are not successfully identified. The outcome of Nabil et al. (2023) is excellent in terms of texture retention; however, low contrast features, uniformity in homogeneous regions, and retention of small features in heterogeneous regions are not properly identified. The outcomes of Baraha and Sahoo (2022) are good in terms of appearance and texture retention; however, low contrast features, uniformity in homogeneous parts, and retention of fine features in heterogeneous regions are not successfully identified. The proposed method produces overall outstanding outcomes in terms of artifacts, edge retention, appearance of low contrast features, texture retention, uniformity in homogeneous regions, and fine detail retention in heterogeneous regions.
Figure 3 displays a comparison between the proposed method and the existing methods. Figure 3a is a SAR image with speckles for outcome analysis. Figure 3b-i illustrates the outcomes of Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), and Baraha and Sahoo (2022), as well as the proposed method. Visually, the outcomes of Wang et al. (2022) and Perera et al. (2022a) are favorable in terms of the appearance of low contrast features, the retention of texture, the uniformity of homogeneous regions, and the retention of minute details in heterogeneous regions, although low contrast features are not accurately detected by the latter. The outcome of Liu et al. (2022) is satisfactory in terms of texture appearance; however, uniformity in homogeneous regions and retention of fine detail in heterogeneous regions cannot be confirmed. Texture appearance is preserved well by Wu et al. (2022) and Baraha and Sahoo (2022); however, low contrast features, uniformity in homogeneous portions, and retention of fine details in heterogeneous regions are not properly detected. In terms of artifacts, edge retention, appearance of low contrast features, texture retention, uniformity in homogeneous parts, and fine detail retention in heterogeneous regions, the proposed method produces overall remarkable outcomes.

The comparison of the suggested technique to the existing methods for a further test image is shown in Figure 4. A speckled SAR image for outcome analysis is shown in Figure 4a, and Figure 4b-i shows the outcomes of Wang et al. (2022), Perera et al. (2022a), Liu et al. (2022), Perera et al. (2023), Wu et al. (2022), Nabil et al. (2023), and Baraha and Sahoo (2022), together with the proposed method. Here, decent texture appearance is generally achieved; however, several of the compared methods fail to maintain uniformity in homogeneous regions together with fine detail retention in heterogeneous parts. The findings of Perera et al. (2023) are outstanding in terms of texture retention and low contrast features. The outcome of Wu et al. (2022) is good in terms of texture retention; however, low contrast features, uniformity in homogeneous parts, and retention of small features in heterogeneous regions are not well identified. The outcome of Nabil et al. (2023) is outstanding in terms of texture retention; however, low contrast features, uniformity in homogeneous portions, and retention of fine features in heterogeneous regions are not successfully detected. The outcomes of Baraha and Sahoo (2022) are good in terms of appearance and texture retention. In terms of artifacts, edge retention, appearance of low contrast features, texture retention, uniformity in homogeneous parts, and fine detail retention in heterogeneous regions, the proposed method produces overall excellent outcomes.
On the other hand, the performance metrics of the without-reference index do not depend on ground-truth SAR information in any way. Calculating these metrics requires mathematical SAR data models as well as fundamental image properties such as feature-level heterogeneity and homogeneity. A few instances of these metrics, which analyze the statistical organization of pixel values in the actual speckled SAR image, are ratio images, the coefficient of variation (CV), the equivalent number of looks (ENL), the target-to-clutter ratio (TCR), and the noise variance (NV). In this section, we discuss a handful of the metrics that are utilized throughout this paper. The ENL (Zhu et al. 2013) evaluates the smoothing factor as a performance assessment parameter of the despeckled SAR image over the entire image creation and post-processing activity. This statistic is based on despeckling the SAR image and then measuring the increase in the effective number of looks. It is calculated over a homogeneous area and is defined as the ratio of the squared mean (μ²) to the variance (σ²): ENL = μ² / σ².
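As a concrete illustration, the ENL of a homogeneous patch can be computed directly from its sample statistics. This is a sketch in Python rather than the MATLAB used in the paper; the patch and look number below are synthetic assumptions.

```python
import numpy as np

def enl(region):
    """Equivalent Number of Looks over a homogeneous region: mean^2 / variance."""
    region = np.asarray(region, dtype=np.float64)
    mu = region.mean()
    var = region.var()
    return (mu ** 2) / var

# A synthetic homogeneous patch corrupted by multiplicative 4-look speckle:
# a Gamma(L, 1/L) multiplier has unit mean and variance 1/L, so ENL ~= L.
rng = np.random.default_rng(0)
patch = 100.0 * rng.gamma(shape=4.0, scale=1.0 / 4.0, size=(64, 64))
print(round(enl(patch), 2))  # expected to be near the true look number (4 here)
```

Because the multiplicative speckle model makes ENL scale-free, the estimate does not depend on the patch brightness, only on the speckle strength.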
The NV is employed to indicate the residual speckle content in the image (Singh and Shree 2017a). A lower NV indicates less residual speckle noise in the image, and the measure does not depend on the image's brightness (Singh and Shree 2017a). NV is determined as the variance of the despeckled image, NV = (1/N) Σ (I_d(i,j) − μ)², where N is the size (number of pixels) of the image, I_d is the despeckled image, and μ is its mean.
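Assuming NV is the per-pixel variance of the despeckled image (Singh and Shree 2017a give the exact definition; the variance form is an assumption consistent with NV's brightness independence), it can be computed as below. The images here are synthetic.

```python
import numpy as np

def noise_variance(img):
    # NV: variance of the image; a lower NV indicates less residual speckle.
    img = np.asarray(img, dtype=np.float64)
    return ((img - img.mean()) ** 2).sum() / img.size  # equals img.var()

rng = np.random.default_rng(1)
clean = np.full((128, 128), 120.0)               # perfectly smooth region
speckled = clean * rng.gamma(1.0, 1.0, clean.shape)  # 1-look multiplicative speckle
print(noise_variance(clean), noise_variance(speckled) > noise_variance(clean))
```

Shifting every pixel by a constant leaves NV unchanged, which matches the statement that image brightness is not a prerequisite for the metric.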
The performance of despeckled SAR images without reference images is assessed using NV, MSE, ENL, and CV. Table 2 reports the NV values for the existing methods and the proposed method.
From Table 2, it can be seen that the NV values of Wang et al. (2022) and (Perera et al. 2022a) are good, but the best NV results are obtained by the proposed method. Table 2 also displays the MSE values for the existing and proposed methods. From these values it may be concluded that, although the MSE values of Wang et al. (2022) and (Perera et al. 2022a) are satisfactory, the best MSE outcomes are achieved by the proposed method. Table 2 likewise displays the ENL values for the existing and proposed methods. Analysis of the ENL values reveals that while Wang et al. (2022) and (Perera et al. 2022a) have good ENL values, the proposed method yields the best ENL outcomes.

With-reference indexes
As SAR images are inherently speckled, they cannot serve as reference images. Therefore, the outcome analyses with reference images are performed on the Barbara and Cameraman images. A despeckled image's visual quality can be assessed in a variety of ways, including artifacts, edge retention, the appearance of low-contrast features, texture restoration, uniformity in homogeneous regions, and retention of small details in heterogeneous regions. The comparison of the results between the proposed method and current methods is shown in Figure 5. Figure 5a is the noise-free reference image. To evaluate the despeckling techniques, Figure 5b shows a synthetically speckled Barbara image. The results of Wang et al. (2022), (Perera et al. 2022a), (Liu et al. 2022), (Perera et al. 2023), (Wu et al. 2022), (Nabil et al. 2023), (Baraha and Sahoo 2022), and the proposed approach are shown in Figures 5c-j, respectively. Figure 5 illustrates that the results of Wang et al. (2022) are visually suitable in terms of low-contrast feature appearance, texture retention, uniformity in homogeneous regions, and retention of fine details in heterogeneous regions. Although low-contrast features are difficult to distinguish, the results of Perera et al. (2022a) are nevertheless good in terms of the appearance of texture retention, uniformity in homogeneous regions, and retention of tiny details in heterogeneous regions. The results of Liu et al. (2022) are excellent in terms of the appearance of texture retention, but they do not successfully preserve uniformity in homogeneous regions or the retention of tiny features in heterogeneous regions. The results of Perera et al. (2023) are excellent in terms of the appearance of texture retention and low-contrast features, but they likewise do not preserve uniformity in homogeneous regions or the retention of tiny details in heterogeneous regions. The result of Wu et al.
(2022) provides a good appearance of texture retention; however, low-contrast features, uniformity in homogeneous regions, and the retention of small features in heterogeneous regions are not properly identified. The result of Nabil et al. (2023) likewise provides a good appearance of texture retention, but low-contrast features, uniformity in homogeneous regions, and the retention of small features in heterogeneous regions are not properly identified. The results of Baraha and Sahoo (2022) are excellent in terms of the appearance of texture retention; however, low-contrast features, uniformity in homogeneous regions, and the retention of tiny features in heterogeneous regions are not properly identified. In terms of artifacts, edge retention, the appearance of low-contrast features, texture retention, uniformity in homogeneous regions, and retention of fine details in heterogeneous regions, the proposed method consistently produces outstanding results.
Figure 6 shows the comparison between the proposed method and the existing methods. Figure 6a is the noise-free reference image. Figure 6b is a synthetically speckled Cameraman image used to test the despeckling methods. Figures 6c-j show the outcomes of Wang et al. (2022), (Perera et al. 2022a), (Liu et al. 2022), (Perera et al. 2023), (Wu et al. 2022), (Nabil et al. 2023), (Baraha and Sahoo 2022), and the proposed method. The outcomes of Wang et al. (2022) look good in terms of the visibility of low-contrast features, the retention of texture, uniformity in homogeneous regions, and the preservation of fine details in heterogeneous regions. The outcome of Perera et al. (2022a) is also satisfactory in terms of preserving texture, uniformity in homogeneous areas, and fine details in heterogeneous areas; however, low-contrast features are not well identified. The outcome of Perera et al. (2023) is satisfactory in terms of texture preservation, but uniformity in homogeneous areas and fine details in heterogeneous areas are not well rendered. The outcome of Liu et al. (2022) is satisfactory in terms of the retention of texture and low-contrast features, but uniformity in homogeneous areas and the retention of fine details in heterogeneous areas are not well rendered. The outcome of Wu et al. (2022) preserves texture well; however, low-contrast features, uniformity in homogeneous regions, and the retention of fine details in heterogeneous regions are not well identified. The outcome of Baraha and Sahoo (2022) keeps texture visible, but low-contrast features, uniformity in homogeneous areas, and the retention of fine details in heterogeneous areas are not well identified. The outcomes of Nabil et al. (2023) are good in terms of keeping texture visible; however, low-contrast features, uniformity in homogeneous areas, and the retention of fine details in heterogeneous areas are not well identified. Overall, the proposed method gives excellent outcomes in terms of artifacts, edge retention, the visibility of low-contrast features, texture retention, uniformity in homogeneous areas, and the retention of fine details in heterogeneous areas.
Additionally, histogram analysis is performed on the Cameraman image for a comparative study. Figure 7a shows a comparative histogram analysis of the reference image, (Wang et al. 2022), (Perera et al. 2022a), (Liu et al. 2022), and the proposed method. Similarly, Figure 7b shows a comparative histogram analysis of the reference image, (Perera et al. 2023), (Wu et al. 2022), (Nabil et al. 2023), (Baraha and Sahoo 2022), and the proposed method. From both plots it can be seen that, among all compared methods, the histogram of the proposed method is the most similar to that of the reference image. The histograms of Wang et al. (2022) and (Perera et al. 2022a) also give satisfactory outcomes, but overall the proposed method performs best in the histogram analysis.
In addition, an intensity profile analysis along a line is performed on the Barbara image for a comparative study. Figure 8 shows the comparative intensity profile analysis of (Wang et al. 2022), (Perera et al. 2022a), (Liu et al. 2022), (Perera et al. 2023), (Wu et al. 2022), (Nabil et al. 2023), (Baraha and Sahoo 2022), and the proposed method. The intensity profile analysis makes it evident that, among all compared methods, the profile of the proposed approach is the closest to that of the reference image. The intensity profiles of Wang et al. (2022) and (Perera et al. 2022a) are likewise satisfactory, but the proposed method produces the best outcomes. Figure 7 presents the histogram analysis and Figure 8 the intensity profile analysis; together, these two pixel-level analyses check despeckling at the minor and major level. The histogram analysis checks the overall despeckling performance, while the intensity profile analysis checks the despeckling performance along a straight line of pixels.
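Both checks can be reproduced numerically. The Bhattacharyya similarity score and the synthetic images below are illustrative assumptions, since the paper compares the histogram and profile plots visually rather than with a numeric score.

```python
import numpy as np

rng = np.random.default_rng(2)
reference = rng.integers(0, 256, (64, 64)).astype(np.float64)
# Stand-in "despeckled" result: the reference plus mild residual noise.
despeckled = np.clip(reference + rng.normal(0.0, 2.0, reference.shape), 0, 255)

def histogram(img, bins=32):
    # Normalized global pixel-value histogram (overall despeckling behaviour).
    h = np.histogram(img, bins=bins, range=(0, 256))[0].astype(np.float64)
    return h / h.sum()

# Global check: Bhattacharyya coefficient, 1.0 when the distributions coincide.
similarity = np.sum(np.sqrt(histogram(reference) * histogram(despeckled)))

# Local check: mean absolute gap between intensity profiles along one pixel row.
profile_gap = np.abs(reference[32, :] - despeckled[32, :]).mean()
print(similarity, profile_gap)
```

A method whose histogram similarity is high but whose line-profile gap is large smooths the global statistics while distorting local structure, which is exactly the distinction the two analyses are meant to expose.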
Numerous resources (Li et al. 2022), (Mohanakrishnan et al. 2022), (Wang et al. 2022), (Dalsasso et al. 2022), and (Farhadiani et al. 2022) can be used to determine the effectiveness of a despeckled image whenever a reference image is available. Performance metrics falling under this rubric make use of data from the accompanying reference image: the reference and despeckled images serve as the two inputs to these metrics.
The mean squared error (MSE) (Singh, Diwakar, et al. 2021) compares the despeckled SAR image to a reference SAR image and thereby evaluates the quality of the produced SAR image. Because it does not examine finer structural features, it cannot fully characterize overall image quality on its own. It is computed as

MSE = (1/N) Σ (X(i,j) − Y(i,j))²,

where X and Y represent the speckle-free and original images, respectively, and N is the number of pixels. By comparing the despeckled image to the reference image, the SSIM provides an assessment of their degree of resemblance. Luminance, contrast, and structure are the determining elements, and all three terms are combined to form the SSIM:

SSIM(x, y) = ((2 μ_x μ_y + c1)(2 σ_xy + c2)) / ((μ_x² + μ_y² + c1)(σ_x² + σ_y² + c2)),

where μ_x and μ_y are the image means, σ_x² and σ_y² the variances (e.g., Var[g] for the reference image g), σ_xy the covariance of the two images, and c1, c2 small stabilizing constants.
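A minimal sketch of both with-reference metrics follows, using the single-window form of SSIM; this is a simplification, as the standard metric averages SSIM over local sliding windows.

```python
import numpy as np

def mse(x, y):
    # Mean squared error between reference and despeckled images.
    x, y = np.asarray(x, float), np.asarray(y, float)
    return ((x - y) ** 2).mean()

def ssim_global(x, y, L=255.0):
    # Single-window SSIM: luminance, contrast, and structure terms combined.
    # (Simplified: the standard SSIM averages this over local windows.)
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.tile(np.arange(16.0), (16, 1))  # simple gradient test image
print(mse(a, a), ssim_global(a, a))    # identical images: MSE 0, SSIM 1
```

For identical inputs the covariance equals the variance, so every SSIM term cancels to one; any distortion lowers at least one of the three factors.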
In the field of denoising, PSNR is one of the most popular performance metrics. A higher PSNR indicates a more promising outcome. Specifically, PSNR is calculated as PSNR = 10 log10(MAX² / MSE), where MAX is the peak pixel value (255 for 8-bit images). The UIQI is determined by combining three separate criteria: the degree of linear correlation, the proximity of the mean brightness, and the closeness of the image contrast. The latter two components lie in [0, 1], while the correlation component lies in [−1, 1], so the UIQI itself lies in [−1, 1]. Values of the UIQI closer to one indicate better image quality, while lower values indicate poorer image quality.
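Likewise, PSNR and the UIQI can be sketched as follows; the test images are synthetic and the peak value of 255 assumes 8-bit data.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    err = ((np.asarray(ref, float) - np.asarray(test, float)) ** 2).mean()
    return float('inf') if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def uiqi(x, y):
    # Universal Image Quality Index: the product of correlation,
    # mean-proximity, and contrast-proximity terms in one expression.
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (4 * cxy * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0, 255)
print(psnr(ref, noisy), uiqi(ref, noisy))
```

Note that the UIQI denominator vanishes for a constant zero-variance image, so in practice the index is evaluated over windows with non-trivial content.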
The results of the performance metrics are shown in Tables 3-5. Table 3 reports the PSNR values at different noise levels for the existing methods and the proposed method. From the Table 3 values, it can be seen that the PSNR values of Wang et al. (2022) and (Perera et al. 2022a) at low noise levels are good, but the highest PSNR values are obtained by the proposed method; as the noise level increases, the PSNR values of the proposed method remain the highest. Therefore, the PSNR values indicate that the proposed method gives excellent results in most cases. Table 4 displays the SSIM values at various noise levels for the existing and proposed approaches. As Table 4 shows, the SSIM values produced by the proposed method are the highest, even though the SSIM values obtained by Wang et al. (2022) and (Perera et al. 2022a) at low noise levels are good; this remains true as the noise level increases. We can conclude from the SSIM values that the proposed method produces high-quality outputs in most cases. Table 5 displays the UIQI values at varying amounts of noise for both the existing methods and the proposed approach. Based on the UIQI values in Table 5, the UIQI values of Wang et al. (2022) and (Perera et al. 2022a) at a low noise level are satisfactory, but the proposed method produces the highest UIQI values, and this remains true as the noise intensity rises. One can therefore conclude from the UIQI values that the proposed method, for the most part, produces outstanding outcomes.
The experimental findings were evaluated in MATLAB. The hardware and software configuration used is cited in Section "Experimental outcomes and discussion". Documenting the system configuration is essential so that the processing time of the proposed methodology can be compared fairly with that of other methodologies. As shown in Table 6, the evaluated algorithm has a processing duration of approximately 7.2679 seconds and outperforms all other assessed techniques in computational speed. The proposed methodology thus exhibits superior outcomes together with efficient computational performance; its efficacy is confirmed by the combination of minimal computational expenditure and the superior visual fidelity of the despeckled images.

Merits, demerits, and future perspectives
The proposed methodology uses homomorphic filtering, which makes it straightforward to incorporate any other additive image restoration model into this work. Due to the use of the Canny edge detector, fine details such as edges and object corners are well preserved in both homogeneous and non-homogeneous areas. No blurring effect appears in the results, and no artifact generation is observed during the process. The use of method noise thresholding in the proposed method delivers the highest level of speckle noise reduction in the DOST domain. The only disadvantage observed in the proposed methodology is the increased computational cost caused by the internal implementation of method noise thresholding.
The proposed work is a hybrid homomorphic despeckling technique that involves several supporting methods, and various possibilities for improving it are discussed here. Instead of the Canny edge detector, other edge detection methods could be tested and compared. The detected edges from the Canny detector are added to the residual image; this combination could be improved by using a mathematical operation other than addition. Other advanced domains are available for image analysis and decomposition, such as the non-subsampled contourlet transform, the non-subsampled shearlet transform, the curvelet transform, and the ridgelet transform; these could be used instead of the DOST. Similarly, the internal parameters can be varied and tested, and multiple other perspectives can be incorporated; there is always scope for improvement in the field of image enhancement and restoration. After SAR data restoration, the pre-processed SAR image can be used for urban object analysis and identification, urban disaster monitoring and change analysis, urban climate change and variation, and other urban remote sensing applications.

Conclusion
The SAR image despeckling technique proposed in this paper is based on homomorphic filtering and takes advantage of the additive restoration model.

Algorithm: proposed despeckling method
Input image: speckled SAR image, A
Output image: despeckled SAR image, I
1: Apply log transformation on A: A = log(A)
2: Apply the guided filter on A using Equation (1): B = Guided_Filter(A)
3: Apply the Canny edge detection operator on B; C represents the detected edges: C = Canny_Detection(B)
4: Apply DOST-based method noise thresholding using the bivariate shrinkage rule:
   a) Compute the method noise: D = A − B
   b) Add the detected edges (C) to enrich the residual part with information: E = D + C
   c) Apply the DOST on E using Equation (2); E is decomposed into approximate (AC) and detailed (DC) components: {AC, DC} = DOST(E)
   d) Apply the bivariate shrinkage rule on the detailed component (DC) using Equations (3)-(8): F = Bivariate_Shrinkage(DC)
   e) Apply the inverse DOST operation on AC and F: G = Inverse_DOST(AC, F)
5: Apply the exponential transformation on G.

Table 6. Comparison of execution time in seconds: (Perera et al. 2022a) 13.5647; (Liu et al. 2022) 9.6134; (Perera et al. 2023) 11.5974; (Wu et al. 2022) 10.3957; (Nabil et al. 2023) 11.2965; (Baraha and Sahoo 2022) 9.0014; Proposed Method 7.2679.
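The pipeline above can be sketched structurally in Python (rather than the paper's MATLAB). This is only a sketch under loud assumptions: the guided filter is replaced by a box filter, the Canny operator by a gradient-magnitude mask, and the DOST-domain bivariate shrinkage by plain soft thresholding; the recombination B + G before the exponential step is also an assumption, as the source leaves step 5 implicit, and all thresholds are arbitrary demo values.

```python
import numpy as np

def box_smooth(img, r=2):
    # Stand-in for the guided filter of step 2 (a real guided filter would
    # use guidance-image statistics; this is only a structural placeholder).
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gradient_edges(img, thresh=0.5):
    # Stand-in for the Canny operator of step 3: gradient-magnitude mask.
    gy, gx = np.gradient(img)
    return (np.hypot(gx, gy) > thresh).astype(img.dtype)

def soft_threshold(x, t):
    # Stand-in for DOST-domain bivariate shrinkage (steps 4c-4e).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def despeckle(speckled, eps=1e-6):
    A = np.log(speckled + eps)   # 1: homomorphic log transform
    B = box_smooth(A)            # 2: guided-filter stage (placeholder)
    C = gradient_edges(B)        # 3: edge map (placeholder)
    D = A - B                    # 4a: method noise (residual)
    E = D + C                    # 4b: enrich residual with detected edges
    G = soft_threshold(E, 0.5)   # 4c-4e: transform-domain shrinkage (placeholder)
    return np.exp(B + G)         # 5: exponential transform (B + G recombination assumed)

rng = np.random.default_rng(4)
clean = np.full((32, 32), 50.0)
noisy = clean * rng.gamma(4.0, 0.25, clean.shape)  # synthetic 4-look speckle
out = despeckle(noisy)
print(noisy.var(), out.var())  # the sketch should reduce variance on a flat field
```

Swapping the placeholders for a real guided filter, Canny detector, and DOST shrinkage recovers the published pipeline while keeping the same data flow.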

Table 1 .
The proposed method's average outcomes (115 images) between despeckled and reference images.
CANADIAN JOURNAL OF REMOTE SENSING

Table 3 .
Performance assessment of despeckled SAR images (average outcomes over 117 images). Bold values represent the best quantitative results.

The proposed method is a two-step, edge-preserving, hybrid SAR image despeckling technique that implements a guided filter as the first step; the second step is a modified method noise thresholding using the bivariate shrinkage rule and the Canny edge operator in the DOST domain. The first step is mainly responsible for speckle reduction with better edge preservation. The second step handles the unprocessed part of the despeckled image and delivers the highest level of speckle reduction in the results by filtering out the unfiltered components of the residual images. These steps help in maintaining uniformity in homogeneous areas. The average PSNR values of the Barbara and Cameraman images are 36.5698 and 34.6598, the average SSIM values are 0.9443 and 0.9281, and the average UIQI values are 0.8838 and 0.8489; all of these averages are calculated over noise variances ranging from 5 to 40%. In the case of the without-reference index, the average NV value is 0.3915, the average MSE value is 881.1201, and the average ENL value is 2.9657. Based on the comparative qualitative and quantitative testing performed on the proposed work, it is found that the proposed method surpasses all compared methods.
