A feature-based change detection approach using multi-scale orientation for multi-temporal SAR images

Operation regardless of weather conditions and high resolution independent of illumination are the most attractive and desired features of synthetic aperture radar (SAR) imagery. This paper proposes a multi-scale, multiple-orientation approach for change detection in multi-temporal SAR images. The approach integrates pre-processing and change detection. Pre-processing is performed on the SAR imagery through speckle-reducing anisotropic diffusion and the discrete wavelet transform. The despeckled images are then processed by a Log-Gabor filter bank over multiple scales and orientations. The maximum magnitude over the multiple orientations is concatenated to obtain a feature-based representation at each scale. Corresponding scales of the two images are compared by band-wise subtraction to retrieve the difference image (DI) coefficients, and the difference coefficients from all scales are summed to estimate the DI. The resulting multi-scale orientation image thus conveys detailed information with specific contours. A constrained k-means clustering algorithm is preferred to obtain the change and no-change map. The performance of the proposed approach is validated on three real SAR image datasets, and effective change detection is examined using confusion-matrix parameters. Experimental results demonstrate the efficacy of the proposed approach.

In remote sensing, SAR is an active radar sensor that tracks the earth's surface and provides consistent information even under adverse atmospheric conditions. The RADARSAT satellite, operated by the Canadian Centre for Remote Sensing, collects global information for iceberg monitoring, crop growth, forest maintenance, oceanography and geological monitoring. Depending on the mode of operation, it works in single, double or quad polarisation in the C-band at 5.4 GHz, with a resolution range of 1-100 m. ENVISAT-ASAR, launched by the European Space Agency (ESA), works in the C-band with a broad range of modes, incidence angles of 15-45 degrees and 30-150 m resolution. These satellites are used to observe topographical information for landscape, coastal, glacier and snow studies. The TerraSAR satellite, operated by the German SAR satellite mission, works in the X-band at 9.6 GHz (31 mm wavelength) and provides high-resolution data, which enables environmental and catastrophe monitoring applications.
CD on a complex landscape is a difficult problem to analyse. A complex landscape system is composed of a huge number of heterogeneous elements that interact in a non-linear way and behave adaptively with respect to space and time (Ashok et al., 2007). The inhomogeneous nature of these images contains typical features such as edges, lines, boundaries and blobs. The majority of these features are present in medium-band-resolution images, whose distributions have a large peak at zero and extensive tails that fall off more gradually than a Gaussian distribution. Well-designed mathematical modelling based on multi-scale approaches, with good qualitative performance, has emerged for various image-processing applications. Hence, the behaviour of the complex system, its adaptive property, the interaction of multi-scale decomposition and its self-similar structure are all constructive characteristic patterns for finding changes depending on the scale of observation.
The significance of the multi-scale approach is that it provides spatial localisation in the spatial-frequency domain, which can decompose the essential components of the image pattern and make them accessible. Researchers have proposed many CD techniques using multi-scale approaches. Bovolo and Bruzzone (2005) proposed CD by the log-ratio method with multi-scale decomposition by wavelet transform; an adaptive scale-conserving approach is utilised to fuse the difference coefficients, and additive noise filters were preferred for speckle-noise reduction over several trials. Emary et al. (2010) proposed a multi-scale approach for automatic scale selection by fractal net evolution; scale-based class generation is utilised for the binary change map, but the optimal number of scales is identified by manual observation. Ajadi et al. (2016) proposed a multi-scale approach using the stationary wavelet transform: SAR images are correlated by the log-ratio method, the log-ratio image is despeckled by a non-local filter and Bayesian thresholding is applied to each multi-scale image; the multi-scale coefficients are fused to obtain the change map, however this method does not give the exact boundary of the object. Li et al. (2015) proposed Gabor-feature-based CD, in which the log-ratio operation is applied for the difference image (DI) and multi-scale feature extraction is done using the Gabor wavelet transform; the Gabor filter response with two-level clustering requires many iterations and gives inappropriate class grouping for the intermediate class. Bai et al. (2018) proposed object-based feature extraction using multi-scale hierarchical sampling for high-resolution SAR images, where the texture and shape features of training samples are fused and classified by a random forest model; this method increases the number of training samples at multiple scales, with a complex structure.
Multi-scale methods (Vijaya Geetha & Kalaivani, 2019) are widely used to analyse an image at different scales. Various descriptions are available to decompose multi-scale images on an orthogonal or non-orthogonal basis. An orthogonal basis is not supported in most multi-scale decomposition designs, so a non-orthogonal basis construction is preferred. The Gabor filter presents an important difficulty in multi-scale design: it has a non-zero DC component and its bandwidth is limited by the odd-symmetric filter, so filter designs for low and high frequencies cannot be obtained simultaneously. Such designs weight low-frequency components heavily and give less attention to high-frequency components in several encodings. Hence, a Gabor filter bank provides low efficiency for feature extraction. In consideration of all the above issues, a multi-scale, multiple-orientation Log-Gabor (LG) filter bank design is proposed for the CD approach.
LG filters have no DC component (Fischer, 2007) and they allow arbitrarily large bandwidth coverage in an octave-scale range of the multi-resolution design. LG filters yield extensive tails and can encode landscape images more accurately than normal Gabor filters. The proposed method alleviates the drawbacks of the Gabor filter effectively and improves the detection rate. The LG structure provides unique statistical modelling of neighbourhood features and applies local energy models. LG filter bank designs are used in many image-processing applications, such as iris feature extraction (LinTao et al., 2019), corner detection in grey-level images (Gao et al., 2007) and fingerprint orientation (Chunfeng et al., 2010). The iris feature extraction technique is designed with an LG filter bank without properly optimised parameters, so it cannot exploit the filter bank structure to extract features completely. In the corner detection method, quality measures are not evaluated to exhibit the effective use of the LG filter bank in predicting successful corner points. The fingerprint orientation method takes a long computation period for real-time implementation. Whereas existing applications make poor use of the LG filter bank, our proposed method exploits its strength for the CD application. In this work, multi-scale modelling is done by the LG filter bank to localise the global features specifically and in a stable manner (LinTao et al., 2019). The overall changes of the relative significance of features are

Methodology
A mathematical overview of change detection

The purpose of CD is to analyse the changes between two SAR images Y1 and Y2 acquired over the same geographical area at different times t1 and t2, respectively; let H and W represent the number of columns and rows in the images, respectively.
Unsupervised CD is a binary classification of pixels in the present scenario. The binary classification generates two classes, namely changed and unchanged pixels, from the DI between the input images Y_t1 and Y_t2. The changed and unchanged classes are represented as Ω = {ω_c, ω_u}. The proposed CD method is designed with the LG filter bank: the input images are decomposed into a number of scales and, at each scale, the responses over multiple orientations are concatenated. The magnitude combined over all orientations of each scale forms a significant feature vector. Direct subtraction is applied between corresponding scales of the two input images to maintain the band resolution of the difference coefficient. The summation of the per-scale difference coefficients provides the DI, which has a symmetrical pixel distribution of low and high intensity at each scale with a precise frontier.
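The scale-wise subtraction and summation described above can be sketched as follows, assuming each scale is represented by a single maximum-orientation magnitude map (the function and argument names are illustrative, not the paper's notation):

```python
import numpy as np

def difference_image(feats_t1, feats_t2):
    """Band-wise subtraction per scale, then summation across scales.

    feats_t1, feats_t2: lists of 2-D arrays, one per scale, each the
    maximum magnitude over orientations (hypothetical layout).
    Returns the summed difference image (DI).
    """
    di = np.zeros_like(feats_t1[0], dtype=float)
    for f1, f2 in zip(feats_t1, feats_t2):
        di += np.abs(f1 - f2)   # difference coefficient of this scale
    return di
```

Subtracting scale-by-scale, rather than after fusing, keeps each difference coefficient at its own band resolution before the final summation.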
The proposed method is an unsupervised feature-based CD approach; the features are obtained from a multi-scale, multiple-orientation decomposition of the input SAR images. The methodology consists of the following stages: (1) a pre-processing stage of speckle-noise reduction, (2) design of the LG filter bank, (3) LG filtering on Y_t1 and Y_t2 to attain a multi-scale decomposition with directional features, (4) evolving DIs over multiple scales and orientations and (5) a constrained k-means clustering algorithm to find the binary change map. Performance of the proposed algorithm has been validated with standard quantitative metrics. The detailed flowchart of the proposed method is depicted in Figure 1.
Pre-processing stage

SAR images can be captured at any time and in all climate conditions, though the captured images are severely affected by speckle noise, which must be reduced before further processing. The pre-processing stage is prescribed to reduce the coherent multiplicative noise termed speckle noise (Dekker, 1998; Frost et al., 1982). This noise arises from phase fluctuations of the reflected electromagnetic signals. Speckle-noise reduction is a significant task to be considered before post-processing of the acquired SAR images (Masoomi et al., 2012).
The proposed framework incorporates a diffusion technique (Perona & Malik, 1990) with the discrete wavelet transform (DWT). For a given input image Y, a speckle-reducing anisotropic diffusion (SRAD) filter (Yu & Acton, 2002) is applied for speckle-noise reduction. The SRAD filter alone cannot remove all of the speckle content by smoothing the image, and it tends to broaden features, particularly on edges. To remove the remaining speckle components, the DWT is combined after the SRAD filter: the multiplicative component left by the SRAD filter is converted into an additive component by a logarithmic transformation. This additive noise component is decomposed using the DWT into three high-frequency (HH, HL, LH) and one low-frequency (LL) sub-band images. The noise present in the low-frequency sub-band image is removed by a guided filter (GF). Soft thresholding is used for the directional (LH, HL) sub-band images and an enhanced guided filter (EGF) is used for the diagonal high-frequency (HH) sub-band image, as given in Figure 2.
In sub-band images L_l, l = LH, HL: these vertical (LH) and horizontal (HL) sub-band images contain high-frequency components. Since these sub-bands have similar energy levels, soft thresholding is applied simply to remove noise while preserving the original signal component. This gives excellent intensity transitions on edges and fine details. The effective use of soft thresholding is described in Choi and Jeong (2018).
In sub-band image L_l, l = LL: this approximation sub-band in the wavelet domain contains low-frequency components. Significant information is present at this scale with little speckle content. The GF is used to reduce speckle noise while preserving edge information without blurring boundaries. The detailed equations of the GF are described in Choi and Jeong (2018).
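A minimal sketch of this despeckling chain, assuming a one-level Haar DWT and applying soft thresholding to all three detail bands (the GF and EGF steps of the paper are not reproduced here, so this is a simplified stand-in, not the full pipeline):

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar DWT (image sides assumed even)."""
    a = (img[0::2] + img[1::2]) / 2.0          # row averages
    d = (img[0::2] - img[1::2]) / 2.0          # row differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2: perfect reconstruction when bands are untouched."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def despeckle(img, t=0.1):
    """Log-transform (multiplicative -> additive noise), Haar DWT,
    soft-threshold the detail bands, inverse DWT, exponentiate back."""
    log_img = np.log1p(img)
    ll, lh, hl, hh = haar2(log_img)
    rec = ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))
    return np.expm1(rec)
```

With the threshold set to zero, the chain reconstructs the input exactly, which is a convenient sanity check on the transform pair.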

R E T R A C T E D
In sub-band image L_l, l = HH: the gradient and Laplacian operators are used to detect sharp-edge regions and local zero-crossing areas, and the EGF edge-sensitive weighting component Ω is defined in terms of them. Here Δ and ∇ are the Laplacian and gradient operators, respectively. A zero-crossing region is located when the value of Ω is greater than one, and a homogeneous region is identified when it is less than one. The cost function of the EGF, given in equation (3), minimises the difference between the input (p_i) and output (q_i) images.

Here, a_h and b_h are linear coefficients of the window ω_h, I_h is the guidance image of ω_h with centre pixel h, and ε is a regularisation parameter used to prevent a_h from becoming too large.
μ_h and σ²_h are the mean and variance of the GF. The final value of q_i is computed from ā_h and b̄_h, the mean values of a_h and b_h within the window, respectively.
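The Laplacian-versus-gradient weighting can be sketched as below; the paper's exact normalisation of Ω is not reproduced, so this is an illustrative form only (periodic boundaries via `np.roll` are a simplifying assumption):

```python
import numpy as np

def edge_weight(img, eps=1e-6):
    """Illustrative edge-sensitive weight Omega ~ |Laplacian| / |gradient|.
    Omega > 1 flags zero-crossing (edge) regions, Omega < 1 flags
    homogeneous regions, as described in the text."""
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)                      # gradient magnitude
    # 5-point Laplacian with periodic (wrap-around) boundaries
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap) / (grad + eps)
```

On a perfectly homogeneous patch both operators vanish and the weight is zero, consistent with the "less than one means homogeneous" rule.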
2D Log-Gabor filters. The speckle-free images are then processed with an LG filter bank consisting of multiple frequency filters LG_f, f ∈ {1, …, n_s}, where n_s indexes the different band resolutions, for a multi-scale decomposition.
The transfer function of the 1D LG filter is characterised by the radial bandwidth B_w in octaves. s_β is a constant in the range 0 to 1. To achieve a constant filter shape, the value of σ must be tuned along with different f_0 so that their ratio remains constant.

The transfer function of the 2D LG filter in the frequency domain comprises two components: the radial component G_{s_β}(f), which facilitates frequency selection, and the angular component G_{σ_θ}(θ), which controls the orientation of the filter.
In the log-polar coordinate form of the LG filter (Kovesi, 2006), the filter orientation angle is θ_0, the orientation spacing between filters is Δθ and the angular bandwidth is ΔΩ.
The spatial expression of the LG filter involves a logarithm and is not analytic at the origin in the spatial domain. Thus, the LG filter is designed in the frequency domain to reduce complexity, and the inverse Fourier transform converts it into the spatial domain. The LG function has a shape similar to the Gabor function for bandwidths below about one octave, yet it covers approximately three octaves, which increases the spatial localisation of the LG filter design. The arbitrary bandwidth and zero DC component sharpen the filter as the bandwidth increases.
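Since the design is carried out in the frequency domain, the standard Log-Gabor transfer function (a Gaussian on a logarithmic frequency axis, with a Gaussian angular spread) can be sketched as follows; the parameter names are illustrative:

```python
import numpy as np

def log_gabor_radial(f, f0, sigma_ratio=0.55):
    """Radial Log-Gabor transfer function in the frequency domain:
    G(f) = exp(-(ln(f/f0))^2 / (2*ln(sigma_ratio)^2)).
    sigma_ratio = sigma/f0 (0.55 corresponds to roughly two octaves of
    bandwidth).  G(0) = 0, so the filter has no DC component."""
    f = np.asarray(f, dtype=float)
    out = np.zeros_like(f)
    nz = f > 0                      # defined only for positive frequency
    out[nz] = np.exp(-(np.log(f[nz] / f0) ** 2) /
                     (2 * np.log(sigma_ratio) ** 2))
    return out

def log_gabor_angular(theta, theta0, sigma_theta):
    """Angular Gaussian component controlling filter orientation,
    using the wrapped angular distance to theta0."""
    d = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    return np.exp(-(d ** 2) / (2 * sigma_theta ** 2))
```

The 2D transfer function is the product of the radial and angular components evaluated over the frequency plane; the radial part peaks at the centre frequency f_0 and vanishes at DC, matching the zero-DC property noted above.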

2D LG filter bank design aspects. The LG filter is a non-orthogonal basis arrangement, and designing an LG filter bank is an art requiring the following considerations: (1) A desirable aspect is to construct a filter bank that provides even coverage of the spectrum. This can be attained by keeping the overlap of the filter transfer functions suitably small, so that the sum of the individual transfer functions covers the spectrum evenly. The isolated transfer functions then tile the spectrum uniformly, which is practical to work with and reduces computational complexity. Even coverage of the spectrum here uses four scales.
(2) With the above aspect in mind, the filters should be as independent as possible. The aim of the filter bank is to obtain information about the signal; if the filter outputs are highly correlated with those of their neighbours, the filter structure is ineffective and does not provide the maximum possible information. For a well-organised filter bank design, the overlap of the transfer functions must be kept to a minimum. Suitable values of the dependent filter parameters realise reasonably even spectral coverage. The design must fix the minimum and maximum frequencies of the spectrum coverage through the range of filter bandwidths, the scaling between centre frequencies of consecutive filters, the angular spread of every filter, and the number of scales and orientations to be chosen. (3) The wavelength of the minimum-scale filter λ_min is the controlling parameter that sets the maximum frequency f_max. The Nyquist wavelength would allow a minimum value of 2 pixels; however, that wavelength suffers significant aliasing, so a value of 3 pixels is chosen. (4) The wavelength of the maximum-scale filter λ_max sets the value of the minimum frequency f_min. λ_max is determined by the minimum-scale wavelength λ_min, the scaling k between centre frequencies of consecutive filters and the number of filter scales n_s, with λ_min a fixed value. (5) The filter bandwidth is defined by σ/f_0, the ratio between the standard deviation (σ) of the Gaussian describing the LG transfer function in the frequency domain and the centre frequency (f_0). A smaller value of σ/f_0 yields a larger filter bandwidth. By empirical observation, σ/f_0 values of 0.75, 0.55 and 0.41 correspond to bandwidths of about one, two and three octaves, respectively. (6) Another aspect of LG filter bank design is the scaling factor k between consecutive filters. This is an essential factor for even spectral coverage and independent filter outputs. Table 1 lists combinations of k with respect to σ/f_0; these matched values give the minimum overlap needed to attain practically even spectral coverage of the filter. (7) Minimum overlap in the angular direction of the frequency domain is set by the ratio Δθ/σ_θ between the angular interval (Δθ) and the standard deviation of the angular Gaussian spread function (σ_θ). This ratio is used to construct filters with the minimum overlap needed in the frequency domain.
Significance of the LG filter bank. The LG filter bank is designed with multiple scales and multiple orientations; the responses across orientations are concatenated to form every scale and extract maximum features. The design specification is n_s = 4 scales, n_θ = 6 orientations, λ_min = 3, k = 2.2 and centre frequencies f_0i (i = 1, 2, 3 and 4) of the filter bank.
First scale: the centre frequency f_01 (i = 1) is initially set to 1/3, which reflects the high-frequency components of the image. These high-frequency components imply maximum bandwidth coverage and carry the majority of the detailed information in the image.
Second scale: the centre frequency f_02 (i = 2) is formulated as 1/(3k¹), giving the sub-high-frequency components of the image. It represents comparatively less detail than f_01, provides medium bandwidth coverage, and maintains a symmetrical response between the change and no-change pixel classes.
Third scale: the centre frequency f_03 (i = 3) is 1/(3k²), returning the next stage of sub-high-frequency components of the image. This layer presents comparatively less detail than f_02 and preserves the approximation information.
Fourth scale: the centre frequency f_04 (i = 4) is formulated as 1/(3k³); it contains the lowest-frequency components with low bandwidth coverage and mainly concentrates on the approximation information of the image.
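Under the stated parameters, the four centre frequencies follow directly; a small sketch assuming f_0i = 1/(λ_min · k^(i-1)) with λ_min = 3 and k = 2.2:

```python
# Centre frequencies of the four scales, following the text:
# f_0i = 1 / (lambda_min * k**(i-1)), lambda_min = 3, k = 2.2.
lambda_min, k, n_s = 3.0, 2.2, 4
f0 = [1.0 / (lambda_min * k ** i) for i in range(n_s)]
# f0[0] = 1/3 is the finest (highest-frequency) scale; each later
# scale divides the centre frequency by k, moving toward coarse detail.
```

Each successive scale therefore lowers the centre frequency by the factor k, which is what spaces the filters evenly on the logarithmic frequency axis.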
The detailed information describes the small-scale content of the image, such as lines, edges and bar features, while the large-scale information conveys the contours and structure of the image.
LG filter bank construction and implementation. To cover the frequency spectrum evenly, both the scales and the orientations of the LG filters are planned together. The objective is to construct filters with even coverage while keeping minimum overlap between them, so that the extracted coefficients are as independent as possible. The LG filter bank is constructed with a total of four scales and six orientations. The parameter values are chosen as given in Table 2. The multi-resolution design is implemented with n_s = 4 scales and n_θ = 6 orientations; the filter indices are s for scale, s ∈ {1, …, n_s}, and θ for orientation, θ ∈ {1, …, n_θ}. f_0 corresponds to the centre frequency of a filter, and (s_β, σ_θ) are the radial and angular components common to all filters in the filter bank.
The speckle-free images are convolved with each LG filter to select different cut-off frequencies. The convolution is performed in the frequency domain by multiplying the Fourier transform of the despeckled image with the LG filter bank. The speckle-free image is represented by its 2D fast Fourier transform, and the convolution of the noise-free image with the LG filter bank generates four scales with six orientations each. The acquired images are band-filtered images with explicit character at each scale. The LG filter responses at the various centre frequencies are superimposed by simple addition to obtain the final DI.
The inverse Fourier transform is applied to get the corresponding filtered image of the LG in the spatial domain.
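This frequency-domain convolution can be sketched as below, assuming each filter is supplied as an array of transfer-function samples the same size as the image:

```python
import numpy as np

def apply_filter_bank(img, filters):
    """Filter an image with a set of frequency-domain transfer
    functions: multiply the image spectrum by each filter and take the
    inverse FFT (equivalent to convolution in the spatial domain).
    `filters` is a list of real arrays the same shape as `img`;
    the magnitude of each response is returned."""
    spectrum = np.fft.fft2(img)
    return [np.abs(np.fft.ifft2(spectrum * h)) for h in filters]
```

Multiplying in the Fourier domain replaces n_s × n_θ spatial convolutions with one forward FFT plus one inverse FFT per filter, which is the usual reason for designing the bank in the frequency domain.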
Constrained k-means clustering algorithm. The DI (Y_D) is obtained by summation of the various pass-band filter responses. The constrained k-means clustering algorithm (Lal & Anouncia, 2015) is used as a semi-supervised method with prior knowledge about the reference map. The clustering uses instance-level constraints, which help partition the pixels with reference to the prior information. The semi-supervised partitioning algorithm automatically decides between two kinds of instance constraints: must-link constraints (ml_c) and cannot-link constraints (cl_c). Must-link constraints specify that two instances have to be in the same group; cannot-link constraints specify that two instances must not be placed in the same group.

Configuration of the change map. The construction of the change map serves to identify the changed and unchanged classes Ω = {ω_c, ω_u}. Here the constrained k-means clustering algorithm clusters the pixels of the LG DI into these two classes. Each pixel in Y_D is assigned to one of the two cluster groups using equation (19): depending on the distance of every pixel from the cluster centre, the minimum-distance pixels are assigned to that cluster. The binary change map image is generated by the following equation:
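As a simplified illustration, a plain (unconstrained) two-cluster k-means on the DI intensities is sketched below; the must-link and cannot-link constraints of the paper's algorithm are omitted, so this is only a baseline sketch:

```python
import numpy as np

def two_means_change_map(di, iters=20):
    """Binary change map via plain two-cluster k-means on DI pixel
    intensities (the paper's must-link / cannot-link constraints are
    NOT implemented here).  Returns 1 for changed, 0 for unchanged."""
    x = di.ravel().astype(float)
    c = np.array([x.min(), x.max()])          # initial cluster centres
    for _ in range(iters):
        # assign each pixel to the nearer centre (Euclidean distance)
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for j in (0, 1):
            if np.any(labels == j):
                c[j] = x[labels == j].mean()  # update centre
    # the higher-intensity cluster is taken as the changed class
    changed = labels if c[1] > c[0] else 1 - labels
    return changed.reshape(di.shape)
```

In the constrained variant, the assignment step would additionally reject assignments that violate a must-link or cannot-link constraint before updating the centres.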

where ||·|| is the Euclidean distance.

Performance measurements. This section explains the evaluation of quality measures at the different stages. In the pre-processing stage, effective speckle-noise reduction is verified by index measurements such as structural similarity, speckle suppression and the equivalent number of looks, given in Table 3 (Sheng & Xia, 1996). In the proposed CD stage, change-detection performance is examined against the ground truth using confusion-matrix parameters (accuracy, kappa coefficient and false alarm rate), given in Table 4. In the last stage, effective class distribution is evaluated by the non-uniformity (NU) index and the area overlap measure (AOM) (Rosenfield & Fitzpatrick-Lins, 1986), given in Table 5.

Result and discussion
The proposed algorithm comprises a pre-processing stage using a filtering algorithm, the proposed LG-based CD method and class distribution by the constrained k-means clustering algorithm. The evaluation results and their discussion are given below.

Pre-process by filtering algorithm
In the pre-processing stage, the SRAD filter combined with DWT is used; pre-processing is an essential task for the best noise reduction. The quality measures used in this work are summarised below.

Structural similarity index (SSIM): characterised by luminosity, contrast and structural changes.

Equivalent number of looks (ENL): a measure based on the mean intensity and variance of the filtered image f_I; a higher value of ENL shows a better response.

Speckle suppression index (SSI): the coefficient of variation of the filtered image normalised by that of the speckle image s_I; a value less than one is the best response.

False alarm rate (FAR): the ratio between unchanged pixels incorrectly identified as changed pixels and the total number of actual unchanged pixels; TN is the number of true negatives.

F1 score: combines the true positive rate and precision; TP is the number of true positives.

Error rate: the sum of changed pixels incorrectly identified as unchanged and unchanged pixels incorrectly identified as changed.

Detection rate: also termed sensitivity, it states the true detection of the result.

Miss rate: the miss rate or false negative rate is associated with an erroneous response that fails to detect an actual change. N_U and N_C are the numbers of actual unchanged and changed pixels, respectively.

Non-uniformity (NU): defined as a feature over the image, proportional to the variance of that feature assessed at every pixel of the entire image. This metric is evaluated only on the test image and does not require the ground truth image. Here σ²_F is the foreground variance, σ² is the variance of the test image, and T_F and T_B are the foreground and background pixels of the test image. A perfectly classified image has a non-uniformity value close to zero.

Area overlap measure (AOM): reflects the vital aim of thresholding by comparing the area of the resultant test image to that of the ground truth image. Also termed the Jaccard similarity measure, it lies between 0 and 1; the ideal similarity value is close to one. G is the ground truth image and T is the test image.
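A few of these confusion-matrix measures can be sketched directly from the pixel counts; the function and variable names here are illustrative, not the paper's notation:

```python
import numpy as np

def cd_metrics(pred, truth):
    """Confusion-matrix metrics for a binary change map (1 = changed).
    Returns (accuracy, false alarm rate, miss rate) as defined in
    the text."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    tp = np.sum(pred & truth)        # changed, correctly detected
    tn = np.sum(~pred & ~truth)      # unchanged, correctly rejected
    fp = np.sum(pred & ~truth)       # unchanged wrongly flagged changed
    fn = np.sum(~pred & truth)       # changed pixels missed
    acc = (tp + tn) / pred.size
    far = fp / max(fp + tn, 1)       # false alarm rate
    miss = fn / max(fn + tp, 1)      # miss rate (false negative rate)
    return acc, far, miss
```

The `max(..., 1)` guards simply avoid division by zero when one class is absent from the ground truth.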

Thus, the verification of the above metrics demonstrates that the combined technique of SRAD filtering and DWT gives a better despeckled image without affecting the original structure. This speckle-filtering model is applied to all dataset images so that each comparative technique works from the same pre-processed input. This method provides an excellent pre-processing tool and enables better change-map analysis.

The effectiveness is validated over the influential parameter (σ) of the LG filter bank. With a step size of Δσ = 0.1π, the experiment is carried out from a starting range of 0.01π to 1.5π with respect to the variation of the kappa coefficient for the (a) Ottawa, (b) Athabasca and (c) Wuhan datasets. It can be observed that the plots for all datasets rise as σ increases at the beginning, then remain relatively steady, but gradually decline as σ grows further. This critical analysis suggests choosing a σ value that is neither too high nor too low in order to realise an acceptable change-map result. The experiment shows that σ ∈ {2.5π, 2.6π, …, 6.0π} covers the maximum kappa coefficient for the three experimental datasets, as depicted in Figure 7.
Figure 8 presents the comparison of various multi-scale algorithms with the proposed method. The multi-scale comparative methods are the detail-preserving scale-driven approach (DP-SDA) (Bovolo & Bruzzone, 2005), the Kennaugh element framework for multi-scale preparation (KEF) (Schmitt et al., 2015), CD using saliency extraction and the shearlet transform (SEST) (Zhang et al., 2018), Gabor-filter-based CD with a thresholding algorithm (GF-KI) (Sumaiya & Kumari, 2017) and Gabor-feature-based CD with two-level clustering (GFTLC) (Li et al., 2015). The multi-scale comparison illustrates the following facts. DP-SDA decomposes the DI and fuses adaptive scales in a scale-driven approach; this method works effectively to identify the detail-preserving scale, but its decision threshold is implemented by manual selection, so it does not provide maximum accuracy of the change-map information. KEF is designed by Kennaugh matrix preparation for CD; the optimised scale is defined by intensity tuning with a coarse-to-fine strategy, though this algorithm gives a change image with blurred boundaries and has a complex derivative structure. SEST uses saliency extraction and the ratio method for DIs; shearlet coefficients then decompose the low- and high-frequency components and a hard threshold is applied. This algorithm provides moderate accuracy with false pixel predictions. The GF-KI method is proposed with single scale and

The experimental results from the multi-scale comparative methods clearly illustrate that the proposed LGCD method gives accurate analysis, is easy to implement, achieves correct class allocation of changed and unchanged pixels, and attains high accuracy with true boundaries. The evaluations of the confusion-matrix parameters tabulated in Tables 7-9 show that the proposed method gives improved accuracy and high values of the kappa coefficient, detection rate, precision, G-measure and F1 score. The performance measures show lower false alarm, miss and error rates with minimum computation time.

Class distribution by constrained k-means clustering algorithm
The correct classification of the CD is carried out by the constrained k-means clustering algorithm. This gives effective grouping of the changed and no-change classes, with the two constraints, must-link and cannot-link, specifying whether two instances should be segregated into different groups. The constrained k-means clustering algorithm overcomes problems present in state-of-the-art methods: existing methods find it difficult to predict the centric k-value, and they do not work as a global cluster across different sizes and densities. The proposed method of LG filter bank design for CD with constrained k-means clustering (LG-CKM) is compared with existing techniques such as k-means (Celik, 2009) in Table 10. The overall performance of the proposed LGCD method highlights the most outstanding change-map creation compared to other multi-scale approaches. This technique improves performance in three ways: pre-processing to reduce speckle content, LG filter bank design to predict the DI, and, in the last stage, class distribution by the CKM algorithm. The designed method has taken

1.179, 1.708 and 0.945 s of computation time for datasets (a), (b) and (c), respectively, which is a faster execution time than the other multi-scale approaches. This overall coverage of quality measures proves that the technique is suitable for disaster and environmental management studies. All tests on the real SAR images were run on an Intel Core CPU at 2.4 GHz with 4 GB RAM on the Windows platform.

Conclusion
This paper introduced a new multi-scale technique for CD based on LG filter bank design with effective class generation by the constrained k-means clustering algorithm. The implication of the proposed technique is to retain the unchanged pixels exactly and to identify the changed pixels efficiently with absolute edge coverage. As mentioned in the literature, the LG filter bank approach is designed to overcome the problems of multi-scale CD with Gabor filters. The proposed LGCD method detects small and linear contour changes effectively.
The LGCD method is implemented with n_s = 4 scales and n_θ = 6 orientations. The maximum magnitude of all orientations is combined to obtain a significant feature vector. Direct subtraction is applied between corresponding scales and the results are concatenated to acquire the DI. The standard log-ratio CD technique weakens the changed-intensity pixels and amplifies the unchanged pixels, which is a major cause of uneven pixel distribution. This problem is overcome by the proposed multi-orientation-feature-based subtraction on the respective scales, concatenated to obtain a scale-based DI; it gives an exact class distribution of low- and high-intensity pixels with significant features.
For testing, three real SAR image datasets are used. In the pre-processing stage, the SRAD filter combined with DWT is used, and its effective speckle reduction is verified using SSIM, ENL and SSI. Then, the proposed technique is examined against other existing methods

and their results are compared using confusion matrix parameters. The class distribution is performed by the constrained k-means clustering algorithm, and its clustering effectiveness is compared with existing methods; the binary classification is verified by the NU and AOM parameters. Across these experimental stages, the proposed LGCD method gives a very fast computation time of 1.179, 1.708 and 0.945 s, so that it can operate in calamity situations for disaster management. Future work will consider filter bank designs for multi-band and polarimetric SAR datasets.

low-energy component compared with the horizontal and vertical sub-band images. The proposed EGF is designed with a new edge-sensitive weighting component, which preserves low-signal components and reduces noise effectively. The new edge-sensitive weighting component Ω efficiently preserves and detects weak information.

Figure 1. Workflow of the proposed LG filter bank feature-based CD method.
measures the effective classification of the DI between change and no-change classes, and the accuracy of prediction, which expresses the correct classification of the result.
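The confusion-matrix measures referred to here, such as overall accuracy (often reported as PCC) and the kappa coefficient, follow standard definitions; the formulation below is assumed rather than taken from this paper.

```python
def pcc(tp, tn, fp, fn):
    """Percentage of correct classification (overall accuracy)."""
    return (tp + tn) / (tp + tn + fp + fn)

def kappa(tp, tn, fp, fn):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = tp + tn + fp + fn
    p_observed = (tp + tn) / n
    # expected chance agreement from the marginal totals
    p_expected = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)
```

A perfect change map gives kappa = 1, while a classifier no better than chance gives kappa near 0, which is why kappa is preferred over raw accuracy when change and no-change classes are imbalanced.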
Log-Gabor-based change detection method

To validate the effectiveness of the proposed method, three real SAR image datasets are used: Ottawa city, Athabasca valley and Wuhan city. The proposed LG-based CD method is implemented with n_s = 4 scales and n_θ = 6 orientations, and each scale of the multi-temporal images is compared with respect to the multiple orientations. The multi-scale, multi-orientation representation provides medium-band resolution containing both low- and high-frequency responses. The features of the multi-temporal images are decomposed at different frequencies to analyse and preserve edges and detail information. In the non-orthogonal basis of the LG-based CD method, lines, bars and edges are carried by the high-frequency components, while detail information about the region of interest is present in the low-frequency components.

Figure 3 is a pictorial representation of the LG filter bank design for different scales and orientations with respect to their centre frequencies. It shows the effective bandwidth changing from high-frequency to low-frequency components in Figure 3(a-d). Spatial localisation is greater in the high-frequency components and becomes narrower towards the low-frequency components in Figure 3(e-h). The broad coverage of spatial localisation captures edges, corners and bars exactly in the high-frequency components, as represented in Figure 3(i). The smallest, narrowest circle of the low-frequency components gives well-hypothesised detail information about the target, as represented in Figure 3(l).

Figures 4-6 show the output of the proposed method for the Ottawa city, Athabasca valley and Wuhan city datasets, respectively. The results show an effective class distribution into a change image and a no-change image, also depicted as a false colour composite with change in red and no-change in green. The main strength of the proposed method is that it captures the changes exactly, identifying small changes accurately while avoiding unnecessary pixel spread around the zero-crossing area. The ground truth images are prepared based on layout references of the input images.
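The false colour composition used in Figures 4-6 (change in red, no-change in green) can be produced directly from a binary change map; this is a minimal sketch of that rendering step, not the authors' code.

```python
import numpy as np

def false_colour_composite(change_map):
    """Render a binary change map (1 = changed, 0 = unchanged) as an RGB image
    with changed pixels in red and unchanged pixels in green."""
    h, w = change_map.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[..., 0] = np.where(change_map == 1, 255, 0)  # red channel: changed class
    rgb[..., 1] = np.where(change_map == 0, 255, 0)  # green channel: unchanged class
    return rgb
```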

Figure 4. Experimental results of the LGCD method for the Ottawa city dataset: (a) input image Y1, (b) input image Y2, (c) ground truth image; proposed LGCD results: (d) change image, (e) no-change image and (f) false colour composite of change (red) and no-change (green) images.

Figure 5. Experimental results of the LGCD method for the Athabasca valley dataset: (a) input image Y1, (b) input image Y2, (c) ground truth image; proposed LGCD results: (d) change image, (e) no-change image and (f) false colour composite of change (red) and no-change (green) images.

Figure 6. Experimental results of the LGCD method for the Wuhan city dataset: (a) input image Y1, (b) input image Y2, (c) ground truth image; proposed LGCD results: (d) change image, (e) no-change image and (f) false colour composite of change (red) and no-change (green) images.

Figure 7. Result of the parameter σ/π on the kappa coefficient for different datasets.
The LG-based CD with k-means and with adaptive k-means clustering, denoted LG-KM and LG-AKM, respectively, together with Gabor-based CD with k-means (G-KM), with adaptive k-means (G-AKM) and with constrained k-means clustering (G-CKM), are verified using the NU and AOM parameters. The performance of the different clustering algorithms is plotted in Figure 9. In this experiment, the proposed method shows better class generation than the other existing techniques, with maximum AOM similarity values of 0.8731, 0.8973 and 0.8773 and lower NU values of 0.0847, 0.0925 and 0.1045 for datasets (a), (b) and (c), respectively, as tabulated in Table. (ii) LG filters have no DC component and thus provide an even coverage of the frequency domain at octave-scale resolution; the transfer functions are designed with minimum overlap to cover a wide range of filter bandwidths. (iii) The LG filter bank is a non-orthogonal basis, so the filters have independent transfer functions whose individual responses can be summed. Hence, the proposed LGCD exploits this strength in a multi-scale structure to cover medium-band resolutions from high- to low-pass filter responses, and allows concentrating on changes from coarse to fine components of the change area. The proposed method is compared with state-of-the-art techniques and proven that
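The AOM similarity values reported above can be sketched as follows, assuming AOM denotes an intersection-over-union style overlap between the detected change map and the ground truth; the paper's exact definition of AOM (and of NU) may differ, so this is an illustrative assumption only.

```python
import numpy as np

def aom(detected, truth):
    """Area Overlap Measure, assumed here as |D ∩ T| / |D ∪ T| between the
    detected binary change map D and the ground truth map T (1 at perfect overlap)."""
    intersection = np.logical_and(detected, truth).sum()
    union = np.logical_or(detected, truth).sum()
    # two empty maps agree perfectly; guard against 0/0
    return intersection / union if union else 1.0
```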

Figure 9. Plot of effective class distribution using various clustering algorithms.

Table 1. Representation with respect to σ/f0.

Table 2. LG filter variables and suggested values.
1. Let {C_1, ..., C_n} be the initial cluster centre points.
2. For each point Y_i in Y, allocate it to the nearest cluster C_i such that violate-constraints(Y_i, C_i, ml_{c=}, cl_{c≠}) is false. If no such cluster exists, fail.
3. Update each centre in {C_1, ..., C_n} as the mean of the points assigned to it, and repeat from step 2 until convergence.

violate-constraints(point Y, cluster C, ml_{c=} ⊆ Y × Y, cl_{c≠} ⊆ Y × Y):
(a) For each pair (Y, Y_=) ∈ ml_{c=}: if Y_= ∉ C, return true.
(b) For each pair (Y, Y_≠) ∈ cl_{c≠}: if Y_≠ ∈ C, return true.
(c) Otherwise, return false.
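The constrained k-means steps above can be sketched as a runnable routine in the style of COP-KMeans, with must-link (ml) and cannot-link (cl) pairs given as index pairs; the helper names and the fixed iteration count are illustrative, not from the paper.

```python
import numpy as np

def violates_constraints(idx, cluster, assign, must_link, cannot_link):
    """True if assigning point idx to `cluster` breaks a must-link or cannot-link pair."""
    for a, b in must_link:
        other = b if a == idx else a if b == idx else None
        if other is not None and assign[other] is not None and assign[other] != cluster:
            return True
    for a, b in cannot_link:
        other = b if a == idx else a if b == idx else None
        if other is not None and assign[other] == cluster:
            return True
    return False

def constrained_kmeans(X, k, must_link=(), cannot_link=(), iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]  # initial centre points
    assign = [None] * len(X)
    for _ in range(iters):
        assign = [None] * len(X)
        for i, x in enumerate(X):
            # try clusters from nearest to farthest, skipping constraint violations
            order = np.argsort([np.linalg.norm(x - c) for c in centres])
            for c in order:
                if not violates_constraints(i, c, assign, must_link, cannot_link):
                    assign[i] = int(c)
                    break
        for c in range(k):  # recompute centres as the mean of assigned points
            pts = X[[i for i, a in enumerate(assign) if a == c]]
            if len(pts):
                centres[c] = pts.mean(axis=0)
    return np.array(assign), centres
```

With no constraints this reduces to ordinary k-means; a cannot-link pair forces its two points into different clusters regardless of distance, which is what enforces the class separation described in the text.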
The binary change map contains zeros and ones, representing the changed class Ω_ωc and the unchanged class Ω_ωu, respectively.

Table 3. Quality measures for speckle noise measurement.

Table 4. Quality measures for the assessment of change detection.

Table 5. Critical performance analysis of clustering algorithms.

Table 6. Evaluation measurements of speckle noise reduction for various datasets.

Table 7. Experimental results for change detection on the Ottawa dataset using confusion matrix parameters.

Table 10. Comparison with clustering techniques.