An investigation of satellite image enhancement techniques

ABSTRACT In satellite images, enhancement is an active research topic in image processing. The aim of enhancement is to process an image so that the result is more suitable than the original image for a specific remote sensing application. Satellite image enhancement techniques provide many choices for improving the visual quality of remotely sensed images. In this review, image fusion plays an important role, since it effectively combines auxiliary image content to enhance the information contained in the individual datasets. This article provides an overview of existing enhancement techniques. Many techniques have been proposed for enhancing digital images, and these may also be applied to satellite images. A survey of various satellite image enhancement techniques is performed, which concludes that fusion-based enhancement performs better than non-fusion-based enhancement techniques.


Introduction
In satellite imaging, hyperspectral image enhancement is an active research topic in remotely sensed image processing. Hyperspectral image classification has become a challenging problem due to mixed pixels; it can be improved by enhancing the hyperspectral image and unmixing the classes. Developing and implementing enhancement techniques requires adequate knowledge of the existing problems and of the acquired hyperspectral image. Enhancement of hyperspectral images plays a vital role in separating pure pixels from mixed ones. Despite many significant advances in the field of enhancement, unmixing classes from a hyperspectral image remains a challenging task due to its high dimensionality, low spatial resolution and mixed pixels. This article reviews the most relevant existing enhancement methods.

Review of hyperspectral image enhancement methods
In recent decades, many researchers in the field of hyperspectral imaging have developed significant approaches for the enhancement of images. Hyperspectral imagery is typically collected and represented as a data cube, with spatial information in the X-Y plane and spectral information along the Z direction. Most sensors operate either in panchromatic mode or in hyperspectral mode. A panchromatic image consists of only one band. It is usually displayed as a gray scale image, i.e. the displayed brightness of a particular pixel is proportional to the intensity of solar radiation reflected by the targets in the pixel. Thus, a panchromatic image can be interpreted as a black-and-white aerial photograph of the scene.
A panchromatic-mode sensor gives a high spatial resolution image, but it lacks spectral resolution and does not contain any color information. Therefore, fusion-based enhancement techniques are used to obtain both spatial and spectral information from the image. Since the panchromatic image does not cover the same spectral range as the hyperspectral image, details extracted from the panchromatic image can introduce spectral distortions with unclear foreground and background information. Therefore, a model is required to add the extracted details in such a way that the spatial quality of the hyperspectral image is improved while its spectral quality remains unchanged and edge information stays sharp. The algorithmic approach to obtaining high-resolution images is to fuse bands with rich entropy so as to increase the spatial information. Many sensing platforms are equipped to capture a high spectral, low spatial resolution hyperspectral image as well as a low spectral, high spatial resolution auxiliary image (i.e. a panchromatic image).
Generally, enhancement algorithms fall into two categories, namely non-fusion-based and fusion-based. The various enhancement approaches are shown in Figure 1.

Non-fusion-based enhancement methods
Non-fusion-based enhancement methods focus on the spatial resolution of hyperspectral imaging systems.

Spectral mixture analysis approach
Spectral Mixture Analysis (SMA) is a soft-classification approach which models the total reflectance in a pixel as the linear combination of reflectances from each class, using the Linear Mixing Model (LMM) to predict the proportion of each class within each pixel. A variety of approaches based on SMA using the LMM have been proposed to address the problem of spatial resolution in hyperspectral images. SMA-based sub-pixel processing has been performed in which the spatial dependencies of materials in mixed pixels are not considered, thus acting as an initial stage for the spatial resolution enhancement of hyperspectral images (Ruescas et al., 2010). Brown, Gunn, and Lewis (1999) have proposed a linear SVM approach which estimates land cover components by sub-pixel processing. This model automatically selects the relevant pure pixels and determines the number of classes in the region of interest, providing an accurate representation of land covers. Atkinson (2005) proposed an algorithm for the spatial resolution enhancement of hyperspectral images using sub-pixel target mapping. High-resolution pixels are placed based on spatial correlation using a distance-weighted function.
In the algorithm proposed by Villa, Chanussot, Benediktsson, and Jutten (2011), spectral unmixing is performed to determine the proportion of endmembers in each pixel. Sub-pixels in the model are located by spatial resolution mapping performed with simulated annealing. However, the limitation is the high computational load caused by the large number of bands present in hyperspectral images. Even though SMA methods provide the abundances of the endmembers within a pixel, which is very useful for determining the presence of objects in remote sensing applications, their limitation is that they do not exploit spatial and spectral information to their full capacity.
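The LMM underlying SMA can be sketched in a few lines. In the sketch below (the endmember matrix, band count and class count are illustrative, not taken from any cited dataset), per-pixel abundances are estimated with non-negative least squares, one common way to impose the physical non-negativity constraint, followed by a sum-to-one rescaling:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember signatures: 5 spectral bands x 3 classes.
E = np.array([
    [0.10, 0.80, 0.30],
    [0.20, 0.70, 0.40],
    [0.60, 0.30, 0.50],
    [0.70, 0.20, 0.35],
    [0.90, 0.10, 0.25],
])

def unmix(pixel, endmembers):
    """Estimate per-class abundances of one mixed pixel under the LMM.

    Solves min ||E a - pixel|| subject to a >= 0, then rescales so the
    abundances sum to one (the usual sum-to-one constraint).
    """
    a, _ = nnls(endmembers, pixel)
    s = a.sum()
    return a / s if s > 0 else a

# A synthetic mixed pixel: 20% class 1, 50% class 2, 30% class 3.
true_abund = np.array([0.2, 0.5, 0.3])
pixel = E @ true_abund
est = unmix(pixel, E)
```

Because the synthetic pixel is an exact non-negative mixture, the solver recovers the true proportions; real pixels carry noise, and the residual of the fit then measures how well the LMM explains them.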

Learning-based approach
The second class of non-fusion-based methods is the learning-based methods, in which a set of training images is used for learning the super-resolution attributes. Based on the learning method, it is categorized into the Hopfield Neural Network (HNN) and the Back Propagation Neural Network (BPNN). A method by Gu, Zhang, and Zhang (2008) obtains the abundance map using linear SMA based on the spatial correlation of land covers. In this method, low-resolution training images are used to determine the parameters of the super-resolution mapping. Han et al. (2019) have applied a similar approach which uses low-resolution images and their downsampled versions to train a BPNN. A mean filter is used for the downsampling, and super resolution is performed by considering the spatial correlation of the different materials present in hyperspectral images. Hence, the hyperspectral images themselves are treated as training data to achieve better coherence between the enhanced results and the original hyperspectral images. Zhang and Mishra (2014) have implemented a support vector regression approach which does not use an explicit formula to describe prior information about the nonlinear relationships between the coarse fractional pixels and labelled sub-pixels from the best-matching high-resolution training data. Due to the above-mentioned limitations, learning-based methods are hardly used in practice.

Matting-based approach
The third class of non-fusion-based approaches is the matting-based approach, which extracts the foreground object from an image. Levin, Rav-Acha, and Lischinski (2008) have proposed a new alpha matting model in which the colors of the foreground and the background are assumed to vary linearly inside a small patch. The result is imperfect when user input is insufficient. Wang and Suter (2007) have introduced a new model in which the data terms are based on color models of the foreground and the background regions. However, the result is worse in that some dark-green areas in the image background become semi-transparent layers, i.e. dark green is treated as a mix of dark foreground with green background.
Chen, Zou, Zhiying Zhou, Zhao, and Tan (2013) proposed a new image matting method with local and nonlocal smooth priors. In this method, editing propagation essentially introduces a nonlocal smooth prior on the alpha matte in which the manifold is preserved. This prior and the locally smooth matting Laplacian complement each other, and hence for natural matting they are combined with a data term from color sampling. When the color distributions of the foreground and the background are similar, it is not easy to set a common window size for all test data, which is a limitation. With the help of the nonlocal smoothness constraint, this method generalizes well with a small fixed window size.
A matting technique called KNN matting has been proposed for ordinary images by Chen et al. (2013), with a closed-form solution that can harness the preconditioned conjugate gradient method and runs in a few seconds after accepting very sparse user mark-ups. Xu, Price, Cohen, and Huang (2017) have proposed a deep image matting model to obtain high-level context and use high-level features. In this method, a neural network captures higher-order features, resulting in higher computational complexity.
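All of the matting approaches above build on the compositing equation I = αF + (1 − α)B. As a minimal sketch, assume the foreground and background colors F and B are known exactly; this is an idealization (real matting methods must estimate F, B and α jointly from sparse scribbles), but it shows why the alpha matte is recoverable wherever F ≠ B:

```python
import numpy as np

def composite(alpha, fg, bg):
    """Matting equation: I = alpha*F + (1 - alpha)*B, per pixel."""
    return alpha * fg + (1.0 - alpha) * bg

def recover_alpha(img, fg, bg, eps=1e-12):
    """Invert the matting equation when F and B are known (idealized case).

    Real matting (e.g. closed-form or KNN matting) estimates F, B and
    alpha jointly; this sketch only shows that the problem is well posed
    when foreground and background colors differ.
    """
    return np.clip((img - bg) / (fg - bg + eps), 0.0, 1.0)

alpha = np.array([[0.0, 0.25], [0.75, 1.0]])   # ground-truth matte
fg = np.full_like(alpha, 0.9)                  # bright foreground color
bg = np.full_like(alpha, 0.1)                  # dark background color
img = composite(alpha, fg, bg)
est = recover_alpha(img, fg, bg)
```

The dark-green failure case described for Wang and Suter (2007) corresponds exactly to the region where fg − bg becomes small and the inversion is ill-conditioned.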
Even though the non-fusion-based methods provide a good solution to hyperspectral image enhancement by gaining information while extracting the foreground from the background, high spatial resolution is not obtained. In order to obtain high spatial information, fusion-based enhancement is needed.

Fusion-based enhancement methods
In fusion-based enhancement methods, a high spatial resolution scene is generated by fusing a low spatial resolution hyperspectral image with auxiliary information. The fusion-based enhancement methods are classified into the component substitution approach, the numerical and statistical-based approach, the multi-resolution approach and the optimization-based approach.

Component substitution (CS) approach
The most popular methods are the Intensity-Hue-Saturation (IHS) color transformation and Principal Component Substitution. A very popular technique is IHS (Malpica, 2007). Color enhancement, feature enhancement and improvement of spatial resolution are standard procedures in image analysis (Pohl & Van Genderen, 1998). This technique converts a color image from RGB space to the IHS color space. Here, the intensity band is replaced by the auxiliary information. The method is very efficient to implement, but it produces color distortion because the auxiliary information is not created from the same wavelengths of light as the RGB image. Therefore, this method has been modified into the Fast Intensity-Hue-Saturation (FIHS) method (Tu, Huang, Hung, & Chang, 2004).
The FIHS method extends the IHS method from three bands to four by incorporating an infrared component, because the auxiliary information is taken from infrared light in addition to visible wavelengths. This modification allows the calculated intensity to better match the auxiliary information, thus causing less color distortion in the fused image. The trade-off between spatial improvement and spectral quality loss received a lot of attention and led to the introduction of trade-off parameters (Tu et al., 2004). These parameters allow fine tuning by the user to obtain the desired result.
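In its additive formulation, the FIHS idea above can be sketched in a few lines. The toy band sizes are illustrative, and a real pipeline would first resample the multispectral bands to the panchromatic grid:

```python
import numpy as np

def fihs_fuse(r, g, b, pan):
    """Fast IHS fusion in its additive form (after Tu et al., 2004).

    The intensity I = (R + G + B) / 3 is swapped for the panchromatic
    band by adding the detail (pan - I) to every multispectral band,
    which avoids an explicit forward/inverse IHS transform.
    """
    intensity = (r + g + b) / 3.0
    detail = pan - intensity
    return r + detail, g + detail, b + detail

rng = np.random.default_rng(0)
r, g, b = rng.random((3, 8, 8))   # toy multispectral bands
pan = rng.random((8, 8))          # toy high-resolution intensity
fused_r, fused_g, fused_b = fihs_fuse(r, g, b, pan)
```

By construction the fused intensity equals the panchromatic band, while band differences (and hence hue) are preserved; the color distortion discussed above arises precisely when the panchromatic band is not well modelled by this simple intensity.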
To overcome the spectral quality problems, researchers have proposed the Adaptive IHS (AIHS) method (Rahmani, Strait, Merkurjev, Moeller, & Wittman, 2010), which adaptively adjusts the coefficients of the linear combination of multispectral bands. The weights induced by the edge-injection process in the spatial detail are too large, which results in color changes and thus causes spectral distortion. In addition, the edge-induced weights reduce the sharpness of the fused image. Another improvement, the Improved AIHS (Leung, Liu, & Zhang, 2014), is designed with a more adaptive weighting matrix in the spatial-detail injection step. It performs better than AIHS, but edges of highly reflective areas are more prone to distortion, and hence the overall spectral distortion is higher.
Another technique, combining the Generalized IHS, the Brovey Transform (BT) and Smoothing Filter-based Intensity Modulation (SFIM), was proposed by Dehnavi and Mohammadzadeh (2013) using two adjustable parameters. This modulation approach is frequently employed because the spatial and spectral information can be controlled; it preserves more spectral information but suffers more spatial information loss. Hubert et al. (2005) have proposed a novel fusion technique based on Principal Component Analysis (PCA). This approach partitions the dataset into sub-groups of bands, so the computational complexity is reduced. PCA is applied to each sub-group based on its dominant classes. The spectral signature of a class is used as the transfer function of a matched filter applied to the corresponding bands of the dataset. The principal component of each sub-group is used as a component of the final RGB image. Since the energy is not uniformly distributed across the groups, color distortion arises, which reduces the visual quality. Qu et al. (2018) proposed a structure tensor-based algorithm for hyperspectral and panchromatic image fusion. In this algorithm, an image enhancement approach is utilized to sharpen the spatial information of the panchromatic image, and the spatial details of the hyperspectral image are obtained using an adaptive weight method. The structure tensor is introduced to extract spatial details of the enhanced panchromatic image. In order to avoid artifacts at the boundaries, a guided filter is applied to the integrated spatial information image, and an injection matrix is constructed to reduce spectral and spatial distortion. This algorithm provides more spatial detail while preserving the spectral information. Xie et al. (2019) proposed an enhancement algorithm using a multispectral and hyperspectral fusion model based on observation models. In this method, all parameters can be learned from the training data, and the spatial and spectral response operators are discovered. The algorithm provides color and brightness much closer to the low-resolution hyperspectral image. Jayanth, Kumar, and Koliwad (2018) proposed an enhancement algorithm using regionally weighted principal component analysis and a wavelet algorithm; spectral information is preserved with improved spatial quality and good clarity. Parveen, Kulkarni, and Mytri (2018) proposed an image enhancement algorithm for low-resolution satellite images which improves interpretation and makes the image visually clearer.
Component substitution-based approaches focus on constructing an ideal image intensity and a high-frequency injection model to preserve spectral information, but they incur more spatial information loss, sharpness reduction and increased spectral distortion. Some algorithms can be applied only to a specific sensor, although a few commercially available fusion software tools have proven suitable for all available optical panchromatic and multispectral images. In addition, these tools have greater potential to improve the spectral quality, although they only show visually prominent results.

Multi-resolution approach
The Multi-Resolution Approach (MRA) merges the spatial information from a high-resolution image with the radiometric information from a low-resolution image; the process sharpens the low-resolution image. In recent years, powerful MRA techniques such as wavelets and curvelets have become popular because of the increase in computational power and the availability of algorithms in commercial remote sensing software. Fusion based on the multiresolution contourlet transform has been proposed by Miao and Wang (2006). In this approach, directional image pyramids are first obtained up to a certain level using the contourlet decomposition. The low-frequency coefficients at the top of the image pyramids are fused using an average-based rule. At the remaining levels, the fusion rule selects the coefficients from the source image that have higher energy in the local region.
MRA-based approaches decompose images into a number of channels depending on the local frequency content (Nunez et al., 1999). A pyramid is used to represent the multi-scale models of the original image. With increasing level, the original image is approximated at coarser spatial resolution. Between the individual pyramid levels, the transform is performed using wavelet and curvelet transforms. The wavelet transform approach is based on substitution and addition. In the substitution approach, selected multispectral wavelet planes are substituted by the planes of the corresponding panchromatic images. In the addition approach, the decomposed panchromatic planes are added to the multispectral bands.
Garzelli, Nencini, Alparone, and Baronti (2005) have proposed a fusion method based on multiresolution analysis which describes how high-pass information is modelled from the panchromatic image. The basic wavelet transform substitution sub-bands are low-low, low-high, high-low and high-high; these decompositions form the pyramid at several levels, and the fused image is obtained by inverse transform. In practice, the wavelet transform and scaling functions are not explicitly derived (Amolins, Zhang, & Dare, 2007); they are described by coefficients which are fused by different fusion rules to produce the resultant image. Better results are obtained when the fusion process is context-driven (Aiazzi et al., 2002). The goal of this process is to make the fused bands as similar as possible to what the narrow-band multispectral sensor would image at the same resolution as the broadband sensor capturing the single panchromatic band. In order to achieve gain equalization, the higher-frequency coefficients taken from the high-resolution image are selected based on statistical congruence and weighted by a space-varying factor. Ringing artifacts are largely moderated. Here, spectral signatures of small size may be restored (Aiazzi, Alparone, Barducci, Baronti, & Pippi, 2001), even though a heavily smeared image is obtained.
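The substitution approach can be sketched with a hand-rolled one-level Haar transform: keep the multispectral approximation (spectral content) and substitute the panchromatic detail planes (spatial content). This is a toy sketch; production systems use deeper decompositions, context-driven fusion rules and library transforms (e.g. PyWavelets):

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform (orthonormal) of an even-sized image."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2,   # LL: approximation (low-low)
            (a - b + c - d) / 2,   # LH: detail (low-high)
            (a + b - c - d) / 2,   # HL: detail (high-low)
            (a - b - c + d) / 2)   # HH: detail (high-high)

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2; reconstructs the image exactly."""
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2], x[0::2, 1::2] = a, b
    x[1::2, 0::2], x[1::2, 1::2] = c, d
    return x

def substitution_fuse(ms_band, pan):
    """Keep the MS approximation plane, substitute the pan detail planes,
    then inverse transform to obtain the fused band."""
    ms_ll, _, _, _ = haar2(ms_band)
    _, p_lh, p_hl, p_hh = haar2(pan)
    return ihaar2(ms_ll, p_lh, p_hl, p_hh)

x = np.arange(16, dtype=float).reshape(4, 4)
recon = ihaar2(*haar2(x))   # perfect reconstruction check
```

Because the transform is invertible, fusing a band with itself returns the band unchanged; spectral distortion appears only to the extent that the substituted pan details differ from the band's own details.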
Pradhan, King, Younan, and Holcomb (2006) have proposed a multiresolution analysis extended to discrete functions. It conserves space and determines the best possible number of decomposition levels required for merging images with a particular resolution ratio. If the resolution ratio is high, more decomposition levels are needed to produce better results, and the computational complexity increases accordingly. More recently, the Contourlet Transform (CT) has been proposed by Metwalli et al. (2014). This transform captures and links discontinuity points into linear structures, and it can have a different number of directions at each scale of the multiresolution decomposition. The non-subsampled CT works on a non-subsampled pyramid and produces better results. This technique can also be found as a hybrid component together with PCA and IHS (Xiao-Hui, 2008).
The Proportional Additive Wavelet and Laplacian-based context-based decision methods were considered good image fusion approaches during the Data Fusion contest, performing better than CS-based methods (Alparone, Aiazzi, Baronti, Garzelli, & Nencini, 2006). However, in MRA-based fusion, spatial distortions may occur because of aliasing effects and the blurring of textures, and the spatial enhancement is not satisfactory compared with CS-based methods (Aiazzi, Alparone, Baronti, Garzelli, & Selva, 2006). MRA (Mallat, 1989) provides effective tools, such as wavelets and pyramids, to carry out image merging tasks. However, in the case of high-pass detail injection, spatial distortions, aliasing effects, and shifts or blurring of contours and textures may occur (Yocky, 1996). These disadvantages, which may be as annoying as spectral distortions, are emphasized by mis-registration between the multispectral and panchromatic data, especially if the MRA underlying the detail injection is not shift-invariant (Aiazzi et al., 2002; González-Audícana, Saleta, Catalán, & García, 2004). Wenyan, Zhenhong, Yu, Yang, and Kasabov (2018) proposed an enhancement algorithm based on equal-weight image fusion which improves the accuracy of change detection but with less visual quality. To avoid these problems, the numerical and statistical-based approach has been proposed, which gives more efficient outputs.

Numerical and statistical-based approach
The simplest and earliest methods used in remote sensing are mathematical combinations of different images. Addition, subtraction, multiplication and division approaches play an important role in earth observation. One such approach is called subtractive resolution merge; the influence of user-defined versus predefined calculated band weights in subtractive resolution merge has been analyzed by Ashraf, Brabyn, and Hicks (2013), with no difference found in the results. A classical technique is the BT, based on spectral modelling, which normalizes the input bands through subtraction and addition. A major drawback is the color distortion induced by the BT. A modification of the BT is colour-normalized spectral sharpening (Vrabel, 2000). This method groups the input bands into spectral segments and is, therefore, an adaptive approach which improves the spectral quality of the fused images.
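The BT normalization described above can be sketched directly; the band data here are synthetic, and a small epsilon (an implementation detail, not part of the classical formulation) guards against division by zero:

```python
import numpy as np

def brovey_fuse(r, g, b, pan, eps=1e-12):
    """Brovey transform: modulate each band by the ratio of the
    panchromatic band to the mean multispectral intensity.

    The normalization injects spatial detail but is known to distort
    colour, which motivated colour-normalized spectral sharpening.
    """
    intensity = (r + g + b) / 3.0 + eps
    ratio = pan / intensity
    return r * ratio, g * ratio, b * ratio

rng = np.random.default_rng(1)
r, g, b = rng.uniform(0.1, 1.0, (3, 8, 8))   # synthetic MS bands
pan = rng.uniform(0.1, 1.0, (8, 8))          # synthetic pan band
fused_r, fused_g, fused_b = brovey_fuse(r, g, b, pan)
```

The multiplicative ratio preserves the proportions between bands at each pixel while forcing the fused mean intensity onto the panchromatic band; the colour distortion arises when those original band proportions do not match the true high-resolution scene.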
Another modification is the modified BT, which is based on local modulation of the multispectral image by the ratio of the new and initial intensity components (Chibani, 2007). A variational model was formulated by Ballester, Caselles, Igual, Verdera, and Rougé (2006) which describes the relationship between the lower-resolution multispectral image and the high-resolution panchromatic image using subsampling and filtering. It assumes that the multispectral image, with its geometry, is contained in the panchromatic image. This method was extended by Duran, Coll, and Sbert (2013) to consider local relationships of neighbouring pixels, which has a denoising effect.
For commercial purposes, one of the statistical algorithms employed is Fuze Go, a pan-sharpening algorithm. It uses a least squares fit between the gray values of the input bands, and the output values are estimated with statistical methods (Xu et al., 2014). Its strength is that the fully automated process allows even inexperienced users to achieve good results, and the input images are treated individually to find the best match (Zhang & Mishra, 2014). Devika and Parthasarathy (2018) proposed a fuzzy statistics-based technique for enhancing satellite images, which results in efficient and accurate fuzzy clustering. Therefore, to enhance the contrast, techniques which jointly perform the combination operation are required.

Optimization-based approach
Optimization is the maximization or minimization of a real function by choosing inputs within an allowable set to determine the resultant of the function. In order to solve optimization-based problems, algorithms or iterative methods that converge to a finite solution are used. An optimization-based approach was used for the fusion of multi-exposure optical images by Raman and Chaudhuri (2007), where a set of images is fused to enhance the dynamic range of the output image. However, because smoothness is incorporated into the cost function, the result is a smooth solution. A fast approach for the fusion of hyperspectral images through redundancy elimination was proposed by Kotwal and Chaudhuri (2010). In this method, a specific set of mutually correlated image bands is selected, and most of the information in the data is retained. As only a fraction of the entire data is fused, this method is computationally much faster. A new approach for visualization-based fusion of hyperspectral image bands was proposed by Kotwal and Chaudhuri (2012); here, the geological input data have very small intrinsic contrast and are difficult to visualize. Xu, Zhang, Li, and Ding (2015) have proposed a Gram-Schmidt approach which generates a simulated lower-resolution pan image through a weighted sum of the green, blue, red and near-infrared multispectral bands. In the spatial and spectral evaluations, the results are blurred in all the band combinations, and noticeable color distortion and strange artifacts are introduced. In addition, this transformation is computationally intensive and hence takes more time to generate output images. Wang et al. (2013) proposed a projected gradient approach based on unmixing-based non-negative matrix factorization. This method produces a fused image with high spectral and spatial resolution; it improves the spatial resolution without losing much color information. Rajathurai A and Chellakon H S (2018) proposed a KNN matting model with a closed-form solution that improves on existing approaches by producing efficient multilayer visualization extraction results with reduced computational complexity.
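The unmixing-based factorization behind such methods can be sketched with non-negative matrix factorization: pixels (columns of V) are approximated as non-negative combinations (H) of endmember spectra (W). The classic Lee-Seung multiplicative updates shown here are a simpler stand-in for the projected-gradient optimizer actually used by Wang et al. (2013), and all data are synthetic:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0, eps=1e-9):
    """Factorize V (bands x pixels) as W (bands x k endmembers) times
    H (k x pixels abundances) with Lee-Seung multiplicative updates.

    The updates keep W and H non-negative by construction; eps avoids
    division by zero.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(2)
V = rng.random((6, 3)) @ rng.random((3, 40))   # synthetic rank-3 data
W, H = nmf(V, k=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

In a fusion setting, the endmembers W would be estimated from the hyperspectral image and the abundances H from the high-resolution auxiliary image, so that W @ H yields a high spatial resolution reconstruction.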
Ben Abbes, Bounouh, Farah, de Jong, and Martínez (2018) compared three satellite image time-series decomposition methods for vegetation change detection. The results of the comparative analysis show the better performance of image fusion techniques when compared to non-fusion-based techniques. Kaplan (2018) proposed a weighted intensity-hue-saturation transform algorithm for image enhancement. In this technique, the intensity component is obtained by a weighting function which preserves more information from the bands of the input image, so that the visual and quantitative comparisons give superior results. Hashimoto et al. (2011) proposed a multispectral image enhancement algorithm which gives an effective visualization. In this method, the user can specify the spectral band for extracting the spectral feature and the color for visualizing it independently, so that the desired feature is enhanced in the spectral domain in the specified color. Mozgovoy, Hnatushenko, and Vasyliev (2018) proposed an algorithm for the automated recognition of vegetation, water bodies and territory in satellite images. This algorithm provides a significant increase in efficiency and reliability when updating maps of large cities, which reduces the financial cost and minimises human error. Guo, Ma, Bao, and Wang (2018) proposed an algorithm for fusing panchromatic and short-wave infrared bands based on a convolutional neural network. This method effectively enhances the spatial information by separating the basic architecture into three layers. Yadav and Agrawal (2018) proposed an enhancement algorithm for road network identification and extraction in satellite imagery using Otsu's method. This method detects and extracts the road network from high-resolution satellite images and enhances the contrast of the image. Md Noor, Ren, Marshall, and Michael (2017) proposed an enhancement algorithm for corneal epithelium injuries in hyperspectral images. This algorithm improves the interpretability of the data into clinically relevant information to facilitate diagnostics.
Gunlu (2014) proposed an enhancement algorithm for the prediction of stand parameters using a pansharpened IKONOS satellite image. Multiple stepwise regression analysis is used to estimate the stand parameters. It gives highly accurate measurements, but at higher cost and time. Qifal Wang, Jia, Qin, Yang, and Hu (2011) proposed an enhancement technique for multispectral and panchromatic image fusion. This method obtains a high spatial resolution multispectral image with high similarity to the reference true high-resolution multispectral image. Gewali et al. (2018) proposed an algorithm for hyperspectral image analysis based on machine learning. This analysis algorithm extracts the desired information from the intrinsic spectral variation while ignoring the extrinsic variation and the intrinsic variation caused by unrelated factors. Li et al. (2019) proposed an enhancement algorithm for large-scale degraded underwater images. This algorithm is highly desirable in that effective non-reference underwater image quality evaluation metrics are calculated. Maselli, Chiesi, and Pieri (2016) proposed a novel approach for the enhancement of spatial properties which produces NDVI image series. A statistical method is applied to improve the spatial features of the abundance images based on the endmembers. Tiede, Baraldi, Sudmanns, Belgiu, and Lang (2017) proposed an architecture and prototypical implementation of a semantic querying system for big earth observation image bases, which enhances the vision of the photographic images. Gavankar and Ghosh (2018) proposed automatic building footprint extraction from high-resolution satellite images using mathematical morphology. In this approach, buildings of different sizes and shapes can be detected, and falsely detected buildings are eliminated. Lal and Anouncia (2016) proposed an enhanced dictionary-based sparse representation fusion for multitemporal remote sensing images. A locally adaptive dictionary is created such that the dictionary contains patches extracted from the images. This technique preserves the spectral information, color and visual quality of the fused product.

Conclusion
This article has reviewed satellite image enhancement methods. Non-fusion-based enhancement methods provide low spatial information with higher computational complexity. In order to improve the spatial information and reduce the computational complexity, fusion-based methods are preferred. However, fusion-based enhancement can still suffer from spectral distortion, limited spatial resolution, and reduced contrast and sharpness. Therefore, a fusion technique is needed that enhances satellite images for better visualization and classification accuracy.

Disclosure statement
No potential conflict of interest was reported by the authors.

ORCID
R. Ablin http://orcid.org/0000-0001-8116-9868