Hyperspectral Image Classification Based on the Gabor Feature with Correlation Information

Abstract The Gabor filter is widely used to extract spatial texture features from hyperspectral images (HSI) for HSI classification; however, a single Gabor filter cannot capture the complete image features. In this paper, we propose an HSI classification method that combines the Gabor filter (GF) and the domain-transform normalized convolution (DTNC) filter. First, we use the Gabor filter to extract spatial texture features from the first two principal components of the HSI after PCA dimensionality reduction. Second, we use the DTNC filter to extract spatial correlation features from the HSI in all bands. Finally, the Large Margin Distribution Machine (LDM) uses the linear fusion of the two kinds of spatial features to classify the HSI. The experimental results show that the classification accuracy on the Indian Pines, Pavia University, and Kennedy Space Center data sets is 96.64, 98.23, and 98.95% with only 4, 3, and 6% training samples, respectively; these accuracies are 2-20% higher than those of the other tested methods. Compared with the hyperspectral-information-based SVM, EPF, IFRF, PCA-EPFs, LDM-FL, and GFDN methods, the proposed method, GFDTNCLDM, significantly improves the accuracy of HSI classification.


Introduction
Hyperspectral remote sensing images are characterized by high spectral resolution, low spatial resolution, highly correlated spectral information, and high redundancy (Luo et al. 2022; Sellami and Tabbone 2022). The combination of spectral and spatial features to improve the classification accuracy of HSI has become a hot research topic, and the core problems are texture feature extraction and the effective combination of spectral information with spatial features. Currently, the spatial feature extraction methods used in HSI classification include morphological filtering (Guo et al. 2022; Tan et al. 2021), Markov random fields (Cao et al. 2022; Fatemighomi et al. 2022), and image segmentation (Cao et al. 2020; Sun et al. 2021). Many scholars use various filters to obtain spatial texture features of HSI for pixel-wise classification of hyperspectral remote sensing images, such as the Bilateral Filter (BF) (Shen and Bai 2006; Kotwal and Chaudhuri 2010), the Gabor Filter (GF) (Shen and Bai 2006), and the Guided Filter (GDF) (He et al. 2012).
BF is a non-linear filter that can preserve edges while smoothing noise in HSIs, and it is widely used to extract spatial texture features of HSI for pixel-wise classification (Liao et al. 2019b). However, most scholars only use it to extract spatial texture features to assist the classifier, which leads to limited classification results. Xia et al. (2016) randomly select several subsets from the original feature space to obtain the independent components of the spectrum using the ICA method, then use the effective EPF method to generate spatial features, and finally use a random forest or rotation forest classifier to classify the spectral-spatial features. Kang et al. (2014b) apply a spectral-spatial classification method based on edge-preserving filtering (EPF) to classify HSI using SVM, where the resulting classification map can be expressed as multiple probability maps. The classification results are obtained by applying BF or GDF to each probability map, with the first PCA component or the first three PCA components of the HSI used as the guide image. To obtain a better classification effect, Kang et al. (2017) improve the EPF method and propose a PCA-based EPF (PCA-EPFs) HSI classification method, which stacks the constructed spatial information with the spatial features extracted by edge-preserving filters to fuse them into a new feature; through PCA dimensionality reduction, it achieves HSI classification with SVM. Hu et al. (2022) use PCA to reduce the dimensionality of HSI, use multiple principal components as the spatial- and range-domain information of BF, and use an extreme learning machine for classification.
GDF can effectively preserve edges with non-iterative computation, and it is often used to extract spatial texture features to assist the classifier. However, most scholars use it to extract spatial texture features without considering the fusion of other features, leading to poor classification accuracy. Shambulinga and Sadashivappa (2019) propose an HSI SVM classification method based on GDF and PCA; they use PCA to extract and reduce spectral features in hyperspectral data and classify them using SVM. Guo et al. (2018a, 2018b) combine the K-Means algorithm with a guided filter, using K-Means to extract spatial information and GDF to optimize the HSI classification results. Liu et al. (2022) use GDF for feature filtering, which eliminates redundant features, and propose a GDF and enhancement-strategy model to classify HSI. Some scholars use recursive filtering (Vaddi and Manoharan 2020) to extract spatial texture features and then directly classify them; however, a single spatial feature cannot improve classification accuracy significantly. Kang et al. (2014a) divide hyperspectral data into subsets and fuse them, then use recursive filtering to obtain spatial information and submit it to SVM for classification, proposing the IFRF method. Zhan et al. (2016) implement an HSI classification method by combining recursive filters and LDM.
The frequency and direction representation of the GF is close to that of the human visual system, and it can extract spatial local frequency features. It is an effective texture detection filter, and many scholars use GF to obtain texture features to assist HSI classification. Bau et al. (2010) design a 3D GF bank to capture energy in spectral-spatial data at different orientations and scales. Shen and Jia (2011) design a set of GFs with different frequencies and directions to extract the variance of the HSI spatial-spectral signal, and perform feature selection and fusion to reduce the redundancy between Gabor features. Wang et al. (2014) use GF to obtain better spatial features, combine it with an active learning method to simplify the spatial neighborhood information of labeled training samples, and propose a spatial-spectral HSI classification algorithm and a semi-supervised HSI classification algorithm based on label propagation. Jia et al. (2016), based on multi-task joint sparse representation, use Gabor cubes for HSI classification and the Fisher discriminant criterion of GF to extract the most representative HSI cubes for each class. Rajadell et al. (2013) use the HSI texture features obtained by the GF to reduce the number of spectral bands required for dimensionality reduction and propose a spectral-spatial pixel representation method. Li and Du (2014) couple nearest-subspace classification with distance-weighted Tikhonov regularization, and use the spatial features extracted by GF in a nearest regularized subspace classifier to implement an HSI classification method. He et al. (2017) study discriminative low-rank GFs for spatial-spectral HSI classification, decomposing a standard three-dimensional spectral-spatial GF into eight sub-filters corresponding to different combinations of low-pass and band-pass rank-one filters, so that each sub-filter extracts the appropriate features. Ye et al. (2016) extract features from HSI using GF embedded in principal component analysis, reduce the dimensionality of the spatial features with local Fisher discriminant analysis and locality-preserving non-negative matrix factorization, and propose two HSI classification algorithms. Imani and Ghassemian (2016) extract spatial texture features, shape features, and pixel-neighborhood information using the gray-level co-occurrence matrix, GF, and morphological filters, and find the optimal classification algorithm combining different features. Jia et al. (2018) propose an HSI classification method based on 3D Gabor wavelet phase coding and a Hamming-distance matching framework, using directional Gabor phase features and a quadrant bit-coding scheme. Kang et al. (2018) extract spectral and Gabor features from the first three PCA principal components of HSI by GF, and realize an HSI classification method that fuses Gabor features and deep network learning methods. Ghassemi et al. (2021) use a 3D GF to perform feature extraction on the input data to obtain spatial features, including textures and edges, and use an SVD-QR-optimized CNN for classification. Bhatti et al. (2022) use a two-dimensional GF to extract spatial features from dimensionality-reduced hyperspectral data, then use a CNN to generate spectral features, and finally use a dual-optimization classifier to classify the extracted features. Pan et al. (2022) generate a Gabor feature data cube with joint spatial-spectral features by three-dimensional Gabor filtering of the HSI; the Gabor feature data cube is then input into co-selection self-training to obtain labeled samples, and a co-selection strategy method is proposed. Xiao et al. (2022) use the least-squares method to obtain a set of pixel probability maps from the input data, then filter these probability maps with GF to extract spatial features and input them to a standard broad learning system for classification. Huang et al. (2022) use a Gabor ensemble filter that filters each input channel with some fixed Gabor filters and learnable filters simultaneously, to extract deep features for HSI classification with a CNN. In the past, many scholars used Gabor filters to extract a large number of spatial texture features by adjusting frequencies and directions, but they only considered the Gabor space without combining spatial correlation features to improve the classification accuracy of HSIs.
In the past, many advances have been made in extracting spatial features for HSI classification. However, these methods obtain spatial texture features through only a single filter, often ignore spatial correlation features, and cannot obtain complete HSI features. It has been shown that integrating spatial features into the classifier can significantly improve classification accuracy; therefore, more effective spatial feature mining and fusion methods need further study.
In summary, research on extracting spatial features of HSI for classification has made some achievements, but there are also some shortcomings: (1) a single Gabor feature cannot capture the complete spatial texture features of ground objects; (2) extracting texture features with the GF is prone to losing spatial correlation information; (3) the existing methods do not consider fusing Gabor features and spatial correlation features to form a complete spatial feature. Therefore, the existing methods of HSI classification based on spatial features need further improvement.
This paper extracts Gabor features from the hyperspectral data and obtains spatial correlation features to form better spatial features and provide better training samples for the classifier. We propose a Gabor filtering algorithm with correlation information for HSI classification (GFDTNCLDM). The experimental results show that fusing the spatial features extracted by GF with spatial correlation features can effectively assist LDM and significantly improve classification performance.

Gabor filter
GF is an edge feature extraction filter whose frequency and direction characteristics are similar to those of the human visual system, which makes it suitable for extracting texture features of images. The GF kernel function of the HSI at a given band is represented as in (Haghighat et al. 2015). By adjusting the frequency and direction of the filter, a GF bank can be constructed, where C and D represent the number of frequencies and directions, respectively. Convolving w_{c,d}(x, y) with the i-th band image of the HSI yields the spatial texture features. To extract better spatial features, this paper first performs PCA dimension reduction on the HSI so that most of the information concentrates in the leading principal components, and then applies Gabor filtering to each of the reduced principal components. The filter size is 45 × 45 with C = 5 and D = 6, generating a group of 35 filters for each principal component; each filter group produces 35 filtered images (Figure 1).
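The construction of such a filter bank can be sketched in NumPy. The kernel below is the standard real-valued 2-D Gabor function; the wavelength spacing and the sigma, gamma, and psi values are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma=8.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel of shape (size, size)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_r / wavelength + psi)
    return envelope * carrier

def gabor_bank(size=45, C=5, D=6):
    """Build a bank of C frequencies x D orientations (assumed octave spacing)."""
    wavelengths = [4 * 2**c for c in range(C)]
    thetas = [d * np.pi / D for d in range(D)]
    return [gabor_kernel(size, lam, th) for lam in wavelengths for th in thetas]

bank = gabor_bank()
print(len(bank), bank[0].shape)  # prints: 30 (45, 45)
```

Each kernel would then be convolved with a principal-component image (e.g., via `scipy.signal.convolve2d`) to produce one filtered image per kernel; note that C = 5 frequencies times D = 6 directions gives C·D kernels per component.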
As shown in Figure 2, the ground object image of the HSI and the first three principal component maps of PCA are synthesized. In addition, partial Gabor-filtered images of the first, second, and third principal components of PCA are synthesized, and the adjusted values of c and d are 10 (c = 0, d = 0), 20 (c = 1, d = 1), 30 (c = 2, d = 2), and 40 (c = 3, d = 3), respectively, as shown in Figures 2-5.
To determine the number of principal components, this paper uses the Indian Pines dataset for a series of verification experiments with an increasing number of principal components, randomly selecting 5% of the samples as the training set and the other 95% as the test set. First, the first principal component is filtered and the filtered image is passed to SVM for classification; then the first two up to the first eight principal components are filtered and classified in the same way. The experimental results show that the 80 filtered images generated by the first two principal components give the best effect, with an overall classification accuracy (OA) of 95.8%, whereas the filtered images generated by one component carry too little information and more than three components produce too high a dimension, making the classification effect unsatisfactory. Therefore, this paper uses the first two PCA principal components for the filtering and classification experiments (Figure 6).
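This component-selection experiment can be mimicked with scikit-learn. The random data below merely stands in for the filtered HSI features, so the selected number of components is illustrative; only the search procedure mirrors the text:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for per-pixel HSI spectra: 600 pixels x 50 bands, 3 classes
X = rng.normal(size=(600, 50))
y = rng.integers(0, 3, size=600)
X += y[:, None] * 0.8   # inject class-dependent signal so classes are separable

best_k, best_oa = None, -1.0
for k in range(1, 9):   # try the first 1..8 principal components
    Z = PCA(n_components=k).fit_transform(X)
    Zs, Zt, ys, yt = train_test_split(Z, y, train_size=0.05,
                                      random_state=0, stratify=y)
    oa = SVC(kernel="rbf").fit(Zs, ys).score(Zt, yt)  # overall accuracy
    if oa > best_oa:
        best_k, best_oa = k, oa
print(best_k, round(best_oa, 3))
```

In the paper, the SVM input is the stack of Gabor-filtered component images rather than the raw components, but the selection loop is the same: increase the number of leading components and keep the count with the best overall accuracy.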
We normalize the HSI and then filter the leading principal components after PCA dimension reduction. The SVM classification algorithm combined with GF is as follows:

Domain transform normalized convolution filter
DTNCF can convert a two-dimensional image filter into a one-dimensional one and obtain good spatial correlation features for HSI classification (Gastal and Oliveira 2012; Liao and Wang 2020). For a uniform discretization S(X) of the original domain X, the DTNCF function for the HSI R at the i-th band is given by Equation (4), where w_e and K(·) are the normalization factor of e and the filter kernel, respectively. In addition, Equation (5) indicates that the neighborhood pixels lie on the same ground object, and d(·) in Equation (6) is a Boolean function. Therefore, DTNCF has spatial-correlation-preserving characteristics.
K(n(e), n(f)) (5)
where n(h) transforms an image into a one-dimensional vector and can be written as Equation (8), which integrates the partial differential of the image and converts it to a monotonically increasing function, and r is the filter radius.
where σ_s denotes the spatial standard deviation, σ_r the range standard deviation, and M the total number of iterations; σ_{H,d} is the standard deviation of the d-th iteration. n(h), σ_r, and σ_{H,d} correspond to Equations (9) and (10), respectively. Because of its spatial-correlation-preserving characteristics, DTNCF can make up for the incompleteness of the Gabor filter in spatial feature extraction (Figure 6).
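A one-dimensional sketch of the DTNC filter, following Gastal and Oliveira's formulation, may help: the cumulative domain transform stretches distances across strong edges, and a box average (normalized convolution) is then iterated in the transformed domain. Parameter values are illustrative assumptions:

```python
import numpy as np

def dtnc_filter_1d(signal, sigma_s=20.0, sigma_r=0.3, iterations=3):
    """Domain-transform normalized-convolution filter for a 1-D signal."""
    # 1) Domain transform: distances grow across strong edges, so the box
    #    average below will not mix pixels from different ground objects.
    dIdx = np.abs(np.diff(signal))
    ct = np.concatenate(([0.0], np.cumsum(1.0 + (sigma_s / sigma_r) * dIdx)))
    out = signal.astype(float).copy()
    for d in range(1, iterations + 1):
        # Per-iteration standard deviation from Gastal & Oliveira (2011)
        sigma_d = sigma_s * np.sqrt(3) * 2**(iterations - d) / np.sqrt(4**iterations - 1)
        r = sigma_d * np.sqrt(3)                 # box radius in the transformed domain
        lo = np.searchsorted(ct, ct - r, side="left")
        hi = np.searchsorted(ct, ct + r, side="right")
        csum = np.concatenate(([0.0], np.cumsum(out)))
        out = (csum[hi] - csum[lo]) / (hi - lo)  # normalized convolution (box average)
    return out

# A noisy step edge: DTNC smooths the flat parts but preserves the jump
x = np.concatenate([np.zeros(50), np.ones(50)]) \
    + 0.05 * np.random.default_rng(1).normal(size=100)
y = dtnc_filter_1d(x)
```

For a 2-D image the same 1-D pass is applied alternately along rows and columns within each iteration; applying the filter band by band yields the spatial correlation features used here.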

LDM classification method
LDM improves the SVM classification performance with the central idea of simultaneously maximizing the margin mean and minimizing the margin variance. SVM predicts unlabeled data through the hyperplane that maximizes the minimum margin (Zhang and Zhou 2014), whereas optimizing the margin distribution can achieve better generalization performance. Bai et al. (2022) proposed a large margin distribution machine (LDMM) with an optimized margin distribution, which maximizes the average margin and minimizes the margin variance. The classification hyperplanes of SVM and LDM are shown in Figure 7: the hyperplane of SVM maximizes the smallest margin among all samples (Liao et al. 2019b), whereas the LDM hyperplane maximizes the margin mean and minimizes the margin variance. Compared with the SVM hyperplane, the LDM hyperplane is more effective for classification.
To further show the superiority of LDM, we use the symbols "○" and "△" to draw the two classes of HSI samples in Figure 7. The orange line represents the hyperplane of SVM, H_SVM, while the purple line represents the hyperplane of LDM, H_LDM.
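The margin-distribution idea can be illustrated with a toy linear model that gradient-descends on an objective combining the margin mean and variance. This is a didactic sketch under assumed trade-off weights and synthetic data, not the solver used in the paper:

```python
import numpy as np

def train_ldm(X, y, lam1=1.0, lam2=1.0, lr=0.01, epochs=500):
    """Toy linear LDM: minimize ||w||^2/2 - lam1*mean(margin) + lam2*var(margin)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        gamma = y * (X @ w)                  # functional margins y_i * w.x_i
        # Gradients of the margin mean and variance w.r.t. w
        g_mean = (y[:, None] * X).mean(axis=0)
        g_var = 2 * ((gamma - gamma.mean())[:, None] * (y[:, None] * X)).mean(axis=0)
        w -= lr * (w - lam1 * g_mean + lam2 * g_var)
    return w

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 1, (30, 2)), rng.normal(-2, 1, (30, 2))])
y = np.concatenate([np.ones(30), -np.ones(30)])
w = train_ldm(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Unlike SVM, which is driven only by the samples attaining the smallest margin, every sample contributes to the mean and variance terms here, which is what pulls the separating line toward the hyperplane H_LDM sketched in Figure 7.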

Hyperspectral classification method based on Gabor features
There is a strong spatial correlation between hyperspectral pixels. Previous hyperspectral image classification methods based on the Gabor filter focused mainly on extracting the texture information of ground objects; although the filter can extract good texture information, it easily loses the spatial correlation information of ground objects. To compensate for this deficiency, this paper uses DTNCF to supplement spatial correlation features and integrates Gabor features with correlation features to achieve LDM classification, forming the GFDTNCLDM method: first, we use the GF to extract spatial texture features from the first two principal components of the PCA-reduced HSI, and the DTNCF to extract spatial correlation features from all hyperspectral bands; the two kinds of features are then linearly fused, and LDM performs the classification and outputs the best classification results through comparison. The algorithm is expressed as follows:

GFDTNCLDM algorithm:
Step 1: Normalization. Equation (12) normalizes the HSI R, where μ and σ correspond to the mean and standard deviation of R:

W = (R − μ)/σ (12)
The third dataset is the Kennedy Space Center dataset (Lei et al. 2021), a hyperspectral image taken on March 23, 1996, by the Airborne Visible/Infrared Imaging Spectrometer (NASA AVIRIS) at the Kennedy Space Center, Florida. A total of 224 bands were collected with a spectral resolution of 10 nm over the wavelength range 400-2500 nm. The images were taken at an altitude of about 20 km with a spatial resolution of 18 m. After the water-absorption and noisy bands are removed, the remaining 176 bands are analyzed. The image includes 13 types of ground objects, and Table 3 shows the specific types of ground objects and the number of samples.

Parameter setting
To verify the superiority of the proposed method, the following methods are compared with GFDTNCLDM: 1. SVM: based on the raw features of the hyperspectral image, SVM is applied with the Gaussian radial basis function kernel (Hao et al. 2022). 2. EPF: in this method, the hyperspectral image is first classified by SVM; then, an edge-preserving filter is applied to each probability map; finally, the class of every pixel is selected based on the maximum probability (Hao et al. 2022).

3. IFRF: this method attains the classification results with SVM based on image fusion and recursive filtering (Hao et al. 2022).

In this paper, we use Overall Accuracy (OA), Average Accuracy (AA), and the Kappa statistic (Kappa) to measure classification accuracy. To avoid biased estimation, we perform 12 independent tests using MATLAB R2021b on a machine configured with an i9-10900 CPU, an NVIDIA GeForce RTX 3080 GPU, and 32 GB of RAM.

Investigation of the proposed method

Experiment of Indian Pines
To evaluate the classification performance of the GFDTNCLDM method, we compare fourteen methods against it on the Indian Pines data. Figure 9a shows the distribution of the Indian Pines dataset; all 16 categories were selected, of which 4% of the samples (about 420) were used as the training set and the remaining samples as the test set, and for three of the 16 classes the number of samples in Indian Pines was not abundant for training. Table 1 shows the classification precision of the fifteen approaches, with the corresponding maps in Figure 9.
Figure 9 shows the classification results of Indian Pines. Table 1 shows the OA, AA, and Kappa for each method, demonstrating that GFDTNCLDM reaches excellent accuracy, e.g., OA = 96.64%, AA = 94.81%, and Kappa = 96.17%. Besides, the accuracy of GFDTNCLDM exceeds 97% in seven classes. This experimental result indicates that the classification performance of GFDTNCLDM is significantly improved compared with the other approaches.
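For reference, the OA, AA, and Kappa reported in these tables can be computed from a confusion matrix as in this minimal NumPy sketch (the tiny label arrays are only an illustration):

```python
import numpy as np

def classification_scores(y_true, y_pred, n_classes):
    """Overall accuracy, average (per-class) accuracy, and Cohen's kappa."""
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                 # rows: truth, cols: prediction
    n = cm.sum()
    oa = np.trace(cm) / n                             # fraction of correct pixels
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))        # per-class recall, averaged
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 0, 2, 2])
oa, aa, kappa = classification_scores(y_true, y_pred, 3)
print(round(oa, 3), round(aa, 3), round(kappa, 3))  # prints: 0.833 0.833 0.75
```

OA weights every pixel equally, AA weights every class equally (so it penalizes errors on the scarce classes noted above), and Kappa discounts agreement that would occur by chance.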

Experiment of Pavia University
Figure 10a shows the ground-object distribution of the Pavia University dataset, in which nine classes were selected, with 3% of the samples as the training set and the remaining 97% as the test set. Table 2 lists the classification accuracy of the Pavia University dataset using the different methods, and Figure 10 shows the classification effects.
Figure 10 shows the classification results for Pavia University, and Table 2 reports the OA, AA, Kappa, and per-class accuracy of the different methods. Table 2 shows that GFDTNCLDM obtains the best accuracy, with OA = 99.23%, AA = 98.54%, and Kappa = 98.98%. In addition, the accuracy of four classes exceeds 99% for GFDTNCLDM. Compared with the other classification methods, the proposed method enhances the classification performance.

Experiment of Kennedy Space Center
Figure 11a shows the ground-object distribution of the Kennedy Space Center dataset. We chose all 13 categories, of which 6% of the samples (about 313) form the training set and the remaining 94% the test set. Table 3 lists the classification accuracy of the Kennedy Space Center dataset for the different methods, and Figure 11 shows the classification effect.
Figure 11 shows the classification results of the Kennedy Space Center dataset, and Table 3 lists the OA, AA, and Kappa for each method. GFDTNCLDM obtains the best accuracy, with OA = 98.95%, AA = 98.67%, and Kappa = 98.83%. Moreover, the accuracy of four classes of GFDTNCLDM reaches 100%. The experiment demonstrates that the classification performance is improved compared with the other classification methods.

Comparison of running time
Table 4 compares the running times in seconds, including training time and testing time. GFDTNCLDM combines Gabor and DTNCF features with LDM for classification. The table shows that the running time of the algorithm is mainly spent on DTNCF and LDM. DTNCF transforms two-dimensional image filtering into one dimension, which greatly improves the filtering efficiency for hyperspectral images. In addition, compared with the SVM classifier, LDM has a longer running time; however, it still has a certain advantage over DGEF, which uses a deep learning method.

Analysis
As shown in Figure 12, we can draw the following conclusions. First, GFDTNCLDM achieves better classification results on the three datasets: the OA of the Indian Pines, Pavia University, and Kennedy Space Center datasets is 96.64, 98.85, and 98.95%, respectively, which is 13-20% higher than that of SVM. Second, the results show that after fusing Gabor features, LDM can achieve high-precision classification performance, which abundantly verifies the effectiveness of the GFDTNCLDM algorithm in HSI classification. Third, the OA values of GFDTNCLDM for the Indian Pines, Pavia University, and Kennedy Space Center datasets are 3.59, 2.06, and 1.25% higher than those of DTNCF-SVM, respectively, indicating that LDM classification with the fused spatial features is more effective than SVM classification using only spatial correlation features, which sufficiently verifies the effectiveness of the GFDTNCLDM algorithm.
Fourth, the OA values of GFDTNCLDM for the Indian Pines, Pavia University, and Kennedy Space Center datasets are 0.93, 0.04, and 0.83% higher than those of GFDTNCF-SVM, respectively, indicating that LDM with the fused Gabor and spatial correlation features can realize high-precision classification and is more effective than SVM. Therefore, the experimental results sufficiently verify the effectiveness of GFDTNCLDM.
Fifth, the OA values of GFDTNCLDM on the Indian Pines, Pavia University, and Kennedy Space Center datasets are 0.31, 0.17, and 2.81% higher than those of GFDN, respectively. In addition, the OA values of GFDTNCLDM for the three datasets are 0.93, 2.27, and 1.75% higher than those of DGEF, respectively. This indicates that GFDTNCLDM is more effective at improving classification than deep learning methods using a single spatial feature, proving that GFDTNCLDM also outperforms classification methods based on deep learning.
Last but not least, the OA values of GFDTNCLDM on the Indian Pines, Pavia University, and Kennedy Space Center datasets are 3.43, 3.04, and 5.11% higher than those of LDM-FL, respectively. This shows that GFDTNCLDM, which fuses the two types of features, achieves better classification performance than a method with a single spatial feature. Therefore, the spatial features extracted by DTNCF and GF can improve hyperspectral classification.

Conclusion
In this paper, we proposed a Gabor-filter-based classification algorithm for HSI (GFDTNCLDM). The experimental results show that the classification accuracy on the Indian Pines, Pavia University, and Kennedy Space Center data sets is 96.64, 98.23, and 98.95% with only 4, 3, and 6% training samples, respectively, which is 2-20% higher than the other methods. Compared with the hyperspectral-information-based SVM, EPF, IFRF, PCA-EPFs, LDM-FL, and GFDN methods, the proposed method significantly improves the accuracy of HSI classification.
The experimental results show that GFDTNCLDM has superior performance, and its classification accuracy outperforms that of EPF, IFRF, PCA-EPFs, LDM-FL, GFDN, and DGEF. The results show that the Gabor filter can extract good spatial texture features, and the spatial correlation features can help LDM improve classification accuracy. The algorithm proposed in this paper has the following characteristics: 1. Gabor filtering uses different frequencies and directions to generate a Gabor group composed of rich features, which can extract varied and more comprehensive spatial texture features from the same PCA principal component, thereby obtaining better spatial texture features of the HSI; 2. Using DTNC to extract spatial features of hyperspectral images makes up for the deficiency of the Gabor filter in obtaining spatial texture features, which significantly improves the classification performance of LDM; 3. The algorithm achieves higher classification accuracy with fewer training samples.
In future work, we will focus on more effective mining of hyperspectral spatial features to further improve classification accuracy.

Figure 1. Generation diagram of the GF group.
Equation (11) selects the best result, where relt is the highest classification accuracy, OA represents the overall classification accuracy of the classification results, D_Gabor is the spatial texture feature extracted by the Gabor filter, D_c is the spatial correlation feature extracted by DTNCF, and LDM refers to classification optimization by the large margin distribution machine. The detailed flowchart of the GFDTNCLDM algorithm is shown in Figure 8. The algorithm consists of seven execution steps: (1) normalize the HSI; (2) reduce the HSI with PCA; (3) use the Gabor filter to extract spatial texture features from the first two principal components after PCA dimensionality reduction; (4) use DTNCF to extract spatial correlation features from all bands; (5) linearly fuse the two kinds of spatial features; (6) classify the fused features with LDM; (7) output the classification results.

Figure 2. Indian Pines: (a) ground feature composition, (b) 1st principal component of PCA, (c) 2nd principal component of PCA, and (d) 3rd principal component of PCA.
Step 2: Dimension reduction. Reduce W to J using PCA and select the first two principal components for further processing by the Gabor filter:

J = PCA(W) (13)

Step 3: Spatial texture feature extraction. Extract the spatial texture features D_Gabor from J using GF.

Step 4: Spatial correlation feature extraction. DTNCF extracts the spatial correlation features D_c from W.

Step 5: Fusion. Equation (14) performs the linear fusion of D_Gabor and D_c:

Y = D_Gabor + D_c (14)

Step 6: Classification. 1) Randomly extract a proportion of the samples from the filtered results as the training set Ys, and use the rest of the filtered results as the test set Yt. 2) Use the LDM method supported by the radial basis function kernel with cross-validation to find a good parameter combination. 3) Use the LDM supported by the radial basis function kernel to train Ys and obtain the training model. 4) After acquiring the model, classify the test set Yt with the LDM supported by radial basis functions.

Step 7: Output the classification results.

The first dataset, Indian Pines (2022), was obtained in 1992 by an airborne visible/infrared imaging spectrometer (AVIRIS) sensor over the Indian Pines region in Northwestern Indiana. This dataset contains 220 spectral bands with a spatial size of 145 × 145 pixels; 20 spectral bands are removed due to noise and water absorption. As shown in Table 1, the image consists of 16 classes. The second dataset is the University of Pavia (Cai et al. 2021), acquired by the ROSIS (Reflective Optics System Imaging Spectrometer) sensor over the University of Pavia; it includes 610 × 340 pixels and 115 bands, with 12 spectral bands removed due to noise and other factors. The remaining 103 bands contain nine categories, and Table 2 shows the specific feature categories and sample numbers.
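Steps 1-7 above can be sketched end to end. The sketch below uses stand-in features (random data in place of the actual Gabor and DTNCF responses) and scikit-learn's RBF-kernel SVC in place of LDM, which has no scikit-learn implementation; the zero-padding before the addition in Step 5 is likewise an assumption, since Equation (14) requires both feature blocks to share a width:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-ins: 400 pixels x 60 bands, 4 classes (a real HSI cube is reshaped to 2-D)
R = rng.normal(size=(400, 60))
labels = rng.integers(0, 4, size=400)
R += labels[:, None] * 0.7                      # class-dependent signal

# Step 1: normalization, W = (R - mu) / sigma
W = (R - R.mean()) / R.std()
# Step 2: PCA, keep the first two principal components
J = PCA(n_components=2).fit_transform(W)
# Steps 3-4 (stand-ins): texture features from J, correlation features from W
D_gabor = np.hstack([J, J**2])                  # placeholder for Gabor responses
D_c = W[:, :4]                                  # placeholder for DTNCF output
# Step 5: linear fusion, Y = D_gabor + D_c (padded to a common width)
width = max(D_gabor.shape[1], D_c.shape[1])
pad = lambda A: np.pad(A, ((0, 0), (0, width - A.shape[1])))
Y = pad(D_gabor) + pad(D_c)
# Step 6: RBF-kernel classifier on the fused features
Ys, Yt, ys, yt = train_test_split(Y, labels, train_size=0.1,
                                  random_state=0, stratify=labels)
oa = SVC(kernel="rbf").fit(Ys, ys).score(Yt, yt)
# Step 7: oa is the overall accuracy on the test set
```

The structure mirrors the algorithm: only the feature extractors and the classifier would need to be swapped for the real Gabor bank, DTNCF, and LDM.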

Figure 12. Classification maps of different methods for HSI: (a) Indian Pines, (b) Pavia University, and (c) Kennedy Space Center.
1. Randomly extract the training set F_s from the filtering result F in a certain proportion, and use the rest as the test set F_t;
2. Use the SVM method supported by the radial basis function kernel with cross-validation to find a good parameter combination;
3. Use the SVM supported by the radial basis function kernel to train F_s and obtain the training model;
4. After acquiring the model, classify the test set F_t with the SVM supported by radial basis functions;
5. Output the classification results.

Table 2. Comparison of classification accuracies (in percent) provided by different approaches (Pavia University dataset).

Table 3. Comparison of classification accuracies (in percent) provided by different methods (Kennedy Space Center dataset).

Table 1. Comparison of classification precision (in percent) provided by different approaches (Indian Pines dataset).

Table 4. Comparison of classification running time (in seconds) provided by different approaches (Indian Pines dataset).