Integrating elevation data and multispectral high-resolution images for an improved hybrid Land Use/Land Cover mapping

ABSTRACT The combination of elevation data with multispectral high-resolution images is a novel methodology for obtaining land use/land cover (LULC) classifications. It represents a step forward in both the accuracy and the automation of LULC applications and allows users to set up thematic assignments through rules based on feature attributes and human expert interpretation of land usage. The synergy between the different types of information means that LiDAR can contribute at both the segmentation and the hybrid classification steps, leading to a joint use of multispectral, spatial and elevation data. The output is a thematic map characterized by a custom-designed legend that is able to discriminate between land cover classes with similar spectral characteristics (level 3 of the CLC legend). Experimental results from a hilly farmland area with some urban structures (Musone river basin, Ancona, Italy) highlight how the proposed methodology enhances land cover classification in heterogeneous environments.


Introduction
The increased availability of Light Detection and Ranging (LiDAR) data provides new sources for Land Use/Land Cover (LULC) mapping. While high-resolution multispectral images offer detailed information on objects, such as spectral signature, texture and shape, LiDAR data provide important position and height information [Hadaś and Estornell, 2016]. The mapping of land cover is designed to identify the physical land type (e.g., water bodies, forests, urban areas), while land use is a categorical definition that exploits land cover information to characterize functional aspects as well as the consumption of land resources [Samal and Gedam, 2015]. LULC thematic maps directly influence government policies, which underlines the importance of their role.
Several authors have combined high-resolution multispectral and LiDAR data with interesting results. Zeng et al. [2002] show an improvement in the classification of IKONOS imagery when integrated with LiDAR. Syed et al. [2005] underline how this integration makes object-oriented classification superior to maximum likelihood in terms of reducing the "salt and pepper" effect. Ali et al. [2008] describe an automated procedure for identifying forest species improved by high-resolution imagery and LiDAR data. The integration of multispectral imagery and multi-return LiDAR data for estimating attributes of trees is reported in Collins et al. [2004]. Alonso and Malpica [2008] combine LiDAR elevation data as well as SPOT5 multispectral data for the classification of urban areas using a Support Vector Machine (SVM) algorithm. Ke et al. [2010] and Zahidi et al. [2015] combine QuickBird imagery with LiDAR data for object-based classification of forest species. Forest characterization using LiDAR data is dominated by high-posting-density LiDAR data [Reitberger et al., 2008], [Machala and Zejdová, 2014] due to its ability to derive individual tree structures. Low-posting-density LiDAR maps are largely limited to applications of terrestrial topographic mapping [Hodgson and Bresnahan, 2004]. Wang et al. [2012] combine QuickBird imagery with LiDAR-derived metrics for an object-based classification of vegetation, roads and buildings.
Amalgamation of these two kinds of complementary datasets has also shown promise in the detection of buildings [Rottensteiner et al., 2003], [Khoshelham et al., 2010], [Tan and Wang, 2011], [Ma et al., 2015]. It has been usefully deployed in road extraction [Hu et al., 2004], [Azizi et al., 2014], [Hu et al., 2014] and 3D city modelling [Awrangjeb et al., 2013], [Kawata and Koizumi, 2014]. Not limited to these, the technique has been shown to be of use in the classification of coastal areas [Lee and Shan, 2003] and the evaluation of urban green volumes as in [Bork and Su, 2007], [Zhang et al., 2009], [Tan and Steve, 2011], [Huang et al., 2013], [Parent et al., 2015]. The diversity and scope of such developments lead generally to a better characterization of the surveyed land scene.
More recent research by Germaine and Hung [2011] delineates impervious surfaces from multispectral imagery and LiDAR data through a knowledge-based expert system. In the same direction, Rodriguez-Cuenca et al. [2014] conduct an impervious/non-impervious surface classification using a decision tree system.
From the described state of the art, it emerges that similar hybrid data have been used to analyse forestry landscapes or urban environments, with successful improvements in classification accuracy. However, these techniques relied mainly on object-based and decision-tree modelling, or focused on only a few CLC classes. The aim of this work is to define a workflow able to integrate different techniques (e.g., pixel-based classification, object-based classification, rule-based systems), combining their advantages in a more general approach to increase the overall accuracy of Land Use/Land Cover (LULC) mapping.
The present article thus builds upon a body of previous work showing how pixel and object information can be integrated into a hybrid and GIS-ready solution for thematic mapping (Subsection 2.3), with measurable benefits in the final results. As a remark, it is important to note that the CLC legend was adopted by the European Union to uniformly map land cover across member states for planning as well as territorial government legislation [Burkhard et al., 2009], [Burkhard et al., 2012], [Feranec et al., 2010].
Central to the research conducted here, the T-MAP system is a GIS-ready solution, pushing towards an automated thematic mapping (i.e., Land Cover/Land Use mapping) paradigm, in contrast to the traditional expensive and time-consuming photointerpretation process. Starting from high-resolution remotely sensed data (e.g., aerial images), T-MAP combines the pixel-based and object-based approaches into an innovative hybrid classification solution that makes use of image segmentation tools and rule-based thematic categorization in order to give a GIS-ready product. In particular, it spans information processing from automated image classification to expert-based image classification.
In this manner, it is possible to incorporate both the advantages of a supervised pixel-based approach, like higher reliability and more details, and of an object-based segment classification, like GIS-ready image quality and customization in terms of scale and legend.
Further work, relevant to the research described in this paper, was carried out using IKONOS/ADS40 multispectral data and led to the development of the Thematic-MAPping (T-MAP) software [Malinverni et al., 2011], which yields good results with the Corine Land Cover (CLC) legend 1 (see Appendix A).
Thus, a step forward is achieved in this work by exploiting the synergy of high spatial resolution multispectral imagery and high-posting-density LiDAR data for LULC classification, as described in Section 2. We demonstrate the use of WorldView-2 multispectral imagery combined with LiDAR-derived features to improve upon the previous T-MAP approach: the goal is to define a more effective and accurate LULC mapping for heterogeneous landscapes based upon multiple sources of information about the same terrain object. The automatic extraction of LULC homogeneous segments, made possible by the T-MAP pipeline, can be useful for GIS-based applications of environmental modelling and monitoring, like GIS modelling for river basin maintenance and hazard assessment, as will be shown below.

Study area
The study area is situated in the Musone river basin in Ancona, Italy. This is a typical river basin valley spread over a hilly farmland area, between 0 and 572 m a.s.l., with urban structures in the main valley.

Datasets
The experiments are carried out on WorldView-2 (WV2) multispectral images together with LiDAR data. Figure 1 shows the case study area and samples of the spectral and LiDAR data sources.
The WV2 dataset has 2 m resolution for the multispectral bands (coastal, blue, green, yellow, red, red edge, near-IR1, near-IR2) and 0.5 m resolution for the panchromatic band.
The LiDAR dataset (DTM, DSM First, DSM Last and Intensity) has 2 m resolution over the coastal strip (1 km inland from the coastline) and 1 m resolution for internal areas (all river basins covered for hydrogeological studies). The LiDAR dataset is property of the Italian Ministry in conjunction with the Extraordinary Plan of Environmental Remote Sensing (EPRS-E).
In addition to this, the experiments have used data from the Technical Cartography of the Marche Region (CTR) at a scale of 1:10,000. The CTR is a topographic map produced by the Italian administration and contains information layers about orography, hydrography, vegetation, infrastructures, buildings, administrative boundaries and toponyms.

Overall workflow of the T-MAP approach
The T-MAP system innovated through its hybrid automatic classification solution [Bernardini et al., 2010], [Malinverni et al., 2011] that combines pixel and object/region-based approaches (see Figure 2).
It uses a pre-classification process that calculates textural features (Gray Level Co-occurrence Matrix, Gabor and Laws filters), together with additional features such as the vegetation indices NDVI and TDVI and band ratios derived from the multispectral bands. A selection of these bands is fed into the pixel-based classification via an Adaptive Boosting (AdaBoost) approach. AdaBoost, one of the most widely used boosting algorithms, enhances the performance of a generic classifier by creating a strong hypothesis from combinations of weak classifiers (e.g., LMS, SVM, Perceptron, Tree). Its key idea is to iteratively focus on difficult patterns by increasing the weights of misclassified training patterns while decreasing those of correctly classified ones.
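The AdaBoost reweighting loop just described can be sketched in a few lines. This is a minimal discrete-AdaBoost illustration with one-feature decision stumps as weak classifiers; T-MAP's actual weak learners (LMS, SVM, Perceptron, Tree) and implementation differ.

```python
import numpy as np

def train_stump(X, y, w):
    """Best weighted one-feature threshold classifier (labels in {-1, +1})."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] >= thr, sign, -sign)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def stump_predict(stump, X):
    _, j, thr, sign = stump
    return np.where(X[:, j] >= thr, sign, -sign)

def adaboost(X, y, n_rounds=10):
    """Discrete AdaBoost: each round reweights the samples so the next
    weak classifier focuses on the patterns misclassified so far."""
    w = np.full(len(y), 1.0 / len(y))   # uniform initial weights
    model = []
    for _ in range(n_rounds):
        stump = train_stump(X, y, w)
        err = max(stump[0], 1e-10)      # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(stump, X)
        # raise weights of misclassified samples, lower the rest
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        model.append((alpha, stump))
    return model

def adaboost_classify(model, X):
    # strong hypothesis: weighted vote of the weak classifiers
    score = sum(a * stump_predict(s, X) for a, s in model)
    return np.sign(score)
```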
The next part of the T-MAP pipeline is the segmentation processing, driven by features selected through a Relief-F approach, which identifies the most relevant ones [Liu and Motoda, 2008].
An image segmentation process generates meaningful regions in terms of spectral attributes and shape parameters (i.e., compactness, convexity, etc.). A rule-based and regularised Winner Takes All (WTA) approach [Mancini et al., 2009] is then applied within each segment to determine its class assignment, by working on the results of pixel-based classification, and to turn it into a "GIS-ready" object.
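The per-segment class assignment can be illustrated as a plain majority vote over the pixel-based labels. The regularised WTA of Mancini et al. [2009] adds rules and regularisation on top of this basic idea; the function name here is purely illustrative.

```python
import numpy as np

def winner_takes_all(pixel_classes, segment_ids):
    """Assign to each segment its most frequent (winner) pixel class.

    pixel_classes: integer raster of per-pixel class labels.
    segment_ids:   same-shape integer raster of segment identifiers.
    Returns a raster where every pixel carries its segment's winner class.
    """
    out = np.zeros_like(pixel_classes)
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        labels, counts = np.unique(pixel_classes[mask], return_counts=True)
        out[mask] = labels[np.argmax(counts)]   # winner class fills the segment
    return out
```

Because each segment is reduced to a single thematic label with known extent and attributes, the output behaves as a "GIS-ready" vectorizable object rather than a raw pixel distribution.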
The "GIS-ready" object is representative of features and probabilities on the terrain rather than being just a pixel distribution. This, by definition, significantly improves spatial consistency, semantic representation and hence the number of extracted classes.
An "object-oriented" approach carried out in this manner enables spatial information to drive the extraction of meaningful segmented regions and hence the assignment of thematic land cover classes.

Integration of multispectral and LiDAR data
Moving firmly beyond T-MAP, the present analysis knits the multispectral and LiDAR approaches together into a methodology that extracts a greater quantity of reliable information than was previously possible.
The LiDAR dataset consists of DTM (1 m), DSM first pulse (1 m), DSM last pulse (1 m) and intensity (1 m) data. For a marginal portion of the study area near the coast, we use 2 m resolution data resampled down to 1 m resolution. The classification uses the following features: (I) Δp, defined as the difference between the DSM first pulse and the DSM last pulse; (II) Δh, defined as the difference between the DSM last pulse and the DTM. Figure 3 shows an example of the LiDAR dataset in the study area. The canopy introduces noise into the image due to the presence of dense/sparse foliage. The LiDAR dataset is useful for separating buildings from other objects, land and trees, but the presence of a dense canopy can produce errors in the signal. For this reason, an NDVI filter is added to the LiDAR features to distinguish between buildings and dense canopy by excluding vegetation points. Obviously, the acquisition period plays a key role due to the seasonal dependence of the NDVI filter. Figure 4(e) shows how the classification without spectral information may produce incorrect results: for example, permanent crops are completely missed. This particular result derives from the majority filter applied to reduce the "salt and pepper" effect, which removes small isolated objects (≤ 2 pixels) and tends to fill "holes" (≤ 2 pixels). The results obtained by adding the NDVI are shown in figure 4(f).
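A minimal sketch of the two LiDAR-derived features and the NDVI filter discussed above; the height, penetration and NDVI thresholds are illustrative placeholders, not the values used in the experiments.

```python
import numpy as np

def lidar_features(dsm_first, dsm_last, dtm):
    """Derive the two LiDAR features used for classification."""
    delta_p = dsm_first - dsm_last   # canopy penetration: > 0 over vegetation
    delta_h = dsm_last - dtm         # object height above ground: > 0 over buildings/trees
    return delta_p, delta_h

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from the NIR and red bands."""
    return (nir - red) / (nir + red + eps)

def building_mask(delta_p, delta_h, nir, red,
                  h_min=2.5, p_max=1.0, ndvi_max=0.3):   # illustrative thresholds
    """Elevated, opaque, non-vegetated pixels: candidate buildings.
    The NDVI condition removes dense canopy that mimics buildings in Δh."""
    return (delta_h > h_min) & (delta_p < p_max) & (ndvi(nir, red) < ndvi_max)
```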
The NDVI is a good general technique for extracting permanent crops, as it reduces the "salt and pepper" effect and avoids the destructive effects of a majority filter. The training set for the classifications of the two LiDAR datasets, with or without NDVI, is formed by only 1400 pixels over an image of 4362 x 4362 pixels (< 0.008%). The selection of training areas is performed by photointerpretation of the overall area. The training set was reviewed by independent users to avoid bias.
The LiDAR dataset and its derived features can be integrated in two feasible ways: (I) a priori integration, adding the LiDAR data as additional features in the Feature Selection phase (Subsection 2.4.1); (II) a posteriori integration, using LiDAR-classified objects in the object rule-based WTA (Subsection 2.4.2).
We can apply a posteriori integration to WTA results obtained with ("a priori LiDAR WTA") or without ("standard WTA") a priori integration.
The results of such a classification are sensitive to various factors: the training samples (in terms of number and location), the number of iterations and the feature dataset. In Section 3.1.2 we discuss a sensitivity analysis that enables quality assessment of the results for the two integration approaches.

A priori integration by increasing feature space
All LiDAR features (DTM, DSM first, DSM last, intensity, Δp and Δh) are given as inputs to the feature selection module together with all textural and spectral features. Among the LiDAR features, only Δh was selected for the pixel-based classification. In the fusion of the LiDAR and WV2 datasets, an alignment procedure is required to correctly overlap the spatial information: regular grids of the LiDAR dataset have thus been produced by aligning them to the origin pixel of the WV2 dataset. Selected features are given equal weights by the pixel-based classification and then assigned as segment attributes by the WTA algorithm, always in accordance with the hybrid classification solution. Figure 5 sketches out the working schema.
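The alignment step can be sketched as an integer pixel shift that snaps the LiDAR grid onto the WV2 pixel lattice. This is a simplification: both grids are assumed to share the same pixel size, and edge wrap-around is ignored; function and parameter names are illustrative.

```python
import numpy as np

def align_to_reference(src, src_origin, ref_origin, pixel_size):
    """Shift a regular grid so its origin coincides with the reference
    grid's pixel lattice (nearest-pixel snap).

    src_origin, ref_origin: (x, y) map coordinates of the upper-left corner.
    Uses np.roll, so border wrap-around is ignored in this sketch.
    """
    dx = int(round((ref_origin[0] - src_origin[0]) / pixel_size))
    dy = int(round((src_origin[1] - ref_origin[1]) / pixel_size))  # y points down in raster space
    return np.roll(np.roll(src, -dy, axis=0), -dx, axis=1)
```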

A posteriori integration by rules
LiDAR information, together with the NDVI obtained from the multispectral images, is used to determine the segment attributes of the "LiDAR WTA". The final object rule-based processing, "a posteriori LiDAR WTA", combines the segment attributes obtained during creation of the "LiDAR WTA" with the results of the object rule-based "standard WTA". The "standard WTA" incorporates spatial and geometrical information coming from the segmentation algorithm, including the size, shape and percentages of WTA land cover classes in each segment. This new learning system improves the pixel-based classification result in terms of spatial consistency, semantic representation and number of extracted classes. Figure 6 illustrates the working schema.
Better results are obtained when the a priori and a posteriori integrations are combined. This variant is still based upon the "a posteriori LiDAR WTA" but uses the "a priori LiDAR WTA" in place of the "standard WTA".
In general, standard satellite imagery enables the automatic classification of wide areas. However, problems arise with complex and heterogeneous CLC classes, especially when using only spectral features or combinations of them (e.g., band ratios and vegetation indexes) [Mather and Tso, 2016].
Taking a different approach, the research presented here instead makes use of LiDAR additional information and of knowledge based expert rules to perform an automatic classification.
The inherent rule framework is developed in accordance with the chosen CLC legend, by defining rules of the following form:

    Rule: If (conditions) Then (decision class) End If

where the conditions are hypotheses on segment attributes that, if verified, lead to the assignment of the decision class to the segment involved. The time required to process a rule is negligible because the attributes involved in the hypotheses are directly accessible from the WTA output, that is, the data are already available. This gain in efficiency adds to the overall quality of the workflow. Examples of rules implemented in this new system are graphically represented as decision trees. The rules either discriminate classes flagged as "not detectable" or detect and update erroneous classifications from the foregoing process. The symbol WTA may represent either a "standard WTA" or an "a priori LiDAR WTA", and the symbol LiDAR is used in place of "LiDAR WTA". Table 1 summarizes the acronyms related to the rules, while Appendix A reports the classes of the CLC legend.
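The If-Then framework lends itself to a compact representation as an ordered list of (condition, decision class) pairs evaluated over the WTA segment attributes. The attribute keys and the single example rule below are illustrative only, not the exact conditions of the decision trees in Figures 7-10.

```python
# A rule pairs a condition over segment attributes with a decision class.
# Attribute keys (e.g. "lidar_land_perc") and the 50% threshold are
# illustrative placeholders.
RULES = [
    # Example: if the LiDAR "Land" class dominates the segment,
    # assign arable land (CLC 2.1.0).
    (lambda s: s.get("lidar_land_perc", 0) >= 50, "2.1.0"),
]

def apply_rules(segment_attrs, rules=RULES):
    """Evaluate rules in order; the first verified condition assigns its
    decision class, otherwise the original WTA class is kept."""
    for condition, decision_class in rules:
        if condition(segment_attrs):
            return decision_class
    return segment_attrs["wta_class"]
```

Because conditions only read attributes already attached to each segment by the WTA step, evaluating a rule costs a few comparisons per segment.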
The complex CLC classes "Continuous urban fabric" (1.1.1) and "Discontinuous urban fabric" (1.1.2) produce similar spectral signals, so they required additional rules (Figure 7) to be discriminated after the WTA step. The idea behind the rules is to measure the dominance of buildings/infrastructure over the other cover classes, without requiring the specification of other urban parameters.
LiDAR classification ensures better performance in extracting urban objects (LiDAR: Building class 3). These urban-extraction rules could be further improved by implementing other metrics [Dell'Acqua et al., 2006] for environmental characterization or classification of urban areas. The complex CLC class "Transitional woodland-shrub" (3.2.4) is very similar in spectral signature to other classes, such as CLC "Heterogeneous agricultural areas" (2.4.2). Thus it again required an additional rule (Figure 8) to be discriminated after the WTA step. The rule in Figure 8 considers the distribution of pixels (areas) classified as building and as transitional woodland-shrub (WTA classes 1.2.2 and 3.2.4 respectively), which together usually represent a heterogeneous agricultural area. Two thresholds (*_th) are set to tune the rule for reliable operation. The choice of parameters is worked out from several trial runs. Common representative values are 50% for WTA.Win_perc_th and 0.3 for LiDAR.Area_th. Furthermore, rules may also be applied to detect and update erroneous classifications from the previous pipeline process, as is the case with the following two rules, for CLC "Arable land" (2.1.0) in Figure 9 and for CLC "Permanent crops" (2.2.0) in Figure 10 respectively.
A parameter (LiDAR.Win_perc_th) is set to adjust the dominance of the LiDAR classification in the given region (LiDAR: Land class 2). The choice of this parameter is again made through several trial runs. A common representative value is 50% for LiDAR.Win_perc_th.
The rule attempts to evaluate the distribution of trees and agricultural land, considering that trees belong to permanent crops. The LiDAR classification component reinforces the detection and hence the discrimination between trees and land. Common representative values are found to be 50% for WTA.Win_perc_th and 65% for LiDAR.Win_perc_th.
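One plausible reading of the permanent-crops rule, using the two representative thresholds quoted above: the exact conditions are those of the decision tree in Figure 10 and may differ, and the percentage attribute names are illustrative.

```python
def is_permanent_crop(wta_win_perc, lidar_tree_perc,
                      wta_win_perc_th=50.0, lidar_win_perc_th=65.0):
    """Illustrative form of the permanent-crops rule (CLC 2.2.0):
    reclassify a segment when the WTA winner class is not dominant
    enough while the LiDAR classification detects a sufficient share
    of trees inside the segment.

    wta_win_perc:    percentage of the segment covered by the WTA winner class
    lidar_tree_perc: percentage of the segment classified as Tree by LiDAR
    """
    return (wta_win_perc < wta_win_perc_th
            and lidar_tree_perc >= lidar_win_perc_th)
```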
More rules can be also defined by a domain expert user to extract classes that are not spectrally separable. This set of rules handles complex classes that are often available in standard legends like the CLC (e.g., CLC "Heterogeneous agricultural areas" 2.4.2).

Classification using a priori methods
Considering classification using an augmented feature space, in order to choose the best feature combination we adopt the feature selection algorithm used by Malinverni et al. [2011]. As a slight modification to that procedure, we also compare several different combinations to better understand the impact of any less well understood features that might occur during the analysis. Figure 11 shows the results of the different T-MAP steps using the WV2 dataset: the pixel-based classification through spectral features, NDVI and texture (Figure 11, left), and the segmentation obtained with the "region growing" approach (Figure 11, middle).

Augmented feature space
The object-based classification, on the other hand, is obtained using the Winner Takes All (WTA) methodology (Figure 11, right), which combines the pixel classification and the segmentation by assigning to each segment its most representative class. Experimental results show that the integration of LiDAR elevation data improves the classification of the multispectral bands, by allowing the separation of classes that have similar spectral characteristics. In general, when LiDAR information is added, the classification results show a more realistic and homogeneous distribution of geographic features than those obtained using the multispectral WV2 images alone.

Sensitivity analysis
The results of each classification are sensitive to various factors, such as the training samples (in terms of number and location), the number of iterations and the feature dataset. We did not consider classification using only the LiDAR dataset because its set of classes is completely different from the WV2 ones.
We start by analysing the classification performance with the best combination of features (as introduced in Section 2.2), changing the control-sets and the number of iterations to gauge sensitivity. The best combination of features is composed of the 8 base bands of WorldView-2 imagery ("base" in the tables), the vegetation index calculated over the NIR and red bands ("TDVI" in the tables), 3 texture filter outputs (GLCM, Gabor and Laws; "text" in the tables) and the LiDAR dataset (Δh, the difference between the DSM last pulse and the DTM; "LiDAR" in the tables).
The Overall Accuracy "saturates" as the number of iterations increases, at different values for the training-set and the two control-sets. Samples for Control-Set 1 were manually chosen in a fashion similar to the Training-Set. For Control-Set 2, samples of the urban area classes (CLC 1.1.0, 1.2.1 and 1.2.2) were chosen in the same areas as the Training-Set, but taking regions from the building and infrastructure layers of the CTR at a scale of 1:10,000. The correspondence of building footprints and roadways (as vector representations in the CTR) with the WorldView-2 imagery and the LiDAR dataset is less than ideal. This directly leads to a worse classification, with a lower Overall Accuracy (approximately 7% lower than for Control-Set 1 and 17% lower than for the Training-Set).
We next compared the behaviour of the Overall Accuracy with the User's Accuracy and Producer's Accuracy of the urban area classes as a function of the number of iterations. The indices considered were average values and min-max regions. The same saturation effect observed for the Overall Accuracy also appears for the User's and Producer's Accuracy of the urban area classes. We note that the saturation of the average values and the convergence of the min-max values are in fact expected features of the AdaBoost classifier.
To better understand the differences in classification performance due to the addition of the LiDAR dataset and its derived features, we fix the number of iterations and compare the minimum Overall Accuracy of classifications using different feature dataset combinations, as shown in Table 2. Greater increments are shown in green (smaller values in red, with a graduated colour scale). The comparison of the Overall Accuracy shows that the main contribution comes from the texture filters, while the LiDAR dataset seems irrelevant here. However, when we consider only the urban area classes (1.1.0, 1.2.1 and 1.2.2) and compare the minimum values of the averages for User's Accuracy (Table 3) or Producer's Accuracy (Table 4), the importance of adding the LiDAR dataset for urban area classification becomes clear.
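The three accuracy indices used in this analysis derive from the confusion matrix in the standard way; this sketch takes rows as mapped (classified) classes and columns as reference classes.

```python
import numpy as np

def accuracy_metrics(conf):
    """Overall, User's and Producer's Accuracy from a confusion matrix.

    conf: square matrix with rows = mapped classes, columns = reference classes.
    """
    conf = np.asarray(conf, dtype=float)
    overall = np.trace(conf) / conf.sum()          # correctly mapped fraction
    users = np.diag(conf) / conf.sum(axis=1)       # per mapped class (row totals)
    producers = np.diag(conf) / conf.sum(axis=0)   # per reference class (column totals)
    return overall, users, producers
```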

A posteriori classification
In all these cases it was necessary, before running the rules, to successfully classify buildings and permanent crops or trees with the help of the LiDAR dataset, which has better performance and a higher detection rate than visual imagery alone. LiDAR features are in fact fundamental to separating buildings from the other classes, such as road or land, tree or grass. However, experimental results show that NDVI features must be added to the LiDAR dataset to improve the classification into building/road, tree and land categories. This is most evident in the discrimination between buildings and land. The AdaBoost classifier is instantiated by a training phase of 35 iterations.

Figure 11. Classification of the study area using the spectral WV2 dataset. On the left, the pixel-based classification through spectral features, NDVI and texture. In the middle, the "region growing" segmentation. On the right, the object-based classification using the Winner Takes All (WTA) methodology.

Without spectral information, problems arise for the following classes:
(I) building vs dense canopy/tree (the first and the last pulse are identical, with similar intensity); (II) road vs land (the first and the last pulse are identical, with similar intensity). Figure 12 presents the results of the classification using the LiDAR dataset augmented with the NDVI index: the pixel-based classification (Figure 12, left) and the WTA using the same image segmentation as before (Figure 12, right). Each segmented region in the WTA is represented by the most frequent class of its classified pixels, thus obtaining a map with homogeneous regions. Finally, smaller areas are merged into larger ones, in accordance with the sensitivity parameters of the segmentation algorithm.

Continuous /discontinuous urban areas
The focus of this classification is the extraction of continuous and discontinuous urban areas, quite a difficult challenge given the CLC definition itself. In the CLC legend, a continuous urban area has at least 80% of its total surface impermeable, while for a discontinuous urban area this percentage drops to the 30%-80% range. From the spectral perspective this is challenging, because a single closed region of a discontinuous urban area includes several classes, while over-segmentation should be avoided since it degenerates towards a pixel-based classification. In this way, urban fabric areas are subdivided into continuous and discontinuous categories according to the percentage of surface covered by buildings, yielding a more detailed LULC map.
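The CLC percentage thresholds translate directly into a simple decision. In the actual workflow the impervious percentage of a segment is estimated from the building/infrastructure coverage detected by the WTA and LiDAR steps; the function below is only a sketch of that final assignment.

```python
def urban_fabric_class(impervious_perc):
    """Split urban fabric by the CLC impermeable-surface thresholds:
    >= 80% -> continuous urban fabric (1.1.1),
    30%-80% -> discontinuous urban fabric (1.1.2),
    below 30% the segment is not urban fabric at all."""
    if impervious_perc >= 80:
        return "1.1.1"   # continuous urban fabric
    if impervious_perc >= 30:
        return "1.1.2"   # discontinuous urban fabric
    return None          # not urban fabric
```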

Heterogeneous agricultural areas
The heterogeneous agricultural areas, or croplands, are another challenging class due to their intrinsic variety of objects. Heterogeneous agricultural areas are defined by the CLC project itself as the "juxtaposition of small parcels of annual crops, city garden pastures, fallow lands and/or permanent crops somewhere with scattered houses"2.
According to this definition, it is clear that this class is more a land use than a land cover class and is composed of several objects with different spectral signatures. Pure object-based approaches are unable to extract this kind of class, considering the blending of spectral signatures inside an object. By applying the above-mentioned rule, segments can be converted into heterogeneous agricultural areas and classified as Complex cultivation patterns (CLC class 2.4.2, third level).
Experimental results highlight how the ability to model the distribution of classes within a closed segment, exploiting the benefits of a pixel-based classification, enables the classification of areas that a pure object-based approach would probably fail to detect.

Permanent crops
The correct extraction of permanent crops plays a key role for a large set of stakeholders involved in the management of high value agricultural areas.
This actually happens in Italy where vineyards, fruit trees, berry plantations and olive groves are often considered key resources for the local economy owing to the international demand for high quality products.
In this context, it is vital to correctly map and monitor the CLC class 2.2, "permanent crops". In the WV2 classification, and more generally with T-MAP, textural features (i.e., Gabor and Haralick filters) are added to the feature set to enable the detection of the repetitive patterns that typically dominate permanent crops. Despite this expedient, in some areas the texture can be irregular or weak and the textural filters can fail. To overcome this problem, a rule is developed that considers both the WV2 and LiDAR datasets and the distribution of trees and permanent crops (CLC classes 3.2.4 and 2.2.0) inside a given segment. Figure 13 shows some of the working steps, with permanent crops that are easily recognizable in the WV2 image (Figure 13, upper left). These are also correctly detected (as trees) by the LiDAR pixel-based classification (Figure 13, upper right). The lower-left image in Figure 13 shows how the permanent crops are not correctly detected in the WV2 classification when working only with spectral and associated textural features.

GIS-ready CLC and stability maps
The stability map (Figure 14, right) is generated by considering the ratio between the second-best and the winner class for each segment. Red areas, with stability indices close to unity, have to be reviewed, while yellow ones can be considered reliable.
The stability map is a useful tool that extends the well-known concept of the confusion matrix, especially during the instantiation of the training set: small areas can be processed and checked to provide feedback (stability maps), and the training set and classifier parameters may then be tuned to obtain better classification results.
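The per-segment stability index described above can be sketched as the ratio of the second-best class score to the winner's score, so that values near 1 flag ambiguous segments to review.

```python
import numpy as np

def stability_index(class_scores):
    """Ratio of the second-best to the winner class score for one segment.

    class_scores: per-class scores (e.g. WTA class frequencies) for a segment.
    Values close to 1 mean an unstable (ambiguous) assignment to review;
    low values mean the winner is reliable.
    """
    s = np.sort(np.asarray(class_scores, dtype=float))[::-1]  # descending
    return s[1] / s[0] if s[0] > 0 else 1.0
```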
We considered the stability map as a comparison term between different strategies in our methodology. In Table 5 we compare computed average stability and standard deviation by class and by strategy.
There is a clearly detectable improvement in WTA stability for the urban areas (1.1.0, 1.2.1 and 1.2.2).

Conclusions and ongoing research
The key idea of this paper is to use both spectral and LiDAR data to improve classification results in terms of extracted classes and robustness. The methodology is based on a hybrid approach that combines pixel-based and object-based classifications. The WTA approach models the variety and/or the dynamics of a closed segment supporting the extraction of complex/heterogeneous classes that, by their CLC definition, are not actually spectrally separable. The post-integration of spectral and LiDAR classification by a set of rules enhances/reinforces the extraction of CLC classes as continuous or discontinuous urban areas, arable lands, heterogeneous agricultural areas and permanent crops.
Values of the NDVI index derived from the multispectral imagery aid in separating vegetation from human-made materials and improve the classification of the LiDAR data, while elevation and height attributes extracted from the LiDAR data help in discriminating features such as buildings, roads and the often-dry streams and waterways.
The LiDAR dataset augments the spectral classification owing to the robust detection of tree, building and land classes and improves the extraction of third level CLC classes.
Improvements are evident in classes with similar spectral characteristics but for which altitude is a relevant discrimination factor. Experimental results show that LiDAR should be augmented by using the NDVI index to avoid the misclassification between dense canopy and buildings, in particular in heterogeneous environments.
The double linking of LiDAR and spectral data suggests the importance and the relevance of multisource heterogeneous data when dealing with a highly detailed legend (such as the CLC legend at the third and fourth levels). As the level of detail increases the spectral separation decreases due to the dominance of the semantic over the spectral class definition. The set of rules takes into account the concept of heterogeneous areas where different objects with different spectral signatures are present.
In conclusion, the new proposed methodology: (I) represents a step forward evaluating the synergistic use of high spatial resolution multispectral imagery and high-posting-density LiDAR data (1 and 2 m) for LULC classification; (II) automates complex manual procedures, saving time and money and also increasing the number of acquisitions and analyses.
Finally, the resulting LULC map is GIS-ready and thus suitable for use in GIS-based approaches, like the two we have applied, again in the Musone river basin, to detect changes in the estuary mapping [Mancini et al., 2015] and to determine the class and location of buffer strips [Pierdicca et al., 2016].