This article presents earthquake hazard, vulnerability and risk assessment as a case study to demonstrate the applicability of the Spatial Multi-criteria Analytical Ranking Tool (SMART), which is based on Saaty's multi-criteria decision analysis (MCDA) technique. Three study sites in Delhi were chosen for the research as they correspond to typical patches of urban environs, densely occupied by residential, commercial and industrial units. The components affecting earthquake hazard are established as geographic information system data-set layers, including seismic zone, peak ground acceleration (PGA), soil characteristics, liquefaction potential, geological characteristics, land use, and proximity to fault and epicentre. Only the physical vulnerability layers comprising building information, namely number of storeys, year-built range, area, occupancy and construction type, derived from remote sensing imagery, were considered for the current research. SMART was developed for earthquake risk assessment, and weights were derived at both the component and element levels. Based on weighted overlay techniques, the earthquake hazard and vulnerability layers were created, from which the risk maps were derived through multiplicative analysis. The developed risk maps may prove useful in the decision-making process and in formulating risk mitigation measures.

1. Introduction

Natural disaster risk analysis is a complex task that influences almost every aspect of decision-making. Although risk has different definitions amongst various stakeholders, it is considered a function of hazard (e.g. earthquakes, tropical cyclones, floods, etc.), vulnerability and exposure (life and property) (ADB 2005). The terminology of “disaster risk” in risk assessment has long been a source of confusion. This is mainly due to the contextuality that is automatically built into the concept of risk. In the field of disaster research itself, there are variants of the risk assessment framework based on the purpose/objective of the end user. Hence, in the current research, the nomenclature of terms associated with disaster risk follows the works of Birkmann (2007) and Peduzzi et al. (2009).

1.1. Hazard

A hazard is a potentially damaging exogenous event whose probable characteristics and frequency of occurrence can be estimated (expressed in numerical terms). It can be further defined as a potentially damaging physical event, phenomenon and/or human activity which may cause loss of life, property damage, social and economic disruption and environmental degradation.

1.2. Vulnerability

Vulnerability is the aggregated probability describing a system's susceptibility to a disaster and its effects. In the purview of the present research, it is viewed as an intrinsic characteristic of a system (physical, social, economic and environmental factors) which increases the susceptibility of a community to the impact of a hazard. The dimensions of vulnerability include the following:

  1. Physical vulnerability – impact of events on assets such as building, infrastructure, agriculture, etc.

  2. Social vulnerability – impact of events on highly vulnerable groups such as poor, coping capacity of people, etc.

  3. Economic vulnerability – impact of hazards on economic assets and processes such as business interruptions, etc.

  4. Environmental vulnerability – impact on environmental quality, natural resilience to hazard, environmental buffering, etc.

1.3. Natural disaster risk

Objective measurement and scientific repeatability are hallmarks of risk (Prevention Web 2011). Disaster risk is a function of the probabilistic output of potential disaster exposure and vulnerability (expressed in numerical terms). It is important to note here that natural disaster risk cannot be construed as a simple summation/aggregation of the distributional potentials of the hazard; instead, it is an inclusive term built upon ‘systemic weakness (vulnerability)’ and ‘hazard exposure’.

In the current research, the natural disaster risk is defined through the following relationship:

Risk = f (Hazard × Vulnerability), where

Hazard = f (extension {area}, frequency {time}, magnitude {strength of event})

Vulnerability = f (physical, social, economic and environmental factors)

Despite varied definitions, natural disaster risk in all its forms has continued to increase in complexity, magnitude and frequency, causing disruption to the social–economic–environmental support system. Amongst natural disaster categories, the risks caused by geophysical events, namely earthquakes, tsunamis and volcanic eruptions, are considered the most complex and disastrous because, since time immemorial, they can neither be predicted nor estimated (Erlingsson 2007). The destructive power of geophysical hazards, combined with vulnerabilities across the spectrum of exposed elements, has led to large-scale covariate losses (Parvez et al. 2003). Though it may not be feasible to prevent the risk and the associated damage caused, efforts can be directed at alleviating their impacts on life and property by adopting apposite management strategies.

One such devastating geophysical hazard is the earthquake. The term ‘earthquake’ describes a sudden, rapid shaking of the earth caused by the release of energy stored in rocks, capable of causing great destruction (FEMA 2002). Earthquakes are caused by the movement of tectonic plates. According to seismic zone maps developed by the Building Materials and Technology Promotion Council (BMTPC 1997), over 65% of India is prone to earthquakes of intensity VII or more on the Medvedev–Sponheuer–Karnik (MSK) scale. Based on the maximum intensity of earthquake expected, India has been divided into four seismic zones (II–V). Amongst these, zone V is the most active one and comprises the entire Northeast India, the northern portion of Bihar, Uttarakhand, Himachal Pradesh, Jammu and Kashmir, Gujarat, and the Andaman and Nicobar Islands.

The objective of the research is to evaluate the use of the Spatial Multi-criteria Analytical Ranking Tool (SMART) for earthquake risk assessment in regions of Delhi, India. Delhi, which lies between the latitudinal parallels of 28°40′N and 28°67′N and the longitudinal parallels of 77°14′E and 77°22′E, occupies the northern region of India (216 m above sea level). The study area is characterized by a typical congregation of urban landform with high usage, density and exposure, which is appropriate for developing activities revolving around risk planning and mitigation. Such characteristics make the area suitable for a case study to demonstrate the applicability of the tool and of multi-criteria decision analysis (MCDA) techniques.

The geological setting of Delhi exposes it to naturally hazardous phenomena such as earthquake activity. In this highly populous city, building constructions are not earthquake-resistant. In addition, rising demographics, rapid developmental activities and unsafe building practices are compounding the earthquake risk: to cater for such a large population, low-cost materials are employed without much consideration of technical aspects, and structures consequently become easy targets of disaster events. The scope of the current research is to employ SMART, a multi-criteria decision-making tool, to develop seismic hazard, vulnerability and risk maps, which can be used by planners, managers and decision-makers to manage risk by regulating development in highly vulnerable areas and reducing losses through the adoption of preventive measures.

Advances have been made in earthquake hazard and vulnerability assessment, including mapping, zoning and risk scoring, using the geospatial techniques of remote sensing, GIS and the global positioning system (GPS). In this study, geospatial techniques have been employed to assess the spatio-temporal nature of geophysical risk using diverse spatial data-sets. However, the application of geospatial technology in isolation cannot address the complexities associated with earthquake risk management and fails to extend full support to decision-making. Being a problem with multiple dimensions, multiple criteria and conflicting objectives, its assessment is considered an MCDA problem that needs specialized tools and techniques capable of supporting systematic decision-making. MCDA is characterized by the evaluation of a finite set of alternatives on the basis of conflicting and incommensurable criteria, whether quantitative, qualitative or both. The preference values of the alternatives on a permissible scale measure the overall preference values (Malczewski 1999; Jankowski et al. 2001).

There is a notable amount of research on hazard, vulnerability and risk assessment in which MCDA techniques have been used. MCDA has been used extensively to analyse flood vulnerable areas in Turkey (Yalcin 2002), seismic retrofitting of an under-designed reinforced concrete structure (Caterino et al. 2006), the effectiveness of alternative retrofit options in seismic risk mitigation (Giovinazzi et al. 2006), vulnerability assessment of volcanic risk (Aceves-Quesada et al. 2007), fire risk evaluation (Vadrevu et al. 2009), seismic hazard for Bangalore city of India (Anbazhagan et al. 2011) and assessment of landslide vulnerability in Romania (Armas 2011). Hizbaron et al. (2011) tested the feasibility of spatial multi-criteria evaluation (SMCE) for social vulnerability assessment in the seismic-prone areas of Bantul, Indonesia. Erden and Karaman (2011), in their study, analysed earthquake parameters and generated hazard maps for the Kucukcekmece region by integrating MCE and GIS. Martins et al. (2012) present a GIS-based MCDA to assess social vulnerability to seismic risk in the municipality of Vila Franca do Campo in Portugal. Thus, there has been a growing interest in GIS-based MCDA techniques for disaster risk assessment, further evident from the increasing number of articles published on this topic.

The MCDA technique incorporated into the GIS-based earthquake risk analysis is the analytical hierarchy process (AHP) devised by Saaty (1980). SMART, based on the concept of AHP, is employed to conduct pairwise comparisons based on expert judgement and derives weights at both the component and element levels. A weighted overlay analysis is performed to develop the seismic hazard and vulnerability maps. These maps are then combined to develop the seismic risk map in a manner similar to that used in linear multiplicative combination methods.

Three sites in Delhi were selected for the study (figure 1). Site I (pin code – 110001) falls in the central Delhi region. The region covers an area of approximately 25 sq. km with a population concentration of 646,385 persons. Study site II (pin code – 110025) lies in the south Delhi region, which accommodates a total population of 2,267,023 within an area of 250 sq. km. Site III (pin code – 110023) falls in the south-west region of Delhi. The region has the lowest population density, with 1,755,041 persons enclosed in an area of 420 sq. km. The population densities in the south (6012) and south-west (2583) regions are comparatively lower than in the central region (Census 2001). The density of establishments is quite high in Site I (3223.48), followed by Site II (419.20) and Site III (146.10) (Fifth Economic Census 2005, Delhi). To achieve the objective of developing the earthquake risk map, eight earthquake hazard and five vulnerability causal components were taken into consideration. These were extracted, calculated and evaluated, and then the component weights and element weights were developed.

Figure 1. Overview of study area.

2. Materials and methods

2.1. Spatial data layers for hazard and vulnerability assessment

Earthquake risk in this study is predominantly a function of hazard and vulnerability, i.e. risk = f (Hazard × Vulnerability). The development of the earthquake risk map involved the generation of both hazard and vulnerability maps. In the vulnerability analysis, only physical vulnerability (building vulnerability) has been considered in this study. The data-sets for hazard and vulnerability were derived from various data sources. The data development for risk analysis is beyond the scope of the current study.
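
As a minimal illustration of this multiplicative relationship, the cell-wise combination of normalized hazard and vulnerability scores can be sketched as follows; the array values are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical hazard and vulnerability score grids, normalized to 0-1
hazard = np.array([[0.2, 0.8],
                   [0.5, 0.9]])
vulnerability = np.array([[0.4, 0.6],
                          [0.3, 0.7]])

# risk = f(Hazard x Vulnerability): cell-wise product of the two layers
risk = hazard * vulnerability
# risk is [[0.08, 0.48], [0.15, 0.63]]
```

A real workflow would read the two layers from rasterized GIS data, but the combining operation itself is just this element-wise product.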

Various processing steps were carried out for the development of the spatial data layers and a geo-database was created. MCDA was applied to extract standardized weights for the different layers and, finally, earthquake hazard and vulnerability maps for the three study sites of Delhi were created by weighted overlay techniques, as discussed in the subsequent section. Risk maps were constructed from the obtained hazard and vulnerability maps. Figure 2 summarizes the methodological framework adopted in the present research.

Figure 2. Methodological framework adopted in the present research.

2.2. MCDA tool development and implementation – SMART

Saaty's (2000) multi-criteria decision analysis process, a participatory decision-making framework, was used to develop the seismic hazard and vulnerability maps of the study area. To perform the risk assessment, a spatial tool called ‘SMART’ (Spatial Multi-criteria Analytical Ranking Tool), following Saaty's concept and MCDA calculation techniques, was developed (figure 3). The code for the application was written in C#, and MapXtreme was used for the development of the mapping interface. SMART is designed in an easy-to-use format so that users are able to reap some of the benefits of GIS without any background knowledge. SMART consists of four different components for performing AHP:

Figure 3. SMART interface for performing AHP.

  1. Selection of hazard/vulnerability component and its element for hierarchical development.

  2. Pairwise comparison and weight determination at different levels of hierarchy, i.e. component and its element.

  3. Weighted overlay analysis.

  4. Hazard/vulnerability/risk map creation using decision rule.

On running SMART, the Home tab page opens up, whereby users can launch layers for AHP analysis. The user has the flexibility to select a minimum of three and a maximum of nine components for analysis from a range of available component maps. This is because, as a rule of thumb, psychologists conclude that nine objects are, in general, the most that an individual can simultaneously compare and consistently rank in any one node of a decision hierarchy while achieving decision efficiency and consistency (Pereira and Duckstein 1993; Zhu et al. 1996). On selecting components for evaluation, SMART launches the pairwise comparison tab for component- and element-level evaluation. Depending on the number of components selected, the same number of tabs and one cross-component pairwise comparison tab open up for analysis at both levels (figure 3). The user is provided with comparison charts whereby they can select absolute numbers to express individual preferences/judgements on a semantic nine-point scale for the assignment of priority values (Saaty 1980).

Once the user is through with the comparative analysis, SMART displays weights and consistency ratios (CRs) based on the expert judgements, besides calculating eigenvalues and the consistency index (CI). The CI provides information about the logical consistency among the judgements in a pairwise comparison matrix: as the value of the CI grows, so does the degree of logical inconsistency among the pairwise comparison judgements. The comparison matrix is considered adequately consistent if the corresponding CR is less than 10% (Saaty 1980). The weighted overlay analysis, incorporating the final derived weights at both the component and element levels, is then performed to develop the hazard and vulnerability maps, respectively. The obtained hazard/vulnerability map is launched in the map control of the home tab of SMART.
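
The weighted overlay step can be sketched roughly as below; the component names, cross-component weights and element-weight grids are illustrative stand-ins for the layers actually used in the study:

```python
import numpy as np

# Illustrative cross-component weights (would come from the AHP step)
component_weights = {"pga": 0.42, "soil": 0.33, "land_use": 0.25}

# For each component, the element weight assigned to every grid cell
element_layers = {
    "pga":      np.array([[0.9, 0.4], [0.6, 0.2]]),
    "soil":     np.array([[0.5, 0.7], [0.3, 0.8]]),
    "land_use": np.array([[0.6, 0.5], [0.9, 0.1]]),
}

# Weighted overlay: cell score = sum over components of
# (component weight x element weight at that cell)
score = sum(w * element_layers[c] for c, w in component_weights.items())
```

Each cell of `score` is then a hazard (or vulnerability) value in the same 0–1 range as the input weights, ready for classification into hazard classes.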

2.3. Algorithm for SMART

SMART is based on Saaty's MCDA techniques. The calculation presented in the subsequent section forms the back end of SMART development.

Step 1: Pairwise comparison matrix

This step involves the comparison of one component or element over another. Only one side of the off-diagonal elements needs to be filled in; the other side is filled with the reciprocal of the corresponding off-diagonal element (figure 4). Diagonal elements are always equal to 1.

Figure 4. Pairwise comparison matrix.
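
A sketch of Step 1, assuming a three-component comparison with hypothetical Saaty-scale judgements supplied for the upper triangle only:

```python
import numpy as np

n = 3  # number of components being compared
A = np.ones((n, n))  # diagonal elements are always 1

# The expert fills one side of the off-diagonal (Saaty's 1-9 scale);
# the other side is the reciprocal of the corresponding element.
judgements = {(0, 1): 3.0, (0, 2): 5.0, (1, 2): 2.0}
for (i, j), v in judgements.items():
    A[i, j] = v
    A[j, i] = 1.0 / v
```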

Step 2: Computation of normalized matrix

Once the matrix from Step 1 is ready, the sum of each column is calculated. All elements of the pairwise comparison matrix of Step 1 are divided by their respective column sum values to generate the normalized matrix (figure 5).

Figure 5. Normalized matrix.
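
Step 2 amounts to dividing each entry by its column sum, so that every column of the normalized matrix sums to 1; a small sketch with an illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [1/3, 1.0]])  # illustrative pairwise comparison matrix

col_sums = A.sum(axis=0)  # sum of each column
N = A / col_sums          # normalized matrix: each column now sums to 1
```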

Step 3: Weight (normalized eigenvector) derivation

The individual weights/normalized principal eigenvectors of the components or elements are derived by summing the respective row of the normalized matrix and dividing by the number of components/elements used for the analysis (equation 1):

w_i = (Σ_j N_ij)/n, (1)

where N_ij denotes the elements of the normalized matrix and n is the number of components/elements.

Example:

Normalized principal eigenvector1 = (0.533 + 0.800 + 0.500 + 0.381 + 0.320 + 0.286 + 0.267 + 0.267)/ 8 = 0.42
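
In code, the worked example above reduces to averaging the normalized row:

```python
# Row of the normalized matrix from the worked example
row = [0.533, 0.800, 0.500, 0.381, 0.320, 0.286, 0.267, 0.267]

# Weight = row sum divided by the number of components (8)
weight = sum(row) / len(row)
print(round(weight, 2))  # 0.42
```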

Step 4: Calculation of principal eigenvalue (λmax)

Further, the eigenvalues are calculated, each defined as the product of the normalized principal eigenvector and the sum of the respective column of the pairwise comparison matrix (equation 2). The principal eigenvalue is derived from the summation of the eigenvalues (equation 3):

e_i = w_i × S_i, (2)

λmax = Σ_i e_i, (3)

where S_i is the sum of the ith column of the pairwise comparison matrix.
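
Equations (2) and (3) can be sketched as follows with an illustrative matrix; a reciprocal 2 × 2 matrix is always perfectly consistent, so λmax comes out equal to the matrix order:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [1/3, 1.0]])        # illustrative pairwise matrix
col_sums = A.sum(axis=0)          # column sums from Step 2
weights = (A / col_sums).sum(axis=1) / len(A)  # Step 3 weights

# Equation (2): eigenvalue_i = weight_i x column sum_i
# Equation (3): lambda_max = sum of the eigenvalues
lam_max = float((weights * col_sums).sum())
# lam_max is (up to rounding) 2.0 here
```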

Step 5: Computation of CI and CR

The CI is the consistency index, which is derived from the principal eigenvalue and the number of components (equation 4):

CI = (λmax − n)/(n − 1). (4)

The logical consistency of the comparison matrices is evaluated using the CR, which is derived as the ratio of the CI to the random inconsistency index (RI) (equation 5):

CR = CI/RI. (5)

RI denotes the random inconsistency index, obtained by averaging the CIs of many randomly generated pairwise comparison matrices (Saaty and Vargas 1993).
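
Equations (4) and (5) combine into a small consistency check; the RI values below are Saaty's published random indices for matrix orders 1–10:

```python
def consistency_ratio(lam_max, n):
    """Return CR for a pairwise comparison matrix of order n (n >= 3)."""
    # Saaty's random inconsistency indices for orders 1..10
    RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]
    ci = (lam_max - n) / (n - 1)   # equation (4)
    return ci / RI[n - 1]          # equation (5)

# A matrix is considered adequately consistent when CR < 0.10
print(consistency_ratio(8.2, 8) < 0.10)  # True
```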

2.4. Assessing the weights and overlaying components

MCDA was performed by deriving weights for the selected map components using nine experts who work closely in seismic risk assessment fields: two academic researchers, three from government organizations and four from non-government organizations. In this research, pairwise comparison and weight determination addressing preference statements were carried out at two levels of hierarchy, i.e. the elements of each component and the components themselves. The two levels of weighting performed to obtain the standard weights were:

  1. weight derivation of the element associated with each component (attribute values of the map layers), and

  2. weight derivation of component/decision factor (map) layers.

The pairwise comparison matrices obtained from each expert that met the consistency criterion of CR < 0.10 were considered for aggregation. The matrices were aggregated to obtain a complete overview of the experts' preferences related to hazard and vulnerability. In the aggregation, the geometric mean was derived for each element of the matrix (Saaty 2000). A geometric mean is used instead of an arithmetic mean when aggregating different criteria into a single “figure of merit” because it “normalizes” the ranges being averaged, so that no range dominates the weighting and a given percentage change in any of the criteria has the same effect on the geometric mean. For example, a 20% change in seismic zonation from 4 to 4.8 has the same effect on the geometric mean as a 20% change in slope variability from 60 to 72. The resulting priority weights for the earthquake hazard and vulnerability components and their elements are listed in tables 1 and 2, respectively. A weighted overlay analysis was then performed to obtain the final hazard and vulnerability maps, illustrated in figures 6 and 7, respectively.
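
The geometric mean aggregation across experts can be sketched as below, with hypothetical judgement matrices from three experts; note that the element-wise geometric mean preserves the reciprocal property of the aggregated matrix:

```python
import numpy as np

# Hypothetical pairwise matrices from three experts, each having
# already passed the CR < 0.10 consistency screen
experts = [
    np.array([[1.0, 3.0], [1/3, 1.0]]),
    np.array([[1.0, 5.0], [1/5, 1.0]]),
    np.array([[1.0, 4.0], [1/4, 1.0]]),
]

# Element-wise geometric mean: k-th root of the element-wise product
stacked = np.stack(experts)
aggregated = stacked.prod(axis=0) ** (1.0 / len(experts))

# Reciprocity is preserved: aggregated[1, 0] == 1 / aggregated[0, 1]
```

An arithmetic mean of the same matrices would break this reciprocal structure, which is one reason the geometric mean is the standard aggregation for AHP judgements.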

Table 1. Spatial data layers employed in the hazard study with their sources.

Table 2. Priority weights for seismic hazard and vulnerability mapping.

Figure 6. Seismic hazard map of Delhi, India.

Figure 7. Seismic vulnerability map of Delhi, India.

Figure 8. Seismic vulnerability maps of the three selected study sites of Delhi.

3. Results and discussion

Different hazard and vulnerability components were appraised in order to establish their validity and usefulness, and eventually to develop composite hazard and vulnerability maps for the study region. Following the MCDA technique, each component and its elements were assigned weights and rankings according to their perceived relative significance to seismic hazard and vulnerability. To develop the hazard map, eight earthquake-inducing components were calculated, namely seismic zone, PGA, soil characteristics, liquefaction potential, geological characteristics, land use, distance to faults and distance from epicentre (table 1). Building attributes constituting the physical vulnerability components, namely building area, age, number of storeys, occupancy and construction type, were used to develop the seismic vulnerability map. These factors were evaluated, and then the component weights and element weights were derived from the pairwise comparison matrices.

The obtained cross-component weights were applied and a hazard score map of Delhi was generated with values ranging between 0.17 and 0.89 (figure 6). The hazard map was classified into five categories, namely negligible (0–0.01), low (0.01–0.25), moderate (0.26–0.50), high (0.51–0.75) and very high (0.76–1.00) hazard. The classified map illustrates that the seismic hazard of the Delhi region, in terms of the very high and high hazard classes, follows the order east > north > west > south. The east region of Delhi is considerably hazard prone because it is positioned in a high seismic zone and has greater liquefaction potential. Parts of central Delhi are also subject to high hazard susceptibility owing to the accumulation of socio-economic assets; the concentration of people and their assets becomes a major contributor to the hazard risk. In the seismic hazard assessment, PGA was ranked as the most important component and considered the highest contributing factor to seismic hazard by all experts. Soil characteristics, geological characteristics, liquefaction potential and land use, with ranks 2, 3, 4 and 5 within the MCDA method, form the second line of influencing components. In all pairwise comparison matrices, the ranking is very sensitive to small changes in the component weights. The differences between liquefaction potential and geological characteristics, as well as between liquefaction and land use, are negligibly small according to the MCDA techniques.
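
The assignment of continuous scores to the five hazard classes can be sketched with fixed breaks (the score values below are illustrative, not cells from the actual map):

```python
import numpy as np

# Upper class breaks for negligible/low/moderate/high/very high
breaks = [0.01, 0.25, 0.50, 0.75]
labels = ["negligible", "low", "moderate", "high", "very high"]

scores = np.array([0.17, 0.42, 0.89])  # hypothetical cell scores
classes = [labels[i] for i in np.digitize(scores, breaks)]
print(classes)  # ['low', 'moderate', 'very high']
```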

Two levels of vulnerability analysis were carried out, i.e. at the pin code and building levels. For the pin code-level vulnerability assessment, two components, built-up and population, were used. These factors were evaluated, and then the component weights and element weights were derived from the pairwise comparison matrix. The obtained cross-component weights were applied and a vulnerability score map of Delhi was generated with values ranging between 0.1 and 0.9 (figure 7). The map was classified into five vulnerability classes, namely negligible (0–0.01), low (0.01–0.25), moderate (0.26–0.50), high (0.51–0.75) and very high (0.76–1.00). The classified map illustrates that the seismic vulnerability of the Delhi region, in terms of the very high and high vulnerability classes, follows the order east > north, central > south > west.

The building-level analysis involved the evaluation of the three study sites. Vulnerability component weights were assigned according to their perceived relative significance to seismic risk (table 2). The first component is the number of building storeys. Based on expert knowledge, the number of storeys is accepted as conferring a linearly increasing degree of vulnerability up to the seventh floor; buildings with more than seven storeys are accepted as falling under high and very high vulnerability. The second component is the building year-built range. Based on housing policy in India and expert knowledge, construction law can be accepted as a crucial point. Until 1978, the vulnerability value is accepted as increasing linearly; thereafter there is a sharp decrease because of housing policy and the legal structure. After 1978, vulnerability became lower, and after 1999, since strong arrangements were made in construction law, vulnerability is accepted as decreasing further. The third vulnerability component is building area. Buildings with greater area are accepted as the most vulnerable, whereas those with less area are accepted as the least vulnerable. The fourth component is construction type. Buildings made of mud/tin/kachcha materials are accepted as the most vulnerable, whereas those made from reinforced cement concrete (RCC) are accepted as the least vulnerable. The fifth component is building occupancy type. Residential buildings are accepted as the least vulnerable, as opposed to industrial structures. In the vulnerability assessment, the number of building storeys is considered the highest contributing factor in asset vulnerability modelling by all experts. Building area, construction type and occupancy type, with ranks 2, 3 and 4 within the MCDA, form the second line of influencing parameters. Also, the difference between construction type and occupancy is negligibly small.

The obtained cross-component weights were applied at the building level for each study site, namely Site I (pin code – 110001), Site II (pin code – 110025) and Site III (pin code – 110023), and vulnerability maps were generated with values ranging between 0.092 and 0.526, with a mean value of 0.16478 and a standard deviation of 0.0435. The maps were classified into four categories, namely low (0.092–0.144), moderate (0.145–0.182), high (0.183–0.246) and very high (0.247–0.526), using the natural breaks classification technique. The classified maps illustrate that vulnerability in the very high and high classes follows the order Site I > Site II > Site III. This conforms very well with the fact that Site I (figure 8(a)) lies in a zone IV area for seismic activity and is one of the most vulnerable areas for earthquakes. A number of high-rise buildings in and around Connaught Place, which are not earthquake proof, make it more vulnerable to damage (District Disaster Management Plan, New Delhi). A majority of the vulnerability scores for Site II also fall under the very high and high classes, as it consists of cluttered buildings of masonry construction type (figure 8(b)). Vulnerability falls in the moderate and low classes for Site III, as this site consists of medium-rise, planned residential buildings, mostly of RCC construction type (figure 8(c)).

As risk is defined as a function of hazard and vulnerability, a multiplicative operation between the developed hazard and vulnerability maps was carried out to generate the risk maps of Site I (pin code – 110001), Site II (pin code – 110025) and Site III (pin code – 110023). The risk values of these sites ranged between 0.044 and 0.373, with a mean value of 0.105 and a standard deviation of 0.034. The obtained risk maps were classified by the natural breaks technique into four classes, namely low (0.044–0.089), moderate (0.090–0.122), high (0.123–0.170) and very high (0.171–0.373) risk. The classified risk maps illustrate that risk in the very high and high classes follows the order Site I (figure 9(a)) > Site II (figure 9(b)) > Site III (figure 9(c)). The Connaught Place area, i.e. study site I, is shown to be the riskiest amongst all three sites. This high risk is in accordance with the reports of the District Disaster Management Plan, Delhi, which considers the Connaught Place region to be highly populous with a large number of high-rise buildings. The report also suggests that an earthquake of large magnitude may result in colossal damage to this study region.

Figure 9. Seismic risk maps of the three selected study sites of Delhi.

These developed risk maps could be useful for explaining the known existing earthquake susceptibility and for making emergency decisions. In addition, the results obtained in this study may assist the efforts of planners and decision-makers in adopting mitigation measures against future earthquake risk.

Decision mapping is elemental to making better decisions. It improves the “hit rate”, i.e. the proportion of decisions one basically gets right, by improving the thinking leading up to the decision. There are two main aspects to it. First, the various decision ingredients are made fully explicit – the questions, options and sub-options, and their relationships – in a systematic, disciplined way. Second, a map is produced, i.e. all of this is laid out in visual form. The use of SMART caters for both aspects of decision mapping: the diagramming conventions and depiction of scenarios, and the way everything projects together onto the map produced. Thus, the process of decision mapping using SMART makes the thinking clearer, more rigorous and more complete. Then, when the time comes to exercise choice, the judgement is well founded.

4. Conclusion

Earthquakes cause detrimental consequences to the socio-economic status of the regions in which they occur. Systematic assessment using various tools and techniques is important for earthquake risk assessment, especially in urban centres where rapid infrastructure development and rising demographics are making regions more susceptible to the vagaries of earthquake hazard. A GIS-based MCDA tool can be thought of as a process that combines and transforms geographical data with the value judgements of decision-makers and enables the analysis and modelling of geographical phenomena such as earthquake hazard and vulnerability. SMART, based on Saaty's concept, was developed to perform MCDA. It is aimed at assisting decision-makers and planners in developing decision scenarios based on their expert knowledge and illustrating the same in the spatial domain without a priori knowledge of GIS. The large-scale MCDA-based maps derived in this study can bridge the gap between overview plans and regional zoning, besides suggesting areas which require immediate attention for remedial measures. The results presented herein could assist planners and decision-makers in reducing losses by means of adopting adequate preventive and mitigation measures.

The present research benefits from the fact that vector layers can be used in the MCDA analysis in SMART; the use of raster layers is not mandatory, unlike in other GIS-based MCDA tools. Decision mapping can be carried out and visualized simultaneously based on well-founded judgement. However, the research is limited by the fact that interdependencies among the components are not considered in the evaluation. Thus, future work can be directed at taking component dependencies into account. Introducing high-resolution data for hazard will further enhance the analysis of the current work. With new technologies becoming available in the near future, there is much scope for including demographics at the building level, besides physical assets, to represent social vulnerability in the risk assessment.