CHAPTER 8 STRATEGIES FOR RISK ASSESSMENT - CASE STUDIES

According to the United Nations Department of Humanitarian Affairs (UNDHA, 1992), assessment involves survey of a real or potential disaster to estimate the actual or expected damages and to make recommendations for prevention, preparedness and response. The survey of the expected damages for a potential disaster essentially consists of a risk evaluation. Risk is defined as the expected losses (of lives, persons injured, property damaged and economic activity disrupted) due to a particular hazard for a given area and reference period (UNDHA, 1992). In mathematical terms, risk is the product of hazard and vulnerability (UNDHA, 1992). Risk evaluations should be the basis of the design and establishment of methods to prevent, reduce and mitigate damages from natural disasters. Methods to evaluate meteorological, hydrological, volcanic and seismic hazards are available and have been presented in Chapters 2 to 5, respectively. Methods also are available to develop a commensurate rating system for the possible occurrence of multiple potentially damaging natural phenomena (e.g., landslides and floods) and to present equivalent hazard levels to land-use planners in a single map, as illustrated by the example given in Chapter 6. Methods also have been proposed to evaluate the economic damages resulting from natural disasters, some of which are presented in Chapter 7. However, despite the availability of the methods to evaluate the damages resulting from natural disasters, most societies have preferred to set somewhat arbitrary standards on the acceptable hazard level as the basis for mitigation of risks from natural disasters.
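The product definition of risk can be made concrete with a minimal sketch. The hazard probability, vulnerability fraction and exposed value below are invented for illustration; they are not values from this chapter:

```python
# Sketch of the UNDHA risk definition: risk = hazard x vulnerability,
# scaled by the value exposed. All numbers are illustrative assumptions.

def expected_annual_loss(hazard_prob, vulnerability, value_exposed):
    """Expected loss per year for one hazard scenario.

    hazard_prob   -- annual probability of the damaging event
    vulnerability -- fraction of the exposed value lost if it occurs
    value_exposed -- value of property/activity exposed to the hazard
    """
    return hazard_prob * vulnerability * value_exposed

# A town with $100 million exposed to a flood that occurs with
# probability 0.01 per year and destroys 30% of the exposed value:
risk = expected_annual_loss(0.01, 0.30, 100e6)  # expected loss per year
```

A risk evaluation of this kind, repeated for each hazard and each mitigation option, is what allows damages to be compared against societally acceptable levels.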
Without a detailed evaluation of the damages resulting from natural disasters and the direct consideration of societally acceptable damage levels (including loss of life), society is sure to inadequately allocate natural-disaster risk-mitigation funds and, as a result, is guaranteed to encounter damage that is deemed unacceptable by society. In recent years, several countries have started to apply risk evaluations in the design and establishment of methods to prevent, reduce and mitigate damages from natural disasters. This chapter includes reviews of examples of these methods applied to: (1) the design of coastal-protection works in The Netherlands, earthquake-resistant structures in Mexico and Japan, and flood-protection works in the USA; and (2) the establishment of flood mitigation via land-use planning in France. This chapter does not include an exhaustive review of risk evaluations, but rather presents examples to illustrate that the methods are available and have been successfully applied. This review provides a framework for the development and application of similar methods for mitigation of other natural disasters, as appropriate, to conditions in other countries. Thus, in this chapter, assessment is defined as a survey and evaluation to estimate the expected damages from a potential disaster and to recommend designs or measures to reduce damages to societally acceptable levels, if possible.

8.1 IMPLICIT SOCIETALLY ACCEPTABLE HAZARD

It is valuable to review the history of the determination of societally acceptable hazards in order to understand the need for risk assessment in the design and establishment of mitigation programmes for risks from natural disasters. In the design of structures and the establishment of land-use management practices to prevent and/or reduce damages resulting from natural disasters, the risk or damage assessment typically has been implicit.
An example can be taken from the area of flood protection, where the earliest structures or land-use management practices were designed or established on the basis of the ability to withstand previous disastrous floods. Chow (1962) notes that the Dun waterway table used to design railroad bridges in the early 1900s was primarily determined from channel areas corresponding to high-water marks studied during and after floods. Using this approach, previous large floods of unknown frequency would safely pass through the designed bridges. Also, after a devastating flood on the Mississippi River in 1790, a homeowner in Sainte Genevieve, Missouri, rebuilt his house outside the boundary of that flood. Similar rules were applied in the design of coastal-protection works in The Netherlands at the time the Zuiderzee was closed (1927-32) (Vrijling, 1993). In some cases, rules based on previous experience work well. For example, the house in Missouri was not flooded until the 1993 flood on the Mississippi River, and the Zuiderzee protection works survived the 1953 storm that devastated the southwestern part of The Netherlands. However, in most cases these methods are inadequate because human experience with floods and other natural hazards does not include a broad enough range of events, nor can it take into account changing conditions that could exacerbate natural disasters. As noted by Vrijling (1993), "one is always one step behind when a policy is only based on historical facts."

In the early part of the twentieth century, the concept of frequency analysis began to emerge as a method to extend limited data on extreme events. These probabilistically based approaches allow estimates of the magnitude of rarely occurring events. Frequency analysis is a key aspect of meteorological, hydrological and seismic hazard analyses as described in Chapters 2, 3 and 5, respectively.
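A frequency analysis of the kind described above can be sketched with a Gumbel (extreme-value type I) fit by the method of moments. The annual-maximum flows below are invented for the example; a real study would apply the fuller methods of Chapters 2, 3 and 5:

```python
import math

# Illustrative frequency analysis: fit a Gumbel (EV1) distribution to a
# short record of annual maximum floods and estimate the T-year event.

def gumbel_fit(annual_maxima):
    """Method-of-moments estimates of the Gumbel location and scale."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta               # location (Euler's constant)
    return mu, beta

def gumbel_quantile(mu, beta, return_period):
    """Magnitude exceeded once on average every `return_period` years."""
    p_non_exceed = 1.0 - 1.0 / return_period
    return mu - beta * math.log(-math.log(p_non_exceed))

flows = [310, 450, 280, 520, 390, 610, 330, 480, 700, 420]  # m3/s, invented
mu, beta = gumbel_fit(flows)
q100 = gumbel_quantile(mu, beta, 100)   # estimated 100-year flood
```

The estimated 100-year flood exceeds every observed flow, which is precisely the point: the fitted distribution extrapolates beyond the limited record.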
Thus, using frequency-analysis methods, it is possible to estimate events with magnitudes beyond those that have been observed. This necessitates the selection of a societally acceptable hazard frequency.

In the USA, the societally acceptable frequency of occurrence of flood damage was formally set to once on average in 100 years (the so-called 100-year flood) in the Flood Disaster and Protection Act of 1973. However, the 100-year flood had been used in engineering design for many years before 1973. In this Act, the US Congress specified the 100-year flood as the limit of the flood plain for insurance purposes, and this has become widely accepted as the standard of hazard (Linsley and Franzini, 1979, p. 634). This acceptable hazard frequency was to be applied uniformly throughout the USA, without regard to the vulnerability of the surrounding land or people. The selection was not based on a benefit-cost analysis or an evaluation of probable loss of life. Linsley (1986) indicated that the logic for one fixed level of flood hazard (implicit vulnerability) was that everyone should have the same level of protection. Linsley further pointed out that many hydrologists readily accepted the implicit-vulnerability assumption because a relatively uncommon flood was used to define the hazard level, and, thus: "The probability that anyone will ever point a finger and say 'you were wrong' is equally remote. It is obvious that the estimate was wrong only if a new flood is larger than the 10-year or 100-year flood, as the case may be. If the estimate is not exceeded, there is no reason to think about it." Comprehensive mitigation of risks resulting from potentially damaging natural phenomena requires a more rigorous consideration of the losses resulting from the hazard and society's willingness to accept these losses.
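The practical meaning of the 100-year standard can be made concrete: an annual exceedance probability of 0.01 still implies a substantial chance of occurrence over an ordinary planning horizon. A small sketch, with the 30-year horizon chosen here only as an example:

```python
# Probability that the T-year flood is equalled or exceeded at least
# once during an n-year period, assuming independence between years.

def prob_at_least_one(return_period, n_years):
    annual_p = 1.0 / return_period
    return 1.0 - (1.0 - annual_p) ** n_years

# Chance of experiencing the 100-year flood during a 30-year period
# (for example, the life of a typical mortgage):
p30 = prob_at_least_one(100, 30)   # roughly one chance in four
```

This is why a fixed return-period standard is not the same thing as a statement about acceptable damages: the residual chance of exceedance, and its consequences, remain unexamined.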
For other types of disaster, societally acceptable hazard levels also have been selected without formal evaluation of benefits and costs. For example, in the USA, dam-failure risks are mitigated by designing dams to pass the probable maximum flood where failure may result in the loss of life. Also, in The Netherlands, coastal-protection works are normally designed by application of a semi-deterministic worst-case approach wherein the maximum storm-surge level (the 10 000-year water level) is assumed to coincide with the minimum interior water level. Comparison of the worst-case approach to the probabilistic-load approach (described in section 8.2) resulted in a 40 per cent reduction in the design load when the actual probability of failure was considered (Vrijling, 1993). This example illustrates that when a risk assessment is performed, a societally acceptable level of safety can be maintained and in some cases improved, while at the same time effectively using scarce financial resources.

The examples presented in the following sections illustrate how risk assessment can be done to keep losses/damages resulting from potentially damaging natural phenomena within societally acceptable bounds. In the case of storm surges in The Netherlands, the loss/damage analysis considered only the rate of fatalities (section 8.2). In the case of earthquakes in Tokyo and Mexico City, the loss/damage analysis considered the cost, including the cost of fatalities, with possible additional constraints on the rate of fatalities (section 8.3). In the case of flood management in the USA, the loss/damage analysis was approached in terms of economic benefits and costs, with other consequences of flooding and the flood-management projects considered in the decision-making (section 8.4.1). Finally, in the case of flood management in France, the societally acceptable flood risk
was determined by negotiation among flood-plain land owners and local or national government representatives. These losses/damages were transformed into hazard units for comparison with the flood hazard at a given location (section 8.4.2).

8.2 DESIGN OF COASTAL PROTECTION WORKS IN THE NETHERLANDS

More than one half of the land surface in The Netherlands lies at elevations below the 10 000-year storm-surge level. These areas include approximately 60 per cent of the population of the country (Agema, 1982). Therefore, protection of coastal areas from storm surge and coastal waves is of paramount importance to the survival of The Netherlands. These concerns are especially important in the southwestern coastal Province of Zeeland, where the funnelling effect of the English Channel on storm surges in the North Sea greatly magnifies the height and intensity of the surges. The following discussion of the design of coastal-protection works in The Netherlands is based on Vrijling (1993), and readers are referred to this paper for further information.

Application of frequency analysis to storm-surge levels was first proposed in The Netherlands in 1949. After a long debate in the committee formed to provide coastal protection in the Maas-Rhine delta in and near Zeeland (the Delta Committee), the acceptable return period for storm surges was set at once on average in 10 000 years. This resulted in a storm-surge level of 5 m above mean sea level, to which a freeboard for wave run-up would be added in design. This Delta Standard design rule has been applied in the analysis of all Dutch sea dikes since 1953.

A new design method and evaluation of societally acceptable hazard levels were needed for the design of the storm-surge barrier for the Eastern Scheldt Estuary because of the structural and operational complexity of the barrier compared to a simple dike. Therefore, the design rules for dikes established by the Delta Committee had to be transformed into a set of rules suitable for a complicated structure. A consistent approach to the structural safety of the barrier was unlikely if the components (the foundation, concrete piers, sill and gates) were designed according to the rules and principles prevailing in the various fields. Thus, the Delta Committee developed a procedure for probabilistic design that could be consistently applied for each structural component of the barrier.

The Delta Committee set the total design load on the storm-surge barrier at the load with an exceedance probability of 2.5 x 10-4 per year (that is, the 4 000-year water level), determined by integration of the joint probability distribution among storm-surge levels, basin levels and the wave-energy spectrum. A single failure criterion then was developed for the functioning of all major components of the storm-surge barrier (concrete piers, steel gates, foundation, sill, etc.) under the selected design load. The failure criterion was tentatively established at 10-7 per year on the basis of the following reasoning. Fatality statistics for The Netherlands indicate that the average probability of death resulting from an accident is 10-4 per year. Previous experience has shown that the failure of a sea-defence system may result in on the order of 1 000 casualties. Thus, a normal safety level can be guaranteed only if the probability of failure of the system is less than or equal to 10-7 per year.

The joint distribution of storm-surge level, basin level and wave energy was developed for The Netherlands as follows. Frequency analysis was applied to available storm-surge-level data.
Knowledge of the physical laws governing the storm-surge phenomenon was used to determine whether extreme water levels obtained by extrapolation were physically realistic. A conditional distribution between storm-surge levels and basin levels was derived from a simple mathematical model of wind set-up and astronomical tide applied to simulation of different strategies for closing the barrier gates. The basin level was found to be statistically independent of the wave energy. A conditional distribution between storm-surge levels and wave energy could not be derived because of lack of data. Therefore, a mathematical model was developed considering the two sources of wave energy: deep-water waves originating from the North Sea and local waves generated by local wind fields.

The advanced first-order second-moment reliability analysis method (Ang and Tang, 1984, pp. 333-433; Yen et al., 1986) was applied to determine the failure probability of each major system component of the storm-surge barrier. An advantage of this method is that the contribution of each basic variable (model parameters, input data, model correction or safety factors, etc.) to the probability of failure of a given component can be determined. Thus, problematic aspects of the design can be identified and research effort can be directed to the variables that have the greatest effect on the probability of failure. Application of the failure criterion of 10-7 to the design of each major component of the storm-surge barrier was a substantial step in achieving a societally acceptable safety level. However, the appropriate approach is to determine the safety of the entire barrier as a sea-defence system. Thus, the probability of system failure was determined as a function of the probability of component failure and the probability of failure resulting from mismanagement, fire and ship collision through application of fault-tree analysis (Ang and Tang, 1984, pp. 496-498).
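The component-level reliability idea can be illustrated with a much simpler mean-value first-order second-moment calculation than the advanced method actually used for the barrier. The resistance and load statistics below are invented, and a linear limit state is assumed:

```python
import math

# Simplified first-order second-moment sketch: for a linear limit state
# g = R - S (resistance minus load) with normal R and S, the reliability
# index and failure probability follow from means and standard
# deviations alone. Numbers are invented for illustration.

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def fosm_failure_prob(mu_r, sigma_r, mu_s, sigma_s):
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    return beta, norm_cdf(-beta)

# Hypothetical pier: resistance ~ N(12, 1.5) MN, load ~ N(6, 1.0) MN.
beta, p_f = fosm_failure_prob(12.0, 1.5, 6.0, 1.0)
```

In the advanced version of the method, the relative size of each variance term in the denominator indicates which basic variable contributes most to the failure probability, which is what directs research effort as described above.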
The fault tree for determining the probability that parts of Zeeland are flooded because of failure of components of the barrier, mismanagement, and/or malfunction of the gates is shown in Figure 8.1. By using the fault tree, the design of the barrier was refined in every aspect and the specified safety criterion of 10-7 per year was achieved in the most economical manner.

Through application of sophisticated probabilistic techniques, Dutch engineers were able to reduce the design load for the storm-surge barrier by 40 per cent relative to traditional design methods and still achieve a societally acceptable failure probability or hazard level. In this case, the societally acceptable hazard was defined by setting the fatality rate equal to levels resulting from accidents in The Netherlands. Thus, the completed structure reduced the risk resulting from storm surges to fatality rates accepted by the people of The Netherlands in their daily lives. It could be considered that the application of the new design procedures resulted in an increase in the hazard level resulting from storm surges faced by society relative to the application of the previous design standards. However, the previous design standards were implicitly set without any consideration of the consequences of a storm surge and societally acceptable hazard levels. The population in the southwestern region of The Netherlands to be protected by the storm-surge barrier was already facing a substantial hazard. Thus, the question was to what level should the hazard be reduced? The Dutch Government decided that people in The Netherlands would be willing to accept a possibility of dying because of a failure of the sea defences that was equal to the probability of dying because of an accident. This resulted in substantial savings relative to the use of the implicit societally acceptable hazard level.
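The way a fault tree combines failure modes at its top gate can be sketched with a single OR gate over independent basic events. The mode probabilities below are invented for illustration and are not the values of the Eastern Scheldt study:

```python
# Top-level OR gate of a fault tree in the spirit of Figure 8.1:
# flooding occurs if any one of several independent failure modes
# occurs. All probabilities are invented, per-year values.

def or_gate(probabilities):
    """P(at least one event) for independent basic events."""
    p_none = 1.0
    for p in probabilities:
        p_none *= 1.0 - p
    return 1.0 - p_none

modes = {
    "structural failure of a pier": 2e-8,
    "gate malfunction": 3e-8,
    "mismanagement": 2e-8,
    "ship collision": 1e-8,
}

p_system = or_gate(modes.values())
meets_criterion = p_system <= 1e-7   # compare with the 10-7 criterion
```

Because the individual probabilities are tiny, the OR gate is nearly the sum of the mode probabilities; a real fault tree also contains AND gates, where redundancy multiplies probabilities and drives them down.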
The key point of this example is that when faced with the construction of a large, complex and expensive structure for the protection of the public, the Dutch Government abandoned implicit societally acceptable hazard levels and tried to determine real, consequence-based societally acceptable hazard levels.

8.3 MINIMUM LIFE-CYCLE COST EARTHQUAKE DESIGN

Earthquake-resistant design and seismic-safety assessment should explicitly consider the underlying randomness and uncertainties in the earthquake load and structural capacity and should be formulated in the context of reliability (Pires et al., 1996). Because it is not possible to avoid damage under all likely earthquake loads, the development of earthquake-resistant design criteria must include the possibility of damage and an evaluation of the consequences of damage over the life of the structure. To achieve this risk assessment for structures in earthquake-prone regions, Professor A. H-S. Ang and his colleagues at the University of California at Irvine have proposed the design of earthquake-resistant structures on the basis of the minimum expected total life-cycle cost of the structure, including initial (or upgrading) cost and damage-related costs (Ang and De Leon, 1996, 1997; Pires et al., 1996), and a constraint on the probability of loss of life (Lee et al., 1997). The minimum life-cycle cost approach consists of five steps as follows (Pires et al., 1996; Lee et al., 1997).

(1) A set of model buildings is designed for different levels of reliability (equal to one minus the probability of damage) or performance following the procedure of an existing design code. For reinforced-concrete buildings, this is done by following the design code except that the base-shear coefficient is varied from code values to yield a set of model buildings having different strengths, initial costs (construction or upgrading), and probabilities of damage.
(2) A relation between the initial cost of the structure and the corresponding probability of damage under all possible earthquake loads is established from the designs made in step 1.

(3) For each design, the expected total cost of structural damage is estimated as a function of the probability of damage under all possible earthquake loads and is expressed on a common basis with the initial cost. The damage cost includes the repair and replacement cost, Cr; loss of contents, Cc; economic impact of structural damage, Cec; cost of injuries resulting from structural damage, Cin; and cost of fatalities resulting from structural damage, Cf.

(4) The expected risk of death for all designs under all likely earthquake intensities also is expressed as a function of the probability of damage.

(5) A trade-off between the initial cost of the structure and the damage cost is then done to determine the target reliability (probability of damage) that minimizes the total expected life-cycle cost subject to the constraint of the socially acceptable risk of death resulting from structural damage.

Figure 8.1 - Fault tree for computation of the failure probability of the Eastern Scheldt storm-surge barrier in The Netherlands (after Vrijling, 1993)

Determination of the relation between damage cost and probability of damage is the key component of step 3 of the minimum life-cycle-cost earthquake-design method. The estimate of the damage cost is described mathematically as given by Ang and De Leon (1997) and summarized in the following. Each of the damage-cost components will depend on the global damage level, x, as:

Cd(x) = Sum over j of Cj(x)    (8.1)

where j = r, c, ec, in and f are as previously described in item (3). If the damage level x resulting from a given earthquake with
a specified intensity A = a is defined with a conditional probability density function (pdf), fX|A(x|a), each of the expected damage cost items would be:

E[Cj | a] = Integral of Cj(x) fX|A(x|a) dx    (8.2)

The intensity of an earthquake also may be defined as a pdf, fA(a), and the total expected damage cost under all likely earthquake intensities may be computed by integration as:

E[Cd] = Sum over j of the integral from amin to amax of E[Cj | a] fA(a) da    (8.3)

where the bounds of integration, amin and amax, are the minimum and maximum values of the likely range of earthquake intensities, respectively.

The evaluation of equations 8.1 to 8.3 requires: (a) development of relations between the level of physical, structural damage and the associated damage cost and loss of life; and (b) application of a structural model to relate earthquake intensity to structural damage. Further, the time of earthquake occurrence and the transformation of this future cost to an equivalent present cost are not considered in equation 8.3. Thus, the establishment of a probabilistic model to describe earthquake occurrence and an economic model to convert future damage cost to present cost also are key features of the minimum life-cycle-cost earthquake-design method. These aspects of the method are described in the following sections.

8.3.1 Damage costs

The global damage of a structure resulting from an earthquake is a function of the damages of its constituent components, particularly of the critical components. In order to establish a consistent rating of the damage to reinforced-concrete structures, Prof. Ang and his colleagues (Ang and De Leon, 1996, 1997; Pires et al., 1996; Lee et al., 1997) suggested applying the Park and Ang (1985) structural-member damage index. Each of the damage costs then is related to the median damage index, Dm, for the structure. The repair cost is related to Dm on the basis of available structural repair-cost data for the geographic region.
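Equations 8.2 and 8.3 amount to a double averaging of the damage-cost function: first over the damage level given the intensity, then over the intensity. This can be sketched numerically; the cost function and both pdfs below are invented stand-ins, not the models used by Ang and De Leon:

```python
# Numerical sketch of equations 8.2 and 8.3 by midpoint integration.
# The cost function and both pdfs are assumed, illustrative forms.

def damage_cost(x):
    # Damage cost as a fraction of initial cost, reaching full
    # replacement once x exceeds an assumed repairable limit of 0.5.
    return min(2.0 * x, 1.0)

def expected_cost_given_a(a, n=400):
    # Eq. 8.2 with an assumed triangular pdf of x on [0, a] (mode at 0).
    dx = a / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        pdf = 2.0 * (a - x) / a ** 2
        total += damage_cost(x) * pdf * dx
    return total

def expected_cost(a_min=0.0, a_max=1.0, n=200):
    # Eq. 8.3 with an assumed uniform intensity pdf on [a_min, a_max].
    da = (a_max - a_min) / n
    total = 0.0
    for i in range(n):
        a = a_min + (i + 0.5) * da
        total += expected_cost_given_a(a) * (1.0 / (a_max - a_min)) * da
    return total

e_cd = expected_cost()   # expected damage cost, fraction of initial cost
```

Whatever forms replace these stand-ins, the structure of the computation is the same: a conditional average (eq. 8.2) nested inside an average over intensities (eq. 8.3).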
For example, the ratio of repair cost, Cr, to the initial construction cost, Ci, for reinforced-concrete buildings in Tokyo is shown in Figure 8.2 as determined by Pires et al. (1996) and Lee et al. (1997). A similar relation was developed for Mexico City by De Leon and Ang (1994) as:

Cr = 1.64 CR Dm for 0 <= Dm <= 0.5; Cr = CR for Dm > 0.5    (8.4)

where CR is the replacement cost of the original structure, which is equal to 1.15 times the initial construction cost for Mexico City.

Figure 8.2 - Damage repair-cost function derived from data for reinforced-concrete structures damaged by earthquakes in Tokyo (after Lee, 1996)

The loss-of-contents cost, Cc, is typically assumed to reach a maximum of a fixed percentage of the replacement cost, CR, and to vary linearly from 0 to this maximum with Dm for intermediate levels of damage to the structure (Dm < 1). For reinforced-concrete structures, the maximum loss of contents was assumed to be 50 per cent for Mexico City (Ang and De Leon, 1996, 1997) and 40 per cent for Tokyo (Pires et al., 1996). Lee et al. (1997) applied a piecewise-linear relation for the range of Dm for intermediate levels of damage.

The economic loss resulting from structural damage, Cec, may be estimated in several ways. Ideally, this loss should be evaluated by comparing the post-earthquake economic scenario with an estimate of what the economy would be if the earthquake had not occurred. A complete evaluation of all economic factors is difficult, and simplified estimates have been applied. For example, Pires et al. (1996) assumed that the loss of rental revenue, if the building collapses or exceeds the limit of repairable damage, is equal to 23 per cent of the replacement cost of the building, and varies nonlinearly with Dm up to the limit of repairable damage (Dm = 0.5).
They developed this function on the basis of the average rental fees per square metre per month for office buildings at the site, and assuming that 1.5 years would be needed to reconstruct the building. Lee (1996) used an economic input-output (I-O) model to compute Cec. The I-O model (see Chapter 7) is a static general-equilibrium model that describes the transactions between the various production sectors of an economy and the various final-demand sectors. Lee aggregated I-O model data for 46 economic sectors from the Kanto region of Japan, which includes the city of Tokyo, into 13 sectors for the estimation of the economic loss resulting from structural damage. Lee also used time-to-restore-functionality curves for professional, technical and business-service buildings reported by the Applied Technology Council (1985) to relate Dm to economic losses as a piecewise-linear function.

The cost of injuries, Cin, also may be estimated in several ways. Pires et al. (1996) and Lee et al. (1997) assumed that 10 per cent of all injuries are disabling for Dm = 1, and that the loss due to a disabling injury was equal to the loss due to a fatality (as described in the following paragraph). Pires et al. (1996) estimated the cost for non-disabling injuries to be 5 million Yen (approximately US $50 000). A nonlinear function was used to estimate the cost of injuries for the intermediate damage range (Dm < 1). Lee et al. (1997) estimated the cost of nonfatal accidents using labour-market data compiled by Viscusi (1993). Lee et al. related the injury rate to the fatality rate, with the ratio of the injury rate to the fatality rate expressed in terms of Dm.

The cost of fatalities, Cf, also may be estimated in several ways, as discussed in detail in Chapter 7. Ang and De Leon (1996, 1997) and Pires et al. (1996) estimated the cost of fatalities on the basis of the expected loss to the national gross domestic product determined through the human-capital approach. Pires et al. (1996) estimated the number of fatalities per unit floor area of a collapsed building on the basis of data from the earthquake in Kobe, Japan, in 1995. For intermediate values of Dm, Ang and De Leon (1996, 1997) and Pires et al. (1996) made the cost of fatalities proportional to the 4th power of Dm. Lee et al. (1997) estimated the cost of fatalities on the basis of the willingness-to-pay approach for saving a life through the revealed-preferences method as given in Viscusi (1993), and used relations between collapse rate and fatality rate proposed by Shiono et al. (1991). The difference in methodology between Pires et al. (1996) and Lee et al. (1997) results in an increase in the cost of fatalities from approximately US $1 million for the human-capital approach to US $8 million for the willingness-to-pay approach. Despite the differences in the economic-evaluation approaches taken by Pires et al. (1996) and Lee et al. (1997), the optimal base-shear coefficients for the design of five-story reinforced-concrete buildings in Tokyo were essentially identical.

8.3.2 Determination of structural damage resulting from earthquakes

Because the structural response under moderate and severe earthquake loads is nonlinear and hysteretic, the computation of response statistics (e.g., the damage index) under random earthquake loads using appropriate random structural models and capacities is an extremely complex task. Pires et al. (1996) recommended that Monte Carlo simulation be applied to determine the desired response statistics. The approach involves the selection of an appropriate structural computer code capable of computing key structural responses, such as maximum displacement and hysteretic energy dissipated. The earthquake ground motions used as input can be either actual earthquake records or samples of nonstationary filtered Gaussian processes with both frequency and amplitude modulation (Yeh and Wen, 1990).
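The recommended Monte Carlo approach can be sketched end-to-end with a toy structural model. The lognormal parameters, the demand/capacity "model" and the damage-to-cost mapping below are all invented; a real analysis uses a nonlinear structural code and the Park-Ang damage index:

```python
import math
import random

# Monte Carlo sketch: sample a random earthquake demand and a random
# structural capacity, compute a damage index with a stand-in model,
# and average the resulting damage-cost fraction. All parameters are
# illustrative assumptions.

random.seed(42)  # reproducible run

def sample_lognormal(median, cov):
    """Lognormal sample given its median and coefficient of variation."""
    sigma = math.sqrt(math.log(1.0 + cov ** 2))
    return median * math.exp(random.gauss(0.0, sigma))

def damage_index(demand, capacity):
    """Toy stand-in for a structural model: damage capped at 1."""
    return min(demand / capacity, 1.0)

def simulate(n=2000):
    total = 0.0
    for _ in range(n):
        demand = sample_lognormal(median=0.3, cov=0.8)    # earthquake load
        capacity = sample_lognormal(median=1.0, cov=0.3)  # uncertain capacity
        d = damage_index(demand, capacity)
        total += min(2.0 * d, 1.0)   # assumed damage-cost fraction
    return total / n

mean_cost = simulate()   # expected damage cost, fraction of initial cost
```

The same set of simulations also yields the probability of damage and of collapse by counting the samples exceeding the corresponding thresholds, which is how the empirical joint pdf described below is built.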
In Monte Carlo simulation: (1) a random earthquake load is selected; (2) the structural damage index is computed using the selected structural model, considering uncertainties in the structural properties and capacities; and (3) the damage cost is computed as per section 8.3.1. This is essentially a numerical integration of equations 8.1 to 8.3. The probability of damage and the probability of death resulting from earthquake damage also are computed in Monte Carlo simulation. An empirical joint pdf among the response statistics is obtained by performing a large number of simulations.

The uncertainties in the structural properties and capacities and critical earthquake-load parameters may be modelled as lognormally distributed variables, and, thus, from the Central Limit Theorem, the distribution of the damage response statistics also can be expected to be lognormal. Therefore, to reduce computational costs and time, a relatively small number of simulations, on the order of a few hundred, are done, and the joint lognormal pdf of the response statistics is fitted from the empirical results.

8.3.3 Earthquake occurrence model

The expected damage costs computed as described in sections 8.3.1 and 8.3.2 are associated with structural damage or collapse resulting from future earthquakes, whereas the initial (or upgrading) cost is normally a present value. Thus, the present worth of the respective damage costs will depend on the times of occurrence of these earthquakes (Ang and De Leon, 1996, 1997). A suitable earthquake-occurrence model may be derived by assuming that: (1) the occurrence of possible future damaging earthquakes at the building site constitutes a Poisson process; (2) earthquake occurrences and their intensity are statistically independent; and (3) the structure is repaired to its original condition after every damaging earthquake (Pires et al., 1996).
These assumptions are common in seismic-hazard evaluation, although they may not always be appropriate, as discussed in Chapter 5. If earthquake occurrences follow a Poisson process, then the occurrence time of each earthquake is defined by the Gamma distribution (Ang and De Leon, 1996, 1997). The discount rate, q, used to transform future damage costs to present worth may be easily incorporated into the Gamma distribution for earthquake occurrences. This results in a present worth factor that is multiplied by the damage cost computed as per sections 8.3.1 and 8.3.2 to obtain the present cost of damages resulting from possible future earthquakes. The present worth factor for a structural design life of 50 years in Mexico City is shown as a function of the discount rate in Figure 8.3 (Ang and De Leon, 1996, 1997). Lee (1996) varied the discount rate between 2 and 8 per cent and found that although the total expected life-cycle costs decrease significantly as the discount rate increases, the optimal design levels and target reliabilities are much less sensitive to changes in discount rate.

Figure 8.3 - Present worth factor for Mexico City as a function of the annual discount rate, q (after Ang and De Leon, 1996)

8.4 ALTERNATIVE APPROACHES FOR RISK-BASED FLOOD MANAGEMENT

When considering flood-management issues, the question that must be answered is not if the capacity of a flood-reduction project will be exceeded, but what are the impacts when the capacity is exceeded, in terms of economics and threat to human life (Eiker and Davis, 1996). Therefore, risk must be considered in flood management. Flood risk results from incompatibility between hazard and acceptable-risk levels measured in commensurate units on the same plot of land (Gilard, 1996).
However, in traditional flood management in many countries, an implicit acceptable-risk level is assumed, and only the hazard level is studied in detail. In the USA, the implicit acceptable-risk level for flood-plain delineation and other flood-management activities is defined by the requirement to protect the public from the flood exceeded once on average in 100 years (the 100-year flood). Linsley (1986) indicated that the logic for this fixed level of flood hazard (implicit acceptable risk) is that everyone should have the same level of protection. However, he noted that, because of the uncertainties in hydrological and hydraulic analyses, all affected persons do not receive equal protection. He advocated that the design level for flood hazard should be selected on the basis of assessment of hazard and vulnerability.

In this section, two approaches to risk assessment for flood management are described. These are the risk-based approach developed by the US Army Corps of Engineers (USACE, 1996) and the Inondabilité method developed in France (Gilard et al., 1994). These approaches offer contrasting views of flood-risk management. The risk-based approach seeks to define optimal flood protection through an economic evaluation of damages, including consideration of the uncertainties in the hydrologic, hydraulic and economic analyses; whereas the Inondabilité method seeks to determine optimal land use via a comparison of flood hazard and acceptable risks determined through negotiation among interested parties.

8.4.1 Risk-based analysis for flood-damage-reduction projects

A flood-damage-reduction plan includes measures that decrease damage by reducing discharge, stage and/or damage susceptibility (USACE, 1996). For Federal projects in the USA, the objective of the plan is to solve the problem under consideration in a manner that will “...
contribute to national economic development (NED) consistent with protecting the Nation’s environment, pursuant to national environmental statutes, applicable executive orders and other Federal planning requirements” (US Water Resources Council, 1983). In the flood-damage-reduction planning traditionally done by the USACE, the level of protection provided by the project was the primary performance indicator (Eiker and Davis, 1996). Only projects that provided a set level of protection (typically from the 100-year flood) would be evaluated to determine their contribution to NED, effect on the environment and other issues. The level of protection was set without regard to the vulnerability level of the land to be protected. In order to account for uncertainties in the hydrological and hydraulic analyses applied in the traditional method, safety factors, such as freeboard, are applied in project design in addition to achieving the specified level of protection. These safety factors were selected from experience-based rules and not from a detailed analysis of the uncertainties for the project under consideration.

The USACE now requires risk-based analysis in the formulation of flood-damage-reduction projects (Eiker and Davis, 1996). In this risk-based analysis, each of the alternative solutions for the flooding problem is evaluated to determine the expected net economic benefit (benefit minus cost), the expected level of protection on an annual basis and over the project life, and other decision criteria. These expected values are computed with explicit consideration of the uncertainties in the hydrologic, hydraulic and economic analyses utilized in plan formulation. The risk-based analysis is used to formulate the type and size of the optimal plan that will meet the study objectives. The USACE policy requires that this plan be identified in every flood-damage-reduction study.
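At the core of such an analysis is the expected annual damage for each alternative, obtained by integrating damage against annual exceedance probability, with the uncertain relations sampled by Monte Carlo. The following is a minimal sketch in which the damage-frequency points and the lognormal uncertainty factor are illustrative assumptions, not USACE data or procedure:

```python
import random

def expected_annual_damage(damages, probs):
    """Trapezoidal integration of a damage-frequency curve:
    EAD = integral of damage over annual exceedance probability.
    `probs` lists exceedance probabilities in decreasing order."""
    ead = 0.0
    for i in range(len(probs) - 1):
        dp = probs[i] - probs[i + 1]
        ead += 0.5 * (damages[i] + damages[i + 1]) * dp
    return ead

# Hypothetical damage-frequency relation (exceedance prob., damage in $M)
probs = [0.5, 0.2, 0.1, 0.04, 0.02, 0.01, 0.002]
damages = [0.0, 1.0, 3.0, 8.0, 15.0, 25.0, 60.0]

def mc_ead(n_sims=5000, seed=7):
    """Expected EAD with a crude multiplicative lognormal factor on the
    damages, standing in for the combined hydrologic, hydraulic and
    economic uncertainties sampled in a risk-based analysis."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        factor = rng.lognormvariate(0.0, 0.3)  # uncertain stage-damage scale
        total += expected_annual_damage([d * factor for d in damages], probs)
    return total / n_sims
```

The expected net economic benefit of an alternative would then be the reduction in EAD relative to the without-project condition, minus annualized project cost; a real study samples each relation (discharge-frequency, stage-discharge, stage-damage) separately rather than one combined factor.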
This plan may or may not be the recommended plan based on “additional considerations” (Eiker and Davis, 1996). These “additional considerations” include environmental impacts, potential for fatalities and acceptability to the local population.

In the traditional approach to planning flood-damage-reduction projects, a discharge-frequency relation for the project site can be obtained through a variety of methods (see Chapter 3). These include a frequency analysis of data at the site or from a nearby gauge through frequency transposition or regional frequency relations. Rainfall-runoff models or other methods described by the USACE (1996) can be used to estimate flows for a specific ungauged site or a site with a sparse record. If a continuous hydrological simulation model is applied, the model output is then subjected to a frequency analysis; otherwise, flood frequency is determined on the basis of the frequency of the design storm. Hydraulic models are used to develop stage-discharge relations for the project location, if such relations have not been derived from observations. Typically, one-dimensional, steady flows are analysed with a standard step-backwater model, but in some cases, streams with complex hydraulics are simulated using an unsteady-flow model or a two-dimensional flow model. Stage-damage relations are developed from detailed economic evaluations of primary land uses in the flood plain, as described in Chapter 7. Through integration of the discharge-frequency, stage-discharge and stage-damage relations, a damage-frequency relation is obtained. By integration of the damage-frequency relations for without-project and various with-project conditions, the damages avoided by implementing the various