
  • Available at: http://www.ictp.it/~pub_off IC/2006/142

    United Nations Educational, Scientific and Cultural Organisation

    and International Atomic Energy Agency

    THE ABDUS SALAM INTERNATIONAL CENTRE FOR THEORETICAL PHYSICS

    A SCENARIO-BASED PROCEDURE FOR SEISMIC RISK ANALYSIS

    J.-U. Klügel1 Kernkraftwerk Goesgen-Daeniken,

    Kraftwerkstrasse, 4658 Daeniken, Switzerland,

    L. Mualchin Retired from the California Department of Transportation (Caltrans),

    Sacramento, California, USA

    and

    G.F. Panza Dipartimento di Scienze della Terra, Università degli Studi di Trieste, Trieste, Italy

    and The Abdus Salam International Centre for Theoretical Physics, Trieste, Italy.

    MIRAMARE - TRIESTE

    December 2006

    1 Corresponding author: [email protected]

  • A scenario-based procedure for seismic risk analysis

    J.-U. Klügel a,⁎, L. Mualchin b, G.F. Panza c,d

a Kernkraftwerk Goesgen-Daeniken, Kraftwerkstrasse, 4658 Daeniken, Switzerland
    b Retired from the California Department of Transportation (Caltrans), Sacramento, California, United States

c Dipartimento di Scienze della Terra – Università di Trieste, Italy
    d The Abdus Salam International Centre for Theoretical Physics – Miramare, Trieste, Italy

Received 10 March 2006; received in revised form 4 July 2006; accepted 13 July 2006. Available online 18 September 2006.

    Abstract

A new methodology for seismic risk analysis based on probabilistic interpretation of deterministic or scenario-based hazard analysis, in full compliance with the likelihood principle and therefore meeting the requirements of modern risk analysis, has been developed. The proposed methodology can easily be adjusted to deliver its output in a format required by safety analysts and civil engineers. The scenario-based approach allows the incorporation of all available information collected in a geological, seismotectonic and geotechnical database of the site of interest as well as advanced physical modelling techniques to provide a reliable and robust deterministic design basis for civil infrastructures. The robustness of this approach is of special importance for critical infrastructures. At the same time a scenario-based seismic hazard analysis allows the development of the required input for probabilistic risk assessment (PRA) as required by safety analysts and insurance companies. The scenario-based approach removes the ambiguity in the results of probabilistic seismic hazard analysis (PSHA) which relies on the projections of the Gutenberg–Richter (G–R) equation. The problems in the validity of G–R projections, because of incomplete to total absence of data for making the projections, are still unresolved. Consequently, the information from G–R must not be used in decisions for design of critical structures or critical elements in a structure. The scenario-based methodology is strictly based on observable facts and data and complemented by physical modelling techniques, which can be submitted to a formalised validation process. By means of sensitivity analysis, knowledge gaps related to lack of data can be dealt with easily, due to the limited number of scenarios to be investigated. The proposed seismic risk analysis can be used with confidence for planning, insurance and engineering applications.
    © 2006 Elsevier B.V. All rights reserved.

Keywords: Scenario-based seismic hazard analysis; Seismic risk analysis

    1. Introduction

Earthquakes, as many other natural disasters, have both immediate and long-term economic and social effects. Seismic hazard analysis based on the traditional methodology of probabilistic seismic hazard analysis as developed by Cornell (1968), McGuire (1976, 1995) and

⁎ Corresponding author. E-mail address: [email protected] (J.-U. Klügel).

0013-7952/$ - see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.enggeo.2006.07.006


expanded for the treatment of uncertainties by using expert opinion (SSHAC, 1997) cannot fill the gap of knowledge in the physical process of an earthquake. As discussed by Klügel (2005a,b,c,f), these methods lead to ambiguous results due to their inability to correctly model the dependencies between large numbers of uncertain random parameters. Wang (2005) argued that a probabilistic seismic hazard analysis, as practiced today, leads to the loss of physical meaning in the results and provides the decision maker with an infinite choice for the

    mailto:[email protected]://dx.doi.org/10.1016/j.enggeo.2006.07.006comarRectangle

    comarRectangle

    comarRectangle



selection of a design basis earthquake. Klügel (2005e) demonstrated that the results of a probabilistic seismic hazard analysis, presented as a uniform seismic hazard spectrum, do not provide the required input for a seismic probabilistic risk assessment (PRA), as required for risk-informed regulation in nuclear technology. Furthermore, the multiscale seismicity model (Molchan et al., 1997) supplies a formal framework that describes the intrinsic difficulty of the probabilistic evaluation of the occurrence of earthquakes by using a simple probabilistic model like the (truncated) Gutenberg–Richter equation without considering dependence on the scale of the problem. According to this model, only the ensemble of events that are geometrically small, compared with the elements of the seismotectonic regionalisation, can be described by a log-linear magnitude–frequency (FM) relation. This condition, largely fulfilled in the early global investigation by Gutenberg and Richter (e.g., see Figure 49 of Bath, 1973), has been subsequently violated in many investigations. This violation has given rise to the Characteristic Earthquake (CE) concept (Schwartz and Coppersmith, 1984), in disagreement with the Self-Organised Criticality (SOC) paradigm (Bak and Tang, 1989). The main problem is the proper choice of the size of the region for analysis, so that it is large enough to guarantee the applicability of the Gutenberg–Richter (G–R) law and related concepts. Additionally, the G–R equation has no objective time-series analysis for obtaining realistic earthquake magnitude recurrence times and therefore results using G–R projections have a profound uncertainty.

Therefore, meaningful alternatives are essential for users and decision makers in selecting a robust design basis for civil infrastructures. Results from a deterministic scenario-based seismic hazard analysis methodology (e.g., Field, 2000; Panza et al., 2001) provide a meaningful alternative both for design applications as well as for modern risk analysis.

2. Methodology of deterministic scenario-based seismic hazard analysis

The methodology of deterministic scenario-based seismic hazard analysis in this paper represents an extension of the methods which have been used for deterministic seismic hazard analysis in high seismic areas like California for more than 30 years (Mualchin, 1996). The extension specialises in the treatment of problems specific to seismic hazard analysis for low to moderate seismic areas, incorporates physical modelling approaches and introduces a sound methodology for risk assessment.


    2.1. Concept of scenario-based seismic hazard analysis

The selection of one or a limited set of scenario earthquakes is the central concept of the methodology. The selection of scenario earthquake(s) includes the following steps:

    • Characterisation of seismic sources for capacity/potential and location.

• Selection of hazard parameter(s) to characterise the impact of an earthquake on the infrastructure.

• Development of an attenuation model for the parameter to derive the values of the parameter(s) at the site.

• Incorporation of site effects, and near-field and potential directivity/focusing factors.

    • Definition of the scenario earthquake(s).

    2.2. Characterisation of seismic sources

The selection of scenario earthquake(s) requires a detailed analysis of all regional seismogenic or active seismic sources surrounding the site of interest and assessment of their capability and potential to produce earthquakes of a significant size. For this step, all available information shall be explored. Fig. 1 shows the concept in a schematic way.

    Fig. 1. Information to characterise seismic sources.

In the understanding of Fig. 1, a “capable fault” is a fault that has a significant potential for relative displacement at or near the ground surface.

The selection of the scenario earthquake(s) focuses on the largest (magnitude) earthquakes expected from each source. These earthquakes are traditionally called maximum credible earthquakes (MCEs). The use of the MCE ensures that effects from all other magnitudes are explicitly considered. In other words, by virtue of designing a structure to withstand the MCE, it will automatically withstand all other (smaller) earthquakes. The focus on large magnitudes is justified because the destructive potential of earthquakes primarily depends on their energy content (proportional to the magnitude) and the transfer of this energy into a structure. For


specific source – propagating media – site configurations, it is obvious that larger magnitude earthquakes will produce more impact on a given structure at the site, all other factors being fixed. The selection of the maximum credible earthquake or the maximum possible magnitude (PSHA using the truncated Gutenberg–Richter equation) under the given seismotectonic environment is a challenging task and requires the use of all available information (geological, geophysical, geotechnical and seismological), especially for the design of critical infrastructures (IAEA, 2002a,b). The acquisition and interpretation of the required information is an interdisciplinary task involving experts in different fields of geophysics, geology and seismology, geotechnicians as well as civil engineers and safety analysts, specifying the requested information. The destructive potential of earthquakes does not depend on secondary properties such as spikes in the instrumental time-histories (e.g., Uang and Bertero, 1990), which provide the basis for the uniform hazard spectra — the common outputs of traditional PSHA. That the results of traditional PSHA are based on statistical outliers can be demonstrated by the mathematical formulation of the hazard integral used in PSHA (EPRI, 2005; Abrahamson, 2006):

\nu(Sa > z) = \sum_{i=1}^{n_{\mathrm{source}}} N_i(M_{\min}) \int_M \int_R \int_{\varepsilon} f_{m_i}(M)\, f_{R_i}(r, M)\, f_{\varepsilon}\, P(Sa > z \mid M, R, \varepsilon)\, d\varepsilon\, dR\, dM \qquad (1)

Eq. (1) represents the usual annual frequency of events leading to a spectral acceleration Sa exceeding a hazard value z. It is evaluated by summing up the contributions of all relevant sources and by performing source-specific integration over magnitude, distance and the error term (named aleatory uncertainty) of the attenuation equation, multiplying the source-specific frequency density distributions with the conditional probability of exceedance of the specified hazard level z. The conditional probability of exceedance is calculated based on the corresponding attenuation equation. The attenuation equation can have the following format:

\log(Sa) = g(M, R, X_i) + \varepsilon \qquad (2)

with the error term expressed as a multiple of the standard deviation, ε = aσ_log, of the attenuation model. The standard deviation σ_log reflects the variability of measurement conditions under which the data points (including the data points used for the measurement of magnitude and location) used for the regression were obtained. The X_i represent additional explanatory variables (or classification properties for the specific travel path from

    3

the seismic source to the site) of the attenuation model, which may or may not be considered in the model. Examples of these additional explanatory variables are (a numerical sketch of Eqs. (1) and (2) follows the list):

• site conditions (e.g. shear wave velocity, depth of surface layer),

• topographical and directivity effects,

    • hanging wall and footwall effects,

    • fault style,

    • aspect ratio of the seismic source,

    • material properties of the travel path of seismic waves.
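To make the structure of Eqs. (1) and (2) concrete, the following is a minimal numerical sketch of the hazard integral for a single areal source. The attenuation model g(M, R), the activity rate, the magnitude and distance densities and σ_log used below are illustrative assumptions, not values from this paper; the integral over the residual ε is carried out analytically through the normal distribution of the residual.

```python
import numpy as np
from scipy.stats import norm

def hazard_rate(z, n_rate=0.05, m_min=5.0, m_max=7.5, b=1.0,
                r_min=5.0, r_max=100.0, sigma_log=0.3):
    """Annual rate of exceeding spectral acceleration z [g] for one areal source.

    Sketch of Eq. (1) for a single source, with the integral over the residual
    epsilon of Eq. (2) evaluated analytically via the normal CDF. All numbers
    are illustrative assumptions.
    """
    m = np.linspace(m_min, m_max, 200)
    r = np.linspace(r_min, r_max, 200)

    # Truncated exponential (Gutenberg-Richter type) magnitude density,
    # used here only to give the integral a concrete form.
    beta = b * np.log(10.0)
    f_m = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))

    # Distance density for epicentres distributed uniformly over an annulus r_min..r_max.
    f_r = 2.0 * r / (r_max**2 - r_min**2)

    # Hypothetical attenuation model: median log10(Sa) as a function of M and R.
    M, R = np.meshgrid(m, r, indexing="ij")
    log_sa_median = -1.0 + 0.3 * M - 1.3 * np.log10(R + 10.0)

    # P(Sa > z | M, R): the epsilon-integral of Eq. (1) done analytically.
    p_exceed = 1.0 - norm.cdf((np.log10(z) - log_sa_median) / sigma_log)

    # Double integral over magnitude and distance (trapezoidal rule).
    inner = np.trapz(p_exceed * f_r[None, :], r, axis=1)
    return n_rate * np.trapz(inner * f_m, m)

if __name__ == "__main__":
    for z in (0.1, 0.2, 0.4):
        print(f"nu(Sa > {z:.1f} g) = {hazard_rate(z):.2e} per year")
```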

The result of PSHA using Eq. (1) is clearly driven by the number of standard deviations considered as the boundary condition for the integral over ε. The number of standard deviations considered is in principle unlimited, although physical boundaries (e.g. maximal ground motion) can be provided. Therefore, the conclusion is that the hazard integral (1) can converge to infinity (this means that it does not converge at all) or to a maximum ground motion level set by the analyst in advance. From Chebyshev's inequality

\Pr\left( |X - E(X)| \geq a\sigma \right) \leq \frac{1}{a^2} \qquad (3)

in conjunction with Eq. (1) it follows directly that the results of a PSHA are driven by the recordings of statistically rare time-histories, which (due to the second ergodic assumption (Klügel, 2005c)) frequently were recorded under measurement conditions completely different from the site of interest. It is obvious that the traditional PSHA (SSHAC, 1997) represents a worst-case model, leading systematically to ambiguous results. Klügel (2005c) discussed that the mathematical formulation of the hazard integral (Eq. (1)), or as formulated in the SSHAC report (SSHAC, 1997), is incorrect. Its derivation is based on a separation-of-random-variables approach treating the error term and the random explanatory variables of the attenuation equation (e.g. magnitude, distance and the other explanatory variables (site conditions, hanging wall and footwall effects, topographical and directivity characteristics, if the latter are considered at all)) as statistically independent. Obviously, this assumption is not true.

The development of Eq. (1) was based on a heuristic bias. Originally PSHA assumed that the uncertainty of the problem is completely concentrated in the error term ε of the attenuation equation, regarding all other modelling parameters as exactly known. This assumption was the result of the division of labour between different groups of geophysicists. One group was responsible for the evaluation of earthquake magnitude


(or intensity) and earthquake epicentre location, while another group used this information to develop attenuation models assuming earthquake magnitude and location as exactly known. Later on, the uncertainties of other modelling parameters were included in the analysis (e.g. of the a and b parameters of the Gutenberg–Richter equation, and of the epicentre location). Unfortunately, people forgot that the evaluation of magnitude and earthquake location is based on measurements. Therefore, the obtained values are not known exactly and (in a probabilistic approach) have to be treated as random parameters. This means that the error term ε in the attenuation equation does include the measurement uncertainties associated with the evaluation of magnitude and epicentre location (and the effects of other explanatory variables not explicitly considered in the attenuation equation). Therefore, an attenuation equation represents a multivariate distribution of the considered ground motion parameter, expressing its dependence on a set of random model parameters. To replace this multivariate distribution by the simplified model of a lognormal distribution (or normal distribution in log-scale) for use in a PSHA logic tree (here magnitude and distance are “exactly known” for each single path through the tree), it would have been required to consider the dependency between the model parameters and the error term ε, or to adjust the residual error.

In seismically highly active regions like California, the selection of seismic sources can be reduced to the identification and assessment of seismogenic faults which can produce earthquakes of significant damaging potential. In less active regions and where instrumentally recorded earthquakes are not available, as is the case for several European areas, historical intensity data should be used to obtain an overall picture of the spatial distribution of the shaking intensity during written historical time. Although the epicentral locations and estimated magnitudes of historical earthquakes may not be as accurate as those of instrumentally recorded earthquakes, they can provide valuable, although incomplete, information on (1) the seismicity over long periods, (2) a rough delineation of seismic source zones and (3) reasonable estimates of future earthquake magnitudes, by assuming stable seismotectonic conditions for the region. It may even be possible to derive information on the frequency of large earthquakes, which are of interest for a scenario-based methodology, by time-series analysis. What particularly distinguishes the results obtained by a scenario-based methodology and the traditional PSHA is the way in which the methods are applied to different seismogenic zones. Both the identification and delineation of the


potential seismogenic sources (areas or lines) constitute one of the fundamental problems in seismic hazard analysis. The assumption of traditional PSHA that earthquakes can occur everywhere is no replacement for the resolution of this problem, because PSHA considers these “hidden” earthquakes in the output (Uniform Hazard Spectrum — UHS) only weighted by their (subjectively assessed) frequency of occurrence. If indeed a “hidden” earthquake occurs, the resulting response spectrum will differ systematically from (and it may not be enveloped by) the calculated UHS. As discussed in detail by Panza et al. (2003a) and Klügel (2005d), the assumption of spatially uniform activity within areal sources in traditional PSHA methodology is physically unrealistic and mathematically questionable. Alternative procedures for source modelling that elude source zones have been proposed. For example, one can make use of seismic parametric catalogues (historical and instrumental records) to define the possible locales of seismic events (Molchan et al., 2002). This approach, called historical, has been widely applied in the past.

Other proposals based on the seismic catalogues are due to Veneziano et al. (1984), and Kijko and Graham (1998). In this context, Frankel (1995) also proposed a procedure using spatially-smoothed historical seismicity for the analysis of seismic hazard in Central and Eastern USA.

Woo (1996) suggested another procedure for area sources, statistically based on kernel estimation of the activity rate density inferred from a regional seismic catalogue. Such an approach considers that the form of the kernel is governed by the concept of self-organised criticality and fractal geometry, with the bandwidth scaled according to magnitude. In general, the epicentre distribution of historical earthquakes gives a better indication of seismic zonation and generally leads to a non-uniform distribution of seismicity within the zone. Obviously, the most appropriate method suitable for the region of interest shall be selected based on the available data. Because the damaging effect of earthquakes also depends on the “distance” between the assumed earthquake location and the site of interest, a decision for defining the distance has to be made. For a mapped capable fault, the shortest distance between fault and site is usually considered. In an area with low and diffused seismicity, characterised by an areal source, the distance defined can be either between the site and the central area of the earthquake epicentres or between the site and the nearest approach to the epicentral zone. The former assumes a more likely location of earthquakes in the zone interior, whereas the latter assumes the possibility of earthquakes at the zone boundary. The use of the shortest distance corresponds


statistically to the assumption of a beta-distribution for the spatial distribution of seismicity in the areal source with shape parameters below 1. In a Bayesian approach, this corresponds to a specific class of non-informative priors within an interval, which means an area in this case (Atwood, 1996), which is more appropriate than the frequent assumption of a uniform distribution. Bayesian techniques based on new information (epicentre location) can be used to refine the spatial distribution.
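As a small illustration of this statistical interpretation, the sketch below draws epicentral distances from a Beta distribution rescaled to the radial extent of an areal source. The shape parameters (both below 1) and the source geometry are purely illustrative assumptions; with such parameters the probability mass concentrates near the boundaries of the interval, so the shortest distance receives substantially more weight than under a uniform assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative areal source: epicentral distances between 5 km and 65 km from the site.
r_min, r_max = 5.0, 65.0

# Beta distribution with both shape parameters below 1 (U-shaped density),
# i.e. the class of non-informative priors referred to in the text (assumed values).
a, b = 0.5, 0.5
distances = r_min + (r_max - r_min) * rng.beta(a, b, size=100_000)

# Compare with a uniform assumption: probability of an epicentre within 15 km of the site.
uniform = rng.uniform(r_min, r_max, size=100_000)
print("P(R < 15 km), beta prior   :", np.mean(distances < 15.0))
print("P(R < 15 km), uniform prior:", np.mean(uniform < 15.0))
```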

2.3. Selection of a parameter to characterise the impact of an earthquake

Different parameters are used by engineers to evaluate structural damage. For design purposes they often depend on national regulations and standards. Most standards are force-based. The design basis forces are typically derived from linear-elastic response spectra, taking into account some damping of the structure. These are adjusted by load correction factors for the required application. The anchor point (of a response spectrum for pseudo-spectral accelerations) for scaling a generic design spectrum (often normalised to 1 g) is at a certain high frequency (typically around 33 Hz) and the final design spectrum commonly used by engineers is scaled by peak ground acceleration (PGA). In the past, following the original idea of Cancani (1904), PGA values were derived from intensity attenuation equations and therefore closely related to observed damage. At that time (before the mid 70s), measurements of ground motions were few, being limited by available instrumentation and seismic networks. The measured values were actually “peak-damped” without high frequency contents, because the latter were not measurable (high frequency peaks were filtered). Therefore, the physical meaning of the measured PGA values was quite close to the modern understanding of an effective ground acceleration (EGA) as used nowadays in engineering (with some minor difference in the values of the spectral amplification factors). This led to an implicit correlation of the observed intensities with the spectral acceleration reflecting the range of natural frequencies of civil structures. Indirectly, this correlation incorporates both the energy content of an earthquake, as well as the energy (defined by spectral shape and level) transfer into a structure. This picture has changed due to the development of modern seismic networks and instrumentations capable of recording high frequency contents of earthquake vibrations. Such high frequency vibrations, except for very brittle failure modes, generally do not cause damage to reasonably designed industrial structures and even to those not especially designed against earthquakes. Indeed, it is known that intensities


(as a damage characteristic) correlate much better with peak ground velocity (PGV) or with the spectral acceleration corresponding to the first natural frequency of structures. Furthermore, ground motion measurements at a free surface (e.g. a free-standing soil column) are hardly representative of the interactions of seismic waves with massive buildings, which are considered by engineers. That the results of traditional PSHA are driven by extreme earthquake recordings (statistical outliers), as discussed in Section 2.2, leads to a critical issue with respect to the development of design basis earthquakes. The PSHA approach of developing uniform hazard spectra in terms of spectral accelerations and then disaggregating (based on spectral accelerations) to find controlling scenario earthquakes (in terms of magnitude and distance bins) without consideration of the energy content of the causative event results in the fact that low-magnitude, near-site events are selected with preference. This has been shown in a few case studies (e.g. Chapman, 1999) and this is also observed with respect to the PEGASOS results for the NPP Goesgen. Furthermore, the selection of controlling events is not unique. The obtained low-magnitude, near-site events can frequently be judged by structural engineers as not damaging. They can be eliminated from further consideration (e.g. EPRI, 2005). The potential consequence is that the final design of the structure using PSHA may be inadequate with respect to the impact of higher energy (larger magnitude) earthquakes from distant sources. This danger is reduced substantially by the deterministic scenario-based approach, because it focuses on large earthquakes from the beginning of the analysis.

Nevertheless, the selection of appropriate physical parameter(s) to describe the damaging impact of an earthquake on structures more reliably is an important question for any method. The selected parameter is important for later analysis steps, because the attenuation models used to estimate the site hazard use the same parameter. Generally, the parameter characteristics can be classified as structure-dependent or structure-independent.

2.4. Structure-independent parameters for impact characterisation

Due to the traditional division of labour between geophysicists and engineers, structure-independent impact parameters have some advantages due to their possible general-purpose applications. Spectral or peak ground accelerations are traditional structure-independent parameters for the characterisation of the impact of earthquakes. As discussed in Section 2.3, the sole use of spectral accelerations or spike peak ground accelerations


may be misleading. Meaningful alternatives, which have found practical application, are the Arias Intensity and the Cumulative Absolute Velocity (CAV).

    The Arias Intensity (Arias, 1970) is defined as:

I_A = \frac{\pi}{2g} \int_0^{\tau} a^2(t)\, dt \qquad (4)

where τ is the duration of the strong motion (eliminating the contribution of coda waves) and a(t) is the acceleration time-history. Because the Arias Intensity represents a measure of the elastic energy content of an earthquake ground motion, it can be used to select design earthquakes in cases where inelastic behaviour of structures or components is not permitted (e.g., for brittle failure modes).
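A minimal sketch of Eq. (4) for a recorded or synthetic accelerogram follows; the sampling interval, the units (acceleration in m/s²) and the example signal are illustrative assumptions.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def arias_intensity(acc, dt):
    """Arias Intensity, Eq. (4): I_A = pi/(2g) * integral of a(t)^2 dt.

    acc : acceleration time-history [m/s^2] (strong-motion window, coda removed)
    dt  : sampling interval [s]
    Returns I_A in m/s.
    """
    return np.pi / (2.0 * G) * np.trapz(acc**2, dx=dt)

if __name__ == "__main__":
    # Illustrative synthetic accelerogram: decaying 3 Hz oscillation, 20 s long.
    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    acc = 2.0 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 3.0 * t)
    print(f"I_A = {arias_intensity(acc, dt):.3f} m/s")
```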

The Cumulative Absolute Velocity (EPRI, 1991) is calculated as:

\mathrm{CAV} = \sum_{i=1}^{N} H(\mathrm{pga}_i - 0.025) \int_{t_i}^{t_{i+1}} |a(t)|\, dt \qquad (5)

where N is the number of 1-second time windows in the time series, pga_i is the peak ground acceleration in the i-th time window and H(x) is the Heaviside function. CAV can be used to define the ductile (low cycle fatigue) failure mode condition of structures and components.
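The following is a sketch of Eq. (5), again under assumed conditions: the record is expressed in g so that the 0.025 g window threshold can be applied directly, and the sampling interval and example signal are illustrative.

```python
import numpy as np

def cumulative_absolute_velocity(acc_g, dt):
    """Cumulative Absolute Velocity, Eq. (5), with the 0.025 g window threshold.

    acc_g : acceleration time-history in units of g
    dt    : sampling interval [s]
    Returns CAV in g*s (multiply by 9.81 for m/s).
    """
    samples_per_window = int(round(1.0 / dt))            # 1-second windows
    n_windows = len(acc_g) // samples_per_window
    cav = 0.0
    for i in range(n_windows):
        window = acc_g[i * samples_per_window:(i + 1) * samples_per_window]
        if np.max(np.abs(window)) >= 0.025:               # Heaviside threshold on the window pga
            cav += np.trapz(np.abs(window), dx=dt)        # integral of |a(t)| over the window
    return cav

if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0.0, 20.0, dt)
    acc_g = 0.2 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 3.0 * t)  # illustrative record in g
    print(f"CAV = {cumulative_absolute_velocity(acc_g, dt):.3f} g*s")
```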

2.5. Structure-dependent parameters for impact characterisation

Structure-dependent parameters can provide valuable information for the characterisation of the destructive potential of earthquakes. Most of them are based on an assessment of the energy transfer into a structure. The disadvantage is that the need to consider the physical characteristics of the structure may be too elaborate for general-purpose applications. The analysis requires the use of appropriate time-histories, which can be synthetic seismograms and/or recorded data. More effort is needed here than for cases using structure-independent parameters. Therefore, the use of a structure-dependent parameter is recommended for specific structural analysis, for which the deterministic scenario-based earthquake approach is most appropriate.

Such an energy-based approach, more advanced than structural design by balancing energy demands and inputs, allows (1) proper characterisation of different types of time-histories (impulsive, periodic with long-duration pulses, etc.) which may correspond to fairly realistic earthquake strong ground motions, and (2) simultaneous consideration of the dynamic response of a structure from elastic to ductile failure conditions.


The absolute energy input per unit of mass can be expressed by:

E_I = \int_0^{\tau} \ddot{u}_t\, \dot{u}_g\, dt \qquad (6)

    where u_t = u + u_g is the absolute displacement of the mass and u_g is the earthquake ground displacement. Another energy-based parameter, denoted the seismic hazard energy factor (AEI), was introduced by Decanini et al. (1994) to take into account the global structural energy response. AEI represents the area enclosed by the elastic energy input spectrum corresponding to different periods T (from T1 to T2) and is expressed by:

AEI = \int_{T_1}^{T_2} E_I(\xi = 5\%, T)\, dT \qquad (7)

Other structure-dependent parameters for the impact characterisation of earthquakes can be considered, too.
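As a sketch of how Eq. (6) can be evaluated for a given record, the code below integrates a linear single-degree-of-freedom oscillator and accumulates the absolute input energy per unit mass; the 5% damping, the 3 Hz natural frequency and the synthetic record are illustrative assumptions. Eq. (7) would then be obtained by repeating the calculation over a range of periods and integrating over T.

```python
import numpy as np
from scipy.integrate import solve_ivp

def absolute_input_energy(t, ag, f_n=3.0, zeta=0.05):
    """Absolute input energy per unit mass, Eq. (6), for a linear SDOF oscillator.

    t    : time vector [s]
    ag   : ground acceleration [m/s^2]
    f_n  : natural frequency of the oscillator [Hz] (assumed)
    zeta : damping ratio (assumed 5%)
    """
    omega = 2.0 * np.pi * f_n
    ag_of = lambda tau: np.interp(tau, t, ag)

    def rhs(tau, y):
        u, v = y  # relative displacement and velocity
        return [v, -ag_of(tau) - 2.0 * zeta * omega * v - omega**2 * u]

    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=t[1] - t[0])
    u, v = sol.y

    # Absolute acceleration of the mass (relative acceleration + ag) and ground velocity.
    a_abs = -2.0 * zeta * omega * v - omega**2 * u
    vg = np.concatenate(([0.0], np.cumsum(0.5 * (ag[1:] + ag[:-1]) * np.diff(t))))

    return np.trapz(a_abs * vg, t)  # E_I = integral of (abs. acc.) * (ground velocity) dt

if __name__ == "__main__":
    dt = 0.005
    t = np.arange(0.0, 20.0, dt)
    ag = 2.0 * np.exp(-0.3 * t) * np.sin(2 * np.pi * 2.0 * t)  # illustrative record
    print(f"E_I = {absolute_input_energy(t, ag):.3f} (m/s)^2")
```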

    2.6. Attenuation relationships

For the selection of appropriate scenario earthquakes, as well as for the assessment of the impact on structures at a site, attenuation relationships are required. They should be developed for the selected parameter characterising the impact of earthquakes on structures. In principle, the relationships give the parameter values as a function of distance for given earthquake magnitudes. In a deterministic scenario-based seismic hazard analysis, an iterative approach is feasible for the first (screening) stage, in which simple empirical relations, based on traditional parameters like spectral accelerations or peak ground velocities or accelerations, may be used. This is possible because the analysis focuses on large earthquakes and on well-identified or characterised seismic sources. For general-purpose applications (e.g., the development of a general seismic hazard map for a region), the analysis can be limited to the first screening step. For site- and project-specific analysis, a second step refining the analysis results is done.

The scope of the refinement analysis shall be defined according to the importance of the construction project or the infrastructure. The refinement also considers the costs associated with the seismic design in comparison to the costs for additional analysis required to reduce the investments in protecting hardware (cost-benefit considerations). A general rule of thumb from the perspective of risk analysis practitioners is that the costs of additional analysis should not exceed 10% to 15% of the costs of additional hardware required to solve the problem in a conservative way.


The refinement of the analysis can be based on (1) the use of more sophisticated empirical attenuation models considering an in-depth characterisation of the physical effects important for the site, or (2) the incorporation of physical models for the selected scenarios considering the possible variability of source parameters as well as the specific topography of the wave propagating media from source to site (e.g. the use of synthetic seismograms, Panza et al., 2000; Panza et al., 2003b). The use of synthetic seismograms has the advantage that any deconvolution of seismic wave propagation from source to site into separate modelling components like source effects, attenuation and site effects can be avoided. It also allows multi-dimensional effects to be considered. The practical disadvantage is that such an approach is labour intensive. For screening or general-purpose applications, it is sufficient to define the approximate source geometry from the geological and seismological evidence to assign the source-to-site distance. The upper envelope or the arithmetic mean of the mean (regression) curves can be selected as a more or less conservative screening model. In selecting candidate attenuation relationships, it is important to ensure that they are applicable for the region (e.g., Parvez et al., 2001).

Such an assessment can be performed easily by comparing the candidate attenuation equations with available instrumental earthquake records or with the historical intensity attenuation characteristics obtained from a seismic catalogue of the region. Such a comparison can help to develop realistic relationships even with few available records. This pragmatic approach constrains the inflating statistical effects introduced by the ergodic assumption (Anderson et al., 2000; Klügel, 2005b) made by researchers to compensate for lack of data. The approach also constrains the uncertainties of the attenuation relationships to a manageable size as supported by data.

Empirical attenuation correlations for spectral accelerations (even those developed specifically for a region) have limitations for near-fault conditions (e.g. Bolt and Abrahamson, 2002; Mollaioli et al., 2003). These correlations are typically based on a model of simple amplitude decay with distance using a far-field approximation to a point (seismic) source characterisation (Aki and Richards, 2002). This approximation is not valid for near-fault conditions because it neglects multi-dimensional wave interference effects (Richwalski et al., 2004). Near-fault earthquake hazards can best be assessed by applying advanced dynamic source modelling. In general-purpose applications (e.g., seismic hazard maps), the effect of such earthquakes can be approximated by a reasonably large value of the hazard parameter for seismic design for any infrastructure in the near-field region.


    2.7. Incorporation of site effects

In general, site effects cannot be treated separately from the overall seismic wave propagation from the causative seismic sources under consideration to the site through the propagating media (e.g., Field, 2000; Panza et al., 2001). For site effects, the scenario-based methodology again allows for an iterative procedure. For the selection of scenario earthquakes, a first screening step can be based on a traditional site classification based on soil properties (e.g., shear wave velocity, depth of soil surface layer, etc.). For general-purpose applications such as the development of regional hazard maps, it is possible to limit the analysis to this first step. For site-specific applications, the analysis shall be refined for the selected scenario earthquakes. The most appropriate way of doing this is the use of physical modelling based on synthetic broadband seismograms. This approach allows incorporating the solution of the attenuation problem with site effects in a physically correct manner. It is important to note that the models should conform to the principle of empirical control. Accordingly, they have to be checked against earthquake recordings from the region when available.

2.8. Definition of deterministic scenario earthquakes — Maximum Credible Earthquakes (MCEs)

After completion of the procedural steps for attenuation correlations and site effects on the screening level, the final set of scenario earthquakes can be developed from geologically-based earthquakes which would cause maximum impact at the site. For this final selection, directivity, fling and topographic factors shall be considered because of their potential impact. The selected scenario earthquakes will be the basis for detailed site-specific analysis as well as for risk analysis applications. It is important to note that the number of scenario earthquakes to be considered for detailed analysis is rather limited. Even for a complicated seismotectonic region with considerations of directivity and topographic effects, the final set will not likely exceed five scenarios for site-specific applications and will frequently be constrained to a single scenario, as in California. The use of limited scenario earthquakes substantially reduces the effort for additional analysis beyond the screening level. Fig. 2 illustrates the work-flow for the definition of scenario earthquakes.

    Fig. 2. Work-flow for the selection of scenario earthquakes.

    3. Specific issues

Specific important issues with respect to the application of the deterministic scenario-based methodology,




but also relevant for any other methodology, are discussed below.

The size or magnitude of an earthquake can be estimated by several approaches. Fault length, area and displacement for known faults have been empirically correlated with moment magnitudes (Wells and Coppersmith, 1994). Improved correlations have been made possible by separating the data for different fault types. These relationships have been applied to seismogenic faults for estimating MCE magnitudes. An important assumption is the fault length used for MCE estimation (Mualchin, 1996). Empirical correlations for the assessment of earthquake magnitudes should not be applied outside the region they have been developed for. It should also be noted that fault mechanics (Scholz, 2002, p. 207) demonstrated different size regimes with respect


to the scaling of moment and slip to the aspect ratio (length to width) of the source area, indicating different similarity regimes for earthquakes.

Correlations like Wells and Coppersmith (1994) are based on mixed data across these regimes and are compromise fits (Scholz, 2002). The use of mixed data can be a source of systematic error considered by some analysts as epistemic uncertainty. The different scaling regimes can be attributed, in part, to the way of propagation of the fault rupture. Seismic events with a length less than the thickness of the brittle crust can propagate in all directions within a planar surface. Larger earthquakes, that rupture through the entire brittle crust (to the top of the ductile zone), can propagate farther only in the horizontal dimension. Thus, small and larger seismic events may be self-similar, but not to each other, and source scaling for interplate and intraplate tectonic regimes is different. Therefore, empirical correlations between magnitude and fault length should be based, as much as possible, on regional information. Fig. 3 shows the magnitude dependence on fault length from global earthquake data.

    Fig. 3. Fault length scaling to magnitudes based on Wells and Coppersmith (1994) and Pegler and Das (1996).

In low seismic areas, the assessment of maximum credible earthquake magnitudes is more complicated. The solution to this problem is based on observed data. The data are based on seismic catalogues compiled from written records (historical approach). Fortunately, enough strong and damaging events in civilized areas are well recorded both in oral and written tradition. Statistical methods for the treatment of extreme values provide a meaningful means to assess maximum credible earthquake magnitudes in a region of interest. Possible methods are available, for example, from Noubary (2000):

• bootstrap techniques (re-sampling of the distribution of observed maximum magnitude values),

• threshold theory leading to the application of a Generalised Pareto Distribution (GPD),

• traditional extreme value statistics like the Gumbel distribution.

Additionally available information (e.g. from paleo-seismology) can easily be incorporated into the analysis. For example, paleo-seismological assessments of maximal magnitudes with the associated assessment of frequency (or recurrence period) can be incorporated into the empirical distribution of observed maximum values, which is re-sampled using a corresponding Monte Carlo procedure. It is recommended to use the 95%-quantile of the re-sampled distribution as the maximum credible earthquake magnitude.
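A minimal sketch of one possible reading of this re-sampling step follows; the catalogue of observed maximum magnitudes, the added paleo-seismological value and the number of resamples are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative catalogue of observed maximum magnitudes (e.g. per 50-year interval),
# complemented by one assumed paleo-seismological estimate as the last entry.
observed_max = np.array([5.1, 5.4, 5.0, 5.6, 5.3, 5.8, 5.2, 5.5, 6.1])

# Monte Carlo re-sampling of the empirical distribution of observed maxima.
resampled = rng.choice(observed_max, size=20_000, replace=True)

# 95%-quantile of the re-sampled distribution, taken as the MCE magnitude.
mce = np.quantile(resampled, 0.95)
print(f"95%-quantile of re-sampled maxima (MCE estimate): M {mce:.1f}")
```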

In practical applications for high seismic areas with pronounced and well-mapped seismic faults, magnitudes




are approximated up to the nearest quarter magnitude, reflecting the imprecision of the magnitude scale (e.g. Calcagnile and Panza, 1973; Panza and Calcagnile, 1974; Herak et al., 2001), the limited data sets used to establish the relationships, and the conservatism needed for seismic hazard estimates. It is easy to demonstrate that errors or changes in fault length by 100%, 50%, and 25% correspond to changes in magnitude estimates by only 0.3, 0.2, and 0.1 magnitude units, respectively. Therefore, typical MCE magnitudes, rounded off to a quarter of a magnitude, are extremely stable and not likely to change for particular faults or earthquake sources. A stable estimate of the MCE magnitude is a special feature of the deterministic scenario-based method, which is desirable for critical infrastructure design and construction.
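This stability can be checked with a one-line calculation: for a magnitude–length regression of the form M = a + b·log10(L), a relative change f in the fault length changes the magnitude by ΔM = b·log10(1 + f), independently of L. The slope b ≈ 1.1 used below is only a representative value of the order found in Wells and Coppersmith-type regressions, not a value quoted from this paper.

```python
import numpy as np

b = 1.1  # assumed slope of an M = a + b*log10(L) regression (representative order of magnitude)

for f in (1.00, 0.50, 0.25):  # 100%, 50% and 25% errors in fault length
    delta_m = b * np.log10(1.0 + f)
    print(f"fault length error {f:4.0%} -> magnitude change {delta_m:.2f}")
# Output is roughly 0.33, 0.19 and 0.11 magnitude units, i.e. the 0.3/0.2/0.1
# figures quoted in the text, well within quarter-magnitude rounding.
```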

4. Application of scenario-based seismic hazard analysis for risk analysis

For some applications, such as for safety analysis of critical infrastructures or for insurance companies, it is necessary to perform a detailed risk analysis. Such an analysis can be beneficial to assess the efficiency of design measures as well as to identify potential vulnerabilities, especially for existing facilities. Risk analysis therefore provides a meaningful complementary tool to traditional safety analysis and deterministic design procedures. It is a common but erroneous belief that only a probabilistic


seismic hazard analysis (traditional PSHA) is able to provide the required input for a probabilistic risk assessment (PRA) for critical infrastructures. People often prefer to believe in names (such as “probabilistic” seismic hazard analysis) instead of analysing the essential points of a topic. Even in official technical standards (Budnitz et al., 2003), this wrong belief is common. Unfortunately, the question is not that simple and is worth investigating in more detail. A deterministic scenario-based seismic hazard analysis result is appropriate to perform detailed risk analysis, as demonstrated below.

The key elements of a risk analysis (Kaplan and Garrick, 1981) are:

1. Identification of events that can occur and have adverse consequences.

2. Estimation of the likelihood of those events occurring.

    3. Estimation of the potential consequences.

Therefore, the results of a risk analysis can be presented as a set of triplets:

R = \{ \langle H_i, P_i, C_i \rangle \} \qquad (8)

H_i represents the set of i events with possible adverse consequences.

P_i represents the associated probabilities of their occurrence.

C_i represents the associated intolerable consequences.

    This means that a seismic hazard analysis shall provide the following information as an input for a PRA (a minimal data-structure sketch follows the list):

• The events which may potentially endanger our infrastructure.

• The frequency or probability of occurrence of these events.
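The sketch below shows one possible way of carrying this information into a PRA model as scenario records; the field names and all example values are purely illustrative assumptions, not prescriptions of the methodology.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ScenarioEarthquake:
    """One hazard event H_i of the risk triplet, with its occurrence frequency P_i.

    The consequences C_i are evaluated later by the plant risk model, so they
    are not stored here.
    """
    name: str                      # e.g. causative source and magnitude class
    magnitude: float               # scenario (MCE or class) magnitude
    distance_km: float             # source-to-site distance used for the scenario
    annual_frequency: float        # frequency of the initiating event [1/yr]
    response_spectrum: Dict[float, float] = field(default_factory=dict)  # T [s] -> Sa [g]

# Illustrative input set for a seismic PRA (all numbers assumed).
scenarios = [
    ScenarioEarthquake("Fault LS2, MCE", 6.5, 25.0, 2.0e-4,
                       {0.1: 0.35, 0.33: 0.50, 1.0: 0.20}),
    ScenarioEarthquake("Areal source AS1, class M 5.5-6.0", 5.9, 5.0, 8.0e-4,
                       {0.1: 0.30, 0.33: 0.40, 1.0: 0.12}),
]
for s in scenarios:
    print(f"{s.name}: F = {s.annual_frequency:.1e}/yr, M = {s.magnitude}")
```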

The consequences of these events are evaluated by the risk model of the plant, which essentially represents a logic model mapping the hazards to be investigated to their consequences. What does a traditional PSHA provide? The standard output consists of a uniform hazard spectrum and a set of hazard curves, which represent the convoluted impact of a large or even infinite (Wang, 2005) number of earthquakes with respect to the chances of causing a certain level of ground accelerations at the site of interest. Therefore, PSHA does not deliver the required frequency of events but exceedance probabilities of secondary properties. It is important to note that frequently the damaging effects of an earthquake cannot be described by only a single


secondary property (e.g. hazard curves expressed in terms of averaged spectral acceleration or even PGA). The impact of an earthquake event has to be described in the risk model of the plant, which can easily accommodate other impact effects besides the effects of acceleration (e.g. liquefaction, surface rupture below the basement). It is obvious that PSHA output does not correspond to the data necessary for providing the frequency of events causing damage required by modern risk analysis (Klügel, 2005e).

In the early development of seismic risk studies, hazard curves and uniform hazard spectra from PSHA were indeed used directly as an input for seismic PRA (Klügel et al., 2004) due to a lack of better alternatives. This approach could be justified as a conservative attempt to provide a worst-case assessment of the possible risk associated with seismic hazard. At this early time of seismic PRA, the hazard curves developed by PSHA for PGA or average spectral accelerations were mostly intensity-based. Therefore, they were better suited as a damage index for risk study. The replacement of intensity-based mean PGA values (“peak-damped”) by uniform hazard spectra based on statistically extreme time-history recordings, losing the logical link to the damaging effects of earthquakes in the traditional PSHA methodology, does not justify using this simplified approach anymore. Traditional PSHA methodology tries to resolve the problem by disaggregating the obtained seismic hazard into magnitude and distance pairs to be interpreted as scenario earthquakes. The problem here is that any source-specific information is lost in the process of analysis and the disaggregation results are completely non-informative. As discussed in Section 2.3, if the disaggregation is based on spectral accelerations instead of energy measures, inappropriate controlling earthquakes may be selected as scenario earthquakes.

The scenario-based seismic hazard analysis methodology presented in this paper is much better suited to provide the required and correct input for a seismic PRA. The scenario earthquakes developed essentially represent the hazard events to be considered in the risk study. The frequency of smaller earthquake events can be taken into account in the calculation of the frequency of occurrence of the stronger scenario earthquakes which envelope the impact of smaller events, by using a classification system. This corresponds exactly to how probabilistic risk assessments of nuclear power plants are performed for other initiating events (IAEA, 1995; IAEA, 2002a,b; DOE, 1996; Tregoning et al., 2005; Poloski et al., 1999). A nuclear power plant has, for example, a large number of pipes in the reactor coolant circuit, which


potentially could break causing a loss of coolant accident (LOCA) inside the reactor containment. The calculation of all possible scenarios associated with each single possible pipe break is not possible. Therefore, pipe breaks causing similar consequences are combined together and modelled by an enveloping, conservative, accident scenario. The frequency of the scenario is calculated as the sum of the frequencies of all underlying pipe breaks assigned to the same class (e.g., small break LOCA, medium break LOCA, large break LOCA, etc.). The same approach is used in PRA for airplane crash analysis. Airplanes are classified by their impact characteristics and the risk contribution of airplane crashes is calculated as the sum of the contributions of each of the classes. The frequency assigned to each of the classes is developed from real data of airplane crashes and represents the total frequency of all crashes of airplanes belonging to the considered class.

Let us have a look at how the frequency of scenario earthquakes can be calculated, starting from the most general case of an area source, A, which completely encloses our site of interest (e.g., an area with radius/distance of 300 km or less from the site). Because the occurrence of earthquakes is not invariant in time and space, the calculation of an average frequency of occurrence for a certain earthquake (magnitude) class requires the solution of the following equation:

F(M_i) = \frac{1}{T_{\mathrm{Life}}} \int_A \int_0^{T_{\mathrm{Life}}} \int_{M_{\mathrm{low}}}^{M_{\mathrm{upper}}} f_1(r, m, t)\, dm\, dr\, dt \qquad (9)

Here, M_i ∈ (M_low, M_upper) is the magnitude value associated with the considered earthquake class, M_low is the lower interval limit for the considered class, M_upper is the upper interval limit for the considered earthquake class, F is the average frequency of the earthquake class, r is the distance from a point seismic source located inside the seismic area source A to the site, f_1 is the multivariate frequency density distribution of earthquakes within the considered area source, T_Life is the expected (residual) lifetime of the infrastructure analysed in the study, m is the magnitude, and t is time. It is easy to understand that only a few earthquake classes have to be considered in a risk analysis (not more than 3 or 4). The impact assigned to each of the earthquake classes can be defined by the solution of the optimisation problem:

\mathrm{find}(r_{\mathrm{opt}}) \rightarrow \max \int_A \int_0^{T_{\mathrm{Life}}} \int_{M_{\mathrm{low}}}^{M_{\mathrm{upper}}} f_1(m, r, t)\, g(r \mid m)\, dm\, dr\, dt \qquad (10)

    where g(r|m) calculates the value of the selected impact parameter (energy-based measure, spectral acceleration,


etc.) as a function of the distance from the location of the earthquake with magnitude m to the site. The calculated r_opt defines the location of the deterministic scenario earthquake considered for this class. In many practical cases, a simplification of the problem is possible by separating the spatial distribution of seismicity from the frequency distribution of earthquakes depending on magnitude size and time. This means that the frequency density distribution f_1 can be represented as:

f_1(r, m, t) = f_2(m, t)\, f_3(r \mid m) \qquad (11)

Eq. (11) reflects the assumption that the spatial distribution of seismic activity is invariant with time. This is of course a rather strong assumption, which for a short-lived structure can be justified by the assumption of stable seismotectonic conditions in the area of interest. The required density distribution f_2 can be obtained much more easily, for example, using bivariate extreme value distributions (Noubary, 2000) or Markov or semi-Markov models.

In cases where the seismic activity can be allocated to specific faults, the problem is simplified to a very large extent. The frequency of an earthquake belonging to the class i can be calculated as:

F(M_i) = \frac{1}{T_{\mathrm{Life}}} \sum_{j=1}^{N} \int_0^{T_{\mathrm{Life}}} \int_{M_{\mathrm{low}}}^{M_{\mathrm{upper}}} f_j(m, t)\, dm\, dt \qquad (12)

Here, j is the summation index for the relevant faults and N is the total number of faults potentially contributing to the magnitude class i. The optimisation problem of Eq. (10) can also be simplified under these conditions by making the bounding assumption that the shortest distance between fault and site will be selected.
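As a sketch of Eq. (12), the code below computes the average frequency of one magnitude class from an assumed magnitude–time density for each fault. The density model (a stationary magnitude distribution multiplied by a constant activity rate), the rates and the class limits are illustrative assumptions made only to show the mechanics of the double integral and the sum over faults.

```python
import numpy as np

def class_frequency(fault_models, m_low, m_up, t_life=40.0, n=400):
    """Average annual frequency F(M_i) of one magnitude class, Eq. (12).

    fault_models : list of callables f_j(m, t) giving the frequency density of
                   earthquakes on fault j per unit magnitude and unit time [1/yr].
    """
    m = np.linspace(m_low, m_up, n)
    t = np.linspace(0.0, t_life, n)
    M, T = np.meshgrid(m, t, indexing="ij")
    total = 0.0
    for f_j in fault_models:
        inner = np.trapz(f_j(M, T), t, axis=1)   # integral over time
        total += np.trapz(inner, m)               # integral over magnitude
    return total / t_life

# Illustrative, stationary fault models: activity rate * truncated-exponential magnitude density.
def make_fault(rate, m_min=5.0, m_max=7.0, beta=2.0):
    norm = 1.0 - np.exp(-beta * (m_max - m_min))
    return lambda m, t: rate * beta * np.exp(-beta * (m - m_min)) / norm

faults = [make_fault(0.02), make_fault(0.01)]     # two faults, assumed rates [1/yr]
print(f"F(6.0 <= M <= 6.5) = {class_frequency(faults, 6.0, 6.5):.2e} per year")
```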

Once the probabilistic scenario earthquakes are selected and their frequency is calculated (this is the required frequency of an initiating seismic event), it is easy to calculate scenario-specific hazard spectra, which will provide the input for subsequent analysis within the framework of a seismic PRA. Within this probabilistic framework it is possible to calculate uncertainty bounds on the average frequencies obtained from Eq. (9) or (12) by performing sensitivity analysis. It is also possible to calculate uncertainty bounds for the hazard spectra associated with each magnitude class, taking into account the total empirically observed uncertainty associated with the attenuation of seismic waves in the region of interest. Such estimates can easily be performed by propagating the uncertainties associated with the lack of knowledge of the values of the model parameters used through the model. Direct Monte Carlo


analysis or response-surface analysis techniques can be used depending on the complexity of the model.

It is important to note that the proposed probabilistic extension of the deterministic scenario-based method is in full compliance with the likelihood principle, the basic principle of any meaningful risk analysis. In its original mathematical formulation it says (Edwards, 1972, p. 30): “Within the framework of a statistical model, all the information which the data provide concerning the relative merits of two hypotheses is contained in the likelihood ratio of those hypotheses on the data… For a continuum of hypotheses, this principle asserts that the likelihood function contains all the necessary information.”

A shorter “common sense” formulation is: all information on any subject submitted to an investigation is contained in the data about this subject.

    Traditional PSHA violates this principle for the following reasons:

• The decomposition of the multivariate probability distribution of the occurrence of earthquakes (size (magnitude), epicentre location and time being the most important variates) into a set of independent univariate probability distributions (introduced by Cornell, 1968) leads indirectly to the assumption of the existence of a “hidden” earthquake undetectable by any scientific means near a site. Any earthquake of any size can occur anywhere in space and at any time with some frequency.

• Negative evidence is ignored (e.g. in many cases there is no geological or geomorphological evidence for the “controlling scenario earthquakes” derived from a traditional PSHA, especially in the near-site area, but this is ignored in the analysis).

• Replacement of observable scientific data by subjective probabilities (SSHAC, 1997) derived from expert judgement without empirical control.

The proposed probabilistic extension of the scenario-based seismic hazard analysis method removes the simplifying decomposition of the problem introduced by Cornell. Furthermore, it is focussed on the output as typically requested by risk analysts (frequency of critical events instead of exceedance probabilities of secondary hazard parameters).

    5. A question of names

Abrahamson (2006) repeated a frequent argument of proponents of PSHA that deterministic seismic hazard analysis by its true nature is also a probabilistic method. This is a misrepresentation of the question.




Deterministic seismic hazard analysis is called “deterministic” because it is based on facts, data and physical models describing the behaviour of earthquakes. Statistical (probabilistic) techniques, which are based on data analysis, are a natural part of this type of “deterministic” analysis. Therefore, the term “deterministic” is used in this paper to characterise approaches which are based on an increasingly deeper understanding of the underlying phenomena.

    6. Summary and conclusions

The methodology presented here for a deterministic scenario-based seismic hazard analysis incorporates all available information (in a geological, seismotectonic and geotechnical database of the site of interest), and advanced physical modelling techniques can provide a reliable and robust basis for the development of a deterministic design basis for civil infrastructures. The robustness of this approach is of special importance for critical infrastructures. At the same time, a scenario-based seismic hazard analysis can produce the necessary input for probabilistic risk assessment (PRA), as required by safety analysts and insurance companies. The scenario-based approach removes the ambiguity of the results of traditional probabilistic seismic hazard analysis (PSHA) which relies on the projections of the Gutenberg–Richter (G–R) equation, as practiced in some countries. The problems in the validity of G–R projections, because of incomplete to total absence of data for making the projections, are still unresolved. Consequently, the information from G–R must not be used in decisions for the design of critical structures or critical elements in a structure. The methodology discussed here is strictly based on observable facts and data and complemented by physical modelling techniques, which can be subjected to a formalised validation process. By sensitivity analysis, knowledge gaps related to lack of data can be resolved rapidly, as the scenarios are limited in number. In its probabilistic interpretation, the scenario-based approach is in full compliance with the likelihood principle, and therefore meets the requirements of modern risk analysis. The methodology of scenario-based seismic hazard analysis can easily be adjusted so that its output is the required and correct information for safety analysts and civil engineers. The methodology incorporates parameters appropriate as damage indices in the design of critical infrastructures and components, and thus supersedes outdated and inappropriate assessments of spike instrumental accelerations.


In a nutshell, scenario-based seismic hazard analysis should be preferred over the traditional PSHA for all applications because of its flexibility, robustness, use of physically meaningful data and reasonably conservative results, and for PRA when required by safety analysts and insurance companies.

    Acknowledgements

    The authors thank the anonymous reviewers for the discussion, which contributed significantly to improving the paper.

    Appendix A. Numerical Example

    A numerical example will illustrate the suggested scenario-based procedure. For simplicity, the solution of the optimisation problem will be performed using the simplifying assumption of Eq. (11) (numbering refers to the one in the main paper).

    A.1. Task specification

    Fig. A1 illustrates the task. A critical infrastructure shall be designed against earthquakes. It is located in the centre of the circle shown in Fig. A1. From the responsible project engineers it is known that modern design rules ensuring a ductile design of structures will be applied. It is also known that the characteristic first natural frequencies of the new structures are expected to be in the range of 3 Hz. The design lifetime of the critical infrastructure is 40 years. The very detailed site investigation performed allows the definition of an exclusion zone with respect to the existence of active, capable faults within a radius of 5 km around the site. This means that inside this area only small and deep earthquake events are feasible (Mw < 5.0). From the available geological and seismological database, it was concluded that in the surroundings of the site two significant linear sources (faults) have to be considered. The shortest distances to the site are D1 = 30 km and D2 = 25 km. The length of the surface projection of the first fault (line source LS1) is 21 km and of the second fault (line source LS2) is 15 km. For simplicity, it is assumed that the perpendicular from the site to the faults subdivides the fault surface projections of both line sources into two parts at a ratio of 2:1. Available data do not indicate any preferred location of epicentres along either fault; therefore a non-informative distribution of epicentre location has to be assumed. Based on historical data, two areal sources (AS1 and AS2) with some past seismic activity have been discovered, which have to be considered in the analyses. The shortest distance of both areal sources to the site is 5 km (adjoining the exclusion zone). Areal source AS1 extends up to a distance of 65 km, while areal source AS2 extends up to a distance of 98 km from the site. Detailed statistical analyses have been performed to develop temporal and spatial frequency distributions of earthquake occurrences at the different sources, including spatial distributions of epicentres for the areal sources. For simplification, the model of bivariate exponential distributions (Gumbel Type 2, see Noubary, 2000) is used both for the temporal distributions as well as for the spatial distributions. Detailed statistical analysis showed that the 95%-quantile of the magnitude distribution corresponds to a magnitude of 5.9 for source AS1 and 6.3 for source AS2.

    The joint distribution function of the bivariate exponential distribution (X, T being the variates) is given as:

    F(x, t) = (1 − e^(−λ1 x)) (1 − e^(−λ2 t)) [1 + α e^(−(λ1 x + λ2 t))]   (1)

    The joint density is given as:

    f(x, t) = λ1 λ2 e^(−(λ1 x + λ2 t)) [1 + α (2 e^(−λ1 x) − 1) (2 e^(−λ2 t) − 1)]   (2)

    Maximum likelihood estimators for the parameters λ1 and λ2 are based on the empirical means of X and T and are calculated simply as:

    λ̂1 = 1 / X̄   (3)

    and

    λ̂2 = 1 / T̄   (4)


    α is calculated based on the empirical correlation coefficient ρ:

    α̂ = 4 ρ̂   (5)

    In our application the random parameter X has the meaning of magnitude, while the random parameter T corresponds either to the elapsed time between two earthquakes (temporal distribution) or to the distance between epicentre location and site. Other statistical distributions, continuous as well as discrete ones, can be used depending on the results of the data analysis.
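    For illustration, the following Python sketch (our own; the function names and the sample data are hypothetical and not part of the original analysis) evaluates the joint distribution of Eqs. (1)–(2) and the simple estimators of Eqs. (3)–(5) from a sample of magnitude and inter-event time pairs.

        import numpy as np

        def fgm_exponential_cdf(x, t, lam1, lam2, alpha):
            # Joint distribution function of the bivariate exponential (Gumbel) model, Eq. (1)
            return (1 - np.exp(-lam1 * x)) * (1 - np.exp(-lam2 * t)) * \
                   (1 + alpha * np.exp(-(lam1 * x + lam2 * t)))

        def fgm_exponential_pdf(x, t, lam1, lam2, alpha):
            # Joint density of the bivariate exponential model, Eq. (2)
            return lam1 * lam2 * np.exp(-(lam1 * x + lam2 * t)) * \
                   (1 + alpha * (2 * np.exp(-lam1 * x) - 1) * (2 * np.exp(-lam2 * t) - 1))

        def estimate_parameters(x_sample, t_sample):
            # Eqs. (3)-(5): lambda_i = 1 / empirical mean, alpha = 4 * empirical rho;
            # alpha is clipped to the admissible range [-1, 1] of the model
            lam1 = 1.0 / np.mean(x_sample)
            lam2 = 1.0 / np.mean(t_sample)
            rho = np.corrcoef(x_sample, t_sample)[0, 1]
            return lam1, lam2, float(np.clip(4.0 * rho, -1.0, 1.0))

        # hypothetical catalogue data: magnitudes and inter-event times (years)
        mags = np.array([5.1, 5.6, 4.8, 6.0, 5.3])
        gaps = np.array([12.0, 35.0, 8.0, 60.0, 20.0])
        print(estimate_parameters(mags, gaps))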

    Upper limit estimates for the statistical models can be provided by accounting for the error in the magnitude and location estimates. The simplest procedure consists of an estimate of the upper limit for the maximum magnitude (e.g. mean+2σ, or 95%-quantile) and the lower limit for the distance (in the case of a spatial distribution for an areal source, e.g. mean−2σ). The procedure is similar with respect to the elapsed time between events.

    Statistical analysis was performed in units of moment magnitude, years (time) and km (distance).

    Table 1 shows the information available for the line sources. Tables 2a and 2b show the available information for the areal sources for the considered case.

    In addition, for our example it is assumed that

    • detailed physical modelling has confirmed that for the relevant sources a simple amplitude-decay model for ground motion attenuation is acceptable,

    • validated attenuation models for each of the sources have been established in terms of ground motion (spectral accelerations).

    With respect to attenuation equations, a set of four source-specific equations is available, reflecting the different topographical and directivity conditions with respect to seismic wave propagation from the different sources to the site. The general format of these equations is:

    log(Sa) = a + b M_w + c log(R) + d R + P σ   (6)

    with R = (D_JB² + h²)^(1/2) and D_JB representing the Joyner–Boore distance.

    Table 3 shows the coefficients of the equation for the line source LS1, Table 4 for LS2, Table 5 for the areal source AS1 and Table 6 for the areal source AS2. These equations have been developed especially for this analysis by modifying the baseline equation of Table 3. They are not to be used for any other purpose. For simplicity, a constant standard deviation of 0.28 (in log-scale) is assumed for all equations.
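    As an illustration of how Eq. (6) is applied, the short sketch below (our own; the helper name is arbitrary) evaluates the median spectral acceleration predicted by the LS2 coefficients at 2.5 Hz from Table 4. The logarithm in Eq. (6) is taken here to base 10, consistent with the derivative used later in Eq. (21).

        import math

        def log10_sa(a, b, c, d, h, mw, d_jb, p=0.0, sigma=0.28):
            # Eq. (6): log(Sa) = a + b*Mw + c*log10(R) + d*R + P*sigma,
            # with R = sqrt(DJB^2 + h^2) and DJB the Joyner-Boore distance
            r = math.sqrt(d_jb ** 2 + h ** 2)
            return a + b * mw + c * math.log10(r) + d * r + p * sigma

        # Table 4 (LS2), 2.5 Hz: coefficients a, b, c, d, h
        a, b, c, d, h = -1.74505, 0.3655505, -0.555555, -0.0079937, 6.0624
        # median (P = 0) spectral acceleration for an LS2 event at its shortest distance
        print(10 ** log10_sa(a, b, c, d, h, mw=6.7, d_jb=25.0))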


    Table 1. Data for line sources

    Source | Fault length, km | Fault length error (std. dev.), km | Shortest distance to site, km | Statistical model for f2 (Eq. (11)) | Parameters of the model (λ1, λ2, α) | Parameters of the model, upper limit (λ1, λ2, α)
    LS1 | 25 | 5 | 30 | Bivariate exponential (Gumbel) | 0.17, 0.009, 0.79 | 0.15, 0.013, 0.79
    LS2 | 17 | 3 | 25 | Bivariate exponential (Gumbel) | 0.23, 0.0051, 0.69 | 0.20, 0.0082, 0.68


    For our example, it is assumed that from the information in the geological and seismological database the following correlations for the relationship between fault rupture length and moment magnitude, as well as between fault length and moment magnitude, have been established:

    log(L_R) = −3.6 + 0.75 M + P σ   (7)

    with σ=0.1 and

    log(L_fault) = −3.25 + 0.72 M + P σ   (8)

    with σ = 0.1. It is also assumed that the regression technique used for the development of these equations possesses the property of orthogonality.

    A.2. Deterministic scenario-based analysis

    According to the procedure of the deterministic scenario-based seismic hazard analysis, the first step consists in the evaluation of the maximum credible earthquake.

    A conservative way of performing this task consists in the assumption that the whole fault length established by measurement could rupture. Additionally, the uncertainty of the measurement should be considered. The analysis is performed for a critical infrastructure. Therefore, we base our analysis on the mean+1σ value of the estimated fault length as well as on the mean−1σ value (inverse problem) obtained from Eqs. (7) and (8).

    So, we obtain for the linear source LS1 an MCE value of Mw = 6.9. In case we want to base our analysis on the more realistic correlation between fault length and magnitude, removing the assumption of a complete rupture of the fault, the result would be Mw = 6.7. Therefore, the difference is not very large. Neglecting the uncertainty but keeping the assumption that the fault can rupture completely results in a magnitude value of 6.8. So the discussion confirms that MCE magnitudes behave robustly with respect to a modification of the data on fault or rupture lengths.

    Therefore, we accept the following value

    MCE_LS1 = 6.9.

    Repeating the same procedure for line source LS2, we obtain a magnitude value of

    MCE_LS2 = 6.7.
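    These MCE values can be reproduced in a few lines; the sketch below (our own helper, not part of the original analysis) inverts Eqs. (7) and (8), using the mean+1σ fault length and the mean−1σ regression residual as described above.

        import math

        def mce_from_length(length_km, sigma_length, a, b, sigma_reg, n_sigma=1.0):
            # Invert log10(L) = a + b*M + P*sigma_reg for M, using a conservative
            # (mean + n_sigma) fault length and a (mean - n_sigma) regression residual
            length = length_km + n_sigma * sigma_length
            return (math.log10(length) - a + n_sigma * sigma_reg) / b

        # Eq. (7), rupture length (a = -3.6, b = 0.75, sigma = 0.1), assuming the
        # whole measured fault (Table 1) could rupture:
        print(round(mce_from_length(25.0, 5.0, -3.6, 0.75, 0.1), 1))   # LS1 -> 6.9
        print(round(mce_from_length(17.0, 3.0, -3.6, 0.75, 0.1), 1))   # LS2 -> 6.7

        # Eq. (8), fault length (a = -3.25, b = 0.72, sigma = 0.1), i.e. without
        # assuming complete rupture:
        print(round(mce_from_length(25.0, 5.0, -3.25, 0.72, 0.1), 1))  # LS1 -> 6.7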

    The next task consists in the evaluation of the MCE magnitudes for the areal sources. According to our procedure we use the 95%-quantile of the historical magnitude distribution as the magnitude value for the MCE. The resulting MCE for the areal source AS1 is

    MCE_AS1 = 5.9.

    Accordingly, we obtain for the second areal source:

    MCE_AS2 = 6.3.

    Table 2a. Data for areal sources, distributions for f2 (Eq. (11) in main paper)

    Source | Shortest distance to site, km | Statistical model for f2 (Eq. (11)) | Parameters of the model (λ1, λ2, α) | Parameters of the model, upper limit (λ1, λ2, α)
    AS1 | 5 | Bivariate exponential (Gumbel) | 0.35, 0.132, 0.84 | 0.32, 0.143, 0.81
    AS2 | 5 | Bivariate exponential (Gumbel) | 0.31, 0.124, 0.89 | 0.28, 0.17, 0.88

    Table 2b. Data for areal sources, distributions for f3 (Eq. (11) in main paper)

    Source | Shortest distance to site, km | Statistical model for f3 (Eq. (11)) | Parameters of the model (λ1, λ2, α) | Parameters of the model, upper limit (λ1, λ2, α)
    AS1 | 5 | Conditional probability, based on a bivariate exponential model (Gumbel) | 0.35, 0.031, 0.8 | 0.32, 0.035, 0.65
    AS2 | 5 | Conditional probability, based on a bivariate exponential model (Gumbel) | 0.31, 0.022, 0.72 | 0.28, 0.03, 0.64

    Once we have established the maximum credible earthquakes (in practical applications the values may be rounded off to the next larger quarter of a magnitude unit, so that the final values would be 7.0 and 6.75 for the line sources and 6.0 and 6.5 for the areal sources AS1 and AS2), we are able to calculate the corresponding hazard spectra and the associated CAV values, assuming the shortest distance between source and site.

    Because in our case we know that the new construction will correspond to modern requirements of a ductile design, we may use the CAV value as the criterion for which scenario to select for the design of the new infrastructure. The alternative could simply consist in the use of an envelope of the hazard spectra for all four MCE scenarios using the source-specific attenuation models. The latter approach is more conservative. It is close to the approach frequently used by practitioners, where the most conservative of all attenuation equations for a region is used if source-specific models are not available.

    Fig. A2 shows the resulting hazard spectra for the 4 scenario earthquakes based on the mean regression models. Fig. A3 shows the same comparison for the mean+1σ values. It is observed that at lower frequencies the hazard is dominated by line source 2, while in the higher frequency range the areal sources contribute to the hazard envelope. Nevertheless, a design based on scenario 2 only is sufficient, because it results in the highest spectral acceleration values in the range of the first natural frequency of the considered construction. Furthermore, the differences to the other scenarios at higher frequencies are low. Additionally, we may prefer to consider the information available with respect to the epicentre distribution in the areal sources. They indicate that the expected value for the distance to the site is much higher than 5 km (28.6 to 33.3 km, according to the statistical analysis). This consideration would allow the exclusion of the areal sources from further consideration.

    Table 3. Coefficients of attenuation model for LS1

    Spectral frequency, Hz | a | b | c | d | h
    PGA (50) | −1.5537 | 0.2396 | −0.62494 | −0.0081622 | 5.4294
    35 | −1.5558 | 0.2648 | −0.66713 | −0.0085626 | 5.658
    25 | −1.6455 | 0.27332 | −0.63828 | −0.0087011 | 5.0448
    20 | −1.3713 | 0.23727 | −0.63121 | −0.0086357 | 4.9516
    13.33 | −1.3756 | 0.24517 | −0.63336 | −0.0086132 | 5.268
    10 | −1.2412 | 0.23763 | −0.63708 | −0.0086018 | 5.607
    6.67 | −0.96632 | 0.21371 | −0.62504 | −0.0083936 | 6.1966
    5 | −1.0168 | 0.21242 | −0.57166 | −0.0082279 | 5.8137
    4 | −1.103 | 0.22025 | −0.56614 | −0.0081654 | 6.765
    2.5 | −2.053 | 0.31787 | −0.50505 | −0.0079937 | 4.8624
    2 | −2.5039 | 0.35523 | −0.46556 | −0.0079405 | 4.6353
    1.34 | −2.6029 | 0.357 | −0.45591 | −0.0078623 | 4.617
    1 | −3.0338 | 0.38841 | −0.42746 | −0.0078021 | 4.0694
    0.667 | −3.521 | 0.42579 | −0.41148 | −0.0077495 | 4.5939
    0.5 | −3.9299 | 0.46231 | −0.41078 | −0.0077495 | 4.7113

    Fig. A4 shows the enveloping hazard spectra for the mean regression and the mean+1σ models.

    It is interesting to observe that the most critical scenario results from line source LS2, which has a smaller maximum credible magnitude than line source LS1. This is the result of the shorter minimal distance and the large differences between the source-specific attenuation equations of the two sources. This emphasises the importance of the development of a source-specific attenuation model or the use of detailed wave propagation models (e.g. the use of synthetic seismograms).

    Note that any smaller seismic event at any of the four sources will not exceed the enveloping hazard developed from the scenarios.

    A.3. Probabilistic scenario-based hazard analysis

    A.3.1. Introductory discussion

    Note that an upper limit for the probability of the critical scenario 2 (this is not to be set equal to the total frequency of scenarios in the same magnitude class as described in the main paper) can be assessed with the help of the recommended bivariate exponential model (neglecting for simplicity the truncation at the “physical limit of m = 6.7” in our introductory discussion). The conditional probability of occurrence of an earthquake with magnitude X exceeding a specified value given a certain length of time (the lifetime of our structure) is, for the bivariate exponential model, calculated as:

    P(X > m | T > T_Life) = P(X > m, T > T_Life) / P(T > T_Life)   (9)


    Table 4. Coefficients of attenuation model for LS2

    Spectral frequency, Hz | a | b | c | d | h
    PGA (50) | −1.320645 | 0.27554 | −0.687434 | −0.0081622 | 6.6294
    35 | −1.32243 | 0.30452 | −0.733843 | −0.0085626 | 6.858
    25 | −1.398675 | 0.314318 | −0.702108 | −0.0087011 | 6.2448
    20 | −1.165605 | 0.2728605 | −0.694331 | −0.0086357 | 6.1516
    13.33 | −1.16926 | 0.2819455 | −0.696696 | −0.0086132 | 6.468
    10 | −1.05502 | 0.2732745 | −0.700788 | −0.0086018 | 6.807
    6.67 | −0.821372 | 0.2457665 | −0.687544 | −0.0083936 | 7.3966
    5 | −0.86428 | 0.244283 | −0.628826 | −0.0082279 | 7.0137
    4 | −0.93755 | 0.2532875 | −0.622754 | −0.0081654 | 7.965
    2.5 | −1.74505 | 0.3655505 | −0.555555 | −0.0079937 | 6.0624
    2 | −2.128315 | 0.4085145 | −0.512116 | −0.0079405 | 5.8353
    1.34 | −2.212465 | 0.41055 | −0.501501 | −0.0078623 | 5.817
    1 | −2.57873 | 0.4466715 | −0.470206 | −0.0078021 | 5.2694
    0.667 | −2.99285 | 0.4896585 | −0.452628 | −0.0077495 | 5.7939
    0.5 | −3.340415 | 0.5316565 | −0.451858 | −0.0077495 | 5.9113


    Using F(x, t) and the marginal distribution of T, this yields

    P(X > m | T > T_Life) = e^(−λ1 m) [1 + α (1 − e^(−λ1 m)) (1 − e^(−λ2 T_Life))]   (10)

    The conditional probability of earthquake occurrence within an interval (m1, m2) is given as:

    P(X > m1 | T > T_Life) − P(X > m2 | T > T_Life)   (11)

    The averaged annual frequency is obtained by dividing the result by the lifetime of the structure. This approach can also be used for line source 1 and, in a similar way, for the areal sources (neglecting the spatial distribution of seismic activity) and for other magnitude values. Combining the obtained frequencies with the worst-case scenario (the deterministic scenario earthquake at LS2 with magnitude 6.7) and summing up over all frequencies of the corresponding magnitude class leads to a conservative risk model for the infrastructure for the considered seismic initiating event, because the impact is maximised under our assumptions:

    • the scenario earthquake is located at the shortest distance to the site,

    • the analysis of seismic wave attenuation indicated the applicability of a simple amplitude-decay model.

    Risk analysts are interested in a more realistic assessment. Therefore, a more detailed probabilistic analysis following the procedure in the main paper is required.
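    A minimal sketch of the upper-limit estimate described above, assuming the upper-limit parameters of line source LS2 from Table 1 (λ1 = 0.20, λ2 = 0.0082, α = 0.68), a 40-year design life and no lower-magnitude truncation; it simply evaluates Eqs. (10) and (11) for the magnitude class 1 interval and converts the result into an averaged annual frequency.

        import math

        def p_exceed_given_lifetime(m, lam1, lam2, alpha, t_life):
            # Eq. (10): P(X > m | T > T_Life) for the bivariate exponential model
            return math.exp(-lam1 * m) * (
                1 + alpha * (1 - math.exp(-lam1 * m)) * (1 - math.exp(-lam2 * t_life)))

        lam1, lam2, alpha, t_life = 0.20, 0.0082, 0.68, 40.0

        # Eq. (11): probability of a magnitude in the interval (6.5, 6.9) within the lifetime
        p_interval = (p_exceed_given_lifetime(6.5, lam1, lam2, alpha, t_life)
                      - p_exceed_given_lifetime(6.9, lam1, lam2, alpha, t_life))

        # averaged annual frequency: divide by the lifetime of the structure
        print(p_interval / t_life)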

    Table 5. Coefficients of attenuation model for AS1

    Spectral frequency, Hz | a | b | c | d | h
    PGA (50) | −1.70907 | 0.27554 | −0.812422 | −0.0081622 | 5.4294
    35 | −1.71138 | 0.30452 | −0.867269 | −0.0085626 | 5.658
    25 | −1.81005 | 0.314318 | −0.829764 | −0.0087011 | 5.0448
    20 | −1.50843 | 0.2728605 | −0.820573 | −0.0086357 | 4.9516
    13.33 | −1.51316 | 0.2819455 | −0.823368 | −0.0086132 | 5.268
    10 | −1.36532 | 0.2732745 | −0.828204 | −0.0086018 | 5.607
    6.67 | −1.062952 | 0.2457665 | −0.812552 | −0.0083936 | 6.1966
    5 | −1.11848 | 0.244283 | −0.743158 | −0.0082279 | 5.8137
    4 | −1.2133 | 0.2532875 | −0.735982 | −0.0081654 | 6.765
    2.5 | −2.2583 | 0.3655505 | −0.656565 | −0.0079937 | 4.8624
    2 | −2.75429 | 0.4085145 | −0.605228 | −0.0079405 | 4.6353
    1.34 | −2.86319 | 0.41055 | −0.592683 | −0.0078623 | 4.617
    1 | −3.33718 | 0.4466715 | −0.555698 | −0.0078021 | 4.0694
    0.667 | −3.8731 | 0.4896585 | −0.534924 | −0.0077495 | 4.5939
    0.5 | −4.32289 | 0.5316565 | −0.534014 | −0.0077495 | 4.7113

    Table 6. Coefficients of attenuation model for AS2

    Spectral frequency, Hz | a | b | c | d | h
    PGA (50) | −1.39833 | 0.2396 | −0.718681 | −0.0081622 | 5.4294
    35 | −1.40022 | 0.2648 | −0.7671995 | −0.0085626 | 5.658
    25 | −1.48095 | 0.27332 | −0.734022 | −0.0087011 | 5.0448
    20 | −1.23417 | 0.23727 | −0.7258915 | −0.0086357 | 4.9516
    13.33 | −1.23804 | 0.24517 | −0.728364 | −0.0086132 | 5.268
    10 | −1.11708 | 0.23763 | −0.732642 | −0.0086018 | 5.607
    6.67 | −0.869688 | 0.21371 | −0.718796 | −0.0083936 | 6.1966
    5 | −0.91512 | 0.21242 | −0.657409 | −0.0082279 | 5.8137
    4 | −0.9927 | 0.22025 | −0.651061 | −0.0081654 | 6.765
    2.5 | −1.8477 | 0.31787 | −0.5808075 | −0.0079937 | 4.8624
    2 | −2.25351 | 0.35523 | −0.535394 | −0.0079405 | 4.6353
    1.34 | −2.34261 | 0.357 | −0.5242965 | −0.0078623 | 4.617
    1 | −2.73042 | 0.38841 | −0.491579 | −0.0078021 | 4.0694
    0.667 | −3.1689 | 0.42579 | −0.473202 | −0.0077495 | 4.5939
    0.5 | −3.53691 | 0.46231 | −0.472397 | −0.0077495 | 4.7113

    A.3.2. Detailed probabilistic analysis

    In a first step it is necessary to scale the suggested probabilistic models (the bivariate exponential distribution), which in principle allow infinite values of X (meaning magnitude or distance to site), for application in an interval. Due to the correlation between X and time, the calibration factor K is time dependent. The factor can be calculated from the joint distribution function (Eq. (1)):

    K(t) = [F(∞, t) − F(0, t)] / [F(x_u, t) − F(x_l, t)]   (12)

    For t → ∞ the calibration coefficient obtains its usual univariate format:

    K = e^(−λ1 x_l) / (1 − e^(−λ1 (x_u − x_l)))   (13)

    Here, x_u and x_l are the upper and lower limits of the random variable X (here meaning magnitude or distance).
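    A short sketch of the calibration step, assuming the best-estimate LS2 parameters from Table 1 (λ1 = 0.23, λ2 = 0.0051, α = 0.69) and a magnitude window from m_l = 2.0 to m_u = 6.7; the time-dependent factor of Eq. (12) is evaluated directly from Eq. (1).

        import math

        def fgm_cdf(x, t, lam1, lam2, alpha):
            # joint distribution function of Eq. (1)
            return (1 - math.exp(-lam1 * x)) * (1 - math.exp(-lam2 * t)) * \
                   (1 + alpha * math.exp(-(lam1 * x + lam2 * t)))

        def calibration_factor(t, x_l, x_u, lam1, lam2, alpha, x_inf=1.0e6):
            # Eq. (12): K(t) = [F(inf, t) - F(0, t)] / [F(x_u, t) - F(x_l, t)]
            num = fgm_cdf(x_inf, t, lam1, lam2, alpha) - fgm_cdf(0.0, t, lam1, lam2, alpha)
            den = fgm_cdf(x_u, t, lam1, lam2, alpha) - fgm_cdf(x_l, t, lam1, lam2, alpha)
            return num / den

        # LS2 best-estimate magnitude model, magnitudes truncated to [2.0, 6.7]
        print(calibration_factor(40.0, 2.0, 6.7, 0.23, 0.0051, 0.69))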

    Fig. A2. Comparison of scenario hazard spectra — mean regression model.

    The first step in our risk analysis consists in the calculation of the frequency of initiating events for each class of events. After the initial seismological analysis, we decided to consider the following event classes:

    • magnitude between 6.5 and 6.9 – magnitude class 1
    • magnitude between 6.0 and 6.5 – magnitude class 2
    • magnitude between 5.5 and 6.0 – magnitude class 3.

    Because the design of the considered infrastructure will be very robust (designed against a conservative MCE scenario), it is not necessary to consider more events in the analysis.

    Fig. A3. Comparison of scenario hazard spectra — mean+1 sigma model.

    Fig. A4. Comparison of enveloping hazard spectra.

    Table 7. Initiating event frequencies of the scenario earthquakes

    Scenario earthquake (magnitude class) | Magnitude range | Frequency (best estimate) | Frequency, upper limit
    1 | 6.5–6.9 | 0.00107 | 0.00113
    2 | 6.0–6.5 | 0.0284 | 0.14
    3 | 5.5–6.0 | 0.0742 | 0.316

    First, we calculate the frequency of class 1 events. Only the linear sources LS1 and LS2 contribute to this class. Additionally, the magnitude truncation at magnitude 6.7 has to be considered for LS2. Therefore, the frequency of the events in class 1 can be calculated as the integral over time of a sum of two integrals:

    F(6.5 ≤ m ≤ 6.9) = (1/T_Life) ∫_0^T_Life [ ∫_6.5^6.9 f_LS1(m, t) dm + ∫_6.5^6.7 f_LS2(m, t) dm ] dt   (14)

    Here, f_LS1 and f_LS2 are the calibrated joint density functions of the bivariate exponential model for the line sources LS1 and LS2, respectively.
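    The following sketch (our own) shows one way to evaluate Eq. (14) numerically, using the best-estimate LS1 and LS2 parameters from Table 1 and densities built from Eq. (2). The calibration factors are set to 1 here for brevity, so the printed number is illustrative rather than a reproduction of Table 7.

        import numpy as np

        def fgm_pdf(m, t, lam1, lam2, alpha):
            # joint density of the bivariate exponential model, Eq. (2)
            return lam1 * lam2 * np.exp(-(lam1 * m + lam2 * t)) * \
                   (1 + alpha * (2 * np.exp(-lam1 * m) - 1) * (2 * np.exp(-lam2 * t) - 1))

        def class_frequency(sources, windows, t_life=40.0, n=400):
            # Eq. (14): (1/T_Life) * time integral of the summed magnitude integrals;
            # a simple midpoint rule is used for both integrations
            dt = t_life / n
            t_mid = (np.arange(n) + 0.5) * dt
            total = 0.0
            for (lam1, lam2, alpha, k), (m1, m2) in zip(sources, windows):
                dm = (m2 - m1) / n
                m_mid = m1 + (np.arange(n) + 0.5) * dm
                mm, tt = np.meshgrid(m_mid, t_mid)
                total += k * fgm_pdf(mm, tt, lam1, lam2, alpha).sum() * dm * dt
            return total / t_life

        # (lambda1, lambda2, alpha, calibration factor) for LS1 and LS2, best estimate
        ls1 = (0.17, 0.009, 0.79, 1.0)
        ls2 = (0.23, 0.0051, 0.69, 1.0)
        print(class_frequency([ls1, ls2], [(6.5, 6.9), (6.5, 6.7)]))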

    For event class 2, areal source AS2 has to be considered in addition to the line sources. Because our analysis is based on Eq. (11) of the main paper, an integration over the area is not required for the evaluation of the total frequency of earthquake events in this class. Therefore, the resulting equation is again an integral over time of a sum of integrals:

    F(6.0 ≤ m ≤ 6.5) = (1/T_Life) ∫_0^T_Life [ ∫_6.0^6.5 (f_LS1(m, t) + f_LS2(m, t)) dm + ∫_6.0^6.3 f_AS2(m, t) dm ] dt   (15)

    Similarly, we obtain the frequency for event class 3:

    F(5.5 ≤ m ≤ 6.0) = (1/T_Life) ∫_0^T_Life [ ∫_5.5^6.0 (f_LS1(m, t) + f_LS2(m, t) + f_AS2(m, t)) dm + ∫_5.5^5.9 f_AS1(m, t) dm ] dt   (16)


    The calculated frequencies of events are shown in Table 7. A lower magnitude level of m_l = 2.0 was used in the analysis.

    A.3.3. Solution of the optimisation problem

    For a more realistic derivation of the scenarios, the optimisation problem according to Eq. (10) in the main paper has to be solved. The optimisation problem can be simplified under certain conditions. For example, if

    • the selected ground motion characteristic follows a simple amplitude-decay model, and

    • the spatial distribution over the source is non-informative (uniform distribution or beta-distribution with shape parameters smaller than 1 within the distance interval),

    then the scenario earthquake can be assumed to occur at the shortest distance between source and site. Under these conditions a simple comparison between the resulting hazard spectra (as performed for the deterministic case in Section A.2) is sufficient to identify the critical scenario for each magnitude class.

    In our example, these conditions are fulfilled for the line sources but not for the areal sources. Because the first natural frequency of the considered infrastructure is in the range of 3 Hz, we solve the optimisation problem with respect to the spectral acceleration at 3 Hz.

    A.3.3.1. Magnitude class 1. Only the two line sources actually contribute to this magnitude class. According to the task description we do not have any relevant information on the spatial distribution of seismicity along the faults. Therefore, the earthquake scenario to be considered in the risk study corresponds to the deterministic scenario earthquake occurring at the closest distance between line source LS2 and the site. Fig. A5 shows the corresponding hazard spectrum (regression mean).

    Fig. A5. Hazard spectrum of the scenario earthquake of magnitude class 1.

    Fig. A6. Hazard spectra of the candidate scenario earthquakes of magnitude class 2.

    Fig. A7. Hazard spectra of the candidate scenario earthquakes of magnitude class 3.

    A.3.3.2. Magnitude class 2. Contributors to this class are both line sources and the areal source AS2. For the areal source a probabilistic model for the spatial distribution of seismicity is given. For the line sources once again a simplified analysis is sufficient, assuming the occurrence of the candidate scenario earthquakes at the shortest distance to the site. Therefore, the optimisation problem converts into the task of finding the location of the candidate scenario earthquake for areal source 2 and a comparison of the hazard spectra of all candidate scenarios. Because for the areal source we also apply an amplitude-decay model, it is sufficient to solve the reduced optimisation problem

    find(r_opt) → max ∫_0^T_Life ∫_5.5^5.9 ∫_r_min^r_max f_AS2(m, t) f_AS2(r | m) dm dr dt   (17)

    to find the candidate scenario earthquake for the areal source 2. The integration variables can be separated. Therefore, it is possible to perform a further reduction of the optimisation problem:

    find(r_opt) → max ∫_r_min^r_max f_AS2(r | m) dr   (18)

    The conditional probability can be calculated in analogy to Eq. (10). For the solution it is sufficient to find the location r_opt maximising the conditional probability for the lower magnitude value of the considered interval (5.5). From Eq. (10) it can be concluded that the candidate scenario earthquake for the areal source AS2 is also located at the boundary of the source (the modal value is located at the shortest distance). Therefore, for the final selection of the scenario earthquake for magnitude class 2 we have to compare the hazard spectra from the 3 contributing sources LS1, LS2 and AS2. For the line sources, the magnitude value to be considered is m = 6.5, while for the areal source the magnitude value is 6.3 (maximal value). Fig. A6 shows the comparison of the hazard spectra for the 3 candidate scenarios. The hazard spectrum of the candidate scenario from line source 2 shows the highest value for the spectral acceleration at 3 Hz, although the corresponding value for areal source 2 is close. Because the candidate scenario earthquake from line source LS2 is associated with a larger magnitude value (with a larger energy content), the candidate scenario from line source LS2 has to be selected as the final scenario earthquake for magnitude class 2.

    Fig. A8. Comparison of probabilistic (magnitude class 1) and deterministic hazard spectra.

    A.3.3.3. Magnitude class 3. The solution of the optimisation problem for magnitude class 3 follows the discussion in Section A.3.3.2. All sources contribute to this magnitude class. Once again the candidate scenario earthquakes are located at the boundary of the areal sources and at the shortest distance between the line sources and the site. Therefore, the final scenario earthquake is to be selected by a comparison of the hazard spectra of the candidate scenarios from each source. Fig. A7 shows the comparison. Again, the candidate scenario earthquake of line source LS2 leads to the largest spectral acceleration at 3 Hz. Therefore, it has to be selected as the final scenario earthquake for magnitude class 3.

    A.4. Seismic risk evaluation

    Most seismic risk studies (e.g. for nuclear power plants), as well as the corresponding software, are based on hazard curves. The new methodology does not require the development of hazard curves because the frequency of seismic initiating events is calculated directly. Instead of hazard curves, it is required to calculate the conditional probability of exceedance of the scenario earthquakes' hazard spectra, including the corresponding uncertainty distribution. Together with the calculated frequencies of initiating events, this allows the use of existing risk software to perform a seismic PRA (Probabilistic Risk Assessment).

    For the calculation of the conditional hazard spectra exceedance probability it is possible to use the model of a lognormal distribution of spectral accelerations for a given scenario earthquake. To use this model correctly, we have to adjust the uncertainty values of our attenuation equations. Attenuation equations represent multivariate distributions of spectral accelerations in dependence of magnitude, distance and additional parameters not used explicitly as explanatory variables in the equation. The uncertainty caused by these additional explanatory variables is frequently confused with inherent randomness of earthquakes and named aleatory uncertainty (Abrahamson, 2006; SSHAC, 1997). Because this uncertainty is epistemic by nature, it is more appropriate to call this uncertainty “(temporarily) irreducible epistemic uncertainty”. This irreducible part has to be treated as random in our model. The contribution of the uncertainty of magnitude and distance can be eliminated from our probabilistic model because the selected scenarios are characterised by a fixed (upper estimate) and known magnitude value and a fixed and known distance between the earthquake location and the site. Furthermore, the selected scenarios are conservative with respect to all scenarios within the same magnitude class. Considering that the error term in our attenuation Eq. (6) can be represented as

    σ = [ (∂g(m, r)/∂m · σ_m)² + 2 ρ σ_m σ_r + (∂g(m, r)/∂r · σ_r)² + σ_ired² ]^(1/2)   (19)

    we can calculate the irreducible, residual part of the uncertainty, σ_ired, to be considered in the probabilistic model. g is the functional form of the attenuation equation (Eq. (6)). The correlation coefficient ρ can be set to 1, because a strong physical correlation exists between magnitude and epicentre location at the fault rupture plane. Furthermore, these two parameters are correlated in our case because the scenarios in terms of magnitude and distance pairs represent the solution of an optimisation problem. The errors of magnitude and distance can be evaluated. As an example we perform the calculation for magnitude class 1. For σ_m we have to consider a value of 0.4 magnitude units, because the selected scenario earthquake completely envelopes all scenarios within this magnitude class with respect to the used impact parameter (Sa). The value for σ_r should be evaluated from the spatial distribution of seismicity in the area surrounding the site. For magnitude class 1 we have to consider the two line sources as contributors. For our analysis we use the minimal value of σ_r of both faults (conservative assessment). Based on the data in our example and the theorem of Pythagoras we get for each of the line sources the following relation for the error:

    σ_r = (a² + D²)^(1/2) − D   (20)

    where a is the larger of the two fault sections formed by the perpendicular between the line source and the site; D is the length of the perpendicular (the shortest distance in our example).

    In our example we obtain the value of σ_r = 1.9 km for line source LS2.
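    This value can be checked directly; the sketch below (our own) applies Eq. (20) to line source LS2, using the 15 km surface projection split at the 2:1 ratio stated in the task specification.

        import math

        def sigma_r(a, d):
            # Eq. (20): sigma_r = sqrt(a^2 + D^2) - D, with a the larger fault section
            # formed by the perpendicular and D the shortest (perpendicular) distance
            return math.sqrt(a * a + d * d) - d

        # LS2: 15 km surface projection split 2:1 -> larger section a = 10 km, D = 25 km
        print(round(sigma_r(10.0, 25.0), 1))   # -> 1.9 km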

    It is important to mention that the location uncertainty associated with an areal source is much higher than for a line source.



    The partial derivatives can be calculated from the source-specific attenuation equations.

    In our example, all considered scenarios originate from line source LS2. Therefore, we have to use the attenuation equation for the line source LS2 for the calculation of the partial derivatives. The partial derivative for m is just the coefficient b in our Eq. (6). For simplicity, we evaluate the uncertainty (as an example) for the spectral frequency of 2.5 Hz. Therefore, b = 0.31787. Then the resulting contribution of magnitude uncertainty to the uncertainty of the attenuation equation is 0.127. The partial derivative with respect to r is:

    ∂g(m, r)/∂r = c / (r ln(10)) + d   (21)

    We evaluate the derivative for r at the shortest distance between fault and site, neglecting the contribution of depth:

    r ≈ D_JB = 25

    Because the coefficient d is very small, we can neglect its contribution. For c = 0.556 (line source LS2, 2.5 Hz) we obtain for the resulting contribution of location uncertainty to the uncertainty of the attenuation equation a value of 0.02. Based on Eq. (19) we can calculate the irreducible part of the uncertainty. This irreducible uncertainty is σ_ired = 0.191 (instead of 0.28 obtained from regression). Using the model of a lognormal distribution of spectral accelerations for a given scenario, we can calculate a “mean” hazard spectrum and the required quantile spectra.
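    The two contributions quoted above can be reproduced as follows (a sketch under the stated assumptions, using the values b = 0.31787, c = 0.556, σ_m = 0.4, σ_r = 1.9 km and r = 25 km from the text); the irreducible value σ_ired then follows from Eq. (19).

        import math

        b, c = 0.31787, 0.556          # coefficients used in the text for 2.5 Hz
        sigma_m, sigma_r, r = 0.4, 1.9, 25.0

        # contribution of magnitude uncertainty: (dg/dm) * sigma_m = b * sigma_m
        contrib_m = b * sigma_m                            # about 0.127

        # contribution of location uncertainty, Eq. (21) with the small d term neglected:
        # (dg/dr) * sigma_r = (c / (r * ln(10))) * sigma_r
        contrib_r = c / (r * math.log(10.0)) * sigma_r     # about 0.02

        print(round(contrib_m, 3), round(contrib_r, 3))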

    We can also calculate the conditional probability of exceedance of our design spectrum. This delivers the required information for a subsequent probabilistic risk assessment.
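    As an illustration of this step, the sketch below (our own; the numerical values are placeholders) computes the conditional probability that the scenario spectral acceleration exceeds a given design value, assuming a lognormal distribution of Sa with the median taken from Eq. (6) and the irreducible σ_ired = 0.191 (in log10 units) derived above.

        import math

        def prob_exceedance(log10_sa_median, log10_sa_design, sigma_log10):
            # P(Sa > Sa_design | scenario) for a lognormally distributed Sa (log10 units)
            z = (log10_sa_design - log10_sa_median) / sigma_log10
            # survival function of the standard normal distribution
            return 0.5 * math.erfc(z / math.sqrt(2.0))

        # placeholder median log10(Sa) from Eq. (6) and a design value 1.5 times the median
        log10_median = -0.54
        log10_design = log10_median + math.log10(1.5)
        print(prob_exceedance(log10_median, log10_design, 0.191))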

    Fig. A8 shows a comparison between the probabilistic “mean” hazard spectrum