THOR Projects for 2020
Geodynamic Underpinnings of Great Earthquake Occurrence
Great earthquakes at subducting plate boundaries are the largest source of seismic hazard globally. Not only does the associated ground shaking topple cities, but the resulting tsunamis inundate vast stretches of low-lying coastline. Unfortunately for seismic hazard assessment, there is substantial uncertainty about which subduction zones should host great earthquakes and how often. Some subduction zones, like Chile and the Aleutians, have experienced great earthquakes, while others, like the Marianas, apparently have not. Are the Marianas and Tonga deficient in great earthquakes because the underlying physics precludes their occurrence, or have great thrust events simply not yet occurred there?
There is currently no accepted geophysical framework that explains the spatial and temporal variability of great earthquakes. This variability was long thought to be related to plate age and convergence rate – clear geodynamic controls from plate tectonics. Application of this principle led to the expectation that Cascadia should be capable of great earthquakes, and this 'prediction' spurred additional work showing that a great earthquake had indeed occurred there in 1700. But the simple age-convergence-rate framework did not anticipate the great Sumatra earthquake of 2004 and, over the last decade, has failed to account for the spatial variability of great earthquake occurrence. Nevertheless, state-of-the-art global computational models reproduce the overall state of stress in subduction zones and capture the basic end-member states globally, from Chilean-type subduction zones with strong coupling to Mariana-type zones with weak coupling. Tectonic forces are therefore likely to modulate the occurrence of great earthquakes, but there is no widely accepted model of how this works.
We propose a new collaborative research direction to develop such a framework. This will require development of geophysical insights and methods that close the gap between global-scale, long-time-scale approaches and regional-scale, shorter-time-scale approaches. Gurnis and Lapusta have made enormous progress at these two ends of the spectrum, and this project aims to seed an effort to close this scale gap. This research will likely contribute to a deeper understanding of the relation between tectonic and earthquake-source processes while paving the way for a new framework to forecast which subducting plate boundaries are capable of hosting great earthquakes.
Gurnis and colleagues have achieved considerable insight into how plate motions are related to the forces transmitted through the slab, across the plate interface, and into the converging plate through global flow models with 1 km resolution that solve the steady-state, long-time-scale motion of plates as a Stokes flow problem. These models predict the first-order differences seen between highly compressive subducting margins and highly tensile ones. Gurnis and colleagues have also been developing inverse (adjoint) approaches to estimate the long-time-scale (million-year) resistances expressed at these boundaries.
Lapusta and colleagues have made considerable progress in understanding earthquake sequences on regional-scale crustal faults, like the San Andreas Fault, by developing and applying multi-time-scale models that include rupture nucleation, dynamic rupture along the fault, energy dissipation, and seismic wave propagation. These models have simultaneously addressed how rupture is governed by friction and other fault properties, while optimizing parameters against geophysical observations, such as regional geodetic and seismic data.
For the subduction-zone earthquake problem, an emerging class of computations uses two-dimensional (2-D) small-scale cross-sections of generic subduction zones with visco-elastic constitutive relations. Such computations lead to earthquake-like ruptures and reveal earthquake-cycling phenomena, but they are not self-consistent with plate motions and do not incorporate realistic fault physics. There is enormous room to develop different and better modeling approaches. But perhaps more fundamentally, by being 2-D, the existing approaches cannot capture the phenomenon of great earthquakes, as it is the along-strike rupture extent that governs how big a subduction zone earthquake becomes.
Our goal is a new generation of time-dependent, multiscale plate-mantle models in which the details of plate boundaries are resolved in time and space. These developments will require substantial external support; this THOR project will seed the activity and allow us to obtain the initial software and science results required for successful new external proposals. We propose to work with one or two graduate students or postdocs over an initial two-year period. The primary activities will be:
- Formulating the multi-time-scale problem as fully dynamic from the perspective of "tectonics", that is, plate motions become a consequence of a sequence of great earthquakes. This will allow assessment of the computational challenges faced with multiple time-scales, and would represent an important new direction for our research. There could be first-order outcomes transcending the applied seismic-hazard problem. For example, the controls on long-term plate motions and topography could be governed by transient rheologies excited by the high strain-rates associated with great earthquakes.
- The global flow model uses software that solves for Stokes flow, code that underlies our 2015 Gordon Bell Prize in Supercomputing. We will explore if this software can be modified for visco-elastic, time-dependent flow, while still scaling on the largest supercomputers. Ultimately, considerable work will be required to scale this problem to a new generation of high-performance computers.
- Three-dimensional (3-D) modeling is essential for great earthquakes, but these computations will be substantially more challenging than the 2-D problem, and the full spherical one even more so. Consequently, for the visco-elastic problem, we will need to explore methods with space-dependent time stepping, a novel variant of multi-scale modeling. In other words, the region of a great earthquake with high strain rates will have small time steps while the remainder of the globe will have larger time steps. As strain-rates readjust globally and earthquakes unfold, these time-steps will be readjusted.
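The space-dependent time-stepping idea in the last bullet can be illustrated with a toy sketch. The code below (our illustrative assumption: forward-Euler substepping of a simple relaxation equation, not the actual visco-elastic solver) advances a high-strain-rate "rupture" cell with many small substeps while far-field cells take the large global step, with all cells resynchronizing at multiples of the coarse step:

```python
import numpy as np

def local_time_step(strain_rate, dt_max, dt_min, rate_threshold):
    """Assign each cell a time step: high-strain-rate cells (near the
    rupture) take the small step, the far field keeps the large one.
    The threshold criterion is a hypothetical stand-in for a real
    stability or error estimate."""
    return np.where(strain_rate > rate_threshold, dt_min, dt_max)

def advance(u, rate, t_end, dt_max, dt_min, rate_threshold):
    """Advance a toy relaxation equation du/dt = -rate * u to t_end.
    High-rate cells are sub-stepped inside each coarse step, so all
    cells synchronize at multiples of dt_max (the 'readjustment' times)."""
    dt = local_time_step(rate, dt_max, dt_min, rate_threshold)
    n_sub = np.round(dt_max / dt).astype(int)  # substeps per coarse step
    t = 0.0
    while t < t_end - 1e-12:
        for i in range(len(u)):
            sub_dt = dt_max / n_sub[i]
            for _ in range(n_sub[i]):
                u[i] -= rate[i] * u[i] * sub_dt  # forward-Euler substep
        t += dt_max
    return u

# Cell 0 mimics the rupture zone (high strain rate); cells 1-2 the far field.
rate = np.array([50.0, 0.1, 0.1])
u = advance(np.ones(3), rate, t_end=0.1, dt_max=0.05, dt_min=0.001,
            rate_threshold=1.0)
```

The large step would be unstable for the high-rate cell, which is exactly why locality in the time step matters: the substeps buy stability only where the dynamics demand it.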
Development of a Simulation Method for Compressible Geophysical Flows
With support from the Terrestrial Hazard Observation and Reporting (THOR) Center at Caltech, we propose a research collaboration to develop a new computational model for predicting complex, unsteady multiphase flows such as those that occur in avalanches and landslides. A critical aspect of these flows is capturing the transition from load-bearing, solid-like behavior to deformable, compressible fluid behavior. Experiments show that several mechanisms may contribute to the initiation of flow, including the fluid saturation level and external vibrations; once initiated, the flow evolves downstream, and granular material can become either entrained in the flow or deposited by it, resulting in changes in flow density and flow depth [1-4].
State-of-the-art predictive methods have not satisfactorily captured the solid-to-fluid transition or developed methodologies to determine and track the interface between stationary and moving material. Here we propose a collaboration to tackle these research challenges, which would combine the numerical techniques developed by Tim Colonius' research group with the experimental methodologies used by Melany Hunt's group. In addition, we hope that this collaboration can extend to other researchers at Caltech. Ultimately, the goal is to develop numerical modeling techniques that provide predictive capability to reduce the risks of these natural hazards.
Dr. Cheng-Chuan Lin is currently a postdoctoral scholar at National Taiwan University. With support from THOR, we would be able to provide partial support for him to continue his postdoctoral work at Caltech; he currently has some limited support that he could bring from Taiwan. Dr. Lin and collaborators have developed a pressure-implicit-splitting-operator scheme based on the finite volume method. His work uses a non-local, non-Newtonian constitutive relation to capture the stress variation at the onset of flow and a flow-dependent constitutive model after the flow commences. He has used this scheme to predict steady, incompressible flows of granular material down an inclined flat surface for different flow depths and inclination angles. He is able to predict deposition layers, which prior studies have not been able to capture, and the results compare moderately well with discrete particle simulations.
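The local ingredient of such flow-dependent granular rheologies is typically the μ(I) friction law, in which the effective friction coefficient grows with the inertial number I. The sketch below illustrates that relation with commonly quoted glass-bead fitting parameters, used purely for illustration; it is not Dr. Lin's regularized non-local model, whose non-local correction is beyond this snippet:

```python
import numpy as np

def inertial_number(gamma_dot, p, d=1e-3, rho=2500.0):
    """Inertial number I = gamma_dot * d / sqrt(p / rho) for grain
    diameter d, confining pressure p, and grain density rho
    (illustrative default values)."""
    return gamma_dot * d / np.sqrt(p / rho)

def mu_of_I(I, mu_s=0.38, mu_2=0.64, I_0=0.279):
    """Local mu(I) friction law: effective friction rises from the
    quasi-static value mu_s toward the limiting value mu_2 as I grows.
    Parameter values are standard glass-bead fits, for illustration only."""
    return mu_s + (mu_2 - mu_s) / (1.0 + I_0 / I)

I = inertial_number(gamma_dot=10.0, p=1000.0)  # a slow, dense flow
mu = mu_of_I(I)                                # effective friction
```

The law interpolates smoothly between a solid-like yield threshold (μ → μ_s as I → 0) and a rapidly sheared limit (μ → μ_2), which is precisely the solid-to-fluid transition regime the proposal targets.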
At Caltech, we would use his computational framework to consider other constitutive models and flow-dependent boundary conditions; we are also interested in considering compressible flows. To do so, we would introduce jump interface conditions, which could more accurately capture the evolving solid-liquid transition. Through this new collaboration, we anticipate the development of an unsteady flow solver that can tackle complex geotechnical problems and provide predictive capability for flow initiation, flow deposition rates and run-out distances.
[1] G. Lorenzini & N. Mazza, Debris Flow Phenomenology and Rheological Modeling, WIT Press (2004).
[2] R.M. Iverson, J. Geophysical Research, 117, F03006, doi:10.1029/2011JF002189 (2012).
[3] Y. Forterre & O. Pouliquen, Annual Review of Fluid Mechanics, 40: 1-24 (2008).
[4] J.P. Prancevic & M.P. Lamb, J. Geophysical Research: Earth Surface, 120, doi:10.1002/2014JF003323 (2015).
[5] C.C. Lin & F.L. Yang, Continuum simulation for regularized non-local μ(I) model of dense granular flows, under review.
Laboratory Tsunamis: How Super-Shear Ruptures Affect Tsunamigenesis
(Rosakis)
The existence of super-shear ruptures (ruptures with speeds exceeding the shear wave speed) on interfaces and faults was first discovered experimentally by the Rosakis (EAS)/Kanamori (GPS) group in the 1990s. Since then, a great number of super-shear earthquakes, both on land and underwater, have been reported on thrust and strike-slip faults worldwide. We propose to construct a highly instrumented facility to study the effects of super-shear earthquakes on tsunamigenesis.
It is commonly believed that destructive tsunamis are generated by earthquakes occurring on thrust faults and subduction zones, such as the 2004 Mw = 9.0 Sumatra, Indonesia and the 2011 Mw = 9.0 Tōhoku-Oki, Japan events. Current theories associated with tsunami generation assume a uniform source (step displacement) at the bottom of the ocean and do not model the complex, transient, non-uniform seafloor displacements and velocities which strongly depend on rupture speed. Furthermore, the often substantial seafloor velocities are completely omitted in the models. In particular, models do not include the possibility of super-shear rupture speeds and the associated, long-range, ground shaking effects of the resulting shock waves. Indeed, one of the biggest unknowns in tsunami generation is how rupture speeds (e.g. super-shear or sub-Rayleigh) affect the seafloor motion and the resulting water wave excitation and coastal run-up. Recently, the catastrophic 2018 tsunami in Palu Bay, Indonesia was a result of the Mw = 7.5 Sulawesi earthquake, which surprisingly occurred on a strike-slip rather than a thrust fault, mystifying tsunami modelers around the world [Fig. 1].
Fig. 1: Can an underwater super-shear earthquake in a strike-slip fault configuration generate a tsunami?
In the past few months, our group has used modeling together with near-fault GPS stations to conclusively demonstrate that the Palu earthquake rupture was super-shear and to investigate possible mechanisms of tsunamigenesis.
We propose to construct a new facility which would enable us to germinate the new field of "laboratory tsunamis." This facility will enable us to experimentally investigate: 1) the effect of rupture speed on highly non-uniform, transient seafloor motions and tsunamigenesis for both thrust and strike-slip fault configurations, 2) our working hypothesis that despite conventional wisdom, super-shear earthquakes along strike-slip faults are also able to cause major tsunamis (e.g. Palu Bay), and 3) the effect of non-uniform bathymetry, in particular of narrow bays on tsunami amplification.
This facility will enable us to study the coupled fluid-solid interaction in a highly instrumented and controlled environment. In order to directly measure the full seafloor ground motion profile and to address the above questions, our opto-mechanical capabilities must include a 2-camera imaging system for stereo-vision measurements and a separate camera for fluid visualization. We currently employ a single high-speed camera (a monoscopic system) to measure the in-plane components of surface motions in scaled laboratory earthquakes using digital image correlation (DIC). Examples of the estimated out-of-plane velocities associated with super-shear and sub-Rayleigh ruptures are shown in [Fig. 2]. These out-of-plane fields are computed from in-plane velocities assuming plane-stress conditions, and are thus approximate. While this assumption allows an estimate of the out-of-plane velocity from the in-plane components, it is not strictly valid, since a 3D state of stress develops through the fault depth. However, these preliminary fields clearly show that the out-of-plane velocity of super-shear ruptures can be an order of magnitude larger than that of their sub-Rayleigh counterparts, with important implications for tsunamigenesis.
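The rupture-speed regimes referred to here are defined relative to the wave speeds of the surrounding material. The sketch below uses a standard approximate formula for the Rayleigh speed and illustrative polymer-like elastic constants (our assumed values, not those of any specific test material); it is only a classification helper, not part of the DIC analysis:

```python
import math

def wave_speeds(E, nu, rho):
    """Shear and plane-stress longitudinal wave speeds from Young's
    modulus E, Poisson's ratio nu, and density rho."""
    G = E / (2.0 * (1.0 + nu))
    c_s = math.sqrt(G / rho)
    c_p = math.sqrt(E / (rho * (1.0 - nu ** 2)))  # plane-stress value
    return c_s, c_p

def rayleigh_speed(c_s, nu):
    """Viktorov-type approximation c_R ~ c_s (0.862 + 1.14 nu) / (1 + nu)."""
    return c_s * (0.862 + 1.14 * nu) / (1.0 + nu)

def classify_rupture(v, c_s, c_r):
    """Label a rupture speed v relative to the Rayleigh and shear speeds."""
    if v < c_r:
        return "sub-Rayleigh"
    elif v <= c_s:
        return "forbidden (between c_R and c_s)"
    return "super-shear"

# Illustrative polymer-like constants (not a specific test material).
c_s, c_p = wave_speeds(E=4.0e9, nu=0.35, rho=1200.0)
c_r = rayleigh_speed(c_s, 0.35)
regime = classify_rupture(1.5 * c_s, c_s, c_r)
```

Steady ruptures propagate either below the Rayleigh speed or above the shear speed; the band in between is energetically forbidden, which is why the two regimes compared in Fig. 2 are the natural end members.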
The full characterization of the spatiotemporal nature of the solid response is critical when fluids are introduced to the system; the traveling rupture footprint is a morphing control surface that drives the fluid-solid interface mechanics. Knowledge of the fluid-solid interfacial mechanics is a first step to assess earthquake tsunamigenesis.
Fig. 2: Estimated out-of-plane seafloor velocity fields for a super-shear and a sub-Rayleigh rupture in a strike-slip fault, obtained with monoscopic techniques. Direct measurements require dynamic stereo-vision.
The test section design merges the fluid and solid cells into a self-contained mobile unit [Fig. 3]. The test section is placed into a pressing machine, aligning with the loading axis by mating with the press. The applied load is resolved along the fault plane (gray) into normal and shear stresses, modeling strike-slip (A) or thrust-fault (B) geometries depending on the plate arrangement. An articulated alignment mechanism can accommodate any right rectangular solid with a maximum dimension of 400 mm in any desired orientation by rotating about the loading axis. A laboratory earthquake is generated in a surrogate material via fluid injection
Fig. 3: Strike-slip (A) and thrust (B) fault geometries join a fluid cell in a self-contained mobile test section and environmental chamber.
or other methods at the interface (star). The ground motions are transferred to the fluid cell; appropriate selection of fluid(s), fluid depth, and bay topography can simulate laboratory tsunamis based on similitude principles and field data from across the globe (e.g. the San Francisco Bay area, Tomales Bay, and Mendocino Bay in northern California). The solid cell can also be configured as a multi-layered laminate with a compliant layer on top, to better model the sediments on the seafloor. The test section contains five viewports for maximum visibility into the chamber. The horizontal press must be customized to match the viewports of the test section. The additional viewports allow measurements in transmission along both horizontal and vertical axes, enabling dynamic, multi-technique measurements that track the fluid response at various locations in real time: the seafloor-fluid interface during rupture growth, the generation of fluid surface waves directly above the growing earthquake, and the amplification of waves and run-up at the simulated shore. By closing the lateral doors and adding the top plate, the environmental chamber can be either pressurized to conduct high-pressure, multi-fluid experiments or evacuated for truly dry experiments in vacuum. Temperature control will also be implemented. Controlling these parameters will help in 1) implementing the necessary scaling and 2) enhancing the observational accuracy.
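For free-surface tsunami flows, the similitude principles invoked above are essentially Froude scaling: keeping Fr = U/√(gL) equal between field and model fixes how velocities and times scale with the geometric ratio. A minimal sketch with illustrative numbers (a 1 km deep bay mapped to a 0.1 m deep tank; these are our assumed values, not the facility's design figures):

```python
import math

def froude_number(U, L, g=9.81):
    """Froude number Fr = U / sqrt(g * L) for a free-surface flow."""
    return U / math.sqrt(g * L)

def model_scales(length_ratio):
    """Froude similitude: with geometric scale lambda = L_model / L_field,
    velocities and times both scale as sqrt(lambda)."""
    return {"velocity": math.sqrt(length_ratio),
            "time": math.sqrt(length_ratio)}

# Illustrative numbers (not design figures): a 1 km deep bay in a 0.1 m tank.
lam = 0.1 / 1000.0                         # geometric scale, 1:10,000
s = model_scales(lam)
field_speed = math.sqrt(9.81 * 1000.0)     # long-wave tsunami speed sqrt(g h)
model_speed = field_speed * s["velocity"]  # matching speed in the tank
```

A useful consistency check: the scaled-down field speed equals the long-wave speed √(gh) of the shallow tank itself, confirming that Froude scaling preserves the wave dynamics.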
The diagnostic manifold is the structure that ties the entire facility together [Fig. 4]. The structure is designed for versatility of diagnostic placement. It is inspired by filmography, specifically virtual reality imaging. The traditional laboratory arrangement is replaced with a free-form rail system surrounding the test section that can accommodate any perspective necessary to achieve unique arrangements of multiple diagnostics at a fast pace. This manifold can be thought of as a sphere surrounding the test section. Various segments of the manifold, e.g. curved rails for meridian and horizon placement, straight sections, etc., can be assembled to provide the desired structure for a given test. The structure is also scalable to fit any laboratory.
Fig. 4: The proposed "laboratory tsunamis" facility consists of 1) the horizontal press, 2) the test section, and 3) the diagnostics manifold.
The envisioned multi-diagnostic facility will be the first of its kind, capable of coupling realistic seafloor excitation resulting from a miniature earthquake to a fluid in a fully scaled system. It will further provide a turnkey solution for scaling down and studying tsunami hazards around the world and at home. Indeed, close to home there exist multiple narrow-bay configurations (e.g. Mendocino and Tomales bays), traversed by long strike-slip or other faults, creating tsunami hazards similar to Palu [Fig. 5].
Fig. 5: Narrow bay tsunami hazards closer to home: Tomales Bay is traversed by the San Andreas Fault north of San Francisco.
- Amlani, F., Bhat, H.S., Simons, W.J.F., Schubnel, A., Vigny, C., Rosakis, A.J., Efendi, J., Elbanna, A., Abidin, H.Z., Supershear tsunamis: insights from the Mw 7.5 Palu earthquake, under review in Nature Geoscience.
- Rosakis, A.J., Rubino, V., Lapusta, N., Recent milestones in unraveling the full-field structure of dynamic shear cracks and fault ruptures in real time: from photoelasticity to ultrahigh-speed digital image correlation, to appear in the Journal of Applied Mechanics, February 2020.
The envisioned facility will be augmented by a number of optical diagnostics, including high-speed and low-speed cameras, laser velocimeters, and light sources capable of visualizing both the seafloor and fluid motions during a simulated earthquake/tsunami event. Most of these instruments are currently available in the Rosakis lab, and the remainder will be rented or borrowed as necessary.
Ground Motion Synthesis with Generative Adversarial Networks
Problem statement: The great promise of physics-based earthquake simulations is to provide ground motion time-series that will be useful for hazard and risk assessment applications in engineering. Scenario-based earthquake simulations have been made possible by the prevalence of supercomputing resources and parallel/GPU programming, but simulations for engineering applications have yet to overcome computational constraints, and to reduce prohibitively large uncertainties associated with available models of source physics, geology, rheology and anelastic attenuation in the crust. The state of engineering practice for more than a decade has thus relied upon statistical models of recorded ground motions to generate time-series for high-risk projects and critical infrastructure. Increased availability of recorded data has led to increasingly complex statistical models, referred to as Ground Motion Prediction Equations (GMPEs), which, when fed a given earthquake magnitude and distance from the source, provide ground motion intensity proxies known as response spectra. Alas, time-series cannot be recovered from response spectra, so to overcome this obstacle, the engineering community has developed a set of increasingly complex empirical rules which, based on representative shapes of real seismograms, modulate the frequency and amplitude of random noise to 'match' the GMPE-derived response spectra. The range of uncertainties associated with this route to ground motion time-series is hard to overstate.
Since 2012, there have been major advances in the field of artificial intelligence, most notably in the area of deep learning. In particular, a type of unsupervised learning algorithm called generative adversarial networks (GANs) has shown exceptional ability to learn the probability distributions of high-dimensional variables (e.g. images or time series) from large datasets. GANs were originally developed in the field of computer vision to synthesize realistic-looking images on demand (e.g. faces, handwritten numbers, or landscapes), and rapidly became state of the art for a variety of tasks. For example, given a training dataset consisting of images of faces, GANs can learn a probability distribution such that any of the images in the training set could be sampled from it, without ever specifying the functional form of this distribution. Once learned, every sample drawn from the distribution is a synthesized image that is intended to be sufficiently realistic that a second neural network cannot tell whether it is real or fake. It is this property that makes GANs desirable for engineering seismological applications, as they can learn the distribution of strong-ground motion records produced by large earthquakes.
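The adversarial game behind this property can be stated compactly: the discriminator is trained to tell real samples from generated ones, while the generator is trained to fool it. The sketch below shows the standard (non-saturating) GAN losses given discriminator outputs; it illustrates the vanilla formulation only, not the conditional Wasserstein variant, and the function names are ours:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy form of the GAN discriminator objective:
    push D(x) toward 1 on real samples and D(G(z)) toward 0 on fakes.
    Inputs are discriminator outputs in (0, 1)."""
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator objective: push D(G(z)) toward 1."""
    return -np.mean(np.log(d_fake))

# At the ideal equilibrium the discriminator outputs 0.5 everywhere,
# i.e. it genuinely cannot tell real from fake.
d_eq = np.full(4, 0.5)
loss_eq = discriminator_loss(d_eq, d_eq)  # equals 2 ln 2 at equilibrium
```

Training alternates gradient steps on the two losses; the "realistic enough to fool a second network" criterion in the text corresponds exactly to the discriminator being driven toward this 0.5 equilibrium.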
Here, we propose to develop a framework for synthesizing ground motions for earthquake engineering applications using GANs. Our method offers several distinct advantages over traditional approaches and provides a fundamentally different perspective on ground motion characterization. By learning the distribution of strong-ground-motion records produced by many past large earthquakes – the same records used to calibrate the parametric structure of GMPEs, only to empirically deconstruct it to approximate time-series – GANs can be employed to generate realistic-looking 3-component seismograms on demand that are conditioned on attributes of the earthquake source, path and site response. Thus, rather than predicting response spectra, GANs can directly synthesize seismograms that encompass the statistics of ground motion characteristics and the physics of source, path and site effects embedded in ground motion time-series. Since they are probabilistic models, seismograms can be generated without limit, and then used for any engineering application in which one- or 3-component time-series are necessary. We foresee that GANs offer the potential to dramatically change the way that ground motions are modeled for hazard assessment and mitigation.
2. Previous results
The idea for the proposed research originated as a class project in CS 101a (Projects in Machine Learning, taught by Yisong Yue). We supervised a trio of computer science undergraduates who implemented GANs for 3-component accelerograms. The main goal of the project was to see if GANs were capable of learning the rich statistical tendencies of a dataset consisting of >100,000 seismograms. We tested conditional Wasserstein and progressive GANs, in which the seismograms generated are conditional on a user-prescribed earthquake magnitude and distance from the source. GANs are notoriously difficult to train properly, and after testing a variety of models and tricks to improve convergence, we identified a model architecture that was able to successfully generate realistic waveforms for these very simple input conditions. Results demonstrate that the synthesized seismograms exhibit all key features of true earthquake motions, and motivate more comprehensive research on these models to examine the different ways in which they can be used to reduce seismic risk and promote earthquake resilience.
3. Proposed research
Our research plan will focus on three primary components for the coming year: (1) generalization of the method to continuous-valued features, (2) expansion of the approach to incorporate additional site characteristics as input features, and (3) application and evaluation of the generated seismograms in fundamental structural and geotechnical applications.
Our proof-of-concept GAN project was performed by grouping seismograms into magnitude-distance bins and treating each bin as a categorical input. We found that this helped to improve convergence of the GANs during training, compared with treating magnitude and distance as continuous-valued features. The binned features led to better convergence because they helped alleviate issues arising from the large imbalance in the distribution of records as a function of magnitude and distance. While usable by the engineering community in binned form, models that take continuous-valued features are much more desirable. We will revisit this issue by testing several training strategies, such as curriculum learning, which rearranges the order of the training dataset to make learning more efficient.
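The binning described above amounts to mapping each record's (magnitude, distance) pair to a single categorical label. A minimal sketch with illustrative bin edges (the actual edges used in the project are not reproduced here):

```python
import numpy as np

def bin_records(magnitudes, distances_km, mag_edges, dist_edges):
    """Map each record's (magnitude, distance) pair to one categorical
    bin label. Edges split each axis into len(edges) + 1 intervals;
    the pair of interval indices is flattened into a single integer."""
    m_idx = np.digitize(magnitudes, mag_edges)     # 0 .. len(mag_edges)
    d_idx = np.digitize(distances_km, dist_edges)  # 0 .. len(dist_edges)
    n_dist_bins = len(dist_edges) + 1
    return m_idx * n_dist_bins + d_idx             # one integer per record

# Three hypothetical records, one per magnitude-distance regime.
mags = np.array([5.2, 6.7, 7.9])
dists = np.array([12.0, 80.0, 250.0])
labels = bin_records(mags, dists,
                     mag_edges=[6.0, 7.0], dist_edges=[50.0, 150.0])
```

Each integer label can then be one-hot encoded and concatenated to the generator's noise input, which is the usual way a categorical condition enters a conditional GAN.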
Our current model can only generate seismograms conditional on a given magnitude and distance from the earthquake. However, the geotechnical characteristics of the site on which a building is to be constructed have a huge influence on the shaking, as sites underlain by soft soils will shake very differently from those sitting on hard rock. Indeed, many of the terms in GMPEs are designed to capture these variations. We will modify the architecture of our network to include the average shear-wave velocity over the top 30 meters and site response information derived from the ratio of horizontal to vertical spectra (H/V). Our approach could move beyond the limitations of GMPEs, which focus on certain values extracted from the H/V spectra, by taking – for example – the entire H/V spectrum as input. This would allow for much more sophisticated site characteristics to be captured by our ground motion models.
Lastly, we will assess how well the synthetically generated seismograms capture the statistics of intensity measures widely used in engineering design practice. Examples include the distribution of peak ground acceleration (a quasi-static proxy of the forces applied to a building during an earthquake), duration (essential for the prediction of ground failure such as liquefaction or landslides), peak ground velocity (a proxy of ground strain and thus of soil nonlinearity) and possibly their inelastic spectral response, as a simplified approximation of the damage that these time series would induce on a single-story elastoplastic frame structure. All of the above will provide more concrete evidence of the promise of our proposed approach, and will serve as a springboard for us to seek funding from local and federal agencies.
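The intensity measures listed above are simple functionals of the acceleration time series. The sketch below computes PGA, PGV, and the 5-95% Arias-intensity significant duration (one common definition of duration among several, chosen here for illustration):

```python
import numpy as np

def intensity_measures(acc, dt):
    """Scalar intensity measures from an acceleration record sampled
    at interval dt: peak ground acceleration, peak ground velocity
    (trapezoidal integration of acceleration), and the 5-95%
    Arias-intensity significant duration."""
    pga = np.max(np.abs(acc))
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * dt)))
    pgv = np.max(np.abs(vel))
    arias = np.cumsum(acc ** 2) * dt      # proportional to Arias intensity
    arias = arias / arias[-1]             # normalized cumulative energy
    t = np.arange(len(acc)) * dt
    d5_95 = t[np.searchsorted(arias, 0.95)] - t[np.searchsorted(arias, 0.05)]
    return pga, pgv, d5_95

# Synthetic 1 Hz, 10 s test signal in place of a real or generated record.
t = np.arange(0.0, 10.0, 0.01)
pga, pgv, d5_95 = intensity_measures(np.sin(2.0 * np.pi * t), dt=0.01)
```

Applying the same functionals to recorded and GAN-generated suites and comparing the resulting distributions is exactly the kind of evaluation proposed above.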