Countless dollars and entire scientific careers have been dedicated to predicting where and when the next big earthquake will strike. But unlike weather forecasting, which has significantly improved with the use of better satellites and more powerful mathematical models, earthquake prediction has been marred by repeated failure.
Some of the world’s most destructive earthquakes — China in 2008, Haiti in 2010 and Japan in 2011, among them — occurred in areas that seismic hazard maps had deemed relatively safe. The last large earthquake to strike Los Angeles, Northridge in 1994, occurred on a fault that did not appear on seismic maps.
Now, with the help of artificial intelligence, a growing number of scientists say changes in the way they can analyze massive amounts of seismic data can help them better understand earthquakes, anticipate how they will behave, and provide quicker and more accurate early warnings.
“I am actually hopeful for the first time in my career that we will make progress on this problem,” said Paul Johnson, a fellow at the Los Alamos National Laboratory who is among those at the forefront of this research.
Well aware of past earthquake prediction failures, scientists are cautious when asked how much progress they have made using A.I. Some in the field refer to prediction as “the P word,” because they do not even want to imply it is possible. But one important goal, they say, is to be able to provide reliable forecasts.
The earthquake probabilities that are provided on seismic hazard maps, for example, have crucial consequences, most notably in instructing engineers how they should construct buildings. Critics say these maps are remarkably inexact.
A map of Los Angeles lists the probability of an earthquake producing strong shaking within a given period of time — usually 50 years. That is based on a complex formula that takes into account, among other things, the distance from a fault, how fast one side of a fault is moving past the other, and the recurrence of earthquakes in the area.
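For a rough sense of how recurrence alone feeds into such a number, the sketch below uses a simplified Poisson assumption rather than the survey's actual multi-factor model, which also folds in fault distances, slip rates and ground-motion equations. The 135-year mean interval used as input is the San Andreas average discussed below; everything else is illustrative.

```python
# A toy illustration only, NOT the U.S. Geological Survey's hazard model.
# Under a simple Poisson assumption, the chance of at least one large quake
# in a window depends only on the fault's mean recurrence interval.
import math

def chance_in_window(mean_recurrence_years, window_years=50):
    """P(at least one event in the window) = 1 - exp(-window / mean recurrence)."""
    return 1.0 - math.exp(-window_years / mean_recurrence_years)

print(f"{chance_in_window(135):.0%}")  # roughly 31% over 50 years for a 135-year average
```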
A study led by Katherine M. Scharer, a geologist with the United States Geological Survey, estimated dates for nine previous earthquakes along the Southern California portion of the San Andreas fault dating back to the eighth century. The last big earthquake on the San Andreas was in 1857.
Since the average interval between these big earthquakes was 135 years, a common interpretation is that Southern California is due for a big earthquake. Yet the intervals between earthquakes are so varied — ranging from 44 years to 305 years — that taking the average is not a very useful prediction tool. A big earthquake could come tomorrow, or it could come in a century and a half or more.
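The individual intervals are not listed in this article, so the short sketch below uses hypothetical values chosen only to match the reported range of 44 to 305 years and a roughly 135-year average. It illustrates why the spread, not the mean, is the problem.

```python
# Hypothetical intervals (illustrative only); the study's actual dates are not given here.
intervals = [44, 55, 75, 100, 120, 145, 170, 200, 305]

mean = sum(intervals) / len(intervals)
variance = sum((x - mean) ** 2 for x in intervals) / len(intervals)

print(f"mean = {mean:.0f} yr, range = {max(intervals) - min(intervals)} yr, "
      f"std = {variance ** 0.5:.0f} yr")
# The standard deviation is more than half the mean itself, which is why
# "due for an earthquake" reads too much into a simple average.
```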
This is one of the criticisms raised by Philip Stark, an associate dean of the Division of Mathematical and Physical Sciences at the University of California, Berkeley. Dr. Stark describes the overall system of earthquake probabilities as “somewhere between meaningless and misleading” and has called for it to be scrapped.
The new A.I.-related earthquake research is leaning on neural networks, the same technology that has accelerated the progress of everything from talking digital assistants to driverless cars. Loosely modeled on the web of neurons in the human brain, a neural network is a complex mathematical system that can learn tasks on its own.
Scientists say seismic data is remarkably similar to the audio data that companies like Google and Amazon use in training neural networks to recognize spoken commands on coffee-table digital assistants like Alexa. When studying earthquakes, it is the computer that looks for patterns in mountains of data, rather than the weary eyes of a scientist.
“Rather than a sequence of words, we have a sequence of ground-motion measurements,” said Zachary Ross, a researcher in the California Institute of Technology’s Seismological Laboratory who is exploring these A.I. techniques. “We are looking for the same kinds of patterns in this data.”
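A minimal sketch of that idea, in the spirit of the speech-recognition comparison rather than any specific model used at Caltech or Los Alamos: a small one-dimensional convolutional network that takes windows of three-component ground-motion samples and learns to label them, for example as earthquake or noise. The layer sizes, window length and labels here are all illustrative assumptions.

```python
# A minimal sketch (not any research group's actual model) of treating a window
# of ground-motion samples like an audio clip and letting a small 1-D
# convolutional network learn to label it, e.g. "earthquake" vs. "noise".
import torch
import torch.nn as nn

class WaveformClassifier(nn.Module):
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        # Three seismometer components (vertical, north, east) enter as channels,
        # much as audio channels would for a speech model.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one summary per filter
        )
        self.classify = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, channels, samples)
        return self.classify(self.features(x).squeeze(-1))

# Hypothetical input: 8 three-component windows of 400 samples each
# (4 seconds at 100 Hz), with made-up labels for illustration only.
model = WaveformClassifier()
windows = torch.randn(8, 3, 400)
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(windows), labels)
loss.backward()   # one gradient step; a real model would loop over labeled data
```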