NSU Researchers Train Neural Networks to Restore Earth's Crust

Neural networks can accelerate existing methods for detecting cracks and rock formations in the Earth's crust. These calculations are currently a long, complex, and computationally intensive process. Artificial intelligence technologies can help solve this problem much faster and produce an image that geoscientists can interpret. First, however, the neural network model needs to be trained to perform these calculations. This is what researchers from the Laboratory for Streaming Data Analytics and Machine Learning at the NSU Department of Mechanics and Mathematics are doing now.

In June, the researchers presented the first results of the neural network training at an international conference on computational science and computer applications, held online in Athens. The paper was later published on SpringerLink. Now the project participants are preparing to publish new results. Recently, they made significant progress in teaching neural networks to "see" through the layers of the Earth's crust.

Laboratory Head Evgeny Pavlovsky described the work:

We are doing this work together with colleagues from the A.A. Trofimuk Institute of Petroleum Geology and Geophysics (IPGG) as part of a grant from the Russian Science Foundation. They contacted us and suggested we join their project. The proposal was interesting because it is yet another application of machine learning. It is clear that neural network models have become a researcher's tool that solves problems at a new level, much as the microscope once did.

IPGG provided the necessary seismic data for this work, as well as the mathematical models they computed, which serve as the correct answers. This approach of generating synthetic training data from a model is called surrogate modeling.
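The article does not show the researchers' code; the idea of surrogate modeling can be sketched in a few lines of numpy. Here `forward_model` is a hypothetical stand-in for the real physics simulation: it maps a known crack mask to synthetic sensor readings, and each (readings, mask) pair becomes a training example with a known-correct answer.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_model(crack_mask):
    """Hypothetical stand-in for the physics simulation: maps a subsurface
    crack mask (depth x position) to synthetic surface sensor readings.
    Here this is just a depth-weighted sum, not real seismic physics."""
    depth_decay = np.exp(-np.arange(crack_mask.shape[0]) / 8.0)
    return (crack_mask * depth_decay[:, None]).sum(axis=0)

# Build (input, label) pairs: the simulated sensor readings are the
# network's input; the crack mask that produced them is the ground truth.
dataset = []
for _ in range(100):
    mask = (rng.random((16, 32)) < 0.05).astype(float)  # sparse cracks
    dataset.append((forward_model(mask), mask))

print(len(dataset), dataset[0][0].shape, dataset[0][1].shape)
# → 100 (32,) (16, 32)
```

A network trained on such pairs learns the inverse mapping, from sensor readings back to the subsurface picture, which is exactly what the model answers from IPGG make possible.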

Pavlovsky explained:

We knew where the cracks were and what rocks lay at a certain depth. It remained to enter the sensor data, see what answers the neural network produces as a result of machine learning, and evaluate how well they match the "correct" answers, that is, the model answers.

To train the neural network model, the researchers conducted many different experiments. As a result, the UNet architecture and the AdamW optimizer were chosen. With their help, it was possible to restore the cracks in the model. However, the Dice score on the test set was only 30%. According to Pavlovsky:
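The Dice metric mentioned here measures the overlap between the predicted crack mask and the ground-truth mask. A minimal numpy implementation on a toy 4x4 example (not the researchers' code) shows how it is computed:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks (1 = crack pixel):
    2 * |intersection| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy masks: the prediction finds 2 of the 3 true crack pixels.
truth = np.zeros((4, 4), dtype=int)
truth[1, 1:4] = 1            # 3 crack pixels
pred = np.zeros((4, 4), dtype=int)
pred[1, 2:4] = 1             # 2 crack pixels, both correct
print(round(dice_score(pred, truth), 2))  # → 0.8
```

A Dice score of 30% thus means the predicted and true crack maps overlap only modestly, which is why the team kept experimenting to push the figure higher.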

This is a very good result. We published it in a recent article. But the percentage of matches between the real results and the neural network's calculations can be increased. So we continued our experiments, and this figure is now 65%. Two more articles have been written about this and will be published in scientific journals in the near future. Next, we intend to reach 80%, and soon we will switch from model data to real data.

Pavlovsky noted that the transition from model data to real data will be very difficult. The model does not account for many factors that arise in a real situation, and the neural network is quite dependent on the specifics of the data on which it was trained.

The first attempts to work with real data were not very successful. The researchers added new augmentations to the finished models: plausible noise, displacement, and signal mixing. The image generated by the neural network turned out to be lower quality than that obtained on surrogate models, but this was only the first step in the transition to real data. It will require a search for non-trivial approaches to training the neural network, along with many experiments and calculations. Pavlovsky clarified that they are using the computing resources of his laboratory, but in the future he may have to use the university's supercomputer.
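The augmentations named above, plausible noise, displacement, and signal mixing, are standard ways of making synthetic training data look more like real sensor recordings. A minimal numpy sketch of what such transforms might look like (the exact parameters and forms are assumptions, not the researchers' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(trace, sigma=0.05):
    """Additive Gaussian noise at a plausible sensor-noise level."""
    return trace + rng.normal(0.0, sigma, size=trace.shape)

def shift(trace, k):
    """Displace the trace by k samples, zero-padding the vacated edge."""
    out = np.zeros_like(trace)
    if k >= 0:
        out[k:] = trace[:len(trace) - k]
    else:
        out[:k] = trace[-k:]
    return out

def mix(a, b, alpha=0.7):
    """Blend two traces to mimic overlapping signals."""
    return alpha * a + (1.0 - alpha) * b

# Apply all three to a synthetic trace before it is fed to the network.
trace = np.sin(np.linspace(0, 4 * np.pi, 256))
augmented = mix(shift(add_noise(trace), 3), trace)
print(augmented.shape)  # → (256,)
```

The point of such transforms is to narrow the gap between clean surrogate data and messy field data, so the trained network degrades less when it first sees real recordings.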