Abstract:
In seismology, training a task-specific deep learning model for each task is common, but this practice faces challenges such as the scarcity of labeled data and limited regional generalization. To address these issues, we introduce SeisCLIP: a foundation model for seismology that leverages contrastive learning during pretraining on multimodal data consisting of seismic waveform spectra and the corresponding local and global event information. SeisCLIP consists of a transformer-based spectrum encoder and a multilayer perceptron (MLP)-based information encoder that are jointly pretrained on massive data. During pretraining, contrastive learning enhances the learned representations by training the two encoders to bring corresponding waveform spectra and event information closer in the feature space while pushing uncorrelated pairs apart. Remarkably, the pretrained spectrum encoder offers versatile features, enabling its application across diverse tasks and regions; it therefore requires only modest datasets for fine-tuning to specific downstream tasks. Our evaluations demonstrate SeisCLIP's superior performance over baseline methods in tasks such as event classification, localization, and focal mechanism analysis, even when using distinct datasets from various regions. In essence, SeisCLIP emerges as a promising foundation model for seismology, potentially revolutionizing foundation-model-based research in the domain.
Published in: IEEE Transactions on Geoscience and Remote Sensing ( Volume: 62)
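The symmetric contrastive objective described in the abstract can be sketched as follows. This is a minimal illustration of a CLIP-style InfoNCE loss over batches of paired embeddings, not the authors' implementation; the function name, the NumPy formulation, and the temperature value are assumptions for illustration.

```python
import numpy as np

def clip_contrastive_loss(spec_emb, info_emb, temperature=0.07):
    """CLIP-style symmetric contrastive loss (illustrative sketch).

    spec_emb, info_emb: (N, D) arrays of batch embeddings from the
    spectrum encoder and the event-information encoder; row i of each
    array is a matched pair. Matched pairs are pulled together in the
    feature space, mismatched pairs pushed apart.
    """
    # L2-normalize so the dot product is cosine similarity
    s = spec_emb / np.linalg.norm(spec_emb, axis=1, keepdims=True)
    e = info_emb / np.linalg.norm(info_emb, axis=1, keepdims=True)
    logits = s @ e.T / temperature  # (N, N); diagonal = matched pairs
    n = len(logits)

    def cross_entropy(l):
        # softmax cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average the spectrum->info and info->spectrum directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss drives the similarity matrix toward having its maxima on the diagonal, which is what trains the two encoders to map corresponding spectra and event information close together.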