Research

Developing methods for understanding model behavior

Data efficiency and extrapolation trends in neural network interatomic potentials

Recently, key architectural advances have been proposed for neural network interatomic potentials (NNIPs), such as incorporating message-passing networks, equivariance, or many-body expansion terms. Although modern NNIP models exhibit only small differences in test accuracy, this metric remains the main target when developing new NNIP architectures. In this work, we show how architectural and optimization choices influence the generalization of NNIPs, revealing trends in molecular dynamics (MD) stability, data efficiency, and loss landscapes. Using the 3BPA dataset, we uncover trends in NNIP errors and robustness to noise, showing that these metrics are insufficient to predict MD stability in the high-accuracy regime. With a large-scale study of the NequIP and MACE models and their optimizers, we show that our metric of loss entropy predicts out-of-distribution error and data efficiency despite being computed only on the training set. This work provides a deep learning justification for probing extrapolation and can inform the development of next-generation NNIPs.

Citation: Joshua A. Vita, Daniel Schwalbe-Koda, "Data efficiency and extrapolation trends in neural network interatomic potentials", Machine Learning: Science and Technology (2023), https://iopscience.iop.org/article/10.1088/2632-2153/acf115.
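
As a rough illustration of the kind of training-set-only loss-landscape probe discussed above, the sketch below estimates how quickly the training loss grows when trained weights are perturbed with Gaussian noise. It is a simplified stand-in, not the loss-entropy metric defined in the paper; the toy model, data, and noise scale are all placeholder assumptions.

```python
# Simplified, hypothetical flatness probe: average training-loss increase under
# random weight perturbations around a trained minimum. NOT the paper's loss entropy.
import torch
import torch.nn as nn
import torch.nn.functional as F

def perturbation_sensitivity(model, inputs, targets, sigma=1e-2, n_samples=20):
    """Mean increase of the training loss when weights are perturbed with noise of scale sigma."""
    base_loss = F.mse_loss(model(inputs), targets).item()
    originals = [p.detach().clone() for p in model.parameters()]
    increases = []
    with torch.no_grad():
        for _ in range(n_samples):
            for p, p0 in zip(model.parameters(), originals):
                p.copy_(p0 + sigma * torch.randn_like(p0))   # perturb around the minimum
            increases.append(F.mse_loss(model(inputs), targets).item() - base_loss)
        for p, p0 in zip(model.parameters(), originals):      # restore the trained weights
            p.copy_(p0)
    return sum(increases) / n_samples

# Toy usage with a stand-in regressor (a real study would use a trained NNIP and a training batch).
model = nn.Sequential(nn.Linear(8, 32), nn.SiLU(), nn.Linear(32, 1))
x, y = torch.randn(64, 8), torch.randn(64, 1)
print(perturbation_sensitivity(model, x, y))
```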

Designing interpretable machine learning interatomic potentials

Spline-based neural network interatomic potentials: blending classical and machine learning models

While machine learning (ML) interatomic potentials (IPs) are able to achieve accuracies nearing the level of noise inherent in the first-principles data on which they are trained, it remains to be shown whether their increased complexity is strictly necessary for constructing high-quality IPs. In this work, we introduce a new MLIP framework which blends the simplicity of spline-based MEAM (s-MEAM) potentials with the flexibility of a neural network (NN) architecture. The proposed framework, which we call the spline-based neural network potential (s-NNP), is a simplified version of the traditional NNP that can be used to describe complex datasets in a computationally efficient manner. We demonstrate how this framework can be used to probe the boundary between classical and ML IPs, highlighting the benefits of key architectural changes. Furthermore, we show that using spline filters to encode atomic environments results in a readily interpreted embedding layer which can be coupled with modifications to the NN to incorporate expected physical behaviors and improve overall interpretability. Finally, we test the flexibility of the spline filters, observing that they can be shared across multiple chemical systems to provide a convenient reference point from which to begin performing cross-system analyses.

Citation: Joshua A. Vita, Dallas R. Trinkle, "Spline-based neural network interatomic potentials: blending classical and machine learning models", arXiv (2023), https://arxiv.org/abs/2310.02904.
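
To make the idea of radial filters acting as an interpretable embedding concrete, here is a minimal, hypothetical PyTorch sketch: learnable filters defined by values on a knot grid (a piecewise-linear stand-in for the cubic splines used in s-NNP) produce a per-atom descriptor that a small network maps to atomic energies. The layer sizes, cutoff, and neighbor-distance input format are placeholder assumptions, not the paper's implementation.

```python
# Schematic spline-style embedding: filters parameterized by values at radial knots,
# summed over neighbors to form a per-atom descriptor, then mapped to atomic energies.
import torch
import torch.nn as nn

class RadialFilterEmbedding(nn.Module):
    def __init__(self, n_knots=10, n_filters=4, r_cut=5.0):
        super().__init__()
        self.register_buffer("knots", torch.linspace(0.0, r_cut, n_knots))
        self.coeffs = nn.Parameter(torch.randn(n_filters, n_knots) * 0.1)  # filter values at the knots

    def forward(self, distances):
        # distances: (n_atoms, n_neighbors). Piecewise-linear "hat" interpolation of every
        # filter at each neighbor distance, summed over neighbors -> (n_atoms, n_filters).
        width = self.knots[1] - self.knots[0]
        hats = torch.clamp(1.0 - (distances.unsqueeze(-1) - self.knots).abs() / width, min=0.0)
        per_pair = hats @ self.coeffs.t()          # (n_atoms, n_neighbors, n_filters)
        return per_pair.sum(dim=1)

class ToySplineStyleNNP(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = RadialFilterEmbedding()
        self.head = nn.Sequential(nn.Linear(4, 16), nn.SiLU(), nn.Linear(16, 1))

    def forward(self, distances):
        return self.head(self.embed(distances)).sum()   # total energy = sum of atomic energies

# Toy usage: 6 atoms with 12 neighbor distances each (a real model would use a neighbor list).
print(ToySplineStyleNNP()(torch.rand(6, 12) * 5.0))
```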

Exploring the necessary complexity of interatomic potentials

The application of machine learning models and algorithms towards describing atomic interactions has been a major area of interest in materials simulations in recent years, as machine learning interatomic potentials (MLIPs) are seen as being more flexible and accurate than their classical potential counterparts. This increase in accuracy of MLIPs over classical potentials has come at the cost of significantly increased complexity, leading to higher computational costs and lower physical interpretability, and spurring research into improving the speed and interpretability of MLIPs. As an alternative, in this work we leverage “machine learning” fitting databases and advanced optimization algorithms to fit a class of spline-based classical potentials, showing that they can be systematically improved in order to achieve accuracies comparable to those of low-complexity MLIPs. These results demonstrate that high model complexity may not be strictly necessary in order to achieve near-DFT accuracy in interatomic potentials, and they suggest an alternative route towards sampling the high-accuracy, low-complexity region of model space by starting with forms that promote simpler and more interpretable interatomic potentials.

Citation: Joshua A. Vita, Dallas R. Trinkle, "Exploring the necessary complexity of interatomic potentials", Computational Materials Science, Volume 200 (2021), 110752, https://doi.org/10.1016/j.commatsci.2021.110752.
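
The workflow described above can be illustrated, in spirit, with a toy fit: the knot values of a cubic-spline pair potential are optimized against reference energies using a generic least-squares routine. The synthetic Lennard-Jones "database", knot grid, and optimizer choice are assumptions for illustration only and do not reproduce the potentials or optimization algorithms used in the paper.

```python
# Toy sketch: fit the knot values of a cubic-spline pair potential to reference energies.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
knots = np.linspace(1.5, 5.0, 12)                              # radial knot positions (assumed units)
r_data = rng.uniform(1.8, 4.5, size=200)                       # pair distances in the fitting "database"
e_ref = 4.0 * ((1.0 / r_data) ** 12 - (1.0 / r_data) ** 6)     # synthetic Lennard-Jones reference energies

def residuals(knot_values):
    phi = CubicSpline(knots, knot_values)                      # spline pair potential phi(r)
    return phi(r_data) - e_ref

fit = least_squares(residuals, x0=np.zeros(len(knots)))
print("RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```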

Enabling open-source model development and benchmarking

ColabFit exchange: Open-access datasets for data-driven interatomic potentials

Data-driven interatomic potentials (IPs) trained on large collections of first-principles calculations are rapidly becoming essential tools in the fields of computational materials science and chemistry for performing atomic-scale simulations. Despite this, apart from a few notable exceptions, there is a distinct lack of well-organized, public datasets in common formats available for use with IP development. This deficiency precludes the research community from implementing widespread benchmarking, which is essential for gaining insight into model performance and transferability, and also limits the development of more general, or even universal, IPs. To address this issue, we introduce the ColabFit Exchange, the first database providing open access to a large collection of systematically organized datasets from multiple domains that is especially designed for IP development. The ColabFit Exchange is publicly available at https://colabfit.org, providing a web-based interface for exploring, downloading, and contributing datasets. Composed of data collected from the literature or provided by community researchers, the ColabFit Exchange currently (September 2023) consists of 139 datasets spanning nearly 70 000 unique chemistries, and is intended to continuously grow. In addition to outlining the software framework used for constructing and accessing the ColabFit Exchange, we also provide analyses of the data, quantifying the diversity of the database and proposing metrics for assessing the relative diversity of multiple datasets. Finally, we demonstrate an end-to-end IP development pipeline, utilizing datasets from the ColabFit Exchange, fitting tools from the KLIFF software package, and validation tests provided by the OpenKIM framework.

Citation: Joshua A. Vita, Eric G. Fuemmeler, Amit Gupta, Gregory P. Wolfe, Alexander Quanming Tao, Ryan S. Elliott, Stefano Martiniani, Ellad B. Tadmor, "ColabFit exchange: Open-access datasets for data-driven interatomic potentials", The Journal of Chemical Physics, Volume 159, Issue 15 (2023), https://doi.org/10.1063/5.0163882.

Resources: colabfit.org, https://github.com/colabfit/data-lake
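
As a hypothetical example of exploring a dataset obtained from the ColabFit Exchange, the sketch below computes simple diversity statistics (unique chemistries, element counts, per-atom energy range) for configurations read from a local extended-XYZ file with ASE. The filename and the "energy" info key are assumptions, and this does not use the ColabFit tooling, KLIFF, or OpenKIM directly.

```python
# Hypothetical dataset exploration with ASE; the local file name and "energy" key are assumed.
from collections import Counter
import ase.io

frames = ase.io.read("colabfit_dataset.extxyz", index=":")    # hypothetical downloaded dataset

formulas = Counter(atoms.get_chemical_formula(empirical=True) for atoms in frames)
elements = Counter(sym for atoms in frames for sym in atoms.get_chemical_symbols())
energies = [atoms.info["energy"] / len(atoms) for atoms in frames if "energy" in atoms.info]

print(f"{len(frames)} configurations, {len(formulas)} unique chemistries, {len(elements)} elements")
if energies:
    print(f"per-atom energy range: {min(energies):.3f} to {max(energies):.3f} (assumed units)")
```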