ISSN: 2632-2153
Machine Learning: Science and Technology is a multidisciplinary open access journal that bridges the application of machine learning across the sciences with advances in machine learning methods and theory as motivated by physical insights.
Arsenii Senokosov et al 2024 Mach. Learn.: Sci. Technol. 5 015040
Image classification, a pivotal task in multiple industries, faces computational challenges due to the burgeoning volume of visual data. This research addresses these challenges by introducing two quantum machine learning models that leverage the principles of quantum mechanics for effective computations. Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era, where circuits with a large number of qubits are currently infeasible. This model demonstrated a record-breaking classification accuracy of 99.21% on the full MNIST dataset, surpassing the performance of known quantum–classical models while having eight times fewer parameters than its classical counterpart. The results of testing this hybrid model on Medical MNIST (classification accuracy over 99%) and on CIFAR-10 (classification accuracy over 82%) serve as evidence of the model's generalizability and highlight the efficiency of quantum layers in distinguishing common features of the input data. Our second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process. This model matches the performance of its classical counterpart while having four times fewer trainable parameters, and outperforms a classical model with an equal number of weight parameters. These models represent advancements in quantum machine learning research and illuminate the path towards more accurate image classification systems.
Ivan S Novikov et al 2021 Mach. Learn.: Sci. Technol. 2 025002
The subject of this paper is the technology (the 'how') of constructing machine-learning interatomic potentials, rather than science (the 'what' and 'why') of atomistic simulations using machine-learning potentials. Namely, we illustrate how to construct moment tensor potentials using active learning as implemented in the MLIP package, focusing on the efficient ways to automatically sample configurations for the training set, how expanding the training set changes the error of predictions, how to set up ab initio calculations in a cost-effective manner, etc. The MLIP package (short for Machine-Learning Interatomic Potentials) is available at https://mlip.skoltech.ru/download/.
Ross Irwin et al 2022 Mach. Learn.: Sci. Technol. 3 015022
Transformer models coupled with a simplified molecular line entry system (SMILES) have recently proven to be a powerful combination for solving challenges in cheminformatics. These models, however, are often developed specifically for a single application and can be very resource-intensive to train. In this work we present the Chemformer model—a Transformer-based model which can be quickly applied to both sequence-to-sequence and discriminative cheminformatics tasks. Additionally, we show that self-supervised pre-training can improve performance and significantly speed up convergence on downstream tasks. On direct synthesis and retrosynthesis prediction benchmark datasets we publish state-of-the-art results for top-1 accuracy. We also improve on existing approaches for a molecular optimisation task and show that Chemformer can optimise on multiple discriminative tasks simultaneously. Models, datasets and code will be made available after publication.
Philippe Schwaller et al 2021 Mach. Learn.: Sci. Technol. 2 015016
Artificial intelligence is driving one of the most important revolutions in organic chemistry. Multiple platforms, including tools for reaction prediction and synthesis planning based on machine learning, have successfully become part of the organic chemists' daily laboratory, assisting in domain-specific synthetic problems. Unlike reaction prediction and retrosynthetic models, the prediction of reaction yields has received less attention in spite of the enormous potential of accurately predicting reaction conversion rates. Reaction yields models, describing the percentage of the reactants converted to the desired products, could guide chemists and help them select high-yielding reactions and score synthesis routes, reducing the number of attempts. So far, yield predictions have been predominantly performed for high-throughput experiments using a categorical (one-hot) encoding of reactants, concatenated molecular fingerprints, or computed chemical descriptors. Here, we extend the application of natural language processing architectures to predict reaction properties given a text-based representation of the reaction, using an encoder transformer model combined with a regression layer. We demonstrate outstanding prediction performance on two high-throughput experiment reactions sets. An analysis of the yields reported in the open-source USPTO data set shows that their distribution differs depending on the mass scale, limiting the data set applicability in reaction yields predictions.
Mario Krenn et al 2020 Mach. Learn.: Sci. Technol. 1 045024
The discovery of novel materials and functional molecules can help to solve some of society's most urgent challenges, ranging from efficient energy harvesting and storage to uncovering novel pharmaceutical drug candidates. Traditionally, matter engineering, generally denoted as inverse design, has relied heavily on human intuition and high-throughput virtual screening. The last few years have seen the emergence of significant interest in computer-inspired designs based on evolutionary or deep learning methods. The major challenge here is that the standard string-based molecular representation, SMILES, shows substantial weaknesses in this task because large fractions of strings do not correspond to valid molecules. Here, we solve this problem at a fundamental level and introduce SELFIES (SELF-referencIng Embedded Strings), a string-based representation of molecules which is 100% robust. Every SELFIES string corresponds to a valid molecule, and SELFIES can represent every molecule. SELFIES can be applied directly in arbitrary machine learning models without adaptation of the models; each of the generated molecule candidates is valid. In our experiments, the model's internal memory stores two orders of magnitude more diverse molecules than in a similar test with SMILES. Furthermore, because all molecules are valid, SELFIES allows for explanation and interpretation of the internal workings of generative models.
Alexandr Sedykh et al 2024 Mach. Learn.: Sci. Technol. 5 025045
Finding the distribution of the velocities and pressures of a fluid by solving the Navier–Stokes equations is a principal task in the chemical, energy, and pharmaceutical industries, as well as in mechanical engineering and in the design of pipeline systems. With existing solvers, such as OpenFOAM and Ansys, simulations of fluid dynamics in intricate geometries are computationally expensive and require re-simulation whenever the geometric parameters or the initial and boundary conditions are altered. Physics-informed neural networks (PINNs) are a promising tool for simulating fluid flows in complex geometries, as they can adapt to changes in the geometry and mesh definitions, allowing for generalization across fluid parameters and transfer learning across different shapes. We present a hybrid quantum PINN (HQPINN) that simulates laminar fluid flow in 3D Y-shaped mixers. Our approach combines the expressive power of a quantum model with the flexibility of a PINN, resulting in a 21% higher accuracy compared to a purely classical neural network. Our findings highlight the potential of machine learning approaches, and in particular HQPINN, for complex shape optimization tasks in computational fluid dynamics. By improving the accuracy of fluid simulations in complex geometries, our research using hybrid quantum models contributes to the development of more efficient and reliable fluid dynamics solvers.
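The core of a PINN is a loss that penalizes the residual of the governing equations at collocation points. A minimal sketch of that idea on a 1D Poisson toy problem, with finite differences standing in for the automatic differentiation a real PINN applies to the network output (our own toy setup, not the paper's 3D Navier–Stokes problem):

```python
import numpy as np

# PINN-style residual loss, illustrated on a 1D Poisson problem u'' = f.
# Central finite differences stand in for the automatic differentiation
# a real PINN would apply to the network output.
x = np.linspace(0.0, np.pi, 201)
h = x[1] - x[0]

def residual_loss(u, f):
    # Second derivative at interior points by central differences.
    u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return np.mean((u_xx - f[1:-1]) ** 2)

u_exact = np.sin(x)   # candidate "network output"
f = -np.sin(x)        # forcing for which u = sin(x) solves u'' = f

print(residual_loss(u_exact, f))  # near zero for the true solution
```

A PINN minimizes exactly this kind of residual (plus boundary terms) over the network parameters, so the loss drops toward zero only when the output satisfies the physics.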
Koji Hashimoto et al 2024 Mach. Learn.: Sci. Technol. 5 025079
Understanding the inner workings of neural networks, including transformers, remains one of the most challenging puzzles in machine learning. This study introduces a novel approach by applying the principles of gauge symmetries, a key concept in physics, to neural network architectures. By regarding model functions as physical observables, we find that parametric redundancies of various machine learning models can be interpreted as gauge symmetries. We mathematically formulate the parametric redundancies in neural ODEs, and find that their gauge symmetries are given by spacetime diffeomorphisms, which play a fundamental role in Einstein's theory of gravity. Viewing neural ODEs as a continuum version of feedforward neural networks, we show that the parametric redundancies in feedforward neural networks are indeed lifted to diffeomorphisms in neural ODEs. We further extend our analysis to transformer models, finding natural correspondences with neural ODEs and their gauge symmetries. The concept of gauge symmetries sheds light on the complex behavior of deep learning models through physics and provides us with a unifying perspective for analyzing various machine learning architectures.
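A concrete example of such parametric redundancy in a feedforward network: with a positively homogeneous activation like ReLU, rescaling a hidden layer by a > 0 while dividing the following weights by a leaves the model function unchanged. A numpy check (toy shapes of our choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Two-layer ReLU network y = W2 @ relu(W1 @ x).
W1 = rng.normal(size=(5, 3))
W2 = rng.normal(size=(2, 5))
x = rng.normal(size=3)

# "Gauge transformation": rescale the hidden layer by a > 0 and undo it
# in the next layer. ReLU is positively homogeneous: relu(a z) = a relu(z).
a = 3.7
y_original = W2 @ relu(W1 @ x)
y_rescaled = (W2 / a) @ relu(a * W1 @ x)  # different parameters, same function

print(np.allclose(y_original, y_rescaled))  # True
```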
Cynthia Shen et al 2021 Mach. Learn.: Sci. Technol. 2 03LT02
Computer-based de-novo design of functional molecules is one of the most prominent challenges in cheminformatics today. As a result, generative and evolutionary inverse designs from the field of artificial intelligence have emerged at a rapid pace, with the aim of optimizing molecules for a particular chemical property. These models 'indirectly' explore the chemical space: by learning latent spaces, policies, and distributions, or by applying mutations to populations of molecules. However, the recent development of the SELFIES (Krenn 2020 Mach. Learn.: Sci. Technol. 1 045024) string representation of molecules, a surjective alternative to SMILES, has made other techniques possible. Based on SELFIES, we therefore propose PASITHEA, a direct gradient-based molecule optimization that applies inceptionism (Mordvintsev 2015) techniques from computer vision. PASITHEA exploits gradients by directly reversing the learning process of a neural network that is trained to predict real-valued chemical properties. Effectively, this forms an inverse regression model, which is capable of generating molecular variants optimized for a certain property. Although our results are preliminary, we observe a shift in the distribution of a chosen property during inverse training, a clear indication of PASITHEA's viability. A striking property of inceptionism is that we can directly probe the model's understanding of the chemical space on which it is trained. We expect that extending PASITHEA to larger datasets, molecules, and more complex properties will lead to advances in the design of new functional molecules as well as in the interpretation and explanation of machine learning models.
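The inceptionism step, freezing a trained property predictor and running gradient ascent on the input rather than the weights, can be sketched with a linear stand-in model whose input gradient is analytic (toy weights of our choosing, not PASITHEA's network or its SELFIES inputs):

```python
import numpy as np

# Frozen "property predictor": f(x) = w @ x, a stand-in for a trained net.
w = np.array([0.5, -1.0, 2.0])
predict = lambda x: float(w @ x)

# Inceptionism-style inverse design: gradient ascent on the INPUT with the
# model weights frozen. For f(x) = w @ x, the input gradient is simply w.
x = np.zeros(3)
lr = 0.1
history = [predict(x)]
for _ in range(50):
    x = x + lr * w            # step along the input gradient
    history.append(predict(x))

print(history[0], history[-1])  # predicted property increases
```

With a nonlinear network, the analytic gradient above is replaced by backpropagation to the input, but the loop is the same.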
Majd Ghrear et al 2024 Mach. Learn.: Sci. Technol. 5 035009
We present the first method to probabilistically predict 3D direction in a deep neural network model. The probabilistic predictions are modeled as a heteroscedastic von Mises–Fisher distribution on the unit sphere, giving a simple way to quantify aleatoric uncertainty. This approach generalizes the cosine distance loss, which is a special case of our loss function when the uncertainty is assumed to be uniform across samples. We develop the approximations required to make the likelihood function and gradient calculations stable. The method is applied to the task of predicting the 3D directions of electrons, the most complex signal in a class of experimental particle physics detectors designed to demonstrate the particle nature of dark matter and study solar neutrinos. Using simulated Monte Carlo data, the initial direction of recoiling electrons is inferred from their tortuous trajectories, as captured by the 3D detectors. For keV-scale electrons in a 70% He / 30% CO2 gas mixture at STP, the new approach achieves a mean cosine distance of 0.104 (26°) compared to 0.556 (64°) achieved by a non-machine-learning algorithm. We show that the model is well calibrated and that accuracy can be increased further by removing samples with high predicted uncertainty. This advancement in probabilistic 3D directional learning could increase the sensitivity of directional dark matter detectors.
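The von Mises–Fisher negative log-likelihood on the unit sphere can be written down in a few lines; the stabilization below (expanding log sinh κ via log1p) is our own minimal version, not necessarily the approximations developed in the paper:

```python
import numpy as np

def vmf_nll(mu, kappa, x):
    """Negative log-likelihood of a von Mises-Fisher distribution on S^2.

    mu: predicted mean direction (unit vector), kappa: concentration,
    x: observed unit direction. The density is kappa/(4 pi sinh kappa)
    * exp(kappa mu.x); log sinh is expanded for numerical stability.
    """
    log_sinh = kappa + np.log1p(-np.exp(-2.0 * kappa)) - np.log(2.0)
    log_norm = np.log(kappa) - np.log(4.0 * np.pi) - log_sinh
    return -(log_norm + kappa * np.dot(mu, x))

mu = np.array([0.0, 0.0, 1.0])
aligned = np.array([0.0, 0.0, 1.0])
opposite = np.array([0.0, 0.0, -1.0])

# Higher concentration makes aligned observations much cheaper than
# misaligned ones; with kappa fixed across samples, minimizing this loss
# reduces to minimizing the cosine distance 1 - mu.x.
print(vmf_nll(mu, 5.0, aligned), vmf_nll(mu, 5.0, opposite))
```

In the heteroscedastic setting, the network predicts κ per sample, which is exactly what provides the aleatoric uncertainty estimate.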
Naoki Koyama et al 2024 Mach. Learn.: Sci. Technol. 5 035028
In the pursuit of detecting gravitational waves, ground-based interferometers (e.g. LIGO, Virgo, and KAGRA) face a significant challenge: achieving the extremely high sensitivity required to detect fluctuations at distances significantly smaller than the diameter of an atomic nucleus. Cutting-edge materials and innovative engineering techniques have been employed to enhance the stability and precision of the interferometer apparatus over the years. These efforts are crucial for reducing the noise that masks the subtle gravitational wave signals. Various sources of interference, such as seismic activity, thermal fluctuations, and other environmental factors, contribute to the total noise spectra characteristic of the detector. Therefore, addressing these sources is essential to enhance the interferometer apparatus's stability and precision. Recent research has emphasised the importance of classifying non-stationary and non-Gaussian glitches, employing sophisticated algorithms and machine learning methods to distinguish genuine gravitational wave signals from instrumental artefacts. The time-frequency-amplitude representation of these transient disturbances exhibits a wide range of new shapes, variability, and features, reflecting the evolution of interferometer technology. In this study, we developed a convolutional neural network model to classify glitches using spectrogram images from the Gravity Spy O1 dataset. We employed score-class activation mapping and the uniform manifold approximation and projection algorithm to visualise and understand the classification decisions made by our model. We assessed the model's validity and investigated the causes of misclassification from these results.
Edoardo Fazzari et al 2024 Mach. Learn.: Sci. Technol. 5 035027
This study applies an effective methodology based on reinforcement learning to a control system. Using the Pound–Drever–Hall locking scheme, we match the wavelength of a controlled laser to the length of a Fabry–Pérot cavity such that the cavity length is an exact integer multiple of the laser wavelength. Typically, long-term drift of the cavity length and laser wavelength exceeds the dynamic range of this control if only the laser's piezoelectric transducer is actuated, so the same error signal also controls the temperature of the laser crystal. In this work, we instead implement this feedback control grounded in Q-learning. Our system learns in real time, eschewing reliance on historical data, and exhibits adaptability to system variations post-training. This adaptive quality ensures continuous updates to the learning agent. This innovative approach maintains lock for eight days on average.
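Tabular Q-learning of the kind used here can be illustrated on a toy drift-correction task; the environment below (detuning bins, nudge actions, a reward that penalizes detuning) is entirely our own invention, not the actual laser setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "drift correction": states are detuning bins 0..4, target bin is 2.
# Actions nudge the control down (-1) or up (+1); reward penalizes detuning.
n_states, target = 5, 2
actions = [-1, +1]
Q = np.zeros((n_states, len(actions)))
alpha, gamma = 0.1, 0.9

s = 0
for _ in range(5000):
    a = rng.integers(len(actions))              # pure exploration
    s_next = int(np.clip(s + actions[a], 0, n_states - 1))
    reward = -abs(s_next - target)
    # Standard Q-learning temporal-difference update.
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

# The greedy policy should push the system back toward the target bin.
print(np.argmax(Q[0]), np.argmax(Q[4]))  # expect 1 (up) and 0 (down)
```

In the real system, the learned policy plays the role of the slow temperature feedback that keeps the fast piezo actuator within its dynamic range.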
Xiaofei Guan et al 2024 Mach. Learn.: Sci. Technol. 5 035026
This paper presents an innovative approach to tackle Bayesian inverse problems using physics-informed invertible neural networks (PI-INN). Serving as a neural operator model, PI-INN employs an invertible neural network (INN) to elucidate the relationship between the parameter field and the solution function in latent variable spaces. Specifically, the INN decomposes the latent variable of the parameter field into two distinct components: the expansion coefficients that represent the solution to the forward problem, and the noise that captures the inherent uncertainty associated with the inverse problem. Through precise estimation of the forward mapping and preservation of statistical independence between expansion coefficients and latent noise, PI-INN offers an accurate and efficient generative model for resolving Bayesian inverse problems, even in the absence of labeled data. For a given solution function, PI-INN can provide tractable and accurate estimates of the posterior distribution of the underlying parameter field. Moreover, capitalizing on the INN's characteristics, we propose a novel independent loss function to effectively ensure the independence of the INN's decomposition results. The efficacy and precision of the proposed PI-INN are demonstrated through a series of numerical experiments.
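The defining property of an INN is a closed-form inverse. A minimal RealNVP-style affine coupling layer illustrates this; the tiny linear "networks" below are our own stand-ins, not the PI-INN architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Affine coupling: split x into (x1, x2); x2 is transformed by scale/shift
# functions conditioned only on x1, so the map is invertible by construction.
Ws = rng.normal(size=(2, 2)) * 0.1   # "scale network" weights
Wt = rng.normal(size=(2, 2)) * 0.1   # "shift network" weights

def forward(x):
    x1, x2 = x[:2], x[2:]
    s, t = Ws @ x1, Wt @ x1
    return np.concatenate([x1, x2 * np.exp(s) + t])

def inverse(y):
    y1, y2 = y[:2], y[2:]
    s, t = Ws @ y1, Wt @ y1          # y1 = x1, so s and t are recoverable
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

x = rng.normal(size=4)
print(np.allclose(inverse(forward(x)), x))  # exact invertibility: True
```

Stacking such layers (with the roles of the two halves alternating) gives an expressive yet exactly invertible map, which is what lets an INN trade the latent variable back and forth between parameters and noise.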
Maximilian P Niroomand et al 2024 Mach. Learn.: Sci. Technol. 5 035025
Prior beliefs about the latent function can be incorporated into a Gaussian process (GP) via the kernel to shape inductive biases. However, beyond kernel choices, the decision-making process of GP models remains poorly understood. In this work, we contribute an analysis of the loss landscape for GP models using methods from chemical physics. We demonstrate ν-continuity for Matérn kernels and outline aspects of catastrophe theory at critical points in the loss landscape. By directly including ν in the hyperparameter optimisation for Matérn kernels, we find that typical values of ν can be far from optimal in terms of performance. We also provide an a priori method for evaluating the effect of GP ensembles and discuss various voting approaches based on physical properties of the loss landscape. The utility of these approaches is demonstrated for various synthetic and real datasets. Our findings provide insight into hyperparameter optimisation for GPs and offer practical guidance for improving their performance and interpretability in a range of applications.
Alessandro Bombini et al 2024 Mach. Learn.: Sci. Technol. 5 035024
Extended vision techniques are ubiquitous in physics. However, the data cubes stemming from such analyses often pose a challenge in their interpretation, due to the intrinsic difficulty of discerning the relevant information from the spectra composing the data cube. Furthermore, the high dimensionality of data-cube spectra makes their statistical interpretation a complex task; nevertheless, this complexity contains a massive amount of statistical information that can be exploited in an unsupervised manner to outline some essential properties of the case study at hand, e.g. it is possible to obtain an image segmentation via (deep) clustering of the data cube's spectra, performed in a suitably defined low-dimensional embedding space. To tackle this topic, we explore the possibility of applying unsupervised clustering methods in encoded space, i.e. performing deep clustering on the spectral properties of data-cube pixels. A statistical dimensionality reduction is performed by an ad hoc trained (variational) autoencoder, which maps spectra into lower-dimensional metric spaces, while the clustering process is performed by a (learnable) iterative K-means clustering algorithm. We apply this technique to two different use cases of different physical origin: a set of macro mapping x-ray fluorescence (MA-XRF) synthetic data on pictorial artworks, and a dataset of simulated astrophysical observations.
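The encode-then-cluster pipeline can be sketched with a linear encoder (PCA standing in for the trained autoencoder) followed by plain K-means on synthetic two-group spectra; all data and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": two groups with peaks at different positions.
grid = np.linspace(0, 1, 50)
group_a = np.exp(-((grid - 0.3) ** 2) / 0.01) + 0.05 * rng.normal(size=(30, 50))
group_b = np.exp(-((grid - 0.7) ** 2) / 0.01) + 0.05 * rng.normal(size=(30, 50))
spectra = np.vstack([group_a, group_b])

# Linear encoder (PCA) standing in for the trained (variational) autoencoder.
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedded = centered @ vt[:2].T          # 2D embedding space

# Plain K-means in the embedding space (two clusters, fixed iterations).
centers = embedded[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((embedded[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([embedded[labels == k].mean(axis=0) for k in range(2)])

print(labels)  # the two synthetic groups should fall into distinct clusters
```

The paper's pipeline replaces PCA with a trained autoencoder and makes the K-means step learnable, but the division of labor (embed, then cluster) is the same.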
Thomas Penfold et al 2024 Mach. Learn.: Sci. Technol. 5 021001
Computational spectroscopy has emerged as a critical tool for researchers looking to achieve both qualitative and quantitative interpretations of experimental spectra. Over the past decade, increased interactions between experiment and theory have created a positive feedback loop that has stimulated developments in both domains. In particular, the increased accuracy of calculations has led to them becoming an indispensable tool for the analysis of spectroscopies across the electromagnetic spectrum. This progress is especially well demonstrated for short-wavelength techniques, e.g. core-hole (x-ray) spectroscopies, whose prevalence has increased following the advent of modern x-ray facilities including third-generation synchrotrons and x-ray free-electron lasers. While calculations based on well-established wavefunction or density-functional methods continue to dominate the greater part of spectral analyses in the literature, emerging developments in machine-learning algorithms are beginning to open up new opportunities to complement these traditional techniques with fast, accurate, and affordable 'black-box' approaches. This Topical Review recounts recent progress in data-driven/machine-learning approaches for computational x-ray spectroscopy. We discuss the achievements and limitations of the presently-available approaches and review the potential that these techniques have to expand the scope and reach of computational and experimental x-ray spectroscopic studies.
Tanujit Chakraborty et al 2024 Mach. Learn.: Sci. Technol. 5 011001
Generative adversarial networks (GANs) have rapidly emerged as powerful tools for generating realistic and diverse data across various domains, including computer vision and other applied areas, since their inception in 2014. Consisting of a discriminative network and a generative network engaged in a minimax game, GANs have revolutionized the field of generative modeling. In February 2018, GANs secured the leading spot on the '10 Breakthrough Technologies' list issued by the MIT Technology Review. Over the years, numerous advancements have been proposed, leading to a rich array of GAN variants, such as conditional GAN, Wasserstein GAN, cycle-consistent GAN, and StyleGAN, among many others. This survey provides a general overview of GANs, summarizing the underlying architecture, validation metrics, and application areas of the most widely recognized variants. We also delve into recent theoretical developments, exploring the profound connection between the adversarial principle underlying GANs and the Jensen–Shannon divergence, and discussing the optimality characteristics of the GAN framework. We evaluate the efficiency of GAN variants and their model architectures, along with common training obstacles and their solutions. In addition, we examine in detail the integration of GANs with newly developed deep learning frameworks such as transformers, physics-informed neural networks, large language models, and diffusion models. Finally, we highlight several open issues and outline directions for future research in this field.
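The connection to the Jensen–Shannon divergence can be checked numerically: for fixed data and generator distributions p and q, the optimal discriminator is D*(x) = p(x)/(p(x) + q(x)), and substituting it into the GAN value function yields 2 JSD(p, q) - log 4. A minimal numpy sketch with toy discrete distributions of our choosing:

```python
import numpy as np

# Toy discrete "data" and "generator" distributions over 4 bins.
p = np.array([0.1, 0.4, 0.3, 0.2])      # data distribution
q = np.array([0.25, 0.25, 0.25, 0.25])  # generator distribution

# Optimal discriminator for a fixed generator: D*(x) = p / (p + q).
d_star = p / (p + q)

# GAN value function V(D*, G) = E_p[log D*] + E_q[log(1 - D*)].
value = np.sum(p * np.log(d_star)) + np.sum(q * np.log(1.0 - d_star))

# Jensen-Shannon divergence via its mixture definition.
m = 0.5 * (p + q)
kl = lambda a, b: np.sum(a * np.log(a / b))
jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)

# The classic identity: V(D*, G) = 2 * JSD(p || q) - log 4.
print(value, 2 * jsd - np.log(4.0))  # the two numbers agree
```

This is why, at the optimum, the generator's objective reduces to minimizing the JS divergence between the data and generated distributions.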
Jakub Rydzewski et al 2023 Mach. Learn.: Sci. Technol. 4 031001
Analyzing large volumes of high-dimensional data requires dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. Such practice is needed in atomistic simulations of complex systems where even thousands of degrees of freedom are sampled. An abundance of such data makes gaining insight into a specific physical problem strenuous. Our primary aim in this review is to focus on unsupervised machine learning methods that can be used on simulation data to find a low-dimensional manifold providing a collective and informative characterization of the studied process. Such manifolds can be used for sampling long-timescale processes and free-energy estimation. We describe methods that can work on datasets from standard and enhanced sampling atomistic simulations. Unlike recent reviews on manifold learning for atomistic simulations, we consider only methods that construct low-dimensional manifolds based on Markov transition probabilities between high-dimensional samples. We discuss these techniques from a conceptual point of view, including their underlying theoretical frameworks and possible limitations.
James Stokes et al 2023 Mach. Learn.: Sci. Technol. 4 021001
This article aims to summarize recent and ongoing efforts to simulate continuous-variable quantum systems using flow-based variational quantum Monte Carlo techniques, focusing for pedagogical purposes on the example of bosons in the field amplitude (quadrature) basis. Particular emphasis is placed on the variational real- and imaginary-time evolution problems, carefully reviewing the stochastic estimation of the time-dependent variational principles and their relationship with information geometry. Some practical instructions are provided to guide the implementation of a PyTorch code. The review is intended to be accessible to researchers interested in machine learning and quantum information science.
Bahram Jalali et al 2022 Mach. Learn.: Sci. Technol. 3 041001
The phenomenal success of physics in explaining nature and engineering machines is predicated on low dimensional deterministic models that accurately describe a wide range of natural phenomena. Physics provides computational rules that govern physical systems and the interactions of the constituents therein. Led by deep neural networks, artificial intelligence (AI) has introduced an alternate data-driven computational framework, with astonishing performance in domains that do not lend themselves to deterministic models such as image classification and speech recognition. These gains, however, come at the expense of predictions that are inconsistent with the physical world as well as computational complexity, with the latter placing AI on a collision course with the expected end of the semiconductor scaling known as Moore's Law. This paper argues how an emerging symbiosis of physics and AI can overcome such formidable challenges, thereby not only extending AI's spectacular rise but also transforming the direction of engineering and physical science.
Li et al
Identifying partial differential equations (PDEs) from data is crucial for understanding the governing mechanisms of natural phenomena, yet it remains a challenging task. We present an extension to the ARGOS framework, ARGOS-RAL, which leverages sparse regression with the recurrent adaptive lasso to identify PDEs from limited prior knowledge automatically. Our method automates calculating partial derivatives, constructing a candidate library, and estimating a sparse model. We rigorously evaluate the performance of ARGOS-RAL in identifying canonical PDEs under various noise levels and sample sizes, demonstrating its robustness in handling noisy and non-uniformly distributed data. We also test the algorithm's performance on datasets consisting solely of random noise to simulate scenarios with severely compromised data quality. Our results show that ARGOS-RAL effectively and reliably identifies the underlying PDEs from data, outperforming the sequential threshold ridge regression method in most cases. We highlight the potential of combining statistical methods, machine learning, and dynamical systems theory to automatically discover governing equations from collected data, streamlining the scientific modeling process.
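The sequential threshold ridge regression baseline mentioned above can be sketched on a toy advection equation, recovering u_t = -u_x from a small candidate library (this is our own minimal STRidge, not ARGOS-RAL):

```python
import numpy as np

# Synthetic data for the advection equation u_t = -u_x, with u = sin(x - t).
x = np.linspace(0, 2 * np.pi, 100)
t = 0.3
u = np.sin(x - t)
u_x = np.cos(x - t)
u_xx = -np.sin(x - t)
u_t = -np.cos(x - t)

# Candidate library and target, as in sparse PDE identification.
library = np.column_stack([u, u_x, u_xx])
names = ["u", "u_x", "u_xx"]

def stridge(theta, target, lam=1e-5, tol=0.1, iters=10):
    """Sequential-threshold ridge regression: ridge fit, zero out small
    coefficients, refit on the surviving terms, repeat."""
    coef = np.linalg.solve(theta.T @ theta + lam * np.eye(theta.shape[1]),
                           theta.T @ target)
    for _ in range(iters):
        small = np.abs(coef) < tol
        coef[small] = 0.0
        big = ~small
        if big.any():
            coef[big] = np.linalg.solve(
                theta[:, big].T @ theta[:, big] + lam * np.eye(big.sum()),
                theta[:, big].T @ target)
    return coef

coef = stridge(library, u_t)
print(dict(zip(names, np.round(coef, 3))))  # expect u_x ~ -1, others 0
```

ARGOS-RAL replaces the hard threshold with a recurrent adaptive lasso, but the overall recipe (build a library, regress, sparsify) is the same.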
Sequeira et al
This research explores the trainability of Parameterized Quantum Circuit based policies in policy-based Reinforcement Learning, an area that has recently seen a surge in empirical exploration. While some studies suggest improved sample complexity using quantum gradient estimation, the efficient trainability of these policies remains an open question. Our findings reveal significant challenges, including standard Barren Plateaus with exponentially small gradients and gradient explosion. These phenomena depend on the type of basis-state partitioning and the mapping of these partitions onto actions. For a polynomial number of actions, a trainable window can be ensured with a polynomial number of measurements if a contiguous-like partitioning of basis-states is employed. These results are empirically validated in a multi-armed bandit environment.
DON-TSA et al
The high computational demand of density functional theory (DFT) based methods for screening new materials properties remains a strong limitation to the development of the clean and renewable energy technologies essential for the transition to a carbon-neutral environment in the coming decades. Machine learning comes into play with its innate capacity to handle huge amounts of data and high-dimensional statistical analysis. In this paper, supervised machine learning models, together with data analysis on existing datasets obtained from high-throughput DFT calculations, are used to predict the Seebeck coefficient, electrical conductivity, and power factor of inorganic compounds. The analysis revealed a strong dependence of the thermoelectric properties on the effective masses. We also propose a machine learning model for the prediction of high-performing thermoelectric materials, which reached an efficiency of 95%. The analyzed data and developed model can significantly contribute to innovation by providing faster and more accurate prediction of thermoelectric properties, thereby facilitating the discovery of highly efficient thermoelectric materials.
Hadad et al
Attention layers are a crucial component in many modern deep learning models, particularly those used in natural language processing and computer vision. Attention layers have been shown to improve the accuracy and effectiveness of various tasks, such as machine translation and image captioning. Here, the benefit of attention layers in designing optical filters based on a stack of thin-film materials is investigated. The superiority of attention layers over fully connected deep neural networks is demonstrated for this task.
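For reference, the attention operation itself is a few lines of linear algebra; below is a generic scaled dot-product attention sketch in numpy (the standard formulation, not the paper's filter-design network):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # softmax stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.sum(axis=-1))  # each row of weights sums to 1
```

In a filter-design setting, the rows would correspond to the layers of the thin-film stack, letting each layer's representation attend to every other layer when predicting the optical response.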
Bandi et al
Machine learning approaches have recently emerged as powerful tools to probe structure-property relationships in crystals and molecules. Specifically, machine learning interatomic potentials (MLIPs) can accurately reproduce first-principles data at a cost similar to that of conventional interatomic potential approaches. While MLIPs have been extensively tested across various classes of materials and molecules, a clear characterization of the anharmonic terms encoded in the MLIPs is lacking. Here, we benchmark popular MLIPs using the anharmonic vibrational Hamiltonian of ThO2 in the fluorite crystal structure, which was constructed from density functional theory (DFT) using our highly accurate and efficient irreducible derivative methods. The anharmonic Hamiltonian was used to generate molecular dynamics (MD) trajectories, which were used to train three classes of MLIPs: Gaussian Approximation Potentials, Artificial Neural Networks (ANN), and Graph Neural Networks (GNN). The results were assessed by directly comparing phonons and their interactions, as well as phonon linewidths, phonon lineshifts, and thermal conductivity. The models were also trained on a DFT molecular dynamics dataset, demonstrating good agreement up to fifth-order for the ANN and GNN. Our analysis demonstrates that MLIPs have great potential for accurately characterizing anharmonicity in materials systems at a fraction of the cost of conventional first principles-based approaches.