Graph Neural Networks and Their Application in Complex Physics Simulations
Kevin Aguilar, Zhecheng Wang, Johnson Zhang
Motivation
Physical simulations are important in many fields of science. They let researchers predict how objects will interact, safely and accurately. Advanced simulation software can already produce high-fidelity physical simulations, but it typically requires high-performance computers that are not easily accessible. By using graph neural networks, physical simulations of similar quality to professional software can be run at a fraction of the computational cost, making them available to a much wider audience. Not only would scientific testing become more efficient, but sophisticated simulations could also become affordable teaching aids for education centers.
Graph neural networks are unique in that they add an extra dimension of structure compared to typical neural networks. A graph neural network takes a graph as input and produces a graph as output. The graph structure not only allows for efficient computation but also encodes the relative spacing between the objects, which are represented as nodes. This structure is very useful for simulation because it tells each particle which particles are adjacent to it and therefore how they should interact. As more data becomes available, these models can be trained to match the ground truth ever more closely. Even with little data, they are already highly effective and will only get better with time.
Graph Neural Network and Its Application
Graph
A graph is a data structure consisting of nodes connected by edges, representing information with no definite beginning or end.
The edges in a graph represent the relationships between nodes; they can be either directed or undirected depending on the relationship.
Graphs already underlie many algorithms around us. For example, a transportation system can be represented as a graph with locations as nodes and connections as edges. And in a social network graph, something we are all familiar with, people are the nodes and their relationships are the edges.
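For instance, a tiny social network can be represented with a plain adjacency list. The sketch below is our own illustration, and the names in it are made up:

```python
# A tiny undirected social-network graph stored as an adjacency list.
# Each key is a node (a person); each value lists that node's neighbors.
# All names are hypothetical.
social_graph = {
    "Alice": ["Bob", "Carol"],
    "Bob":   ["Alice", "Dana"],
    "Carol": ["Alice", "Dana"],
    "Dana":  ["Bob", "Carol"],
}

# Edges are the pairwise relationships; in an undirected graph each
# connection appears under both of its endpoints.
for person, friends in social_graph.items():
    print(person, "->", ", ".join(friends))
```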
Traditional algorithms such as Dijkstra's require prior knowledge of the graph to a certain degree, and they offer no way to perform graph-level classification.
Graph Neural Networks
The graph neural network is a type of neural network that has become increasingly popular in recent years due to its unique ability to learn efficiently from graph data. It supports node-level, edge-level, and graph-level prediction tasks.
Every node has an embedding associated with it that defines the node in the data space. A node receives information from its neighbors via edge neural networks; that information is then aggregated and passed through the node's update network to produce a new set of embeddings for the node. After several iterations of message passing, each node learns more and more about its neighborhood and forms a rough picture of the complete graph.
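To make the message-passing step concrete, here is a minimal single-layer sketch in PyTorch. This is our own simplified illustration, not code from any of the libraries mentioned below, and the module and variable names are hypothetical:

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of message passing: every node aggregates messages
    from its neighbors and then updates its own embedding."""

    def __init__(self, dim):
        super().__init__()
        # Edge network: maps (sender, receiver) embeddings to a message.
        self.message_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        # Update network: combines the old embedding with the aggregate.
        self.update_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, h, edges):
        # h: (num_nodes, dim) node embeddings
        # edges: (num_edges, 2) integer tensor of (sender, receiver) pairs
        senders, receivers = edges[:, 0], edges[:, 1]
        messages = self.message_mlp(torch.cat([h[senders], h[receivers]], dim=-1))
        # Sum-aggregate the incoming messages at each receiving node.
        agg = torch.zeros_like(h).index_add_(0, receivers, messages)
        return self.update_mlp(torch.cat([h, agg], dim=-1))
```

Stacking several such layers lets each node see a progressively larger neighborhood, which is exactly the "learning about the complete graph" effect described above.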
Libraries such as PyTorch Geometric, Graph Nets, and the Deep Graph Library (DGL) have made graph neural networks very accessible to implement and experiment with.
What Can Graph Neural Networks (GNN) Do?
Node Classification: the task here is to determine the labels of samples, represented as nodes, by looking at the labels of their neighbors. Typical applications include citation networks and friend relationships on Facebook.
Link Prediction: here, the algorithm must understand the relationships between nodes in a graph and predict whether a connection exists between two given nodes. It is essential in social networks for features such as friend suggestions.
Graph Classification: the task here is to classify an entire graph into one of several categories. It is analogous to image classification, but the target moves to the graph domain. It can be used for determining protein types, categorizing documents in natural language processing, and social network analysis.
What you see in this animation is an example of categorizing a social network, the well-known Zachary's karate club graph. Each node is a member of a club, and the edges are pairwise links between members who interact outside of the club. The club is divided into two communities led by node 0 and node 33, and we want to predict which side each member will join. As the animation shows, the graph neural network does a great job identifying the two communities (a minimal training sketch follows after this list).
Graph Clustering: this refers to clustering data that comes in the form of graphs. There are two distinct forms. Vertex clustering groups the nodes of a single graph into densely connected regions based on either edge weights or edge distances. The second form treats whole graphs as the objects to be clustered and groups them by similarity.
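As promised, here is a minimal training sketch for the karate-club example. This is our own code, written with networkx and plain PyTorch rather than a dedicated GNN library; it trains a two-layer graph convolution with only the two leaders (nodes 0 and 33) labeled:

```python
import networkx as nx
import torch
import torch.nn as nn

# Zachary's karate club: 34 members; edges are out-of-club interactions.
G = nx.karate_club_graph()
A = torch.tensor(nx.to_numpy_array(G), dtype=torch.float32)

# Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2.
A_hat = A + torch.eye(A.shape[0])
d_inv_sqrt = A_hat.sum(1).pow(-0.5)
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

# Two-layer graph convolution; the input features are one-hot node IDs.
W1, W2 = nn.Linear(34, 16), nn.Linear(16, 2)
opt = torch.optim.Adam(list(W1.parameters()) + list(W2.parameters()), lr=0.01)
X = torch.eye(34)

# Semi-supervised setup: only the two community leaders are labeled.
labeled, labels = torch.tensor([0, 33]), torch.tensor([0, 1])

for step in range(200):
    logits = A_norm @ W2(torch.relu(A_norm @ W1(X)))
    loss = nn.functional.cross_entropy(logits[labeled], labels)
    opt.zero_grad(); loss.backward(); opt.step()

pred = logits.argmax(dim=1)  # predicted community for every member
```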
In the example on the right, the image was first fed through a conventional neural network to identify the objects it contains, and then through a graph neural network for relationship prediction. The outcome is a generated scene graph showing the relationships between the objects in the image: the man is connected to the horse by the relationship "riding".
Graph Neural Networks (GNN) in Physical Simulation
In these simulations, the graph is created in a stage called encoding. Here, nodes and edges are each assigned values: the nodes represent the particles, and the edges connect adjacent particles in the simulation. Both nodes and edges are given features that vary with the material being simulated. For instance, water has a different feature set than sand, since water moves around far more when acted on by a force. From here, we move on to the message-passing stage, where the interactions between particles are computed. At every timestep, each node uses its edges to determine which particles it interacts with and in what manner; the learned message functions play the role of the underlying physics equations, producing a per-particle acceleration. Once these acceleration vectors are computed, they are given to the decoder, which updates positions and velocities so the particles move in their respective directions. The process repeats at every timestep of the rollout until the simulation reaches its final state.
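A condensed sketch of this encode-process-decode loop is shown below. It is a heavy simplification of the architecture in Sanchez-Gonzalez et al. (one message-passing round instead of ten, a naive all-pairs neighbor search, 2D particles), and all module names and parameter values are our own:

```python
import torch
import torch.nn as nn

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 128), nn.ReLU(), nn.Linear(128, o))

class ParticleSimulator(nn.Module):
    def __init__(self, node_feat_dim, dim=128, radius=0.05):
        super().__init__()
        self.radius = radius
        self.node_encoder = mlp(node_feat_dim, dim)  # encode particle features
        self.edge_encoder = mlp(3, dim)              # relative offset + distance
        self.message_mlp = mlp(3 * dim, dim)         # processor: per-edge messages
        self.update_mlp = mlp(2 * dim, dim)          # processor: node update
        self.decoder = mlp(dim, 2)                   # per-particle 2D acceleration

    def forward(self, pos, node_feats):
        # Encoding: connect particles closer than `radius` (naive O(N^2) search).
        dist = torch.cdist(pos, pos)
        s, r = torch.nonzero((dist < self.radius) & (dist > 0), as_tuple=True)
        edge_feats = torch.cat([pos[s] - pos[r], dist[s, r].unsqueeze(-1)], dim=-1)
        h, e = self.node_encoder(node_feats), self.edge_encoder(edge_feats)
        # Processing: one round of message passing along the edges.
        msg = self.message_mlp(torch.cat([h[s], h[r], e], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, r, msg)
        h = self.update_mlp(torch.cat([h, agg], dim=-1))
        # Decoding: read out the predicted acceleration for every particle.
        return self.decoder(h)

# One rollout step: predict accelerations, then a semi-implicit Euler update.
model = ParticleSimulator(node_feat_dim=4)
pos, vel = torch.rand(2000, 2), torch.zeros(2000, 2)
feats = torch.cat([vel, torch.rand(2000, 2)], dim=-1)  # velocity + material features
acc = model(pos, feats)
dt = 1e-3                                              # timestep, assumed
vel = vel + acc * dt
pos = pos + vel * dt
```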
Background Information and Benefits
Physical simulations are a tool that scientists and engineers use to test a prediction and get a sense of the outcome to expect. This helps minimize costly errors and reduces the chance of safety issues. The main advantage of this approach over existing software is that, ideally, these simulations could run on a cheap, basic computer. There are promising signs that this goal is attainable. One is that the learned simulations run at similar quality while being far faster: in some tests, the professional software could only run a simulation at half a frame per second, while the graph-neural-network approach ran at thousands of frames per second. Additionally, a simulation can include several different materials at once, say water and goop, and still remain highly accurate. The graph-neural-network approach thus brings many benefits to this problem.
For background, it is necessary to introduce the ground-truth simulator DeepMind used in the paper. The Material Point Method (MPM), a hybrid Eulerian/Lagrangian simulation algorithm, was introduced to computer graphics by Dr. Alexey Stomakhin at UCLA. MPM is known for its accuracy and decoupled structure, which is why Walt Disney Animation Studios chose it as the snow-simulation algorithm for the animated feature film Frozen.
In summary, we construct local graphs over particles that lie within a certain radius of one another; this process is called encoding. We then pass these graphs, which contain each particle's position and velocity, to the graph network. After training, we reconstruct particle positions and velocities from the predicted accelerations; this process is called decoding.
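For thousands of particles, the radius query in the encoding step is usually done with a spatial index rather than an all-pairs search. Here is a small sketch using SciPy's k-d tree (the 2D particles and the radius value are our own assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

positions = np.random.rand(2000, 2)  # particle positions in the unit square
radius = 0.05                        # connectivity radius, chosen for illustration

# Find every unordered pair of particles closer than `radius`.
tree = cKDTree(positions)
pairs = np.array(sorted(tree.query_pairs(r=radius)))  # shape (num_pairs, 2)

# Duplicate each pair in both directions so messages flow both ways.
senders = np.concatenate([pairs[:, 0], pairs[:, 1]])
receivers = np.concatenate([pairs[:, 1], pairs[:, 0]])
```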
Training
For every case, we trained on a single RTX 2080 using the Graph Nets, TensorFlow, and Sonnet libraries. The three cases took two days in total to reach an acceptable outcome. The paper uses a particle-wise mean squared error (MSE) over the predicted accelerations as the loss function, which can be written as

L = (1/N) · Σᵢ ‖âᵢ − aᵢ‖²

where N is the number of particles, âᵢ is the acceleration the network predicts for particle i, and aᵢ is the ground-truth acceleration from the MPM simulator.
We did not wait for the loss to fully converge, for time and cost reasons. Instead, we chose to stop training once we observed an acceptable physical simulation. Viewers will therefore see fluctuating loss curves with a downward trend in all three cases.
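With that loss, a single training step is an ordinary regression update. A minimal sketch, reusing the hypothetical simulator sketched in the earlier section:

```python
import torch

def training_step(model, optimizer, pos, node_feats, target_acc):
    """One gradient step on the particle-wise MSE between the predicted
    accelerations and the ground-truth accelerations from the MPM simulator."""
    pred_acc = model(pos, node_feats)             # (num_particles, 2)
    loss = ((pred_acc - target_acc) ** 2).mean()  # particle-wise MSE
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```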
Results
The first case is a water simulation, with 2,000 water particles and 1,000 steps per simulation. While querying the model is significantly slower than in the other two cases because of the long simulation, water had the fastest training time, at 4 hours. An educated guess is that water has little to no viscosity, so the model does not need to learn the complex coupling behavior between viscous particles. Note that real water does have viscosity, but it is very low, and in Eulerian/hybrid fluid simulation we often introduce some numerical friction when transferring properties between particles and the grid. It is therefore common practice to drop the viscosity term from the Navier-Stokes equations when simulating water. In the simulation video, you can see that the predicted water simulation preserves some of the water's vorticity.
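For reference, the term being dropped is the viscous term in the standard incompressible Navier-Stokes momentum equation; with it removed, the equation reduces to the inviscid Euler equations:

```latex
% Incompressible Navier-Stokes momentum equation. Dropping the viscous
% term \nu \nabla^2 u yields the (inviscid) Euler equations used for water.
\frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p
  + \underbrace{\nu\,\nabla^{2}\mathbf{u}}_{\text{dropped for water}}
  + \mathbf{g}
```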
The second case is a sand simulation, with 2,000 sand particles and 400 steps per simulation. The sand model is relatively under-trained compared to the other two due to time constraints, but we still put in 11 hours of training. As you can imagine, sand behaves like a fluid to some extent and naturally resembles a particle system. Like water, it is modeled without viscosity, both in the simulation and, to a good approximation, in reality. You can therefore observe a very splashy simulation.
The third case is a goop simulation, with 2,000 goop particles and 400 steps per simulation. Unlike the previous two cases, goop has a very high viscosity, which makes it sticky rather than splashy. We put in 20 hours of training for the goop case; we believe the long training time is due to the model having to learn very complex particle interactions, unlike in the water or sand cases.
Since we are using machine learning to solve a highly complex physical system, involving thousands of particles and millions of data entries, a long training time is to be expected. During training, and even after it, we observed apparent visual artifacts in some simulations. We show two faulty cases here.
The first faulty case is a sand simulation during training. Due to under-training, the learned model did not properly preserve the momentum of the group of sand particles on the right at the end of the simulation.
The second faulty case is a goop simulation during training. Clearly, the goop did not follow the law of gravity.
Related Work and Future Work
Using graph networks, researchers have also done similar work on cloth simulation. The triangle meshes used to model cloth consist of nodes and edges, which naturally form a graph. The researchers at DeepMind therefore applied a similar encode-process-decode scheme to triangle meshes (Pfaff et al.).
References
Anand, Rishabh. “An Illustrated Guide to Graph Neural Networks.” Medium, Dair.ai, 30 Mar. 2020, medium.com/dair-ai/an-illustrated-guide-to-graph-neural-networks-d5564a551783.
Menzli, Amal. “Graph Neural Network and Some of GNN Applications — Everything You Need to Know.” Neptune.ai, 9 Apr. 2021, neptune.ai/blog/graph-neural-network-and-some-of-gnn-applications.
Pfaff, Tobias, et al. “Learning Mesh-Based Simulation with Graph Networks.” arXiv preprint arXiv:2010.03409 (2020).
Sanchez-Gonzalez, Alvaro, et al. “Learning to Simulate Complex Physics with Graph Networks.” International Conference on Machine Learning (ICML), 2020.
Stomakhin, Alexey, et al. “A Material Point Method for Snow Simulation.” ACM Transactions on Graphics (TOG) 32.4 (2013): 1–10.