Deep Learning Helps Detect Gravitational Waves
Hunting for Black Holes with Artificial Intelligence
In 1915, Albert Einstein helped build a geometric description of gravity through his general theory of relativity. “But any new theory that aims to replace an existing (Newtonian) theory should provide experimentally verifiable predictions that are unique to it,” says Dr. Nikhil Mukund, a scientist at the Max Planck Institute for Gravitational Physics.
Albert Einstein’s theory of relativity predicted gravitational waves as ripples in space-time. These waves can help us measure properties such as distance, mass, and the spin of astronomical objects, including neutron stars and black holes.
In 1916, instruments capable of measuring the gravitational waves produced by astrophysical events, such as the collision of two stars, did not yet exist. In the following decades, scientists and engineers gradually developed the technology required to measure these elusive waves.
“In the late ’60s and early ’70s, interferometric techniques were investigated and were found to be a viable solution,” Mukund says. “If you push the right technology and suppress the different noises, you can effectively detect perturbations the size of around one ten-thousandth the diameter of a proton. And that will be sufficient to detect events such as two merging black holes or two merging neutron stars at a few hundred megaparsecs.”
Laser interferometers are devices that extract information from disturbances in the paths of laser beams. An interferometer splits a beam into two components, which travel several kilometers, reflect off mirrors, and recombine at the beam splitter, producing constructive or destructive interference. A passing gravitational wave introduces a relative phase shift between the beams, changing the final interference pattern. These minuscule perturbations indicate gravitational waves caused by phenomena such as black hole or neutron star collisions.
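The interference picture above can be sketched numerically: when two equal-amplitude beams recombine, the output intensity varies as the cosine squared of half their relative phase, so even a tiny gravitational-wave-induced phase shift moves the output away from a perfectly dark or bright port. A minimal Python illustration, with purely illustrative phase values rather than real detector parameters:

```python
import math

def recombined_intensity(delta_phi: float) -> float:
    """Normalized output intensity of two equal-amplitude beams
    recombining with relative phase delta_phi (radians):
    I = |1 + e^{i*delta_phi}|^2 / 4 = cos^2(delta_phi / 2)."""
    return math.cos(delta_phi / 2) ** 2

# Beams in phase: fully constructive interference (maximum intensity).
print(recombined_intensity(0.0))          # 1.0
# Beams out of phase by pi: fully destructive (the "dark port").
print(recombined_intensity(math.pi))      # ~0.0
# A tiny phase perturbation lifts the dark port off zero, which is
# the measurable signature of a passing wave.
print(recombined_intensity(math.pi + 1e-3))
```

The real detectors operate near the dark port precisely because a small phase shift there produces a measurable change against an almost-zero background.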
The first prototypes of interferometric gravitational wave detectors, built in the late 1960s, showed that laser interferometry could achieve the sensitivity needed to detect gravitational waves. Subsequently, there was a worldwide effort by the science and engineering community to create better interferometers. In the following decades, similar facilities were established in Massachusetts, Munich, Glasgow, and other locations. The current generation of sophisticated gravitational wave observatories comprises the advanced LIGO detectors in the United States, Virgo in Italy, GEO 600 in Germany, and KAGRA in Japan.
In September 2015, the LIGO detectors observed gravitational waves from a binary stellar-mass black hole merger 1.3 billion light-years from Earth; the LIGO Scientific Collaboration and Virgo Collaboration jointly published the detection the following year.
“This detection opened a new window to the universe. Until then, we had an electromagnetic way of looking at the universe with telescopes in multiple wavelengths ranging from radio waves to gamma rays,” Mukund says. “We found a new way to observe these exotic objects such as black holes, which have no electromagnetic signature. I would say it’s just as exciting as the invention of the optical telescope in the 1600s. We now have almost 100 confirmed detections of these sources, with many more to come.”
Detecting Black Holes with Neural Networks
Robust detection of gravitational waves is a problem that has yet to be fully solved. The signals picked up by laser interferometers are weak. And since the detectors are extremely sensitive, their output can be affected by environmental noise, such as ground vibration from urban traffic, machinery, sea tides, and seismic activity. The scientific community is constantly developing new methods to filter out the noise from the signals.
“Although the aim was to detect very sensitive astrophysical signals, in the last 50 years or so, the R&D efforts have led to the development of highly accurate sensors and actuators,” Mukund says.
Some of the developed technologies include active and passive seismic vibration isolation systems to keep optics unperturbed, low-noise laser systems, and interferometry-based optical alignment sensors.
“Although astrophysics and cosmology are the primary aim, many of the developed technologies are useful to the general community,” Mukund says.
The newer wave of efforts in the field is leveraging advances in artificial intelligence. At the Max Planck Institute for Gravitational Physics (Germany) and IUCAA (India), Mukund and his colleagues used machine learning models to filter noise from the true signals received by laser interferometers such as the LIGO interferometers. For this, the researchers defined gravitational wave detection as a supervised learning problem. The machine learning model receives the data from the laser interferometer as input and predicts whether it contains a gravitational wave signal or noise transient.
“The gravitational wave community and data analysis community have developed models to solve Einstein’s equations,” Mukund says. “We have analytical post-Newtonian, phenomenological, and numerical relativity–based methods to generate the signals from two colliding binary black holes or neutron stars.”
The researchers applied these mathematical models to synthesize the data needed to train their machine learning models. To make the data realistic, they added the kind of noise picked up by LIGO-like detectors.
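That data-synthesis step can be sketched in a few lines: generate a waveform from a signal model, add detector-like noise, and label each sample for supervised training. The toy chirp below is a crude stand-in for the post-Newtonian and numerical-relativity waveforms the researchers actually used, and the Gaussian noise is a placeholder for real detector noise:

```python
import math
import random

def toy_chirp(n: int, f0: float = 0.02, k: float = 0.0005) -> list:
    """Toy chirp: a sinusoid whose frequency sweeps upward, loosely
    mimicking the rising frequency of an inspiral waveform."""
    return [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t)) for t in range(n)]

def make_example(n: int, with_signal: bool, snr: float = 0.5, seed: int = 0):
    """Return (samples, label): noisy strain-like data plus a binary
    label saying whether a simulated signal was injected."""
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(n)]
    if with_signal:
        data = [x + snr * s for x, s in zip(noise, toy_chirp(n))]
    else:
        data = noise
    return data, int(with_signal)

# A labeled pair of training examples: one injection, one pure noise.
signal_example, label1 = make_example(512, with_signal=True)
noise_example, label0 = make_example(512, with_signal=False)
print(label1, label0)  # 1 0
```

A classifier trained on many such labeled pairs learns to answer exactly the supervised question posed above: signal or noise transient.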
Mukund used Deep Learning Toolbox™ to select and fine-tune a neural network for his application. In this case, he found that the Inception convolutional neural network was a suitable architecture for gravitational wave detection. With Deep Learning Toolbox, his team configured and retrained the Inception-v3 network on his data set to classify interferometer signals.
“We had started this work during the final year of my Ph.D. and had to experiment with different network architectures in a limited amount of time,” Mukund says. “Support for different architectures and GPU computing helped reduce the time it took to search across all these different models.”
“It was truly fascinating,” Mukund says. “Not only did the trained network recover all the previously reported GW events, but it also helped us to claim the detection of a previously missed binary black hole event.”
Tackling the Control Problem
One of the major challenges of laser interferometry is controlling the optics, a challenge that becomes even more acute in gravitational wave detection. Unlike simple Michelson interferometers, the instruments at labs such as LIGO and GEO 600 contain dozens of mirrors, all of which must be controlled, aligned, and maintained.
“There are multiple degrees of freedom, and we need to minimize the jitter even before we start the experiments,” Mukund says.
To adjust the mirrors, scientists establish hundreds of control loops that run in parallel, with extra care taken to minimize the cross-couplings between all the optics.
“It’s a very complicated optomechanical control systems problem,” Mukund says. “The way we have been doing it until now requires very experienced control systems engineers and scientists.”
Previously, engineers and scientists would study the system, decouple the different degrees of freedom, and choose the best control filters based on their knowledge. They would periodically reassess the results of their control systems and modify them.
“It has been done in a distributed, human knowledge–based way using linear control systems,” Mukund says. “However, we have a nonlinear system, and we had to use classical control theory, linearize these systems through a model, and then decide the best filter. But we see that there are a lot of cross-couplings, and linear control is not the optimal way to adjust the optics.”
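The classical approach described above, linearizing the system and hand-designing a filter per degree of freedom, can be illustrated with a single proportional-integral loop steering a toy linear plant. The gains and plant dynamics here are illustrative, not actual GEO 600 control filters:

```python
def run_loop(setpoint: float, steps: int, kp: float = 0.4, ki: float = 0.1) -> float:
    """Discrete-time PI feedback loop: drive one mirror 'angle'
    toward a setpoint using a hand-tuned linear controller."""
    angle = 0.0        # plant state: current mirror angle
    integral = 0.0     # accumulated error for the integral term
    for _ in range(steps):
        error = setpoint - angle
        integral += error
        control = kp * error + ki * integral
        # Simple linear plant: the actuator moves the mirror by a
        # fraction of the commanded control signal each step.
        angle += 0.5 * control
    return angle

final = run_loop(setpoint=1.0, steps=200)
print(final)  # converges close to 1.0
```

One such loop per degree of freedom is tractable; the difficulty Mukund describes comes from running hundreds of them in parallel on a plant that is actually nonlinear and cross-coupled, which is what motivates the data-driven alternative below.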
Recently, Mukund has been working on tackling the control problem through reinforcement learning.
“We are trying to assess if we can build a data-driven control strategy that can perform at the level of human-engineered filters and maybe even go beyond,” Mukund says.
Reinforcement learning differs from other machine learning methods because it is based on actions, states, and rewards. A reinforcement learning agent is presented with an environment and a set of actions. It must learn to take sequences of actions that result in the optimal state and maximize its reward.
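The action-state-reward loop can be made concrete with a toy version of the alignment task: the state is a discretized misalignment, the actions nudge a mirror left or right, and the reward penalizes residual misalignment. This tabular Q-learning sketch only illustrates the reinforcement learning loop itself; it is not the soft actor-critic agent the team actually used:

```python
import random

ACTIONS = [-1, 0, 1]          # nudge mirror left, hold, or nudge right
STATES = range(-3, 4)         # discretized misalignment
rng = random.Random(0)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state: int, action: int):
    """Environment: the action shifts the misalignment; the reward
    penalizes how far the mirror remains from perfect alignment."""
    nxt = max(-3, min(3, state + action))
    return nxt, -abs(nxt)

for episode in range(500):
    s = rng.choice(list(STATES))
    for _ in range(10):
        # Epsilon-greedy: mostly exploit the Q-table, sometimes explore.
        if rng.random() < 0.2:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        # Standard Q-learning update with learning rate 0.1, discount 0.9.
        Q[(s, a)] += 0.1 * (r + 0.9 * best_next - Q[(s, a)])
        s = nxt

# The learned policy steers any misalignment back toward zero.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

The same loop structure carries over to the real problem; what changes is that the state comes from a neural sensor, the actions are continuous actuator commands, and the agent is far more sophisticated.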
“The first step in designing the reinforcement learning control strategy is to sense the misalignments,” Mukund says. “The traditional way to measure this is to use wavefront sensors. Instead, we used a neural sensor composed of convolutional and LSTM layers, which measures misalignments by analyzing the video from cameras placed in the optomechanical layout.”
The sensing neural network, created using Deep Learning Toolbox, became the component that determines the state of the environment. The action space was the adjustments made to the optics. And the reward function was designed based on the principles of classic control theory to optimize for actions that improve mirror alignment.
“The nice thing about this reward function was that I could automatically generate part of it using Simulink Design Optimization™,” Mukund says. “We gave it our model specifications, such as bandwidth, design performance, robustness, and speed. Simulink Design Optimization is nicely integrated with Reinforcement Learning Toolbox™, and it was able to generate a template reward to which we added our physics knowledge.”
“One of the benefits of MATLAB® was that I could iterate across different reinforcement learning algorithms quickly, which greatly reduced the time it took to reach the final choice,” Mukund says. “I determined the soft actor-critic algorithm was the optimal solution for the control problem.”
Modeling the Environment
The team faced another challenge in training the model. Reinforcement learning agents require many episodes of trial and error to learn the dynamics and optimal policy of their environment. Initially, the agent takes random actions to explore its environment and learn the different reward and punishment schemes. Gradually, it converges on optimal action sequences. However, this is often not an option for applications that require interactions with physical equipment and the real world.
“Reinforcement learning is very sensitive to the number of training episodes,” Mukund says. “We cannot use our sensitive GEO 600 interferometer for live-action training, because it takes time and could damage the equipment during the initial exploration phase.”
To overcome this challenge, Mukund and his team adopted Model-Based Design. They created a simulation environment in Simulink® based on the measurements they had obtained from the physical system using System Identification Toolbox™. They trained their reinforcement learning agents on the Simulink model, removing the need to interact with the physical equipment.
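That workflow can be sketched: record input/output data from the real system, fit a dynamical model to it, and then use the identified model as a safe stand-in environment for training. The first-order plant and hand-rolled least-squares fit below are a minimal Python analogue of what System Identification Toolbox does at far greater fidelity:

```python
import random

# Hypothetical "true" plant, unknown to the modeler:
# y[k+1] = 0.9 * y[k] + 0.2 * u[k]  (plus measurement noise)
rng = random.Random(1)
A_TRUE, B_TRUE = 0.9, 0.2

# 1. Record excitation data from the "physical" system.
us, ys = [], [0.0]
for k in range(500):
    u = rng.uniform(-1, 1)
    us.append(u)
    ys.append(A_TRUE * ys[-1] + B_TRUE * u + rng.gauss(0, 0.01))

# 2. Fit y[k+1] = a*y[k] + b*u[k] by least squares, solving the
#    two-parameter normal equations directly.
Syy = sum(y * y for y in ys[:-1])
Suu = sum(u * u for u in us)
Syu = sum(y * u for y, u in zip(ys[:-1], us))
Sy1y = sum(y1 * y for y1, y in zip(ys[1:], ys[:-1]))
Sy1u = sum(y1 * u for y1, u in zip(ys[1:], us))
det = Syy * Suu - Syu * Syu
a_hat = (Sy1y * Suu - Sy1u * Syu) / det
b_hat = (Syy * Sy1u - Syu * Sy1y) / det

# 3. The identified model now serves as a simulated environment in
#    which an agent can train without touching the real hardware.
def simulated_plant(y: float, u: float) -> float:
    return a_hat * y + b_hat * u

print(round(a_hat, 2), round(b_hat, 2))  # close to 0.9 and 0.2
```

Training against the identified model is what makes the exploration phase safe: the agent's early, erratic actions hit a simulation rather than a sensitive interferometer.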
“We are testing an ensemble of reinforcement learning models, which uses multiple agents training with different configurations. Our model-based reinforcement learning design was very helpful because we could test different settings in the simulated environment and capture and solve problems and oscillations before deploying the learned model on the equipment,” Mukund says.
Their success in using reinforcement learning at GEO 600 is the first-ever implementation of neural network–based alignment sensing and control at a gravitational-wave detector. With the improved sensitivity and enhanced duty cycle witnessed with this scheme, Mukund sees a very bright future for the use of artificial intelligence (AI) in detecting gravitational waves.
“AI is a very interesting and fast-growing field, and people in the gravitational waves community are excited about it,” Mukund says. “In the past five years, we have used deep learning extensively for data analysis. We are now at the stage where we want to use reinforcement learning in the control part of our work. The technology still must stand up to the hype that surrounds it. But I see a lot of promise here. The next generation of gravitational wave detectors, like the Einstein Telescope and the Cosmic Explorer, will be much more sophisticated. Artificial intelligence, and reinforcement learning in particular, will be indispensable tools for tackling these challenges.”