An extrasolar planet, or exoplanet, is a planet that orbits a star other than the Sun. Since the first confirmed detection of an exoplanet in 1992, 3,607 exoplanets have been discovered in 2,701 confirmed planetary systems as of 1 April 2017.
On 20 April, the MEarth Project announced the discovery of a potentially habitable “super-Earth” that could be, to date, the most likely exoplanet to host alien life.
The discovery of exoplanets has fundamentally transformed our understanding of planets, solar systems and our place in the galaxy. With the vast number of planets discovered in recent years, characterising them is rapidly becoming a big data problem. As a solution, machine learning techniques that mimic human recognition and dreaming processes are being deployed in the search for habitable worlds beyond our solar system.
Ingo Waldmann, Senior Research Scientist at UCL, specialises in this area, creating models of non-linear Bayesian inverse problems and deep learning applied to the atmospheric physics of extrasolar planets and solar system objects. Together with other astronomers at UCL, he has developed a deep belief neural network called RobERt (Robotic Exoplanet Recognition) to sift through detections of light emanating from distant planetary systems and retrieve spectral information about the gases present in exoplanet atmospheres.
At the Machine Intelligence Summit in Amsterdam, Ingo will explore how deep learning can be used to rapidly characterise the chemistry and prevailing weather patterns of these exoplanets. I spoke to him ahead of the summit on 28–29 June to learn more about the recent uptake of deep learning in this field.
Please tell us a bit more about your work.
I am working on characterising the atmospheres of exoplanets, i.e. planets orbiting other stars. When a planet passes in front of the host star in our line of sight, some of the stellar light filters through the thin atmospheric annulus of the planet. By observing and analysing this light, we can work out the chemistry of the planet, search for signs of habitability and even determine the prevailing weather patterns. We currently use space-based telescopes such as Hubble and Spitzer but are also in the process of designing the next generation of space observatories dedicated to exoplanet atmospheric characterisation.
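The transit geometry Ingo describes can be put into rough numbers: the extra dimming contributed by the thin atmospheric annulus scales with the planet's radius and its atmospheric scale height, and is typically far smaller than the transit depth itself. A minimal back-of-the-envelope sketch in Python, where all planet and star parameters are illustrative assumptions rather than values from the interview:

```python
# Rough estimate of a transmission-spectroscopy signal for a
# hot-Jupiter-like planet transiting a Sun-like star.
# All inputs below are illustrative assumptions.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def scale_height(T, mu_kg, g):
    """Atmospheric scale height H = k_B * T / (mu * g), in metres."""
    return K_B * T / (mu_kg * g)

def transit_depth(r_planet, r_star):
    """Fractional dimming when the planet blocks part of the stellar disc."""
    return (r_planet / r_star) ** 2

def atmosphere_signal(r_planet, r_star, H, n_scale_heights=5):
    """Extra transit depth from an annulus ~n scale heights thick."""
    return 2 * n_scale_heights * r_planet * H / r_star ** 2

r_star = 6.957e8       # solar radius, m
r_planet = 7.1e7       # roughly a Jupiter radius, m
g = 25.0               # surface gravity, m/s^2 (assumed)
T = 1200.0             # equilibrium temperature, K (assumed)
mu = 2.3 * 1.6605e-27  # mean molecular mass, H2-dominated atmosphere, kg

H = scale_height(T, mu, g)
print(f"transit depth:      {transit_depth(r_planet, r_star):.4%}")
print(f"atmospheric signal: {atmosphere_signal(r_planet, r_star, H):.4%}")
```

For these assumed values, the planet itself dims the star by about one percent, while the atmospheric contribution is only a few hundredths of a percent, which is why such measurements sit at the limit of instrument feasibility.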
What do you feel has enabled the recent uptake of using deep learning for the characterization of exoplanets?
Deep learning can help with the characterisation of exoplanets in two ways: on one hand, we can mine fainter signals by optimising our instrument calibrations; on the other, we can classify planetary chemistry in a fraction of the time needed by more classical algorithms.
Observing the tell-tale signs of planetary atmospheres that are often hundreds of light years away is at the very limit of instrument feasibility. All too often we find the planetary signal to be of the same amplitude as the instrument noise. Here deep learning can help by ‘learning’ the optimal instrument calibration strategy to mine for those weak signals. As we push for ever fainter signals, we are simultaneously entering an era of dedicated all-sky surveys.
Future space missions, such as the NASA/ESA James Webb Space Telescope (JWST) or the ESA Ariel mission, will provide us with thousands of exoplanet measurements: far too many for conventional modelling approaches to cope with in the short 3–5 year mission lifetimes. Here deep learning can replace much of the atmospheric modelling at a fraction of the time and enable much more responsive mission scheduling than otherwise possible.
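To make the classification idea concrete, here is a toy sketch, which is not RobERt (a deep belief network) but a deliberately minimal stand-in: synthetic transmission spectra each carry one Gaussian absorption feature at one of three invented "molecule" positions, and a single-layer softmax classifier trained by gradient descent learns to identify which molecule imprinted the feature. Every spectrum, wavelength position and parameter here is fabricated for demonstration:

```python
import numpy as np

# Toy demonstration of classifying which synthetic "molecule" imprinted
# an absorption feature on a noisy spectrum. All data are invented.
rng = np.random.default_rng(0)

N_BINS = 60              # spectral bins
CENTRES = [12, 30, 48]   # feature positions for 3 fake molecules

def make_spectrum(label, noise=0.05):
    """Flat continuum + one Gaussian absorption line + Gaussian noise."""
    x = np.arange(N_BINS)
    line = 0.5 * np.exp(-0.5 * ((x - CENTRES[label]) / 3.0) ** 2)
    return 1.0 - line + rng.normal(0.0, noise, N_BINS)

# Generate a labelled training set of noisy spectra.
labels = rng.integers(0, 3, size=600)
X = np.stack([make_spectrum(l) for l in labels])

# Train a softmax classifier by batch gradient descent.
W = np.zeros((N_BINS, 3))
onehot = np.eye(3)[labels]
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X.T @ (p - onehot) / len(X)

# Evaluate on fresh noisy spectra the model has never seen.
test_labels = rng.integers(0, 3, size=200)
X_test = np.stack([make_spectrum(l) for l in test_labels])
pred = (X_test @ W).argmax(axis=1)
print("accuracy:", (pred == test_labels).mean())
```

Once trained, classifying a new spectrum is a single matrix multiplication, which is the speed advantage over running a full forward atmospheric model for every observation; real systems like RobERt apply the same principle with far deeper networks and realistic spectra.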
What present or potential future applications of deep learning excite you most?
In the next decade, deep learning will help us to map and understand complex correlations in our galactic population of planets and constrain possible planet formation and evolution scenarios. By characterising thousands of planetary systems, we can put our own solar system into the galactic context. In the present and near-term future, we can use deep learning to better understand our own solar system planets. Space missions such as the Cassini-Huygens mission, Venus Express and various Mars missions have mapped our solar system bodies in exquisite detail. These are huge archives of often unseen data. Using deep learning, we are working towards algorithms that can effectively map chemical compositions of entire planetary surfaces or search for weather patterns in the Martian atmosphere.
Which industries do you feel will be most disrupted by the characterization of exoplanets with AI in the future?
Many fields of astronomy, astrophysics and planetary science will change beyond recognition in the next decade. Large scale surveys ranging from stellar (the Gaia space mission), to extra-galactic (e.g. Large Synoptic Survey Telescope) to exoplanets (the JWST and Ariel space-missions) force us to rapidly re-think our data analysis strategies.
I believe big data will be the driving theme in modern astronomy, and with expected data volumes in the hundreds of petabytes, deep learning algorithms will become essential for classification and for mapping the intrinsic correlations in that data.
What developments can we expect to see in deep learning in the next 5 years?
From an astronomer’s perspective, I hope to see nearly fully autonomous surveys with space and ground-based facilities. Deep learning will allow these surveys to instantaneously respond to new data in ways that are not possible with ‘manual’ data analysis by astronomers. First attempts towards these goals have been made, but at the moment the interaction between the astronomy and machine learning communities remains minimal. That said, I am confident that deep learning will play a central role in scientific data exploitation in the future.
Ingo Waldmann will be speaking at the Machine Intelligence Summit, taking place alongside the Machine Intelligence in Autonomous Vehicles Summit in Amsterdam on 28–29 June. Meet with and learn from leading experts about how AI will impact transport, manufacturing, healthcare, retail and more.
Other confirmed speakers include Roland Vollgraf, Research Lead, Zalando Research; Neal Lathia, Senior Data Scientist, Skyscanner; Alexandros Karatzoglou, Scientific Director, Telefónica; Sven Behnke, Head of Autonomous Intelligent Systems Group, University of Bonn; and Damian Borth, Director of the Deep Learning Competence Center, DFKI. View more speakers and topics here.
Opinions expressed in this interview may not represent the views of RE•WORK. Some opinions may even go against the views of RE•WORK, but are posted to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.