I rewrote your article. It is now factual and accurate, but unfortunately not nearly as clickbaitable. It still does not really answer the question posed by the title, but the answer (nothing; it has already happened many times and will continue to) is boring anyway.

What if Computers Discover New Planets?

[Image: The Milky Way]

On Dec 14th 2017 NASA hosted a conference to announce the latest discovery made by its Kepler space telescope, which searches for planets similar to Earth. Kepler was launched on March 7th 2009 and has since acquired huge amounts of data, which is why it is considered a statistical mission. Analyzing these measurements requires automated processes that can be tuned against, and compared to, what astrophysicists actually consider "Earth-like planets."

Christopher J. Shallue, a software engineer at Google Brain, and Andrew Vanderburg, a physicist in the Department of Astronomy at the University of Texas, used computers programmed with algorithms built from sophisticated mathematical and statistical tools to distinguish Kepler signals of planets from false positives. First they had the computer process many large raw data sets from known celestial bodies through these algorithms to generate baseline data for later comparisons; then they set it to work on the remaining raw data, where their model identified two new planets with high confidence.
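The workflow described above (compute a baseline from labelled signals, then score the unlabelled ones against it) can be sketched in miniature. This is not the Shallue and Vanderburg pipeline; it is a toy nearest-centroid classifier with entirely made-up feature values, just to show the shape of the train-then-classify process:

```python
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(signal, planet_centroid, fp_centroid):
    """Label a signal by whichever baseline centroid it lies closer to."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    if dist(signal, planet_centroid) < dist(signal, fp_centroid):
        return "planet"
    return "false positive"

# Toy features (transit depth, transit duration) -- hypothetical numbers.
known_planets = [(0.9, 0.8), (1.0, 0.9), (0.8, 1.0)]
false_positives = [(0.1, 0.2), (0.2, 0.1), (0.3, 0.3)]

# Step 1: generate baseline data from known celestial bodies.
p_c = centroid(known_planets)
f_c = centroid(false_positives)

# Step 2: score a previously unseen signal against the baselines.
print(classify((0.85, 0.95), p_c, f_c))
```

The real work used far richer light-curve representations and a far more capable model, but the two-phase structure (fit on the known, score the unknown) is the same.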

[Image: Artistic representation of the system of the new planets. Credits: NASA, Wendy Stenzel]

Machine learning is a logical contradiction and a logical impossibility: machines can't learn. Computers programmed with sophisticated algorithms can recognize specific situations as determined by their code, but they need input data first and do not evolve, because they are machines. Deep learning, meanwhile, is a made-up term that means nothing but sounds cool. It is not learning either, but some computers are programmed with algorithms that are adaptive in the sense that certain input data can affect the output data via feedback loops (inhibition or promotion). Some computers can apply these feedback-loop algorithms only after all the input data has been analyzed (at clausula), while others can apply them simultaneously with the data analysis (at simul). The second type of computer is capable of extracting general characteristics from a limited set of examples and then using them to interpret other situations. For this reason, computers capable of at simul analysis are especially useful when there is so much input data that analyzing it to completion before applying any feedback-loop algorithm would take an extremely long time.

This wasn't the first time computer science helped produce new discoveries: Carlo Enrico Petrillo, a physicist at the University of Groningen, and his team published in Monthly Notices of the Royal Astronomical Society their work using a neural network to identify 56 new gravitational lens candidates.

[Image: Two of the gravitational lenses discovered. Credits: Carlo Enrico Petrillo / University of Groningen]

Undoubtedly a new way of doing research is on display here: computers are entering analysis processes like never before, helping scientists handle huge amounts of data they could not have handled at all otherwise. Artificial intelligence had nothing to do with any of this work, which is a good thing, since it does not exist and may never. It does, however, generate a lot of clicks. I probably should have put AI in the title of my post somewhere. Oh well, maybe next time.


Written by

Research scientist (Ph.D. micro/mol biology), Thought middle manager, Everyday junglist, Selecta (Ret.), Boulderer, Cat lover, Fish hater
