I already responded to the article linked below, but as I thought more about that response I felt it needed some expansion and clarification.
No, Machine Learning is not just glorified Statistics
As I say in my too-clever-by-half title, I am prepared to accept the author’s conclusion that machine learning is not just glorified statistics, or even, as I cheekily noted in my original response, glorified statistics plus advanced (not really that advanced) mathematics, as long as he and other proponents of machine learning will concede that no machines are learning anything when this form of modern computing is employed. They must be willing to finally admit that machines cannot in fact learn, that they are logically incapable of learning, and that if one ever did learn something it would no longer be a machine. Indeed, by virtue of merely having the capability to learn (irrespective of whether it actually learned anything), it would no longer be a machine. Only of a (mostly) whole human person (and some non-human animals) with a (mostly) fully functional nervous system can we say that it is capable of learning.

Machine learning proponents have dug their own hole by insisting on calling something which is arguably only slightly more interesting than any other form of modern computing by a term which cannot, logically cannot, apply to it. Now they are stuck in the same trap the neuroscientists fell into when they began assigning cognitive states to the brain that can only logically be ascribed to (mostly) whole human persons with (mostly) fully functioning nervous systems. M.R. Bennett and P.M.S. Hacker dubbed this the mereological fallacy. I have dubbed the analogous situation which now ensnares machine learning, and also (in part) AI, the compulogical fallacy.
All that blather aside, it seems a simple enough trade-off to me. What about for you, machine learning advocates? Are you willing to make that trade? All you would need to do is come up with a new name. Luckily for you, I have been cogitating for a long time on a possible substitute for machine learning, one that conveys the meaning I think most people want to convey when they use the term and is not a logical contradiction. The best I have come up with so far is
neuroarchitecture-inspired computing (NiC).
I realize it does not exactly roll off the tongue, but it does have the advantage of forming a neat, memorable three-letter acronym. Still, it is less precise than I would like. “Neuroarchitecture,” for example, does not convey the fact that most people use the term machine learning to suggest learning the way a human does, with a human brain; “neuro” could refer to the brain or nervous system of any animal.

It is also too strong, in the sense that it implies our understanding of how the brain works is settled science, when this is far from the case. That is another of my beefs with machine learning: the term suggests we know enough about how learning works in ourselves that we have now built machines that can replicate the process. This is most definitely not the case. How the brain (brains can’t learn, but forgive me this offense against logic)/a person learns is still hotly debated and far from settled. There are many theories, but none is without controversy or critics. This is exactly why the inclusion of the word “inspired” is so important in the term NiC. It conveys a hint of uncertainty, which I think is entirely appropriate given how little we know about how brains/people actually learn.

“Architecture” has problems too, as it implies a hardware component to machine learning that is as important as the coding/software component. In the case of some artificial neural networks this may be true, but it is certainly not always the case.

Overall I would rate the term acceptable but not great. Any takers?