The Compulogical Fallacy

With Thanks and Apologies to M.R. Bennett and P.M.S. Hacker

Logic gates. Ironic, isn't it?

In their classic work, The Philosophical Foundations of Neuroscience, M.R. Bennett and P.M.S. Hacker (BH) gave the name mereological fallacy to the logical disorder at the heart of much neuroscientific thought at the time. Then, and sadly still to this day, neuroscientists commonly assign various cognitive attributes to the brain that can only logically be attributed to a whole human being: having memories, desiring things, seeing, tasting, judging, evaluating, and so on. Their intent was to expose the logical contradictions that arise as a result, and in my view they were quite successful in that endeavor.

Today another field, technology/computer science, is falling into the same trap that befell, and continues to befall, neuroscientists. In a fashion analogous to the mereological fallacy, the computer sciences assign various cognitive attributes to computers that can only logically be assigned to human persons and some (non-human) animals. I have dubbed this the compulogical fallacy in honor of BH's work. Table 1 compares the two fallacies.

Table 1: Mereological fallacy vs. Compulogical fallacy

In essence, the compulogical fallacy describes the logical contradictions that arise when we apply characteristics, behaviors, attributes, skills, or abilities to machines and computers that can only rightly be applied to human beings and some (non-human) animals. The term machine learning is one of the most oft cited (by me) examples of this fallacy. The two words, each by its very definition, when combined in that order yield a term that is a logical contradiction and name something logically impossible: a learning machine. A machine cannot learn, for if it did it would no longer be a machine. The same could be said of any computer (machine) and intelligence. A truly intelligent computer/machine, were it someday possible to create one, or were it to be "born" or to "emerge," would no longer be a computer/machine but something else entirely, something neither human nor machine.

No one approach to this problem works best, but there are at least three viable solutions. One could redefine the words in the term, or one could argue that the act of creating the term somehow changes the meanings of the words of which it is composed. A much easier solution would be to drop the term machine learning and replace it with something that is actually descriptive and logically coherent. Any of these could be acceptable, though the first two come with a host of problems. The first would be the most difficult, as each word's meaning has been fixed in the English lexicon, with its standard/accepted definition, for over 100 years. The second has similar problems, and arguably another: word/term mutations of the sort described are rarely successful and typically fail to catch hold with the general public. The last would be the most appropriate and the easiest, though there seems to be very little chance of it ever happening, as the natural-law-offending term has been in use for so long. Instead, the proponents of machine learning have selected none of the above and continue to insist on using an absurd term (without any acknowledgment of its absurdity) to describe something they believe is a foundational field, critically important to many aspects of modern computing.

Written by

Research scientist (Ph.D. micro/mol biology), Thought middle manager, Everyday junglist, Selecta (Ret.), Boulderer, Cat lover, Fish hater
