Exactly like machines and algorithms, neural networks cannot learn, nor can they forget. To suggest that they can is to commit two logical fallacies: (a form of) the mereological fallacy and the compulogical fallacy. In the first case, suggesting that a neural network can learn is exactly as wrong as suggesting the same thing of a brain, which is also not capable of learning. Only of a whole human person (and some non-human animals) with a (mostly) fully functioning nervous system, including a brain, can we say that they are capable of learning. To say otherwise is to commit the mereological fallacy: ascribing traits, characteristics, attributes, etc. to a part of a thing that can logically be ascribed only to the whole thing. In an analogous fashion, the same case can be made for an artificial neural network. Interesting that I am using an analogy as part of my argument here.

The second logical fallacy is related to the first but is more specific: it is the logical crime committed when attributes, capabilities, etc. that can logically be assigned only to human persons (and some non-human animals) are instead assigned to machines or computers. Examples include feeling, remembering, forgetting, and learning. Simply put, if a machine (or, in this case, a part of a machine, an ANN) were capable of any of these things, it would no longer be a machine.

