…rtificial neural networks can now avoid catastrophic forgetting, which was a major stumbling block. Previously, when a neural network was trained on a new task, it was either too malleable, learning the new task while forgetting the old one, or it was too rigid, remembering the first task while never learning the second. This advancement is an important step toward transfer learning, yet it only partitions new tasks i…
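As background for the quoted claim, the plasticity/rigidity tradeoff it describes can be illustrated with a toy sketch (my own illustrative code, not the method from the reported research): a one-weight linear model is fitted to task A, then to task B, after which its error on task A grows back, which is the behavior usually labeled "catastrophic forgetting."

```python
# Toy illustration of "catastrophic forgetting" (hypothetical sketch,
# not the reported research's method): one weight, plain gradient descent.

def train(w, data, lr=0.1, steps=200):
    """Fit y = w * x to (x, y) pairs by gradient descent on squared error."""
    for _ in range(steps):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

def loss(w, data):
    """Mean squared error of y = w * x on the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 1.0), (2.0, 2.0)]    # task A: y = x
task_b = [(1.0, -1.0), (2.0, -2.0)]  # task B: y = -x

w = 0.0
w = train(w, task_a)
loss_a_before = loss(w, task_a)      # near zero after training on A

w = train(w, task_b)                 # the single weight is overwritten by B
loss_a_after = loss(w, task_a)       # error on task A has returned

print(loss_a_before < 1e-6, loss_a_after > 1.0)  # prints: True True
```

The model is "too malleable" in exactly the quoted sense: because the same parameter must serve both tasks, fitting task B erases the fit to task A.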
Like all machines and algorithms, neural networks can neither learn nor forget. To suggest that they can is to commit two logical fallacies: (a form of) the mereological fallacy and the compulogical fallacy. In the first case, saying that a neural network can learn is exactly as mistaken as saying the same of a brain, which is likewise incapable of learning. Only of a whole human person (and some non-human animals) with a (mostly) fully functioning nervous system, including a brain, can we say that he or she is capable of learning. To say otherwise is to commit the mereological fallacy: assigning traits, characteristics, attributes, etc. to a part of a thing that can logically be applied only to the whole thing. An analogous case can be made for an artificial neural network. It is interesting that I am using an analogy as part of my argument here.
The second logical fallacy is related to the first but is more specific: it is the logical crime committed when attributes, capabilities, etc. that can logically be assigned only to human persons and some non-human animals are instead assigned to machines and computers. Examples include feeling, remembering, forgetting, learning, etc. Simply put, if a machine (or, in this case, a part of a machine, an ANN) were capable of any of these things, it would no longer be a machine.