When someone builds something modeled on something else, I think most people assume the something else being used as a model is well understood and well characterized. ANNs are often pitched as being "modeled on the human brain," but only rarely as a "model of the human brain." Neither is accurate, though in my judgement the first is the larger offense against the accepted definitions of the word model (at least as it is defined for the biological sciences). The two main uses/definitions of the word model are: 1. a representation of something, often idealized or modified to make it conceptually easier to understand, and 2. something to be imitated.

Clearly, ANNs are not described as representations of the human brain meant to make it (the human brain) easier to understand. They are almost always sold as 'imitations' of the human brain, specifically imitations of some of its functions: substitutes for the human brain, or for some of its functions, to put it another way. That they are poor substitutes is a topic for another post; my quibble in this discussion is with the very idea of 'imitation' as it relates to the structure and functions of the human brain.

This brings me right back to where I began: in order to imitate something (to model it), one must know what the thing is one is imitating (modeling). I cannot imitate an ape if I have never seen an ape and do not know how it looks and acts. I can attempt to imitate it, but I will never know whether my attempts have succeeded, because I have no fixed reference against which to judge the accuracy of my imitation. In the case of ANNs and brains, it is not that no one has ever seen a brain, nor that we do not (sort of) know what it does; rather, it is that the specific mechanisms and structures of the brain, and how they relate to function, are only theoretical.
You can 'model' something on a theoretical system as much as you like, but the accuracy of your model with respect to the actual thing can never be assessed. You are therefore not justified in making any claims about how the performance of your model relates to the performance of the actual thing.
Theories abound attempting to describe the structure of the various neural networks in the brain: how they are organized, how they function, and ultimately how (or whether) they matter to human intelligence, consciousness, or a million other more mundane processes. It is still not even "proven" that such networks exist, let alone that they are the key structures that should be the focal point of any model attempting to mimic how the human brain works. Each of these theories has a viable claim to some level of "correctness" and corresponding data to support it. None is completely accurate, and no doubt most are mostly wrong.
Given this brute fact, it is simply impossible for anyone to claim that an artificial neural network works anything like a network of neurons in the human brain. This is obviously true because, to put it simply, it is still not known how networks of neurons function, or how they are structured, in the human brain. Depending on the whims of the particular programmers/engineers designing any given artificial neural network, they might select any of fifty competing theories of neuronal structure and function to model. Mostly, because they are ignorant of the complexities of biology and neuroscience, they will select the most tried-and-true, easiest, most previously used approaches. These will be incorrect, and will produce the exact same non-intelligent, non-learning machines we have been producing since day one of this ridiculous quest for artificial intelligence back in the 1950s. Wasting time attempting to design, develop, or create an artificial intelligence is absolute nonsense when we have yet to understand actual, natural intelligence.
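To make the "tried-and-true, easiest" point concrete, here is a minimal sketch of the standard artificial "neuron" used in most ANNs since the 1950s-era perceptron: a weighted sum of inputs passed through a squashing function. The specific numbers are illustrative, not taken from any real system; the point is how little of a biological neuron survives in this abstraction.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A textbook artificial 'neuron': weighted sum plus a nonlinearity.

    This is the entire mechanism. There are no dendrites, ion channels,
    neurotransmitters, or spike timing here -- just arithmetic.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic squashing to (0, 1)

# A biological neuron integrates thousands of time-varying spiking inputs;
# this function maps three numbers to one number. (Values are made up.)
output = artificial_neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.2], bias=0.05)
```

Whatever one thinks of ANNs on their own terms, this unit is a design convenience, not a claim derived from any settled theory of how real neurons work.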