More on the Importance of Precision in Language

Thank you Andrea Sonda and Unsplash for another gorgeous photo

Many of us in the deep learning community know that the major models of deep learning, i.e. convolutional neural nets, LSTM recurrent neural nets, and neural networks in general, have existed since the '90s. It is only now, with the data (thanks to the Internet) and the computational power, that we are able to see deep learning making an impact on our daily lives.

It is great that many of you recognize that so-called "deep" learning models have been around for almost three decades now. Why is it so difficult, then, to accept that machines can't learn, and that 'deep' is nothing more than an important-sounding but ultimately meaningless signifier, a term invented to make something that is really just a (slightly) different form of the actual thing seem more important and special than it is? Alternatively, it can serve as a distraction from the fact that the thing it modifies is not a real thing at all, as in the case of machine learning, that logical contradiction. It is interesting that all of the things you mention have existed since the '90s, because computers and computing have too. Today, computers still exist and they are still computing, albeit in different, more interesting, and more complicated ways. What they are most definitely not doing is learning. That is something else they have not been doing since the '90s.

The so-called "AI" revolution is really nothing more than the evolution of modern computing. Like machines that "learn," machines that are "intelligent" have never existed and do not exist now. This takes nothing away from the incredible advances modern computing has made, or from the impact it has had, and will continue to have, on all our lives. It is only to point out that all of those advances are trivialized and distracted from when language is constantly abused. In the name of what? I used to think it was all about the hype these terms always engender. Writing an article about powerful computers doing some interesting new thing might get some attention, but if those boring old normal powerful computers are 'AI' instead, watch the page views grow before your very eyes. The same could be said for that grant application you have been slaving away on for the past year. Of course your work is novel and could potentially pave the way for the next generation of computers, but if your work paves the way for the next generation of machines that learn, all of a sudden your funding chances have gotten a whole lot brighter.

Maybe at one time it was mostly about the hype, but I think it has now become so commonplace to talk about machines that learn and are intelligent that the people who say these things have simply 'forgotten' what the words actually mean. Importantly, however, the vast majority of people have not, and still believe the ordinary, everyday meanings continue to apply. This dichotomy of understanding and belief has important implications and may have grave and unforeseen consequences.

Language does evolve and grow, and words that once meant one thing come to mean another. That is normal and natural, and it is mostly healthy and productive. Typically, however, words evolve new meanings, or new words are invented, in order to clarify confusion, to improve our understanding of the things the words now describe. In the case of 'AI', 'machine learning', and 'deep {insert tech thing}' there is no clarification, only obfuscation and distraction. Either one accepts the current meanings of learning and intelligence, as I do, in which case there is no such thing as AI or machine learning, or one believes these words now have new meanings. Everyone is free to make this choice, and it seems that most people for whom these things matter have chosen the second option. There is, however, a third position, and it is the one held by the vast majority of people alive on the planet today. It is a confused and illogical position, because the people who hold it still believe in the ordinary meanings of the terms 'learning' and 'intelligence', yet they also believe computers and machines are capable of these things. Moreover, they believe that computers and machines that are intelligent and can learn actually exist.

And so what is the harm in that? What does it really matter how we define these words, how we use them, or who believes which meaning applies when, and to what? The problem is that although the technorati and other digitally literate among us understand fully what is meant when we say machines "learn" or a computer is "intelligent," the average, everyday person is still operating under the assumption that the ordinary, everyday meanings apply (my third position above). So they begin to fear these machines, and the people who build and work with them. They fear them because they feel they are on the cusp of becoming even more marginalized than they already are. Never forget that these people outnumber the techno-literate by many orders of magnitude. It is ironic that what began as a way of building up interest and excitement in technology and computers, and has succeeded wildly in doing so, is now the same thing that may one day spell their demise, at the hands of people made afraid of things that do not even exist.
