We Have a Serious Misunderstanding About Understanding
Defining Things in Terms Machines Can Understand
You start by acknowledging that a machine is incapable of understanding anything: it is a machine, and it cannot learn, acquire knowledge, or understand. Then you do exactly what you do when you program any computer. Determine the set of rules (in this case the "robotic rules of the road," as the authors of the article not at all cleverly put it) that you would like the machine to follow, type a string of gibberish (it is not gibberish, of course; it only looks that way to anyone unacquainted with programming languages) into an editor, run the program, debug with more gibberish, and repeat until the machine follows those rules without error. The set of rules can be continually adjusted over time, either by humans or by the machine itself in ways previously defined by humans and within mathematical and statistical boundaries set by humans, until the outcome is a program that produces "safe" driving.

The problem is not defining rules the machine can "understand." The problem is defining rules that we understand, and then translating that understanding into algorithms via programming languages. The machine's capacity to understand never changes (it has none); it simply executes the program we give it. And in case you were thinking it, modern computers are no different from their ancient brethren in this regard. No matter how clever the mathematical and statistical techniques behind the algorithms that serve as the "brains" of the computer are or become, they are still, at base, nothing more than a series of equations and rules.

If you believe the human mind and body are also, at base, nothing more than a series of equations and rules, then you are justified in believing that a computer could understand something the way a human does. If you do not believe that, then the belief is not justified. Understand?
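To make the point concrete, here is a minimal sketch, entirely hypothetical and not taken from the article, of what "robotic rules of the road" look like once a human has translated them into code: a few explicit rules and some human-chosen numeric thresholds. However those thresholds are later tuned, by hand or by a statistical procedure bounded by humans, the machine only ever evaluates the rules and equations we wrote down.

```python
# A hypothetical, drastically simplified "robotic rules of the road."
# Every name and threshold here is invented for illustration; the point is
# that the machine only evaluates rules a human has already written down.

from dataclasses import dataclass


@dataclass
class Observation:
    speed_limit: float        # posted limit, m/s
    current_speed: float      # vehicle speed, m/s
    gap_to_lead_car: float    # distance to the car ahead, meters
    light_is_red: bool


# Human-set boundary. A "learning" system might adjust this number,
# but only inside a range a human has chosen in advance.
MIN_SAFE_GAP = 15.0   # meters; tunable within, say, 10.0 to 30.0


def decide(obs: Observation) -> str:
    """Apply the rules in priority order and return an action."""
    if obs.light_is_red:
        return "brake"
    if obs.gap_to_lead_car < MIN_SAFE_GAP:
        return "brake"
    if obs.current_speed > obs.speed_limit:
        return "ease_off"
    return "maintain_speed"


if __name__ == "__main__":
    print(decide(Observation(speed_limit=13.9, current_speed=16.0,
                             gap_to_lead_car=40.0, light_is_red=False)))
    # Prints "ease_off": not judgment, just the rule we typed in.
```

Nothing in that sketch "understands" traffic; swap the rules or the threshold and it will do whatever the new text says, with equal indifference.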