You start by acknowledging that a machine is incapable of understanding anything, since it is a machine and cannot learn. Then you do exactly what you do when you program any computer. Determine the set of rules (in this case the "robotic rules of the road," as the authors not-so-cleverly put it in the article) that you want the machine to follow, type a string of gibberish into a screen, run the program, debug with more gibberish, and repeat until the machine follows those rules without error. The rules can be continually adjusted over time until the program reliably produces a "safe" driving result. The problem is not defining rules for the machine to "understand"; it is defining rules that we ourselves understand, and then translating that understanding into algorithms via programming languages. The computer never changes its ability to understand (it has none); it simply executes the program we give it. Understand?
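To make the point concrete, here is a toy sketch of what "determine the rules, then make the machine follow them" might look like in code. Everything here is invented for illustration (the rule names, the state fields, the thresholds); it is not drawn from any real driving system, just the bare shape of a rule set the programmer defines and the machine mechanically checks.

```python
def speed_ok(state):
    """Rule we defined: never exceed the posted limit."""
    return state["speed"] <= state["speed_limit"]

def following_distance_ok(state):
    """Rule we defined: keep at least a 2-second gap to the car ahead."""
    return state["gap_seconds"] >= 2.0

# The "robotic rules of the road," as understood and written by humans.
RULES = [speed_ok, following_distance_ok]

def is_safe(state):
    # The machine "understands" nothing here: it simply evaluates
    # each rule we typed in and reports whether all of them hold.
    return all(rule(state) for rule in RULES)

state = {"speed": 55, "speed_limit": 65, "gap_seconds": 2.5}
print(is_safe(state))
```

If the output is ever wrong, the fix is exactly as described above: the humans adjust the rules (or the translation of them into code) and run it again; the machine's "understanding" never enters into it.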