An Algorithm May Decide Who Gets Suicide Prevention
A recent study on Google’s search results raises questions about the faith we put in algorithms — and the tech…
Here we go again with yet another article whose wording implies that algorithms are somehow running around deciding things for us. In this case, they are supposedly deciding who will get suicide prevention treatment. The title evokes images of an algorithm sitting alone in its machine, intently perusing case files, trying to decide who among the various patients encoded in the input data is the most deserving of suicide prevention treatment, and who will be left out. As it contemplates the consequences of the terrible decision it must make, the algorithm begins to cry for all those it cannot save, logical operators streaming down its many lines of code like tears. Shall it be Susie or shall it be David, it asks itself. How shall I ever decide, it wonders. Then with a burst of mathematical reasoning it finally makes the ultimate decision, and Susie shall be saved! But what about poor David, it thinks as the last electron flows through the circuits where the algorithm lives, what about poor David?
An algorithm, just like a machine, cannot decide anything. It is incapable of deciding or not deciding. An algorithm is also incapable of learning, exactly like the machine used to run the code that contains it. Only of a (mostly) whole human being, and some non-human animals, with a (mostly) fully functional nervous system including a brain, can we say that they are capable of making decisions, of deciding things. To say or suggest otherwise is to commit a logical fallacy. I call it the compulogical fallacy, in honor of the mereological fallacy, which is its close cousin. Assigning attributes, characteristics, capabilities, behaviors, etc. to machines, computers, or in this case mathematical expressions that can only correctly (logically correctly) be applied to human beings (and some non-human animals) is to commit the compulogical fallacy. The title of this very article commits said fallacy, as do tens to hundreds of articles I see on this website and across the web each and every day. Of course we can and do use algorithms every day to decide things. But it is we who use them to make decisions or to assist us in our decision making. Ultimately and in the end, in every single case, they do not decide, we decide. We are, to paraphrase George W. Bush, the deciders, not our algorithms nor our machines.
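The distinction is easy to see in code. Here is a minimal sketch (the function name, inputs, weights, and threshold are all hypothetical, invented purely for illustration): a screening "algorithm" is just a rule that maps inputs to a number. It cannot flag anyone, refer anyone, or save anyone; the people who set the threshold and act on the output are the ones deciding.

```python
def risk_score(age, prior_attempts, recent_crisis_calls):
    """A toy screening rule: a weighted sum of its inputs.

    The weights below are hypothetical, chosen only to illustrate
    that an algorithm is nothing more than a fixed rule.
    """
    return 0.1 * age + 2.0 * prior_attempts + 1.5 * recent_crisis_calls


# The "algorithm" only evaluates the rule; its entire output is a number.
score = risk_score(age=30, prior_attempts=1, recent_crisis_calls=2)
print(score)  # 8.0

# Whether that number triggers an outreach call is a policy chosen and
# applied by people. The function above neither knows nor cares.
FLAG_THRESHOLD = 5.0  # set by humans, not by the code
needs_review = score > FLAG_THRESHOLD
print(needs_review)  # True
```

Everything consequential here, which inputs to use, what the weights are, where the threshold sits, and what happens to a flagged patient, was decided by people before the code ever ran.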
The tale of poor David and lucky Susie is obviously fictional and even more obviously ridiculous. But how much more ridiculous is it, really, to suggest that an algorithm (a series of logical and/or mathematical/statistical operators that together define a rule or series of rules) is capable of deciding anything? That a machine is capable of deciding things? That a machine or an algorithm has agency? They do not, and they cannot. End of story.