Friday, December 28, 2012

Trolley Problem

Alex Tabarrok raises a question about the "Trolley Problem," the famous (and largely pointless, because hypothetical) moral dilemma exercise.

Except Alex points out it is no longer pointless:  the Google has to decide.

Here's my question:  does it matter if the one person, or the fat man, consents?  Isn't this the ultimate "coerced by circumstance" problem?  The only reason you want to kill the one person is that the other option is even worse.  This is clearly not a voluntary choice.  Nonetheless, the premise of the exercise is that by doing nothing you "cause" even more harm.  Or, allow more harm.  Or, fail to prevent even more harm when by acting you could have prevented harm.

Maybe it's not so pointless after all?

UPDATE:  To put some context on the example:  There is a school bus ahead, full of children.  And a single old lady on the sidewalk.  The road, unexpectedly, is icy, but there is still enough control to steer, though not stop.  What does the Google program choose?  Should it let momentum rule, and take no action, therefore smashing into the school bus?  Or should it take an actual action, and swerve up onto the sidewalk, killing the old lady for sure but saving 50 children?  This is NOT hypothetical.  Stage 1 is to program avoiding school buses, even if you might suffer more harm yourself (should the program do this?  can the owner disable it?  SHOULD the owner disable it?)  Stage 2 is to weigh the costs of avoiding the bus.  Should you hit the old lady?  Humans would probably just panic, or act essentially randomly.  But in this case the Google has enough time to decide:  What is the right thing to do?
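For the curious, here is a rough sketch, in Python, of what that two-stage choice might look like if somebody actually had to write it down.  The names and numbers are invented for illustration; this is just the cost-weighing described above, not anything Google actually runs.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    harm_to_others: float      # rough expected harm outside the car
    harm_to_occupants: float   # rough expected harm to the car's occupants

def choose_maneuver(options, protect_occupants_first=False):
    """Stage 2 of the logic above: weigh the costs and pick the least-harm maneuver.
    The protect_occupants_first flag stands in for the 'can the owner disable it?'
    question: if True, the car only minimizes harm to its own occupants."""
    if protect_occupants_first:
        return min(options, key=lambda o: o.harm_to_occupants)
    return min(options, key=lambda o: o.harm_to_others + o.harm_to_occupants)

# The update's example, with made-up numbers:
stay_course = Option("hit the school bus", harm_to_others=50.0, harm_to_occupants=0.5)
swerve = Option("swerve onto the sidewalk", harm_to_others=1.0, harm_to_occupants=2.0)

print(choose_maneuver([stay_course, swerve]).name)                                # swerve onto the sidewalk
print(choose_maneuver([stay_course, swerve], protect_occupants_first=True).name)  # hit the school bus

Flip the hypothetical protect_occupants_first switch and the "right answer" changes, which is exactly the owner-override question:  somebody has to decide, in advance and in code, whose harm counts.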

2 comments:

  1. I see the Trolley Problem as a metaphorical explanation for why most people support the regulatory state despite net harms. If they support the FDA because they intend to prevent thalidomide babies and so forth, then that is morally excusable even if tens of thousands of men die premature deaths because the FDA unnecessarily delays approval of drugs that prevent heart attacks. The "good effect," the prevented harms, were the result of intentional decisions. The "bad effect," the indirect harms that result because of the delays in drug approvals, were not intended and thus advocates of FDA regulation are not regarded as culpable for those deaths in the same way that anti-FDA advocates would be regarded as culpable for the thalidomide babies. From their perspective, "We" anti-FDA advocates could have saved innocent lives from harm but we chose not to do so. But their moral judgment is not symmetrical: They don't regard themselves as culpable for deaths due to FDA delays.

    I haven't worked this out in detail, but it would be interesting to do MRI tests on how people reason about FDA issues such as those suggested above:

    "He's found that when a person in an MRI machine is asked questions like whether they should take a bus or a train to work, the parts of their brain that activate to form their answers are among the same areas that activate when the person is sorting through the first example in the trolley problem. The thought of pulling a switch that will dispatch one person to save five appears to be governed along the lines of reason and problem solving.

    On the other hand (or region of the brain), Greene has found that distinctly different parts of the brain activate when people consider pushing a man onto the tracks. Regions that are responsible for determining what other people are feeling, as well as an area related to strong emotions, swing into action when a person is confronted with the decision of whether to push the man onto the tracks. It's possible this combination of brain functions constitutes our moral judgment."

    I suspect that similar patterns of MRI activation may be found in pro- and anti- FDA perspectives. (quotation taken from the article you linked to for "Trolley Problem")

    We can imagine a world in which people have internalized public choice theory to the point where the MRIs might show a different set of responses, but that is not the world that characterizes most voters (or even most intellectuals).

  2. Go ahead and swerve, Mr. Google. Consequences are not all that matter, but they matter a lot. And it seems like a stretch to suppose that the programmer will lose any integrity or experience alienation while sipping a soda, playing taskbar tetris, and writing the code that programs a car to smash into an old lady in order to save some kids.

