Remember that to value something infinitely is usually to give it a finite dollar value
Just an occasional reminder that if you value something so much that you don’t want to destroy it for nothing, then you’ve got to put a finite dollar value on it. Things just can’t be infinitely more important than other things, in a world where possible trades weave everything together. A nice illustration from Arbital:
An experiment in 2000, from a paper titled “The Psychology of the Unthinkable: Taboo Trade-Offs, Forbidden Base Rates, and Heretical Counterfactuals”, asked subjects to consider the dilemma of a hospital administrator named Robert:
Robert can save the life of Johnny, a five-year-old who needs a liver transplant, but the transplant procedure will cost the hospital $1,000,000 that could be spent in other ways, such as purchasing better equipment and enhancing salaries to recruit talented doctors to the hospital. Johnny is very ill and has been on the waiting list for a transplant, but because of the shortage of local organ donors, obtaining a liver will be expensive. Robert could save Johnny’s life, or he could use the $1,000,000 for other hospital needs.
The main experimental result was that most subjects got angry at Robert for even considering the question.
After all, you can’t put a dollar value on a human life, right?
But better hospital equipment also saves lives, or at least one hopes so. It’s not like the other potential use of the money saves zero lives.
Let’s say that Robert has a total budget of $100,000,000 and is faced with a long list of options such as these:
- $100,000 for a new dialysis machine, which will save 3 lives
- $1,000,000 for a liver for Johnny, which will save 1 life
- $10,000 to train the nurses on proper hygiene when inserting central lines, which will save an expected 100 lives
- …
Now suppose (this is a supposition we’ll need for our theorem) that Robert does not care at all about money, not even a tiny bit. Robert only cares about maximizing the total number of lives saved. Furthermore, we suppose for now that Robert cares about every human life equally.
If Robert does save as many lives as possible, given his bounded money, then Robert must behave like somebody assigning some consistent dollar value to saving a human life.
We should be able to look down the long list of options that Robert took and didn’t take, and say, e.g., “Oh, Robert took all the options that saved more than 1 life per $500,000 and rejected all options that saved less than 1 life per $500,000; so Robert’s behavior is consistent with his spending $500,000 per life.”
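To make this concrete, here is a minimal sketch in Python of how one might read an implied dollar-per-life value off a set of choices. The option names and numbers are illustrative inventions, not data from the post or the experiment; the test is simply that the costliest life Robert bought is no more expensive than the cheapest life he declined.

```python
# Hypothetical options as (cost in dollars, expected lives saved).
# Names and numbers are illustrative, not from the experiment.
options = {
    "dialysis machine": (100_000, 3),
    "liver for Johnny": (1_000_000, 1),
    "central-line hygiene training": (10_000, 100),
    "marginal new wing": (5_000_000, 2),
}

def cost_per_life(option):
    cost, lives = option
    return cost / lives

def implied_price_range(taken, rejected):
    """Return the (low, high) range of dollar-per-life values consistent
    with these choices, or None if no consistent value exists.

    Consistency: every taken option saves lives more cheaply, per life,
    than every rejected option.
    """
    max_taken = max(map(cost_per_life, taken), default=float("-inf"))
    min_rejected = min(map(cost_per_life, rejected), default=float("inf"))
    return (max_taken, min_rejected) if max_taken <= min_rejected else None

# Robert takes the three cheapest options per life and rejects the rest:
taken = [options["dialysis machine"], options["liver for Johnny"],
         options["central-line hygiene training"]]
rejected = [options["marginal new wing"]]
print(implied_price_range(taken, rejected))
# (1000000.0, 2500000.0): consistent with valuing a life anywhere
# between $1,000,000 and $2,500,000.
```

Any value in the returned range rationalizes the choices, which is why an observer can only pin Robert’s implied price of a life down to an interval, not a point.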
Alternatively, if we can’t view Robert’s behavior as coherent in this sense (if we cannot make up any dollar value of a human life such that Robert’s choices are consistent with that dollar value), then it must be possible to move around the same amount of money in a way that saves more lives.
In particular, set aside the complication that lives may only be available in bulk at a given price. If there is no dollar value such that you took every opportunity to save lives for less than it and declined every opportunity that cost more, then there is at least one pair of opportunities where you could swap one you took for one you declined and save more lives, or at least save the same number of lives while keeping more money, which in a repeated game like this seems likely to save more lives in expectation.
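Continuing the sketch above (same hypothetical option format, and the same divisibility assumption), when no consistent price exists the improving swap can be exhibited directly: redirect the dollars from the worst option taken toward the per-life rate of the best option rejected.

```python
def cost_per_life(option):
    cost, lives = option
    return cost / lives

def find_improving_swap(taken, rejected):
    """If the choices are incoherent, return a swap that does better:
    redirect the dollars from the worst option taken to the best option
    rejected. Assumes, as the argument above does for simplicity, that
    lives are available at a constant per-life rate rather than in bulk.
    """
    worst_taken = max(taken, key=cost_per_life)
    best_rejected = min(rejected, key=cost_per_life)
    if cost_per_life(best_rejected) >= cost_per_life(worst_taken):
        return None  # coherent: no swap helps
    dollars, lives_before = worst_taken
    lives_after = dollars / cost_per_life(best_rejected)
    return worst_taken, best_rejected, lives_before, lives_after

# Incoherent choices: taking the liver at $1,000,000 per life while
# rejecting a hypothetical program at about $666,667 per life.
taken = [(100_000, 3), (1_000_000, 1), (10_000, 100)]
rejected = [(2_000_000, 3)]
old, new, before, after = find_improving_swap(taken, rejected)
print(f"Redirect ${old[0]:,} from {old} to the rate of {new}: "
      f"{before} life becomes {after:.1f} lives saved")
```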
I used to be more feisty in my discussion of this idea:
Another alternative is just to not think about it. Hold that lives have a high but finite value, but don’t use this in naughty calculative attempts to maximise welfare! Maintain that it is abhorrent to do so. Uphold lots of arbitrary rules, like respecting people’s dignity and beginning charity at home and having honour and being respectable and doing what your heart tells you. Interestingly, this effectively does make human life worthless; not even worth including in the calculation next to the whims of your personal emotions and the culture at hand.