Links to work:
I mention Derek Parfit. Derek was considered by many to be one of the great moral philosophers (if not, by the end of his life in 2017, one of the greatest). Larry was Derek’s first PhD student and a long-time friend, and his work on transitivity is deeply tied up with Parfit’s Reasons and Persons. Parfit was also an EA supporter.
I mention Nick Beckstead, as Beckstead is CEO of the FTX Foundation and was an early leading figure at Open Philanthropy and the Centre for Effective Altruism.
For Goma, see Linda Polman’s The Crisis Caravan and also Chapter 7.5 in Temkin’s Being Good in a World of Need.
This is the Amazon link to Inequality. Here is Larry Temkin’s essay, which draws together some central results of his work on inequality: Inequality: A Complex, Individualistic, and Comparative Notion.
“...I discussed at some length a position I then called extended humanitarianism, and which I now call prioritarianism. On this view, one wants each person to fare as well as possible, but is especially concerned with—and hence gives extra weight or priority to—the worse-off…. “
This is the Amazon link to Rethinking the Good. Here is a detailed review of the book (Richard Kraut, 2012). And this is Tyler Cowen with a blog post on the book (2012). Tyler writes:
“While not an easy read, it is the most important work in choice theory and social choice in some time….The main contribution of this book is to show you that the transitivity postulate is far less intuitively appealing than it seems at first. Twenty-two years ago I disagreed with Temkin but now I accept much of his critique. Here is one very good Temkin piece from JSTOR. These days, I see the good is more holistic than additive-aggregative. This defuses Temkin’s arguments, though at a high cost. (You will find Temkin’s criticisms of holism and related ideas at around p.355, though I find them unusually lacking in force. One of his worries boils down to how a multiplicative view will handle negative numbers but I see the scale as sufficiently arbitrary that they need not pop up to begin with.) We can make some gross comparisons of better and worse at the macro level, with partial rankings at best, but for many individualized normative comparisons there simply isn’t a right answer. I view “ranking” as a luxury, occasionally available, rather than an axiomatic postulate which can be used to generate normative comparisons, and thus normative paradoxes, at will. I see that response as different than allowing or embracing intransitivity across multiple alternatives and in that regard my final position differs from Temkin’s. Furthermore, in a holistic approach, the “pure micro welfare numbers” used to generate the paradoxical comparisons aren’t necessarily there in the first place but rather they have to be derived from our intuitions about the whole.”
This is the Amazon link to Being Good in a World of Need. You can view the 2017 Uehiro lectures here (although his ideas have been updated a little since then). This is a 2021 YouTube video/podcast with Friction focused purely on notions in the book. This is the TLS review of the book (Mark Hannam, March 2022). And this is Tyler Cowen’s blog post:
His new book is Being Good in a World of Need, and most of all I am delighted to see someone take Effective Altruism seriously enough to evaluate it at a very high intellectual level. Larry is mostly pro-EA, though he stresses that he believes in pluralist, non-additive theories of value, rather than expected utility theory, and furthermore that can make a big difference (for instance I don’t think Larry would play 51-49 “double or nothing” with the world’s population, as SBF seems to want to).
So where does the red pill come in? Well, after decades of his (self-described) intellectual complacency, Larry now wonders whether foreign aid is as good as it has been cracked up to be:
…In this chapter, I have presented some new disanalogies between Singer’s original Pond Example, and real-world instances of people in need. I have noted that in some cases people in need may not be “innocent” or they may be responsible for their plight. I have also noted that often people in need are the victims of social injustice or human atrocities. Most importantly, I have shown that often efforts to aid the needy can, via various different paths, increase the wealth, status, and power of the very people who may be responsible for human suffering that the aid is intended to alleviate. This can incentivize such people to continue their heinous practices against their original victims, or against other people in the region. This can also incentivize other malevolent people in positions of power to perpetrate similar social injustices or atrocities. …
The book also presents some remarkable examples of how some leading philosophers, including Derek Parfit, simply refused to believe that such arguments might possibly be true, even when Nobel Laureate Angus Deaton endorsed one version of them (not exactly Larry’s claims, to be clear).
Another striking feature of this book is how readily Larry accepts the rising (but still dissident) view that the sexual abuse of children has been a grossly underrated social problem.
What is still missing is a much greater focus on innovation and economic growth.
I am very glad I bought this book, and I look forward to seeing which pill or half-pill Larry swallows next. Here is my post on Larry’s previous book Rethinking the Good. Everyone involved in EA should be thinking about Larry and his work, and not just this latest book either.
Transcript below, also available in a shared Google Doc here:
Larry Temkin and Ben Yeoh in Conversation Transcript (July 2022)
This is an unedited transcript of a podcast conversation between Larry Temkin and Ben Yeoh. (Expect typos and grammatical errors typical of a conversation.) The transcript covers 3 hours of conversation held in July 2022. The podcast is in two parts and available on podcast platforms and YouTube. The second part focuses on Effective Altruism ideas. The first part looks at transitivity, and other debates in philosophy through a pluralist lens. There is more in this blog post. You can CTRL-F the word “rhubarb” to quickly go to part 2.
Ben (00:03):
Hey, everyone. I'm super excited to be speaking to Larry Temkin. I told my friend I'm going to be podcasting with Larry and they replied, "He is a genius." This assertion, I think, rests on Larry's work on inequality and so called transitivity. In particular, the work on inequality might have been very impactful on thinking around the importance of access to healthcare. Reading Larry's work, for me, has made me think quite hard about how to live a good life, especially living in a rich nation, and questioning some of the assumptions behind certain aspects of effective altruism. Larry's books have also led me to question how much weight we can put on using expected value decision making tools, especially for moral questions. For those of you on video, this is a copy of one of his books, and this is a copy of another one of his books. I would suggest you get them; Being Good in a World of Need and Rethinking the Good. I just realized on Rethinking the Good, just today, that is the lollipop analogy on the front page there which I hadn't twigged until today. So I'm feeling-- I don't know whether slightly clever or not clever for only realizing that. So, Larry welcome.
Larry (01:21):
Well, thanks Ben. It's great to finally meet you. We've been talking about doing this for some months now, and it's great that this is finally happening. I have to say, as long as you were kind enough to hold that up, you cut off half of the picture. So I'm going to hold it up a little bit higher. The reason for that is the cover of this was actually drawn by my daughter. I love that she did that. She asked me, "Could I do the cover of your next book?" And I said, "Sure." But you're right, it is the lollipop example which is a seminal example in that book. Anyway, great pleasure to meet you and to be talking with you today. I hope it is enjoyable for your audience.
Ben (02:02):
Excellent. So I'm going to start with an easy one. How should we value a human life? Can we put a dollar figure amount on it? And when should we, or should we not? I have an example in mind. So here in the UK, we have a national health service and this health service has to make judgements. Broadly speaking, there is a set of people and policymakers who have decided that if a lifesaving treatment costs less than about 30,000 pounds for a life year, technically a good quality life year, they will pay for it. But if it costs more than 30,000 pounds for a life year, then they are minded not to pay for it because they can do other treatments which would be saving a life for under 30,000 pounds. So they decide this way for most treatments such as a diabetes treatment or a migraine treatment or something like that.
However, they also make some notable exceptions. Two of these exceptions, which are quite notable for me, are for rare genetic diseases and also for premature babies. Here, treatments under most reasonable assumptions cost much more than 30,000 pounds per life year, particularly for premature babies where you can easily get into the hundreds of thousands of pounds. But one of the arguments given is that there is a societal preference-- not by everybody, but by quite a lot of people, and in the majority of people it seems when they've done this work, to help premature babies, because there is no other treatment for them. So that's a kind of practical example for thinking about this and how you go about valuing a human life, and some of these problems or challenges touch upon your work. So what do you make of that?
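A minimal sketch of the threshold rule Ben describes, with purely illustrative figures; the function and numbers below are assumptions for exposition, not NICE's actual methodology:

```python
# Toy cost-per-(good-quality)-life-year check, loosely following the
# threshold cited above. Illustrative only.

THRESHOLD_PER_LIFE_YEAR = 30_000  # pounds per good-quality life year, as cited

def would_fund(cost_pounds: float, life_years_gained: float, quality: float = 1.0) -> bool:
    """Return True if cost per quality-adjusted life year is under the threshold."""
    adjusted_years = life_years_gained * quality
    return cost_pounds / adjusted_years <= THRESHOLD_PER_LIFE_YEAR

print(would_fund(25_000, 1.0))   # True: 25,000 pounds per life year
print(would_fund(300_000, 2.0))  # False: 150,000 pounds per life year
# The exceptions Ben mentions (rare genetic diseases, premature babies) fall on
# the False side of a rule like this and are funded on other grounds.
```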
Larry (04:01):
Great question. In fact, I've never been asked that question before, and I don't have any pat answer to pull out for you. But I have a bunch of things to say about this kind of question. So the question of, "How much is a human life worth?" It's not the right question. It's not how I would frame the question though I realize that societies have to, in essence, answer that question every time, all the time. Not just in the healthcare system, but when you talk about, “Should you fix roads? Should you fix a curve in a road? How much is it going to cost? How many people are going to drive on it? How many people might die if you do or don't do it?" These are all sort of cost benefit analysis type questions of, "What's it worth for?"
The questions for me that we have to answer are not an absolute, “How much is a human life worth?” as if there's a single answer. But, what are the values that we care about as a society, as individuals, as members of the society when we think about these questions. So the reason I put it that way is that there are fundamental issues of fairness and justice that come into play here, which stand opposed to-- as my work and as you are well aware of, certain kinds of questions which the, "How much is life worth?” question push us towards, which is the kind of cost benefit analysis. How do we get the most pay for the buck? It seems it's inefficient to spend a lot of money to save an infant or to save someone at the end of their lives if it's going to cost millions of pounds to do so, when you could take those same millions of pounds, in theory, and spend it in lots of other ways and save far more people. Doesn't that seem better?
Well, the sum total of people who say it will be better, yes. But, questions about fairness aren't just the same as questions about efficiency. They're not unrelated-- there are certain ways in which they are related, but they're not the same. The other question is, the reason why I want to balk at just answering the question the way it's framed is, we as a society are constantly being told that we have to make trade-offs between this costly procedure and this cheaper procedure which effects similar ends, et cetera. And I think perhaps we do. But the question really is-- when we're asking this question, "What is the actual trade-off? Why is the trade-off between this person's expensive care in order to save their lives and these people's less expensive cares to save their lives, rather than this person's expensive care to save their lives and this person's ability to buy a jet and fly down to an island that they own, which is a private island somewhere in the middle of nowhere, blah, blah, blah.”
When we as a society spend here in the United States a billion dollars on our pleasure palaces which is the State Arena where our football teams play, or a billion dollars on arenas, or half a billion dollars on arenas where our basketball teams play or our baseball teams play, and you think about all these kinds of issues, these are the trade-offs I really want us to think about. I want us to think about when we have these questions. We're constantly told, "We only have a budget. Budget is this amount. How should we make these trade-offs between this expensive health policy and this much less expensive health policy?” If those were literally the only trade-offs in play, I could be moved more to think, “Well, it's 12 lives with this cheaper efficient way, and it's one life with this way, blah, blah, blah.”
But if you tell me, but meanwhile, in the background there are these massive amounts of inequalities and excesses-- what some people regard as excesses, things that people spend enormous amounts of funds on that actually don't necessarily improve the quality of their life; maybe not even at all. Then I'm not sure that we should be taking seriously this question of, "How do we choose between who lives and who dies in this area?" Rather than putting pressure elsewhere on it as it were, the economic systems that force us to make these choices while we allow all these other kinds of things to go on in the system.
If I just say, "Well, I think a human life is worth this, and so we should make all our decisions that way,” I'm already buying into all the background assumptions, most of which I actually just reject. So I'm inclined to sort of not answer the question the way it was put and to want to shift our attention to these other larger questions. But I can enter a philosopher's room where I say all these other things are not there. We don't have vast inequalities and we don't have people spending hundreds of millions of dollars on giant yachts and they have 12 homes and so on. None of that is prevalent. Now we have to make some of these tough choices. Even then I would decline to say a human life is worth one amount rather than another. I would then still be trying to pay attention to questions of, "What would be fair, what would be just, how do we give everyone equal opportunities to live? How should we weigh those kinds of considerations?" These are more about comparisons and about values than about the absolute value of a human life.
Ben (10:09):
That makes sense. So I pull two things out of that, although there's quite a lot. One is, there's a higher level question as to-- Maybe I misused the word in terms of pluralism, but thinking about other aspects, both on the systems level, are there other things we should be judging? But also these other ideals of fairness or equality or things like that? I do think there is something though that the health authorities, for instance, do in the end have to try and make that decision. But in some ways, you're sort of saying that the philosophy question is perhaps a tier above that.
Larry (10:51):
Let me re-enter the question for a second with a concrete case that I discussed in my most recent book, “Being Good in a World of Need.” So there is a charity called Village Reach. It's a well-known charity and it operates throughout the world, but mostly in Africa. The aim of Village Reach is to reach so called last mile communities. And the idea is the Village Reach is a charity that recognizes that there are all sorts of very needy people; mostly in rural areas scattered throughout Africa and elsewhere, where there are infrastructure problems and there are medical personnel problems. There's a whole host of problems that are in play that make it difficult for people who are very ill in these distant rural communities; the so-called last mile communities, to get healthcare.
Strikingly, certain effective altruist organizations such as GiveWell for many years had ranked Village Reach as one of their top rated charities. What that basically means is that they judged that the money that you spent in contributing-- if you made a contribution to Village Reach, was money well spent, that was efficiently being used, and that this charity was doing what it said it was doing for the money that you were giving it. It turns out that, for a variety of well-known reasons that Village Reach is actually intended to counteract, a charity of this kind is not efficient. It's not efficient if what you want to do is maximize the number of lives saved. This is what health organizations like the World Health Organization and many international aid organizations have recognized because by and large, you can do a lot more good if what you're merely interested in is saving the most lives.
If you focus on the large cities; you put a hospital in a large city, you put a hospital in a place like Kibera, which is the largest slum in Africa, and a single doctor can reach hundreds of thousands of people. Whereas, if you go to villages that are in the middle of nowhere and there's a hundred people in the village or less, to send a doctor out there to address the needs of a dying child is just not efficient. It's not cost effective. But the point of a charity like Village Reach is to say that the child who's born in rural Africa, their life is just as valuable as the lives of the children that are born in the major cities in Nairobi or wherever else. And they should have an equal chance at life and having a normal life expectancy.
So this charity has risen up to meet a demand that most of the poor nations of the world healthcare systems just can't meet that demand. But the thing to note here is that this is a charity that at its root, has a certain conception of fairness in mind. It's not a, "How do we do the most good with the money?" It's a, "Does this child's life matter any less than these children's?" It's with that in mind-- And there are deep philosophical backgrounds to this kind of thinking. Kant famously claimed that each person's life was worth infinite value. That may be too strong. But the point about each person's life is worth infinite value, is in essence the view that you don't trade off between lives. You don't say, "Well, here's one life here and here's two lives here. I could save these two people if I'm just wanting to kill this person and distribute their organs. Are two lives more valuable than one?" Kant says, "Well, each of their lives is infinitely valuable. Each of their lives has value without bound, and you don't increase the total value in the world by killing one innocent person in order to save two others.” That's not how we should be guided in our thinking.
I'm not a strict Kantian by any means. I actually give a great deal of weight in my own thinking to so-called consequentialist considerations, the kind of efficiency considerations that motivate organizations like effective altruism and such. I give great weight to it, but that's kind of one important moral ideal and it has to be balanced against other ideals. That was the point of my initial response and the point of my talking about places like Village Reach. I think there's a place in our pantheon of valuable charities for something like Village Reach, which is highly inefficient in terms of most lives saved for the dollar, but highly efficient in terms of giving weight to a value like equality or fairness.
Ben (16:04):
Sure. That makes sense and that feels like an analogy to the UK health service issue. I guess that has made me reflect a little bit on something to do -- And you've written about this a little bit as well. The nature of perhaps time and also distance. There's a little bit on the sense of fairness, I think particularly with babies, because we feel that they have a future and there is something about thinking about future selves. Then I think drawing in the example of the Village in Africa, there is something about distance, about perhaps it not being necessarily as morally relevant when we're thinking about weighing lives. And like you say, if you get to the Kant view of it being infinite, then actually things like time and distance fade away. Have you changed your view of thinking about time and distance over your thinking around this?
Larry (17:02):
Not over my thinking about this per se. Historically, people have thought questions about time and distance are philosophically simple, but practically complex. So, the thought would be historically that we should be neutral with respect to people; all people's lives are valuable, places; it doesn't matter where in space someone is suffering. What matters is that someone is suffering somewhere, and neutral with respect to time. So if I could save one person alive today, or 500 people who will be alive in a hundred years from now; 500 lives obviously is more important than one life. This gets into the question we started with. Suppose we take that for a minute. And the fact that it's in the distant future, that doesn't matter. It's 500 versus one. So from a theoretical standpoint, most philosophers and economists and others have agreed that we should be neutral, theoretically with respect to people, places, and times. But practically speaking, it can make a big difference whether someone is right next to you or further away. It can make a difference whether you can help someone now or in the future. Because someone who might need your care now will assuredly die if you don't help them.
Someone in the future might die if you help them. They might not even come to exist if you do or don't make certain actions now. Or they might come to exist, but there might be technological developments that will enable you to save those 500 people. The resources you would have saved-- not used here and now, in order to save people in the future-- might turn out to be obsolete and not even necessary, and you will have let this person die for nothing. These are sort of practical considerations. They're not theoretical considerations. They're practical considerations. As a matter of fact, I've come to the view and published a bit that even the theoretical questions are not nearly as simple as most people have thought; that it's just a simple matter of being neutral with respect to people, places and times.
When I say that, there's always a kind of other things equal clause here. So when I say that you ought to be neutral with respect to people, I'm not suggesting that means that there can't be special relationships. So, between me and my wife, me and my children, me and my mother. But the point is, if there are special relationships that enable me to treat my mother differently than a stranger, then that's supposed to be true for everyone. That's the kind of neutrality I'm talking about here. So if another stranger has a mother, they get to treat their mother the same way I get to treat my mother. That's the sense in which neutrality could be compatible with different people having special obligations or agree towards those they're in close contact with. But anyway, the questions about space and time; neutrality with respect to people, places, and times, people have made a lot of assumptions about these over the years. That we are to treat these different possible locations of the good, the same. We are to treat time, or space, or people-- they're all the same from a moral standpoint. I've actually published that that's not right. I think that turns out to be extremely complicated. But that would take us down a philosophical rabbit hole probably which would try the patience of your readers.
But if your readers are interested in this, they might look at some of my articles where I argue about people, places, and times. And I show that the common assumption-- that they're all just different locations of the good and we ought to treat them all the same-- is not only not plausible, it's not even possible in certain kinds of cases. They're actually incompatible with each other.
Ben (21:15):
That seems to be a thread when I follow your work, because it's similar on inequality that if I were to sum it up, you look at it and go, "Well, we thought it might have been theoretically simple, but it turns out that actually it's also theoretically complicated." I'm going to come back to this nearness of people because it's actually a small point in your book, perhaps, but it really struck me that you make a point about when someone is right in front of you, the immediacy of that moment-- I don't know whether there is a certain phrase for it. That actually struck me as quite true, but that's from my work within theater and art. But we'll come to that because I think it will help understand Singer's pond analogy when we get to some of the disanalogies on that.
So maybe before getting to that, I was maybe going to touch on your work on transitivity. But before touching on that, I was really interested in essentially how you've come about your ideas and how do philosophers think. I'm interested in how we as humans say progress. So we can progress technologically. We can progress socially. And I think we seem to progress morally. If you go a thousand years ago, the consensus was, “Slaves are fine.” Come to today, the consensus is, “Slaves are not fine.” I think most people also agree today that's actually some sort of moral or social progress. And some of that could have come from a moral or philosophical thinking. I was just rereading the front of your book and some of your thinking around this area you had quite earlier on. I think maybe you say 1977 or something like that.
You first proposed it to a couple of the greatest living moral philosophers at the time. They basically say, "Doesn't seem to make any sense." So you parked that idea for several years. But there's something about it which you haven't quite let go, and you have a couple of students who reignite it with you and you come to it again. It's interesting, and we'll come to this dinner that you have, which I think seems to be perhaps one of the greatest moral philosophy dinners of the last 10 or 20 years. But again, you have these thoughts go around in your head. You can't sleep very easily. You run all of these scenarios. You seem to be almost dreaming about it to come to some sort of conclusion. It also now seems to be the case particularly if we think about transitivity, that people who didn't agree with you and people who thought this is completely crazy are kind of thinking, "Well, maybe this is truth. Maybe this is some discovery or at least something we need to take really seriously," which strikes me as some form of progress from something which is a very non-consensus idea, has sprung out of your brain somehow and come out into the world.
I'm not sure. Is there anything we can think about or learn about this? Maybe we can't because we don't have a good sense of it. But I was really struck how, if this is true about transitivity and it seems to be certainly true of inequality, and maybe some of these ideas that you've recently expressed, they've gone against a consensus grain but they've also then maybe allowed us to progress. There seems to be something quite important about that process upstream. I wondered whether you'd had any reflections on this, or whether you've come to think, "Well, that's interesting. This is an idea which in 1977 might have died and never come to light,” in which case actually, all of these other downstream effects may not have happened, but actually it did.
Larry (24:57):
Yeah. So this is a great question, Ben. Some of this is just luck. I couldn't write a prescription for how this happens. I can say several things which is that... I mean, part of it is just me. I remember when I was in ninth grade, I had a high school teacher and she said to me, "Oh Larry iconoclast you.” To be honest, I didn't know what that word meant, but I looked it up and I saw and said, "Yeah. I'm kind of happy with that." For those of you audience members who don't want to have to look it up, I'll just tell you. Iconoclast is just like a breaker of idols. So I've always had it in me that... My absolute favorite expression for the bumper sticker that I love is, question authority.
It's not necessarily disobey authority, but it's always question authority. So in some sense, this is the heart of what philosophers have always done. They question the given. Going back to Plato and Plato's cave, here's this world that we're surrounded in. Is it the real world? Are we just seeing shadows on a cave and we don't know the difference? Kant and Hume and all these great philosophers have raised all these great skeptical questions. Descartes: "How do I know that I'm dreaming and I'm not awake?" These are all basically questioning even the most fundamental truths. The most fundamental truths is like there's a tree outside my window, or I exist and lots of other people exist and then people have the problem of other minds.
Well, how do I know anybody else exists? How do I know there's any center of consciousness anywhere, but here? So part of this is just as it were, a philosophical tradition that goes back to the pre-Socratics, to question, to question, to question. Some people have philosophical temperaments and some people don't. It turns out I'm one of those people who do. I have a philosophical temperament. I've always questioned things. My parents would tell me things and they tell my siblings things. I have three very bright siblings. They all have doctorate degrees, or lawyer degrees, or whatever. They're very bright. They're very smart. I would be questioning everything and my siblings would say, "Yeah, okay, that sounds right." So part of the thing is, “Is having a kind of temperament that allows you to question things and to be open to the possibility that even the things that everybody thinks must be true, might still not be true.” I am actually just always open to that possibility.
Then there's the element of, as it were, luck or being sparked. In my case, being sparked by a genius; a certifiable genius. My mentor was Derek Parfit. He was, in my judgment, a great living moral philosopher until the time of his death not that long ago. One of the great moral philosophers, I believe, since Kant. He was a towering figure, and I had the great fortune of working very closely with him. The thing about Derek is he was just brilliant. So you're in the presence of brilliance and he would create these arguments. He was this argument machine. He would generate all these arguments. He was so bright, and he was so clever, and he was so smart. People in the presence of Derek would often just listen to his arguments and they'd be blown away and think, "Wow, that's amazing. That must be right."
But I just always had this element of... I would read Derek's arguments and I would often think, "Well, I can see why he says that, but..." Then something in the back of my head would nag at me and would just make me want to think about it a little bit different way. Both of the things you're discussing here-- and maybe we'll get to the conversation of the dinner which was quite fun for my third book. But the spark of both of my first two books came from incredibly rich arguments that Derek made where he made claims, and most people read those claims and just accepted them. I read those claims and for whatever reason, I didn't just accept them. I thought, "Hmm. Maybe, but maybe not." The story about the transitivity, I'll just quickly relate it for your audience.
It was the case. Derek had created this incredibly clever argument which he called, the mere addition paradox. I won't go to the details of the mere addition paradox, but the form of the mere addition paradox is it was considering three different possible futures where the world would go. He asked us to judge how these different worlds compared, how these different possible outcomes compared. He considered three of them and they were called the A world, the A+ world, and the B world. He gave very powerful arguments for thinking that the A world was better than the B world, and they were very compelling arguments. Then he made very compelling arguments to the claim that the B world was better than the A+ world, and they were also very compelling arguments. Then he gave this incredibly compelling series of arguments for the view that A+ was not worse than A. But that was a contradiction, according to the standard assumptions of what's called the axiom of transitivity for better, which human beings have accepted more or less since we came down from the trees.
The thought is, if A is better than B and B is better than C, then A must be better than C. So if A was better than B and B was better than A+, then A must be better than A+. But Derek supposedly had these arguments to show A was better than B, and B was better than A+, but A wasn't better than A+. Derek took that to show that we had this paradox, and we had to figure out which of these various claims had to go. We didn't know which of them had to go, but we knew that one of them had to go because they were incompatible with the axiom of transitivity.
This axiom was just accepted without argument for centuries of human thought. I mean, everybody accepted it. Philosophers accept it, psychologists accept it, economists accept it, mathematicians accept it, physicists; everybody accepted it. But the thing is, when I read Derek's arguments, I thought, "Geez, A really is better than B and B really is better than A+. But it isn't clear that A is better than A+." Maybe the lesson to be learned from the mere addition paradox is not that we have to give up one of these very plausible claims, but rather that we have to give up the axiom of transitivity. That entered my head and the thing is I was open to that. Nobody else was. Everybody else read that they had an axiom. They just accepted, "Axiom of transitivity must be true." I took this and I went to, as you say, three of the greatest living philosophers at the time. I took this thought that was inside my mind, that maybe the lesson that we learned from the mere addition paradox was that, "All things considered better than wasn't a transitive relation.”
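Stated in symbols (a restatement of what Larry says above, with the relation read as "all things considered better than"):

```latex
% Transitivity of "all things considered better than" (\succ):
\forall X,\,Y,\,Z:\quad (X \succ Y) \;\wedge\; (Y \succ Z) \;\Rightarrow\; (X \succ Z)

% The mere addition paradox, as described above, supports the triad
A \succ B, \qquad B \succ A^{+}, \qquad \neg\,(A \succ A^{+}),
% which is jointly inconsistent with transitivity applied to A, B, and A^{+}.
```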
I ran this first by Tom Nagel, who was in my judgment along with Derek, one of the two great living moral philosophers of the 20th century. Tom just looked at me and he sort of shook his head very sadly with this kind of mixture of pity and contempt. And he said, "Oh, Larry, I wouldn't understand what someone meant if they said that all things considered A was better than B, and all things considered B was better than C, but all things considered A wasn't better than C." He was the great Tom Nagel and I was like a punk graduate student at the time that thought, "Oh man, I must be an idiot" and I wandered out of his office. But then I took this question to another teacher of mine. A guy named Tim Scanlon who was at Princeton at the time and later at Harvard. He was a MacArthur award winner, so-called genius award. Brilliant philosopher. I ran the same thought by him. I said, "Tim, it seems to me that maybe the lesson to be learned from Derek's mere addition paradox is that all things considered better than isn't a transitive relation."
He didn't look at me as if I was an idiot the way Tom had, but he basically said, "Larry, that can't be. All things considered better than is transitive by definition. If you know the meaning of the words, you know that must be true.” And then I ran it by Derek whose paradox it was. I thought if anyone will appreciate the possibility that there's this incredibly mind-altering new way of thinking about the world that comes out of his work, it should be Derek. I ran it by Derek and Derek gave me the same answer. He basically said, "Larry, that just couldn't be true." So they basically all told me that, "You're an idiot. Go home and do something else with your life." I did put it on hold for a while, but thanks to several students around, I did come back to it. I did explore it. Then eventually, I came to the view that maybe I was right and I could even explain what was going on. So I had an explanation, both for why people thought that "all things considered better than" must be transitive, and for why it might not actually be. And this had to do with an exploration about the very nature and structure of the good, which people assume to be one way, but on examination, in fact, it turns out perhaps to be another way. If it is another way, then there's no reason at all to expect all things considered better than to be a transitive relation.
And it is a striking feature that when I started this particular work as a graduate student in 1977, it was universally opposed. Now, when I travel to the best graduate programs in philosophy in the world, the next generation or two generations forward now, the graduate students are all perfectly prepared to accept that all things considered better than might not be a transitive relation. In a matter of basically 40 years, two generations among graduate students in philosophy, it's been a radical shift from this is conceptually impossible, or as the great philosopher, former White's Chair of Moral Philosophy at Oxford, John Broome once told me, it's logically impossible that all things considered better than might not be a transitive relation. And now, many people, including some people in economics, have agreed with that, and yeah, maybe this is right.
Ben (36:26):
That strikes me as an amazing story on two or three levels. Let me pick out a couple of reflections and one addition. The addition is that when you came back to this work-- And maybe we wouldn't have all of your brilliant work on inequality had you only come to this conclusion. So maybe there was a silver lining for this. You sent the write up of your thoughts to Derek, who, as the mythology goes, was very busy, had just published, and then took it halfway up a mountain in India, read it and went, "Wow, you might be right"-- I really paraphrase this kind of thing. But actually, he had the openness of mind, which you suggest is perhaps a trait of philosophers. I also think Derek has a kind of interesting style of thinking, which maybe I'd call perhaps autistic-cognitive, to some degree as well. But this pursuit of a certain thread of thought. But I thought that was very interesting.
I might just mention why I found it so troubling, or at least thought-provoking, when I only really recently came across your work around this. This is because if things aren't transitive particularly around this-- So like you say, it's very easy to see in numbers. The language of numbers; three is bigger than two. Two is bigger than one. So obviously three is bigger than one. And you can see this within the heights of people, right? So this seems to be quite obvious. But we use this as the background for economic utility theory and all of that decision making theory that I actually alluded to right at the beginning. And so if there is something potentially not always right, or not always situations where we can really hold that, it has really weakened some of my thinking about the moral choices of when you're using utility theory, which is still obviously a very useful decision framework.
But I think that relates to my first question, because actually, it seems to be the case that it isn't really all-encompassing as perhaps we would like it to be. It doesn't solve all of these problems for it. I remember listening to you. You had one very good example around pain as to why you could have a thought experiment on that. But does that still hold for you then that this does probably slightly down-weight the idea of how we can use utility theory because of your thinking around transitivity? Are those sort of examples still…? Has your thinking much evolved around that, or is it kind of evolved as how it was? Maybe you would want to give the quick example around pain or something else so that people can understand what it might be and why this seems so problematic. If you think that's a good example, or maybe anything else which is kind of easy to understand for this axiom of transitivity.
Larry (39:20):
Sure. Let me say several things. First, I'll give the general answer. I'll give a particular example. It's a striking thing because economists have always-- and I admire economists greatly. I think their contributions to the progress of humanity have been immense. I have many very good friends who are economists who I admire greatly and who have influenced me significantly: Angus Deaton, Nobel Laureate; John Broome, who was trained as an economist under a Nobel Laureate. These are all very brilliant people. There's much to be learned from them and from what economics has given us. Expected utility theory is an extremely powerful theory, but it does have a bunch of axioms. These are the premises on which this theory is built.
I have challenged in my book, Rethinking the Good, a good number of its most fundamental axioms, at least three. If the axioms of a theory aren't true-- and I think there's very powerful reason to think they're not true, then that theory isn't true. It's just not. That should give one great pause of putting too much weight on a theory whose fundamental axioms are questionable. So my thinking about that has not changed. There are still cases where we all implicitly rely on the axiom of transitivity in our everyday thinking. And there are lots of cases where we rely on standard economic expected utility theory reasoning in our everyday reasoning, but that doesn't mean it's right. It's deeply questionable. Let me give a concrete example and then I'll just mention one that's a little bit harder to grasp. But here's a concrete example for your audience.
So, I've been lecturing on the topic of transitivity now for 35 or 40 years; 30 years at least. I've traveled all around the world and I will often ask my audience the following question. Suppose that you or a loved one is going to have to have a certain intensity of pain. So I'm just going to measure that. I'm going to show you a visual diagram. The height of this when I raise this up is the intensity of the pain. So a certain intensity of pain for a certain duration of time. So if I go like that, that's how long it is. So certain intensity of pain for a certain duration of time. So you or a loved one is going to have to have a certain intensity of pain for a certain duration, or you or your loved one is going to have a slightly less intense pain-- so not quite as intense, slightly less, but it's going to last two, or three, or five times as long. And the question is, "What do you want for yourself or someone you love?" Do you want a certain intensity of pain for a given duration of time? Or do you want slightly less? I mean, just ever so slightly. They barely notice it, a little bit less, but it's going to last two or three or five times as long. That's my first question.
Now just to tell you, if your audience is... I should have said, "Think along with this question. When I ask it, think what you want for yourself or someone you really love. Your mom, your daughter, your significant other, whatever." Almost unanimously people say, "I want the slightly more intense pain that's much shorter in duration." It's not literally unanimous, but it's almost unanimous. I then ask my audiences the following question. There are two ways your life might go. In both lives there could be very, very long lives, as long as you might possibly want. And there's going to be 15 mosquito bites a month for the duration of your life. I came up with this example a long time ago when I lived in Houston, Texas. We have lots of mosquitoes in Houston, Texas so I was constantly scratching the mosquito bites. They're annoying. They're not the end of the world, but they are annoying. And I constantly have mosquito bites on me. So I was thinking, 15 bites a month for the rest of your life. Background, that's part of what goes with living in Houston.
But in addition, at one point in this very long life, you or your loved one; your son, your daughter, your significant other is going to have two years of the most excruciating pain humanly imaginable. So everything you've ever seen or heard or read in the worst novels or movies about how torture might go-- And I don't want to make light of it because some people have actually gone through this. The bamboo shoots under the fingernails, the wax in the eyes, the electrodes hooked up to your genitals. It's everything that could possibly be bad. You're being tortured, and you're being tortured like 20 hours a day until finally you go unconscious. Then they wake you up, and the next day they do it again. The next day they do it again, and the next day they do it again. Every single day the pain is so intense that you really wish you could die. You just pray, "Please, Lord, let me die." But you don't. It goes for two straight years. But, at the end of that two years, because this is philosophy we're allowed to do that, they will give you a pill and you won't ever remember having gone through this pain. But for the two years of your life, it will be the most excruciating pain humanly imaginable. That's one way your life might go. Now, here's the other.
At this point, most of my audience members are thinking, "I don't care what the other is. I'm taking that. You don't have to tell me just what's behind door B. I'm taking door B. I don't want to do A." But now I'm going to tell you what's behind door B because you really have to know. Instead of 15 mosquito bites a month and two years of the most excruciating torture humanly imaginable, you will have 16 mosquito bites a month. That's it. Now, which of those do you want for you or anyone you really care about? Here, again, it's virtually unanimous. Human beings say, "I want the life of 16 mosquito bites a month. I don't care how long I live. One extra mosquito bite a month, I'd rather have that than two years of the most excruciating torture humanly imaginable." So I ask those two questions in my audiences. I've asked them around the world. Hundreds of audiences, thousands of people. Virtually, everyone agrees with the answers they give to these two questions.
Turns out that the answers that almost everyone accepts-- And I'll just add, I believe rightly so. I'll say a tiny bit more about that, are incompatible with the axiom of transitivity. I'll just say why. Because you'll remember when I said at the beginning, you have a certain intensity of pain for a certain duration, or a little bit less intense for two or three or five times as long. I didn't tell you how intense nor did I tell you the duration, and I didn't need to, because it turns out this is a generalizable truth. That is, whether it's a really intense pain for a month, or a really intense pain for 10 years, or a mild pain for a month, or a mild pain for 10 years, we'd all rather have that pain than one that's a little bit less intense for two or three or five times as long. So it's a general truth. No matter how intense or how long the original duration, we'd rather have that than a little bit-- just barely less, but it's less and much longer.
So now you just imagine a world in which everyone starts off with a long life and 15 mosquito bites. And one of those lives, first life, has two years of intense torture. The next one, just to simplify it, has four years of torture almost as bad. 15 mosquito bites and four years of torture; not quite as bad, but almost as bad. Everybody says the two years and 15 mosquito bites is better than the four years of torture almost as bad, and 15 mosquito bites. Or make it eight years or 10 years. It doesn't matter. We're just going to simplify it. Two to four. We all agree. Two is better than four when it comes to torture. Then instead of four years of really bad pain, you have eight years of pain almost as bad. Everyone agrees four is better than eight. Then you have eight years of pretty bad pain or 16 years of really pretty bad pain, not quite as bad, but so close. We'd rather have eight years than 16 and you just continue. Eventually, the pain is getting a little bit less, a little bit less, a little bit less, until at the very end, the pain is equivalent to a mosquito bite.
What we've agreed about is we all agree the first outcome is better than the second. The second is better than the third. The third is better than the fourth. The fourth is better than the fifth. That continues all the way down. If all things considered better than is a transitive relationship, then the first outcome is better than the last. But the first outcome is the one which involves two years of excruciating torture and 15 mosquito bites a month. The last outcome is just 16 mosquito bites a month, and nobody believes that's better. So here we have a conflict. People who are interested in this topic just need to read the book, Rethinking the Good. But the key is to understand what's going on. To understand what explains this. Because a lot of times people make intransitive judgments. We learned this from the work of Amos Tversky for which Kahneman won the Nobel Prize. Tversky would've won the Nobel Prize had he not already passed away. That psychologically, people often make intransitive judgments because they're making mistakes in their judgment.
But I don't think these are cases where we're making cognitive mistakes. I think something else is going on. I think what happens is that there are certain principles that seem relevant and significant for making certain comparisons. There are other principles that are relevant and significant for making other comparisons. It turns out that the principles that are relevant and significant for making the comparison between the first outcome and the second, and the second and the third, and the third and the fourth, and the fourth and the fifth, that's what you call an additive-aggregation principle. It basically says, if you have two alternatives that differ only slightly, very slightly in terms of pain or pleasure, then quantity matters. So more pain is worse than less pain if the pains are similar in quality. You call that an additive-aggregation account. For cases like that, we just add up the sum total of pain, weighted by the quality times duration. We take the area under the curve, as it were; that tells you which is worse.
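A hedged restatement of that additive-aggregation account in symbols, with I_i(t) the intensity of pain episode i at time t and d_i its duration (the notation is mine, for exposition):

```latex
\text{Badness}(O) \;=\; \sum_{i \,\in\, \text{pain episodes of } O} \int_{0}^{d_i} I_i(t)\, dt
\quad\text{(the "area under the curve"), and } O_1 \text{ is worse than } O_2 \text{ iff } \text{Badness}(O_1) > \text{Badness}(O_2).
```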
But this is key. There are certain kinds of cases where we accept what I call an anti-additive-aggregation approach. The case I just gave is one, and my lollipop for life case on the cover of my book is another. I'll mention that in a second. That's a case where we say, if the difference in quality of pain is sufficiently great, so that the one kind of pain makes a substantial impact on the quality of a person's life, and the other kind of pain makes basically no impact on the quality of their life, then no amount of the second kind of pain is worse than the first. So there we're rejecting the additive-aggregation approach. We're rejecting the view that, "Well, if only there's enough mosquito bites, eventually they'll be worse than two years of intense torture." We say I don't care how many mosquito bites there are.
So basically what we're doing is we're using what's called an additive-aggregation approach to compare the first alternative with the second, and the second with the third, and the third with the fourth, et cetera. But we're using an anti-additive-aggregation approach when we compare the first with the last, and both of those seem right. But then, it turns out that if the principles or the factors that are relevant for comparing alternatives vary, depending on the alternatives being compared, then there's no reason to expect transitivity to hold. Transitivity only holds if the very same factors are relevant for every comparison, but it turns out they're not. It turns out there are lots of cases of this sort where actually the factors that are relevant for comparing A with B is one thing, B with C is another thing, and C with D is another thing. But the factors for comparing A and Z are different. And when that happens, no reason to expect transitivity to obtain. That's what I eventually saw. That's what can explain or account for what otherwise seemed logically impossible. We couldn't understand how that could even be the case until I got to the root of what explains how it could be the case, which has to do with the nature and structure of the good. I could say much more about that, but maybe…
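To make the structure concrete, here is a small sketch of how comparison-relative principles can generate an intransitive chain of the kind Larry describes. The rule, the similarity threshold, and the numbers are illustrative assumptions, not Temkin's own formalism:

```python
# Toy illustration of comparison-relative principles producing an intransitive
# chain of "better than" judgements. All numbers are made up.

def better(a, b, similarity=0.2):
    """Judge whether pain-outcome a is better (less bad) than b.

    Outcomes are (intensity, duration) pairs.
    - Similar intensities: additive rule -- less total pain (intensity x duration) wins.
    - Very different intensities: anti-additive rule -- the much milder pain wins,
      no matter how long it lasts.
    """
    (ia, da), (ib, db) = a, b
    if abs(ia - ib) <= similarity:   # comparable quality of pain: aggregate
        return ia * da < ib * db
    return ia < ib                   # incomparable quality: duration is ignored

# A spectrum from intense-but-short pain down to trivial-but-lifelong pain:
# intensity drops a little at each step while duration grows a lot.
spectrum = [(1.0 - 0.1 * k, 2 * 3 ** k) for k in range(10)]

# Each outcome is judged better than its slightly milder, much longer neighbour...
print(all(better(spectrum[k], spectrum[k + 1]) for k in range(len(spectrum) - 1)))  # True

# ...yet the trivial, lifelong pain is judged better than the intense, short one.
print(better(spectrum[-1], spectrum[0]))  # True -- so "better than" is not transitive here
```

The point the sketch illustrates is purely structural: once the operative principle varies with the pair being compared, pairwise judgements need not chain together.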
Ben (53:51):
Definitely read the book for that. I think this is maybe the 18th time I've now read or heard your explanation for it and I'm still astonished listening to it, and it feels right. So I am troubled a little bit like you say that there's a theory that we use and it's probably not true because its axioms are not true. On the other hand, when I think about my creative and artwork, I am less troubled because actually there's a lot, I think, in the endeavor of human creativity which essentially kind of only holds in the moment in these certain circumstances and in these things. So the tools are kind of true even if they don't have a greater truth. There's another aspect which I've been thinking about. This idea of rationality or not rationality, that again, this human brain doesn't work necessarily on all of these prepared pathways, like the logic of maths might do. And actually, there might be even some neuroscience for why that might be. Do you want to do your other little example or shall we move on?
Larry (54:58):
Well, we could go on. I was going to give my lollipop for life example as another example of an anti-additive-aggregation approach. But it is, as I say, the one that...
Ben (55:09):
Yeah, we've got the picture.
Larry (55:13):
It's a picture. These are little people, they're all licking a lollipop, and this is a big person in intense pain.
Ben (55:19):
You want to go and look it up on the web now. Have a look at the picture.
Larry (55:26):
So, the thing about the lollipop for life case is it's one of the many cases where most people accept what's called an anti-additive-aggregation view. And what that means is the following. Suppose you had two ways the world might go. In one, everybody lives a really incredibly wonderful, flourishing life. So whatever you think matters for a flourishing life, they all have it. They live a long time. They have deep loves. They're very creative. They appreciate music. There are all sorts of scientific achievements and accomplishments. There's justice; whatever you want, the lives are filled with them and everybody has them. So in terms of all the really weighty things that matter in life, billions of people all have them. And in addition, there's a bunch of not so weighty things in life, but that also make life a little bit better. One of them is that over the course of their lives-- these are long lives, everybody gets to lick a lot of lollipops many times. That's one way the world might go.
Here's another way the world might go. Everybody lives that same type of life almost exactly the same. All the things that matter; the high achievements, the good health, the terrific sex, the theater going, everything. Whatever you think is valuable in life, it's all there. They're all healthy and they're all happy. But, one person suffers the most agonizing life. Pain and misery and suffering every single day. Day after day after day and then they eventually die. But, everybody else who have this incredible great life gets one extra lick of a lollipop over the course of their life. Now, almost everybody you ask that question, "If you had to choose between those possible outcomes, which would be better?" They say the outcome in which everyone has an incredibly great life but one less lick of the lollipop. Rather than the life in which all these people, vast numbers of them, have great lives, one more lick of lollipop, but someone suffers unbelievable agony and dies. So I call that the lollipop for life case.
But if one had a simple additive-aggregation approach for everything, well, one lick of a lollipop might be only a teeny tiny bit of pleasure. But if you just add up the sum total across people, if only there are enough people, eventually the sum total of pleasure gained from all those individual licks of the lollipop would outweigh the suffering of any single person, no matter how bad-- on a simple additive-aggregation approach of the sort that's typically favored by standard utilitarians. But that just seems crazy. That just seems deeply wrong. There are many examples from literature that try to evidence this kind of a view. "The Ones Who Walk Away from Omelas" is one.
But it just doesn't seem right. Things don't add up in that way. And I have more to say about why they don't add up in that way. But I'll just say one more thing about that, because it's important for people to get their head around what does or doesn't work. When I think about the pain of a mosquito bite spread out throughout our life, or the extra value of a lollipop lick spread out over time across many different lives, I claim that those kinds of pleasures and pains don't add up in the way they would need to in order to outweigh the concentrated pains or pleasures that might occur within a single life, like two years of torture. When I think about that, my analogy is roughly like this. I want to talk about the straw that broke the camel's back.
So you have a camel, you put a straw on the camel's back, and then you add a second piece, and then you add a third piece, and then a fourth piece, and a fifth piece, and a sixth piece. It takes a very long time, but eventually you'll have tons of straw on the camel's back and the camel's back will break. That's the additive-aggregation approach. If you add enough straw, the camel's back will eventually break, assuming that straw is all present at the same time. But suppose instead you have a camel and you put a piece of straw on its back and then you blow it off. Then you put a second piece of straw on and then you blow it off. And then the third, and the fourth, and the fifth. But each time you blow it off. The camel's back will never break. It doesn't matter how much straw. It will never break.
Well, I think when we have a mosquito bite; one this month, and one next month, and one the month after that. Or a lick of a lollipop; one person, and then a different person, and then a different person, then a different person. Those don't add up. That's like the straw on the back that's being blown off each time. Before the second piece of straw comes, the first one's gone. Before the second itch comes, the first one's gone. When one person has a lick of a lollipop, the other person has one, but they never combine. So there are some kinds of cases where an anti-additive-aggregation approach seems right for comparing outcomes. But importantly, there are other cases where an additive-aggregation approach seems right for comparing outcomes.
Then it just turns out it's easily shown that if certain factors are relevant for comparing A and B, and B and C, but different factors are relevant for comparing A and C, then A could be better than B in terms of all the factors relevant and significant for making that comparison, and B could be better than C in terms of all the factors relevant and significant for making that comparison, but it does not follow that A is better than C in terms of all the factors relevant and significant for making that comparison, because those factors are different.
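(One way to render the shape of that point formally, in notation of my own rather than Larry's:)

```latex
% Let f(X,Y) be the set of factors relevant and significant for comparing X with Y,
% and write X \succ_S Y for "X is better than Y in terms of the factors in S".
% The claim is that the two premises below do not entail the conclusion,
% because the relevant factors can differ from one pairwise comparison to another.
\[
A \succ_{f(A,B)} B \;\wedge\; B \succ_{f(B,C)} C
\;\not\Rightarrow\;
A \succ_{f(A,C)} C
\quad\text{when}\quad f(A,C) \neq f(A,B) \text{ and } f(A,C) \neq f(B,C).
\]
```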
Ben (1:02:15):
That still blows my mind. I hope everyone listening can ponder that, and re-listen to that, and go read the book. I'm going to move on to a couple of Peter Singer ideas before getting to the pond, or at least one, before his pond analogy. I have to confess, I found this Singer idea so awkward that I put a little bit less weight on his other ideas on animals and altruism when I first came across it. This idea has to do with disability and, I think, with some of his utilitarian roots. I hope I don't mangle this, but Singer has proposed that we should allow parents of disabled babies the choice of keeping or ending their lives. Now, Singer has quite a few nuances to his arguments. But I think, as a layperson, this seems to me wrong on both consequentialist and non-consequentialist grounds. Consequentialist because actually you can see examples where this is probably going to lead to less welfare in the world, and non-consequentialist because it doesn't seem to be the right thing to do.
But I never really had a chance to ask a moral philosopher about this. So I wonder if this has been problematic in the world of philosophy as well. You can see people as brilliant as Stephen Hawking who lived a very fulfilled life. And disability rights people talk about the social model of disability, which also encompasses things like focusing on systems change and making lives more fulfilling for people. Is it as problematic as we kind of think?
Larry (1:03:59):
This is not a topic that I address in any of my books, but I'm happy to address it here. So Peter's views about disability are complex and they are nuanced. I think they're often misinterpreted. I'm not saying you've misinterpreted them, but I do think they're often misinterpreted. Then there's the question of whether they're controversial or not. The interesting thing you said is you think that they're controversial on both consequentialist and non-consequentialist grounds. I think they're less controversial on consequentialist grounds than you do. I think they're very controversial on non-consequentialist grounds. I think there are lots of reasons why one might or might not worry. But even then, I have to add: among philosophers, they're less controversial even on non-consequentialist grounds than many people would think. I say among philosophers-- there are philosophers and philosophers. They always say if you have 15 philosophers in the room, you have 15 opinions about any given issue. Do you mind a funny story? Then I'll come back to this. Maybe I won't. Remind me if you feel like it. There's a character in The Good Place, Chidi, who can never make up his mind about philosophy. That character was patterned on my reading group, and I can tell you more about that.
But let me talk about the more serious issue. So the issue for Singer, what makes this so complicated, is that it gets down to a variety of questions about under what conditions a being of any kind has a serious right to life. For many people-- I know Roe v. Wade was just decided here in the United States. But for many philosophers, a being doesn't have a serious right to life until they have achieved a certain level of psychological development, and a whole bunch of other kinds of things. For Kant, you didn't have a serious right to life until you were capable of following a moral law that you prescribed to yourself.
In other words, you had to be a rational being. You had to be a being that was capable of morality. For some people, you don't have a serious right to life unless you're capable of a kind of inner self-consciousness; a life going on on the inside, where you're aware of who you are, where you have autonomy, where you have plans that are perhaps being frustrated by being treated one way rather than another. For some people, you don't have a serious right to life until you can envision your future going a certain way, and then have that vision you have for how your future might go be interrupted by other people who are deciding how your future should go for you instead, et cetera. The point is, there's a whole bunch of candidates for under what conditions we would say of a being that they have a serious right to life, and most of the candidates that most people take seriously-- This is one of the wonderful ironies about all of this: how Peter is castigated on one side, but then not credited on another. But most people don't think a cat, or a rat, or a rabbit has a serious right to life.
But in terms of the mental capacities that are currently held by a newborn baby, handicapped or not, and a cat or a rat… One of Peter's points early on is that an adult mammal-- a cat, or a dog, or a cow, or a rat, or a mouse-- has those capacities to a greater degree than a newborn infant. Notice that has nothing to do with disability per se. That's just true, he thinks, and many philosophers think, about… You first ask the question, "Under what conditions is one a person?" Not a human being, but a person, where by "person" you mean a being with a serious right to life. Peter tries to ask, "What do newborn babies have that newborn chimpanzees don't, or that cats don't, or that mice don't have? What do they have?" We can say more about, well, they have a potential that the others don't have. And then we could talk about that.
But that potential isn't there now. It's not realized. It might be there in the future. Is that enough? On that view, then, these newborn babies, whether handicapped or not, aren't persons. They don't have a serious right to life. I'm not saying I accept that view. I'm not taking a stand on that one way or the other. But this is a very serious kind of view that's widely discussed and that one has to come to terms with. In virtue of what is it that any being has a serious right to life, such that we could distinguish, for example, between a newborn infant and an adult cat-- if we really think adult cats don't have a serious right to life? Then his point would be, when he turns to the question about handicapped beings: well, that can put a lot of strain on a family. That can put a strain on the other children in the family. It can put a financial strain, emotional strain, all sorts of other strains.
Why not, if they don't have a serious right to life anyway, allow the parents to decide for themselves about something that's of such crucial importance to how their lives go, and how their family's lives go, and how their children's lives might go? Now, if you thought a newborn infant, whether handicapped or not, has a serious right to life, then the fact that it puts a lot of burden on the rest of the family might be irrelevant. Doesn't matter. "So I'm sorry, you won't be able to go to an expensive college because we spent all this money taking care of your younger sister. That's too bad. But she's a person, she has a serious right to life, and that's just the cost you're going to have to pay."
But I think his position is more complicated, in that really, at its foundational roots, there's a question of whether any human being before the age of two even has a serious right to life. And if the answer is no, even for a healthy newborn baby, then you could see why Peter or others might think that when you're dealing with a seriously handicapped child, who might be a huge burden on other members of the family, and whose quality of life, depending on how serious the condition is, may or may not be significantly high-- why not in that case give the parents the autonomy to choose what's best for them and their other children? That's one element of the kind of view in question.
There's another element, which is something utilitarians accept and non-utilitarians don't, which is the kind of replaceability argument. On this view, all human lives are replaceable. Let's go back to the very first question you asked me: "I could save one person, but it would cost a lot of money. For the same money, I could save 10 other people." Well, this person's life is replaceable, as it were. One life versus 10. Kant and the non-consequentialists are over here saying, "No, you can't think about human lives that way. Human lives aren't replaceable that way." And that's my own view.
But for Peter or for other committed utilitarians, we should be neutral between all beings; all sentient beings, including all human beings, and one human being is replaceable by another. So if this human being would have a certain level of success and value in their life, and you could have another person, or two others, whose expected value would be just as high or higher, this person's replaceable by another. So you should allow a parent to not have to support this baby in order to bring another one into existence that would have an equally good or better life. That's the replaceability thesis. That's a distinctly utilitarian thesis; it's not one that non-utilitarians generally accept. But I want to get back to one of the things you said which is very important.
You speak about the case of Stephen Hawking. Other people have talked about the case of Beethoven, though Beethoven wasn't in fact deaf at birth; he became deaf later on. So it's not the perfect analogy. But anyway, suppose he had been deaf at birth but still capable of creating his great work. Stephen Hawking and others. What you have to understand here is the form of the reasoning that Peter and others employ, which is a kind of reasoning that I often object to in my work, as you know. In legal circles, people say hard cases make bad law. You can't decide the general principles to govern society based on the exceptions, based on the fact that as a matter of fact... And again, Stephen Hawking is the same kind of case. Stephen Hawking wasn't born with the disabilities; they developed later. They came much later.
Suppose he had been born with those disabilities, what would the expected value of his life have been? Well, the expected value of his life might have been very, very low. It turned out that he was a genius and ended up doing great things, but the expected value of his life was quite low. When we make any kind of decision; moral decisions, practical decisions, you don't make your decision on the best possible outcome that could ever occur. You make the decision on your most reasonable guesses about what's likely to occur in general, in cases of this kind. So when you're choosing between-- This goes back to the value of expected utility theory. Suppose you're choosing which profession to go into. If you go into this profession, you're virtually guaranteed to have a great life. There's almost no downside to it.
If you go into that other profession, you're virtually guaranteed to have a miserable life, but there's a one in a billion chance, or a one in a million chance, that you'll have a stupendously great life, a better life than you would otherwise have. Which should you pursue? Almost every rational person, forget about the morality of it, will say, "Pursue the one where you're guaranteed to have a great life with no downside." You don't choose the one that is almost certain to lead to terrible outcomes just because there's some chance it might lead to a great outcome. So when people point to these great lives that have been lived by people who suffer from serious handicaps, those aren't necessarily representative. Those might be the hard cases that make bad law. Do we really want to adopt general policies for society, moral or otherwise, based on the exceptions rather than what's predicted to be the case?
Suppose you knew the following. You're going to have a thousand children born with a severe handicap, and 999 of their lives are going to go horribly. They'll be ridiculed at school. They'll only be capable of so much. But one out of the thousand, their life is going to be great. Should you bring the thousand people into the world? Do you bring a thousand people into the world knowing that 999 of them are going to suffer greatly because one of them might fare really well? That's really what Peter and others are trying to get us to grapple with. They're trying to get us to grapple with, really, what is the expected value of these lives? And if the expected value of these lives is actually negative, though on rare occasions it's positive, he asks, what right do you have to do that?
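(The expected-value structure of that question can be written out in a line or two; the probabilities and welfare values below are made up solely to show the form of the calculation, not to describe any real condition.)

```python
# Sketch of the expected-value point: policies are judged by the
# probability-weighted average of outcomes, not by the best possible outcome.
# Probabilities and welfare values are illustrative assumptions only.

p_great = 1 / 1000      # one life in a thousand goes wonderfully
p_bad   = 999 / 1000    # the rest go very badly
w_great = 100.0         # assumed welfare of the rare great life
w_bad   = -10.0         # assumed welfare of a life of great suffering

expected_welfare = p_great * w_great + p_bad * w_bad
print(expected_welfare)  # -9.89: negative even though the best case is spectacular
```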
But the key thing I want to get at here is that there are different cases. There's a difference between a child that's born not normal, but perfectly capable of a happy, fulfilling life-- I take it most Down syndrome children have this kind of a life in front of them-- and children that have really severe spina bifida, who are likely to be in pain and suffering and live a short life. It's never going to turn around, or it's highly unlikely to. There are different kinds of cases. I'm not going to agree with Peter at any point where the argument turns, in essence, on replaceability. "Well, we could get rid of this model for a newer, improved model." I just don't buy that. But I do think there are other elements underlying his view that need to be taken pretty seriously.
Ben (1:17:10):
Okay. That's given me food for thought. I still find it somewhat troubling, but I understand better now the link to his work on animals and sentience. I have listened to him, but I actually found your explanation better. Maybe it's easier to do when you're explaining someone else's work that you might not completely agree with, because you can draw the key points out of it. So one more then, maybe, before the pond analogy. Hopefully we'll have time. Again, this is a personal thing which isn't really in your books either. I hinted at it a little bit earlier, because I was somewhat troubled by one of Peter's views. This one actually concerns works of art. So I had, not a close friend, but a friend, who arguably was a great artist and made great work.
But it turns out that they also had other parts of their life which were very troubled and probably horrific. It ended up that this friend committed something awful; they had long-running hidden problems and it didn't end well. Actually, I still have their work on my bookshelves and I'm a little bit troubled. This question is often raised with regard to Ezra Pound. That's a little bit more historic, so we can think about it more easily, but he had very troubling views, and poets would say his poetry is also very brilliant. Do I, or should I, have to regard their work any differently in the face of other aspects of their lives when I'm thinking about their art, or not?
I mean, I guess it's going to come down to every individual and the weighting of the case and all of that which I'm considering. But does moral philosophy have anything to say about those sorts of problems? I guess they extend further, because people can do other horrific things and have other parts to their lives. I've read a little bit around that, but I've never found anything which made me go, "Oh, I could really think about it in that way." I'm assuming that's because of your fifty-philosophers-and-the-light-bulb type of thing; there's probably just a huge range of views on that. Is there any easy way of thinking about it, or is the answer complicated?
Larry (1:19:32):
So we've hinted at this already. You've hinted at it at the very beginning and I've hinted at it in my various responses. But I'm what other people call a pluralist about morality. So I think there are many ideals that matter. I think there are many ideals that matter practically and I think there are many ideals that matter morally. So justice matters, autonomy matters, freedom matters, equality matters, perfectionism matters; all these things matter. The truth matters, I believe. Beauty matters. Keats famously saw the connection between beauty and truth, right? "Truth is beauty. Beauty is truth. That's all you know, and all you need to know," something like that. I probably butchered it. So a very important claim about the importance of truth. Truth itself has its own kind of beauty, and beauty itself has its own kind of truth…
So I believe there are different values, and I believe that at the end of the day, a person who wants to be a good person has to give proper expression to all of these different kinds of values. So that is almost always going to make, from my way of thinking about the world, any question a complicated question, because there's rarely a case where I'm going to say, "Ah, here's the answer." People who are what I call monists, as opposed to pluralists, have a single value. It's the one they care about more than anything else, or the only one they care about. Milton Friedman famously claimed that one can care about only one value, and that value is freedom. But anyway, the point was there's one thing that matters: freedom. Everything else pales. So you assess every single issue in terms of freedom.
Well, if you assess every single issue in terms of-- and now fill in your favorite value; utility, or freedom, or perfection-- then most of the questions you're going to ask me or yourself are going to have an easy answer. But as soon as you recognize that morality's not that simple, life isn't that simple… Art itself isn't that simple. It's fragmented. It's complicated. You look at many of Picasso's pieces, or take Kafka. I love Kafka. The truths that he's revealing are not simple or straightforward. You don't get to them in a linear fashion, but they're revealing something really important about his inner psyche, but also about the world, and also about our relationships. All these kinds of things that you can only see through a distorted lens. There is no simple clarity there; there isn't certainty to be found.
So I believe that morality is extraordinarily complex, and lots of different factors bear on almost every important question. There are some questions that are very nice. They're kind to us. Those are the easy questions where all the values line up. Utility favors it, and perfection favors it, and equality favors it. They all favor it. That makes for a simple answer. But there are so few real-world questions, the kind that trouble us, the kind that keep us up at night, that have that feature. There are all these complicating elements. That's the background. Now to the particular question. You asked a specific question: how should I think about a great poet or a great philosopher?
Heidegger or others who were affiliated with the Nazis. Nietzsche. What should we make of Nietzsche if in fact Nietzsche's views were used or misused by the Nazis? I think there are theoretical issues at stake. There are moral issues at stake. There are practical issues at stake. Insofar as a poet gives us an example of great art that can be inspiring and can teach us about beauty, or ourselves, or the world, or our relationships, it would be a terrible waste of resources to ignore the poem that does that for us. It would be inefficient if nothing else. On the other hand, do you want to hold this poet up as a paragon of virtue? Be like her, be like him, when there's all this other baggage going on in their lives? The answer to that is no, you don't want to do that.
So you may want to, as it were-- there's always the old question of... I always want to say you don't want to throw out the baby with the bathwater. For some people, in the kind of case you describe, it's a very small baby and a very big bathwater that's filthy and dirty and so on. You want to get rid of the bathwater, not the baby. And it's always a continuum, right? Some of these people are very big babies with only a little bit of water. You really don't want to throw out the big baby just because there was a little water there. We're facing this issue, as you know, of course, as we try to come to terms with our past-- in the West, but not only in the West-- with the terrible atrocities that we have perpetrated or helped perpetrate, or that our ancestors did. Do we celebrate, or not celebrate, Columbus here in America?
He discovered America. He also committed genocide, among other things. So now, how do we come to terms with that? In the United States, we're trying to figure out how to come to terms with the leaders of the Confederacy. There were all these statues to them, but they were trying to keep slavery as an institution. So we need to weigh up: what do we learn from them? What do we not learn from them? What are the costs of just saying, "We don't need them"? I can see this argument. Someone might say, "Yeah, Ezra Pound was a great writer, but there are plenty of other great writers. Draw on them for their inspiration. You just don't have to talk about him, because it's not like he's irreplaceable."
So if you don't need to talk about someone who's this complicated and has all this baggage, why bother? But the philosopher in me-- there's always one element in me that cares about the truth. If a poet or a writer or a philosopher has truths that they've expressed, I think we should hang onto those. But we have to be very clear that what we're hanging onto is the truth, and not the speaker of the truth. That would be my own way of thinking about it. I can certainly see there's the theoretical case to be made, and then there's the practical case, because human beings aren't so good at compartmentalizing. Suppose we say, "No, look. Here's what we want to do. We want to continue to teach these controversial figures, but we're going to do it with a sign at the beginning of class." And all it says is, "Look, they did a bunch of really nasty things. Ignore all that. Just pay attention to the poetry or the story," or whatever.
If you say that, are we going to do it, or is this other stuff going to infiltrate? Is it going to have an impact? You say, "Don't pay attention to it." But even when you say don't pay attention to it, you start paying attention to it. There's the old saying, "People in glass houses shouldn't throw stones." This is my point about the big baby with a little bit of water, or the small baby with... If you go back in history, all of the people we teach are complicated, because they all come out of historical periods where nasty views were held. And they're part of those nasty views.
Ben (1:27:37):
That is a helpful way to think about it, and it perhaps confirms my nudging towards a sort of pluralist type of view as well. So let's go to Peter Singer's pond. Hopefully we still have time. I've probably spent a little bit too long on all these other interesting aspects. Who knew that there was so much philosophy?
Part Two Follows Here
Ben (00:22):
So, I'm going to ask about Peter Singer's pond, and maybe you could reflect a little bit on this amazing dinner you had with Angus Deaton and some moral philosophers. Maybe I'm going to mangle it a little bit, but I'll talk about the pond, and then maybe you can call out some of your disanalogies and some of the perhaps more novel aspects of how you've thought about Singer's pond and what it means for effective altruism and how to think about doing good. His pond analogy, very roughly, goes: "You are passing a pond, no one else is about, and you spot a drowning child. It would be safe for you to save her at the cost of wet clothes or wet shoes. What should you do?" And of course, everyone says, save the child. And then Singer goes on to say, "Well, this is an analogy for giving aid to those in need when, at very little cost to yourself, you can help save a life somewhere else."
When I first heard the example, I wasn't exactly sure it was a good analogy, because of these issues of intermediation and distance. I'd been to some very poor places in the world and I thought, "Well, in real life, it's a bit complicated." But it did have a reasonable force of analogy to it. On reading your work, though, I realized there are quite a lot of disanalogies when you get into the real world, and various serious thinkers and economists like Angus Deaton have some quite serious concerns. I was quite moved by the fact that you've changed your mind, or your mind has been nudged, and these are the elements I found quite insightful. I've only been grappling with these ideas for a while, and I think we agree that EA, or trying to do good, as this whole podcast has been about, is on the side of the angels, in the sense that we're trying to do good better, or do the most good. We can argue a little bit about what that is, with the project of effective giving and so on. But maybe you'd like to recount where you've come to on the disanalogies with Singer's pond and how this has come about.
Larry (02:35):
Good. So let me tell this story. I'll tell some stories about this and then you can interrupt at any point. So first, the audience has to understand if they have a chance to look at the book, I describe all these things. But I've been worried about the needy my whole life-- basically, as long as I can remember. I have very vivid memories at different periods of my life of just seeing people who are very poor and just my heart going out to them and thinking, “I'm so lucky, I've got so much. They have so little, and it's just not fair, and we have to do something about it.” From the earliest ages, I was contributing in my own small ways nickels and dimes out of my allowances to give to charities that would help the needy. I would go trick or treating for UNICEF and try to raise money to help hungry children. I happened to meet my wife when I was 16 years old on a hunger hike. We were walking 31 miles to raise money to feed people who were hungry. So this is a very deep and longstanding concern of mine. Both of my parents were very concerned about this. They were very different. One was very liberal. One was very conservative, but they both agreed that when it came to the issue of people in need, this was not a right left issue. This was a humanitarian issue. And this was something that was very important to me.
I got to college, I got to graduate school. When I was in college I was volunteering for all sorts of different local organizations. I would go door to door to try to raise money for them, et cetera. Eventually, I got to graduate school, started learning about Oxfam, and I eventually came around to the view that there were lots of organizations that I had given to and that I cared about, but they just weren't as important as the international development organizations, the international relief organizations; organizations like Oxfam. That didn't mean they weren't important, I thought. It's good to contribute to literacy. It's good to contribute to the March of Dimes, which tries to address handicaps of various sorts and people who are disadvantaged. It's good to address cancer and so on. It's good to save the whales. You can run through this large litany of things that it would be nice to do-- contribute to your local PBS station, Public Broadcasting.
Those are all worthwhile activities. Preserve the environment; a park we can go sit in. It's endless. But people are dying. Children are dying every single day, and it just seemed to me that was more important. So there came a period-- and this was before I'd ever heard of the EA movement-- where people would come to my door and want to raise money for some worthy local cause. I would often invite them in to sit and talk about it. But in essence, I would tell them, "Look, I'm glad you're doing this. It's a good thing. It's a good cause that you want to support. But I'm not going to support this kind of cause, because it's not as important as saving innocent children from dying." Once one begins to think about it that way, about which charities are or are not most important, then among those charities you begin to think-- And again, this is before I'd ever heard about EA or read anything by Peter Singer.
There was the thought that charities can do this more or less effectively. You really want to give to an effective charity. If you want to save lives, there can be expensive ways of saving lives or there can be very cheap ways of saving lives. Wouldn't you rather save more lives than fewer, if we want to save innocent lives, et cetera? So from early on, I found myself in a kind of effective altruist mindset of, "I really am going to focus my giving-- we, my wife and I, who walked on that hunger hike, are going to focus more on really trying to do good and, in particular, trying to help the neediest people in the world." We thought that was what mattered most. As you know, the poorest people at the time when I first started thinking about this were in China, in India, in Pakistan, and so on, but also throughout Africa; Sub-Saharan Africa and elsewhere.
So that was the thinking. When I started out as a young professor, I started teaching obligations to the needy in a large introductory class where I did a bunch of moral issues. We would teach abortion, and infanticide, and capital punishment, and nuclear war, and all sorts of different issues. Other sections came and went, but I always had a section on obligations to the needy. By then, I had started sponsoring an Oxfam club with some of my students. We did an Oxfam meal-skip program. This was at my first institution, Rice University. Students could swipe their meal cards once a week, and instead of receiving a meal, they would go without a meal that day, and the money that would've been spent on the meal went to Oxfam. We would do Oxfam hunger banquets at Thanksgiving, where different people were randomly assigned; roughly 78% eating beans and some other percentage eating something more, and then a few people, representing the wealthiest people in the world, would have some wonderful dinner. We'd all be sitting around; some at the table with fine linens, and some on the floor with straw. So people could get a feeling, for a night, of the real disparities here.
So this is something that's been near and dear to me for a very long time. Eventually, I started reading people like Peter Singer, and then later people like Peter Unger, who has a wonderful book, Living High and Letting Die: Our Illusion of Innocence, and teaching these topics in my classes and raising money along the way. At the end of the section on obligations to the needy, I would have anonymous collections. It wouldn't affect people's grades, and I wouldn't be in the room. We'd pass a hat and we raised thousands and thousands of dollars; sometimes more than $10,000 in a given year. And I would match it penny for penny, dollar for dollar-- my wife and I would, in addition to whatever else we would normally give. So this is something that's been a deep concern of mine, more or less forever.
I, like many other people, would often teach Peter Singer's famous article, "Famine, Affluence, and Morality," and it did have this very powerful effect on people. Because if you walk by and you see the kid drowning in the pond, it seems you have to save the kid, even if it's going to cost you your new shoes and your new clothes; that's just not important. Then Peter says, "But look, you can give the equivalent of new shoes and new clothes by writing a hundred-dollar check to an international relief organization, and they can be just as effective on the other side of the world and save some kid who's just as needy." This goes back to one of your earlier questions: we are to be neutral with respect to people, places, and times. The mere fact that they were in Africa, rather than here, that's not morally relevant. If they can be just as effective, you ought to write that check.
This eventually gave rise to the effective altruist movement. You want to write a check, but it's not just writing a check; it had all these various features to it. For years, I had found myself not just lecturing on this topic in my classes; I was also invited to lecture on this topic to help launch numerous chapters of effective altruist organizations; Giving What We Can chapters at Princeton, several chapters of a similar fund at Harvard, one at the University of Manchester. I was lecturing around the world, sometimes co-lecturing with people like Jeffrey Sachs from Columbia, of course, and Peter Singer was co-lecturing with me at one of these locations.
So this is something I've been committed to personally and professionally for years. One of the key points of the kind of lecture I would always give is that I would talk about how incredibly well off we in the West are, and how badly off people are elsewhere, and how every one of us-- not just the richest, but every one of us-- by making teeny tiny alterations in our lifestyle could help those in need. I thought it was very clear that making tiny alterations in our lifestyle wouldn't adversely affect us at all. In many ways we'd actually be better off if we ate out less, if we drank less, if we took fewer drugs. If we did some of these various other kinds of things, it would actually be better for us and better for them. I basically focused, as Peter does in his example, on how needy others are and how little it would cost us to help them. And that remains true.
Then one move that I made, which was important from early on in all my lectures and was different from Peter's kind of view, or the views of the effective altruists, is that I would argue for this not just on utilitarian grounds, not just on the grounds that this is the way of doing the most good. I was arguing that this is something one had reason to do in virtue of the kind of pluralism I care about. So if you want to be virtuous, you should be compassionate, you should be generous, you should be merciful, and you should be all these other kinds of things. And that should lead you to want to help people. If you want to do your duty, there are duties to help people in need; positive duties. I argue those can be every bit as strong as negative duties not to harm people, in some cases, or not to violate someone's rights. A lot of people say, "Well, there are negative duties and those are really strong, and positive duties are weak." There's a negative duty not to steal my brother's candy bar, but there's a positive duty to save the child drowning in the pond. If I steal my brother's candy bar, well, that's wrong negatively, but it's not nearly as bad as letting the child drown. So I argued that positive duties can be extremely strong, even much stronger than many so-called negative duties, which are supposed to be strict so that you have to fulfill them. I think you have to fulfill some of these positive duties too.
So I had all these arguments that there are duty-based reasons to help people, virtue-based reasons to help people, and that you are doing more good in the world if you make slight alterations in your lifestyle that significantly improve other people's lives. So that was my view, and it dominated my thinking from the time I was a little kid until very recently, into my fifties; mid-fifties, late fifties. Along the way, though, I met Angus Deaton, who was the 2015 Nobel Laureate in economics. Angus is a terrific guy and very smart. We met at various different places; at Princeton and over in Geneva at conferences on international health issues.
We would meet and we would talk and we'd go to lunch together. He had me over to his house. He'd tell me about this book of his called The Great Escape, in which he was basically arguing that the inequalities in the world are not as bad as we think. It's not that there aren't inequalities, but they're kind of a necessary byproduct of progress, which takes place at uneven rates and an uneven pace throughout the world. Since I was a person who had spent most of my life thinking and arguing about equality, I wasn't very sympathetic to that. We would get into these debates about whether we should or shouldn't worry about the extent of inequality in the world, et cetera. But along the way, he would slip in that he thought that people like Peter Singer were actually doing more harm than good.
And I remember thinking, "That can't be right." But I was also acutely aware that while Peter Singer was the god in this domain, I had spent most of my life, intellectually and otherwise, giving to international relief organizations and urging my students to do so, raising money and lecturing to help launch these chapters of Giving What We Can. If Peter Singer was doing more harm than good, then I was too, and that was a very dismaying thought. I just didn't want to believe it and I couldn't believe it. I couldn't believe we were doing more harm than good. I more or less dismissed him. Even though I like him a lot and I respect him, I just didn't really take it seriously. We had numerous conversations on this topic over the years, and every time he would raise it, I would just say, "That just can't be true."
So now we fast forward, and I'd meanwhile been invited to give the Uehiro Lectures in Oxford. I was planning to give some lectures on justice and inequality and global health, but then I had a dinner party at my home. This dinner party was a small group of people, but an amazing collection of people. It included the world's greatest living active epistemologist, a guy by the name of Alvin Goldman. It included Derek Parfit, who, as I've already said, I thought was the greatest living moral philosopher. It included Jeff McMahan, a philosopher I admire greatly, who holds the White's Chair of Moral Philosophy at Oxford. It included Angus, who is a Nobel Laureate. And it included various other people, including a billionaire named Patrick Burns. Anyway, we have this incredible free-flowing conversation. Dinner is going great, if I may say so. It was the greatest dinner party I've ever been a part of, by far, whether as host or guest. I've never heard anything like it. It was amazing.
But at some point, Angus, who is a very large man, pushed back from the table and started talking about his views on international aid and how people like Peter Singer are doing more harm than good. He was on one side of the table, literally, and on the other side of the table, literally, Derek and Jeff McMahan started pushing back. I was the host of the party, so I was kind of observing and watching rather than actively participating. And when you put the observing-and-watching hat on, rather than the active-participation hat, it's a very different hat. I found myself listening with fascination to the give and take, because I heard Angus putting forward his ideas, and I heard Derek and Jeff pushing back against them. What they were saying was almost word for word what I had told Angus all these years, every time we had this conversation. Every time Angus would say something, I would say something, and Derek and Jeff were saying basically the exact same things: they didn't believe it could be true. How could that possibly be true?
Normally, if I find myself in agreement with Derek and Jeff, that would make me confident that we were right, because they are just great philosophers. Jeff is a really, really good philosopher, I'm a pretty good philosopher, and if we're all on the same side of an issue-- I don't care who's lined up on the other side-- for the most part, I'm going to be pretty confident we're right. But this time I found myself thinking, "What the hell is going on here?" Because we were actually talking about empirical questions, about whether Peter Singer and others, and the on-the-ground aid agencies scattered throughout Africa, were doing more good than harm. And we were all-- comfortable armchair philosophers that we are-- arguing that it couldn't be true.
This argument went on well after dessert. The dishes were cleared, and eventually everyone went home. My wife and I are cleaning up the dishes and I can't get this out of my head. "What had just happened?" It seemed clear to me there was something wrong methodologically with Derek, Jeff, and me all telling Angus he couldn't be right. We were basically arguing about empirical issues. He had a whole lot of empirical background and we had basically none. Even if we were right, how could we be so sure we were right? I went to bed that night and I basically didn't sleep. I just could not sleep. I found myself thinking about Peter Singer's very famous pond example. Then I started generating versions of the pond example, one after another, after another. Because the basic pond example is: you walk past, there's a child, you're the only one there, they're drowning in a pond, can you save them or not? Of course you have to save them.
But what if it's different? What if they're drowning in a pond and there's this person at the bridge and he's like a tyrant, and he says, "Yeah, you can go save them. But before you can save them, you have to shoot somebody." Well, then would you save the child? Suppose he says, "No, you don't have to shoot anyone, but you have to give me an Uzi. Then you can go save the child." Would you save them then? What if you're pretty sure he's going to take that Uzi and shoot people? What if he says, "No, you don't have to give me an Uzi. I just want you to give me a bunch of ammunition," but you're pretty sure he's going to shoot a bunch of people. Do you still save the child? What if he just says, "No, you just have to give me a bunch of money," but you know he's going to buy guns with that money and he's going to persecute people with them. Do you still save that child? And I followed through, thinking about example after example of variations on the pond example. What if someone had thrown the child in the pond, and then in order for you to save them they force you to pay them?
I couldn't sleep all night. I'm thinking about example after example. I drifted off at some point in the early morning. I woke up an hour and a half later and I realized, "This has got to be the topic of my Uehiro Lectures. I have to think about this more." Because when I thought about it, I had generated all these possible pond examples. In a few of them, you still had to save the child. But in a whole lot of them, you couldn't save the child. And in a bunch of them, it wasn't clear whether you should save the child or not. Then the question was, "Is the real world more like the one set or the other set? Is it more like Peter's example or more like these others?"
Then I realized I actually had to do some reading. I normally don't read. I normally just sit in my office and think. When I did my work on inequality, I mostly thought; I didn't read other people. When I did my work on rethinking the good, I mostly thought; I didn't read other people. But now I was trying to figure out what's actually going on in the world, and I found myself reading a whole bunch of books and articles by development economists and philosophers. But I read a lot of the philosophers' stuff. Bottom line, I spent the next five years rethinking my views about this topic and it was very unsettling. I'd spent my entire life not only living my life a certain way, but absolutely convinced that I was right in doing so. I'm no longer so convinced. I remain convinced that those of us who are well-to-do in the West and elsewhere have to do a lot more than most of us do to help the neediest people in the world. I remain convinced about that, but it's no longer clear to me what the best way of doing that is.
There are so many disanalogies between Peter's original case and what goes on in the real world, it isn't even funny. There are all these worries that arise in the real world that simply don't arise at all in Peter's case. Some of them people are acutely aware of, but many of them no one even thinks about. They're very deep and they're very important. I now think there are very serious questions about whether we are doing more harm than good, whether there are much better ways to aid the needy, whether we should be focusing on the needy in the world's poorest countries or just on the needy wherever they live, because many of the worries about helping people in the world's poorest countries don't arise in wealthier countries. That sounds counterintuitive, but there are reasons to think it's true. There are just so many different questions.
But ironically, when I thought about this as a pluralist, I just found myself thinking that the effective altruist approach to this, which has dominated thinking among philosophers for years now-- that we have to do the most good-- is not the right approach anyway. It's not a question of just doing the most good. We want to do the most good, that's an important value, but we also want to be good. And sometimes these conflict. So I found myself raising all these questions. I found myself looking at a lot of literature. I found myself wondering and worrying about my view. I lost more sleep over this topic than I had in the previous 35 years of my life as a professional philosopher. I could think about inequality the way I did; there was never any, "How does this fit with my own personal convictions and commitments?" I could think about rethinking the good; there was never any deep tension between what I thought about that philosophy and how I'd lived my life. For the last 5, 6, 7 years, as I've thought about this topic, I've had to seriously reexamine: "Have I actually been acting rightly all these years in doing what I've done, and urging others to do what I've done, to aid the needy in the way I have?" It's been very unsettling to think that maybe the answer to that question is no. So let me pause there and give you a chance to ask about any more details, arguments, or asymmetries that you want to hear about.
Ben (25:38):
Yeah. I've got lots I could pick out. Maybe I'd ask you to pick out the ones that you feel most strongly about. I perhaps have two or three reflections. So, having come to this somewhat later and a little bit more haphazardly, I was struck by-- and I think you made this reflection-- that I'd known a tiny bit about the real-life episode of Goma, but hadn't realized how much of a disaster it was. And obviously, Deaton holds this out as a sort of case of, "Wow, look. Look what happens." That's one disanalogy, or something which has happened in the real world. So that's one thing I reflected on. Then the other small one, which is maybe an aside from the whole of it: I was really struck by where some things might have force, which was your argument-- I referred to this a lot earlier-- about immediacy. There is something about the immediacy of when someone is in your face.
Actually, I think you see this in cities all the time, when you actually meet face to face someone who is homeless. So it's not a theoretical issue; it's a here-and-now issue. I do think, from my own experience, that there's no real theory guiding this. There is something qualitatively different about it, which has been alluded to a little bit. But again, this armchair philosophy versus what's out there in the real world, that has really struck me. I probably now understand better why I might put a little bit more weight on that liveness, that immediacy. Again, it's complicated. But there is something to that.
And just on your overall conclusion, it just seems uncertain because of that. And so maybe, for instance, you end up in a place where... Depending on how you define it, in the UK or in the US, 10 to 15% of Americans or British people are below the poverty line here. Direct giving to those poor people, or to the poor wherever you are, is definitely doing something good. Whether it's the most good, or doing good better, you could argue about, but it is doing good, and you have fewer of these governance problems. You could maybe say it's more immediate. So that was one reflection that I had as well. But maybe you could just talk a little bit about the disanalogies which cause you to think, "I'm no longer sure that some of this international aid is actually doing more good than harm."
Maybe the example of Goma is one, if you want to do that. Or there are a couple of your disanalogies I was really struck by, and you've alluded to some of them. "Do you give this person a lot of money?" Well, that seems sort of fine. But then you know they're going to use this money for corrupt things-- and you can see that in the real world, a lot of corruption has happened. Then there's this idea I think Deaton has, that countries develop in response to their own citizens, and if you essentially sidetrack that process somehow, you're doing a whole disservice to that nation. That has now struck me as being plausible. And if that's plausible, that's kind of problematic for this international development thing. I think that's where you seem to have landed as well in your thinking.
Larry (28:55):
I think it is. So there's a number of elements you're touching on here, all of which are important. I'll just say a word about the face to face. I do talk about this in my book. As a pluralist, I think that there are many different ethical components that matter. There are certain people historically, who have said that the quintessential ethical moment arises when one human being encounters another human being and recognizes in that other human being a subject like themselves, with similar hopes, dreams, experiences, and so on and so forth. A similar frailty, similar mortality in that moment of direct confrontation. People like Martin Buber and Emmanuel Levinas have said that this is the heart of morality. Now, I don't think it's the heart of morality in the sense of-- It's certainly not the whole of morality. But I do think there's something extremely important about that face to face confrontation which is relevant to the strength of an obligation to help someone in a pond that's lacking when someone is also in danger of dying on the other side of the world.
I think that there's a kind of-- when you confront them... So I give a toy example in the book. You have a very expensive watch on, and because it's very expensive it has this latch, and it takes significant effort to get the latch on and off. It's so complicated, you can't do it with one hand. But it's worth $5,000; your rich uncle left it to you, you loved him, et cetera. You're walking by a pond, there's a child drowning in the pond, and you see them. Your first thought is, "I have to save the child." Then suddenly you think, "But wait a minute. I have this expensive watch on. If I jump in, my watch will be ruined." Then your next thought is, "But that's insignificant. There's a child who's going to drown. It's a $5,000 watch, but it's just a thing. It's just not important. So it'll be ruined. So what?" Then you have a further thought, and the further thought is, "Wait a minute. If I let the child drown, I could take this watch and sell it for $5,000. I could give that to an effective charity and they could supposedly save two children." Then you think, "Yeah. Well, I've read about the effective altruism movement. I've read Peter Singer and so on. I ought to do the most good that I can. So, I'm really sorry about this. It's really too bad, but…." So you call 911 and say, "There's a child drowning in a pond here," knowing there's nothing they can do about it in time, and you go off to work. Then you sell the watch, send the money to an effective charity, and they save two people.
Well, I claim there'd be something deeply wrong with you, and something deeply wrong about what you did, if you were able to do that. I think this is connected to issues having to do with virtue. I think if you're raised to be a virtuous person, those virtues will come into play and forcibly tell you, "You have to save this person there." I also think there's what we might call a special kind of direct, agent-relative duty that you can acquire just by being in direct confrontation with another human being in need. Now, there are various analogues of this.
Notice the following. Suppose I'm walking into a grocery store and there's someone following me, and I let the door slam in their face. That's a direct disrespect of another human being. You're walking into the grocery store, you turn, you make eye contact, you've got the door right there, and you let it slam in their face. There's something bad about that. There's something really bad about failing to respect another human being, even over something as trivial as that. It becomes even worse, of course, if that person is a woman, if that person is handicapped, if that person is Black, and you're directly disrespecting them perhaps for all sorts of biased reasons. That's the kind of thing that arises with that direct, face-to-face failure to show respect when you're in contact with another human being. That's just not in play when somebody on the other side of the world is drowning, metaphorically, and you could help them. The fact that it is in play when it's in your community and you're face to face with it gives you, I believe, a further reason to help, which is disanalogous to the other case.
But there are so many other factors. When you jump in the pond, you're saving a life. When you give to an international aid agency, you are effectively sending them a check that goes into their general fund, no matter what you say. And that's being spent on a whole bunch of things. Now, the net effect might be that, on average, that amount of money is saving a life. But what you are actually doing is contributing to a general fund that's going to help pay for the salaries of the people who work there, help pay for the gas in the cars they drive, help pay for the supplies that they buy. It's going to help pay for the rent on the building in Boston, if you're giving, like I do, to Oxfam America. You're going to be paying for all those things with your check.
There isn't the same moral compulsion to pay for all those kinds of things as there is to save an innocent human life. I know money's fungible. I know it works out on average, but they're not the same. There are other issues. When this child is drowning in the pond, it's just me and the child. I don't have to worry about corruption. When I give money to an international relief organization, there's an indefinitely large number of people who stand between me and the delivery of that aid, any one of whom could, for any one of a number of reasons, divert that aid to his or her own purposes. I don't have to worry about that in the one case. I do have to worry about it in the other case. There are other kinds of issues with having people on the ground. This is an issue that people are aware of in the global health realm, but they pay basically no attention to it in the realm of international aid.
When an agency moves into another country, they want to be effective. In order to be effective, they want to hire really talented people. They want to hire smart people, talented people, energetic people, and committed people. They want to hire people of good character. So they're going to go in and address whatever their particular concern is-- say the thing they care about is AIDS. So they're going to have an office. They're going to have secretarial help. Maybe they have to do some infrastructure. Maybe they need some roads built, whatever, et cetera. They're going to want to hire the very best people. Often, they're going to hire local people, because those are the people who are going to be willing to work there. Here is a question nobody asks: what were those local people doing before? We often think about opportunity costs. "What else could I do with the money?"
But what we're not thinking about is, "What are the opportunity costs of the people who come to work for you for the higher wages that international aid agencies can pay?" Where are they coming from? Because very often, if it's a doctor, or a nurse, or an engineer, or a lawyer, they might be coming from an absolutely fundamentally important social position in a community, in order to address the one particular thing that you care so much about as an outside agency. What are the opportunity costs when they move from here to there? You can see the gains, but you don't see the losses. There are issues of cover-ups. These are serious issues that no one pays attention to. International aid organizations are not looking for the places where their aid goes awry. When they see the places where their aid goes awry, they often cover their eyes.
It's a see no evil, hear no evil, speak no evil approach. This made headlines with the Oxfam prostitution scandal just a few years back. But it's the tip of the iceberg. There are also cases of corruption in many of these countries. Unfortunately, people say, "Oh, this is like Africa." It's not just Africa. There's corruption everywhere. But the world's poorest countries, at the local, state, or national level, are often dominated by tyrants, by thugs, by dictators. These tyrants and thugs and dictators have a multitude of ways of getting their hands on international aid. There are so many ways that a person in power-- someone you have to deal with if you're going to work on the ground to help people-- can get a piece of your pie. And then the question is, what are they doing with that piece?
They can get money from you by forcing you to buy a permit in order to work in the country. They can force you to hire their followers. They can tax those followers-- they can say, basically, for everybody they get a job for, they're going to take 10%. They can get kickbacks from the people who receive foreign aid. They can insist... Here's a strategy that's used all the time. Governments can set artificial exchange rates and insist that organizations working in their area change their money first into the local currency at the official exchange rate, and then buy goods and services on the ground, often from their people. In Syria recently-- there was an article in the Guardian just last year-- Syria has been doing this to the tune of, at one point, taking 51 cents of every dollar of international aid spent in Syria off the top, by making the UN agencies working in Syria exchange their money first into local currency before they could buy goods and services.
What they would do is set the official exchange rate at 50% below the so-called black market rate, which is what the currency was actually worth. In this way they were able to pocket hundreds of millions of dollars off the top just by manipulating the currency rates. That money can then be used to support the Syrian regime, which is a brutal regime that has gassed its own people. You help contribute to the human rights violations and so on. There are just so many ways. There's something called the resource curse. Economists have been talking about it for a while. People never talked about this when I was growing up, but now suddenly people are aware there's something called the resource curse. Basically, what that means is that it's striking how, in many poor countries that are resource rich, the general population as a whole remains poor while a group of elites becomes enormously wealthy.
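To make the currency mechanism concrete, here is a minimal sketch, in Python, of the arithmetic Larry is describing. The function name and the numbers are hypothetical illustrations, not figures from the interview or from the Guardian reporting; the point is simply that forcing aid agencies to convert at an official rate set at half the black-market rate captures roughly half of each dollar's purchasing power, broadly in line with the "51 cents on the dollar" figure he cites.

```python
# A minimal sketch (hypothetical numbers) of the exchange-rate skim described above.

def regime_skim(aid_usd, market_rate, official_rate):
    """Return how much of the aid's purchasing power the regime captures.

    market_rate:   local currency units per USD on the open ("black") market
    official_rate: the artificially low rate agencies are forced to use
    """
    local_received = aid_usd * official_rate       # what the agency gets in local currency
    real_value_usd = local_received / market_rate  # what that local currency is actually worth
    return aid_usd - real_value_usd                # value captured by the regime

# Example: official rate set at half the market rate on $1,000,000 of aid.
skim = regime_skim(aid_usd=1_000_000, market_rate=5000, official_rate=2500)
print(skim)  # 500000.0 -> roughly 50 cents captured per aid dollar
```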
The reason it's called the resource curse is because we have an international political and economic system that allows whoever's in control of the resources-- which means whoever's de facto in control of the government-- to sell them to the international community on any terms they want. Typically they do so for the benefit of themselves, their followers, and their political agenda. Well, that often leads to strife. It leads to civil wars. It leads to battles, because people are fighting to come to power over the rich resources in their country. And this leaves many innocent people killed, wounded, dead, et cetera, as a result of these ongoing civil wars that last for decades in some of these countries, like the Central African Republic. All of this is fueled by the fact that whoever is in control of the resources can sell them to outside companies and countries for their own benefit.
There's a kind of resource curse that results in the world's poorest countries when billions of dollars funnel in as aid. Because people in power-- if they're corrupt, if they're tyrants, if they're thugs, if they're warlords-- can get their hands on that money in any number of ways and use it to advance their political agenda. And when they do, often that political agenda is just terrible, and we're complicit in helping to make it happen.
I'll pause there. I didn't even get to Goma. But Goma is the most striking, terrible ethical disaster maybe in the history of international aid. Unbelievably bad. Your listeners, if they read nothing else in that book, should read the one small section on Goma. But what they really should read is Linda Polman's book on this topic-- I think it's The Crisis Caravan: What's Wrong with Humanitarian Aid? An extraordinarily powerful book that describes in great detail how the aid agencies are often complicit in these terrible events occurring. Often they know it, and they cover it up. It's a chilling story.
Ben (43:03):
Yeah. Let's ask people to look up Goma, because I think we probably can't do it justice here, and the same goes for the whole group of aid skeptics. So your work-- we've highlighted quite a lot of challenges to international aid and to what I'm going to call the kind of Peter Singer, perhaps utilitarian, style of EA. I think there are a lot of those challenges where it's not exactly clear, and I think there are a lot of challenges which people haven't thought about. Maybe I'd add a couple of extra thoughts around it and try and loop in some of the things that we talked about; issues with utility and some of this probabilistic thinking as well. And I guess these three...
Larry (43:48):
Can I interrupt for one second? I want to do that, but that might take us in the direction of bringing things together and some final thoughts, which might be a good idea. I'd like to add just one more sentence or two about this. So I want to say just a word about effective altruism, if I can, and then go on. And that is: this book raises a lot of challenges. I'm not the only one who's done so. But among philosophers, EA is the dominant theme right now. People aren't really challenging EA. It's almost like a religion. You want to give, you want to do good, you need to be in EA.
So among philosophers this is a challenge. There are non-philosophers who have challenged EA, but there aren't a lot of philosophers challenging EA from within. I am challenging EA on a variety of levels; both empirical questions, but also theoretical questions. I do want to be fair, though, and this gets to the kind of summing-up question about EA. There are a lot of current EA leaders who are aware of many of these problems, and they are not advocating that we give to on-the-ground international relief organizations. Many of the leaders of EA-- not Peter Singer, notably, but many of the other leaders of EA-- here I mean people like Toby Ord, Will MacAskill, Nick Beckstead in our country-- have moved on. They think, "No, the most effective way of helping people in need isn't to help the neediest people in poor countries. It's to worry about things like existential risk."
There's much more bang for the buck in preventing the extinction of humanity than there is in helping a billion people who are in the worst forms of poverty, or 2 billion people who are in poverty but not extreme poverty. That's just not cost effective. That's not the way to spend our charitable contributions. And I want to say I balk at that too. I think many of the current EA leaders were in the same place I was. They were on the side of the angels, I thought, in wanting to do what they could to help the neediest people of the world. But many of them have now moved on because of expected utility calculations: how do I get the most expected value from my charitable contributions? They think the way to do that is to work on preventing nuclear war, work on preventing biological destruction, work on preventing chemical warfare, work on preventing an asteroid from hitting us, work on preventing the kind of thing that we have right now going on with COVID; these kinds of things. We should stop pandemics. That's where we should be spending the money. Not spending our money to help the neediest people in the poorest countries.
I want to push back on that too, because I'm not just an effective altruist. I still care about the neediest people in the poorest countries. I want to find a way to help them. I believe that should be at the top of our priorities when we think about what we should do in terms of helping people. Not that these other things don't matter. What they care about matters too. We should pay attention to it. We should worry about climate change. We absolutely should. But not to the exclusion of helping the neediest people in the poorest countries, the way so many of them now do because that's how the cost-benefit analysis comes out. My heart still remains there.
My question is, how do we do that? I am inclined to think that if we end up doing it, the things that will come into play are finding ways to promote better governance, to promote democracy, to promote the rule of law, to change the terms of international trade, all those kinds of things. So we find ways to […] help people in Africa without spending money directly in Africa, which gives rise to all these other kinds of issues. So that's one issue. But the other issue is that we might sometimes, in some cases, have to step in to help people right now because they're in need now, even if we know there might be terrible downstream effects of our doing so.
That's because I'm not just an effective altruist; other things matter too. Sometimes the urgency of the moment, when people are in need and we could help them now, just might take priority over, "Yeah, but will that do the most good in the long run for everybody, looking at future generations?" Maybe not. Maybe that's not what we should be doing if we want to be good in a world of need and not just do the most good in a world of need. I want to get that out because, in fairness, the leaders of the effective altruism movement actually recognize many of the problems I've raised. Often they did so even before I raised them. They say, "Yeah, right. That's not where we're spending our focus," but I'm not happy with that answer either.
Ben (49:20):
That's brilliant. I was hoping to wrap a lot of what you just said up in my next couple of questions. So I'm going to try and wrap it around a reflection and a question or two, and hopefully I can keep all of that in. Maybe I can say this because I'm even later to EA and I would view myself as an EA outsider, but I'm very intrigued. So one core thing that I think some of the leaders or some people in EA think of at a higher level: EA is a project of doing good better, rather than doing the most good. So just thinking about how to do good better is aligned with a lot of this thinking. And I think one of my reasons for saying this is that maybe they are open to thinking about... In fact, there have been some posts written about, "Well, how do we do better governance? How do we do better systems? Maybe this is doing good better." This is why I liked your phrase that a lot of the thinking is trying to be on the side of the angels. We can disagree about how we might do that.
I think that's quite important, particularly for a lot of younger people I meet, under 30, who want to do good or think about this: to think, "Okay, maybe in a pluralistic view of the world, this is where you might want to do something." And maybe that is working on governance issues. Maybe that is working on some of these systems issues and things like that. Then my second reflection goes back all the way to the beginning, which is maybe why I spent so long at the start on some of your thoughts about utility, transitivity, inequality and things like that. It comes down to your asteroid example. There is this movement within EA to think about rogue AI, existential risk, and long-termism. You can see from one of your points that if you have billions or trillions of people in the future and you multiply that by something, you get a really big blob of value. You really need to save this blob in the future. You can get that when you multiply these numbers under these kinds of utility or expected value frameworks. But to our very first point, if there are potentially issues with the axioms behind that, and we can see all of these other things, maybe we do need to be more pluralist-- and that's even before we've talked about each-we dilemmas and some of these dilemmas within long-termism and things like that.
So I'd be interested in that final reflection of, "Yes, there are problems with international aid, and maybe those problems carry over to being too reliant on expected value for very big things in long-termism." We are in some ways pre-arguing Will MacAskill's next book, which is going to concentrate on this, I think; on the future and future lives. I think you are just sort of holding your hand up and saying, "Well, yes, there are these concerns. Sure, in a pluralistic world, we must work on climate and pandemics." But in a pluralistic world we might also think about inequality. In fact, there might even be trade-offs there: if we want to have economic growth out of poverty, we might have to accept a certain amount of inequality, in order to also think about justice and fairness and good governance and things like that.
And it's that morality is complex, which I think I've got from reading your work. So I don't know whether you might want to reflect on that long-termism aspect, and why you think some of those expected value calculations may not be doing good better, and what might be some other areas to work on. That I guess is my first question. Then my second, which is a little bit interrelated to that, actually comes from… I think it was Tyler Cowen: whether you think you've had a bit of a blind spot, at least before coming across Angus Deaton's work, or... This is what I find so interesting. In a way, back in 1977 those three great moral philosophers had a little bit of a blind spot because they couldn't conceive of transitivity being any different.
And actually, in parts of EA or international aid and development, they couldn't conceive that we were doing something wrong. Is there some other element that you're intrigued about, where maybe your intuition is saying, "Maybe we should really examine this, because I just don't know, because it's been a little bit of a blind spot"-- given that two or three or four of the major things or discoveries you've had have come from really interrogating some of these blind spots? So a two-part question. One: when we're thinking about doing good, or doing good better, why might we want to weight less heavily long-termism, or some aspects of long-termism and utility theory, and weight more heavily the ideas of health or poverty, but doing those better, even if it's not through international aid? And second, is there anything else which you think might be a blind spot, and why?
Larry (54:28):
So I'm going to start with the second one, because I can deal with that more quickly by saying I'm sure there are other blind spots. But you don't always recognize them until they hit you between the eyes, as it were. I don't have anything on my agenda, as it were, percolating in the back of my head where everyone's got this wrong and we need to correct it. I do think we have blind spots. We have lots of blind spots socially that we're slowly-- that people like Peter Singer and lots of other people are calling our attention to. I do think that our treatment of animals remains, for most people in the world, a blind spot. We just can't take seriously that animals deserve more consideration than we currently give them, which is almost zero.
So I do think that's a blind spot, but I'm not unique in pointing that out. Lots of people have said that's a blind spot. So there are things of that kind that I think we as a species, as it were, have not really taken on board seriously, certain aspects that we should. So I think that. But a lot of these other things, where I find myself suddenly rethinking something for the first time, it just kind of comes. I'm reading someone's work, I'm talking to someone, they're making these claims, and then it's just not fitting. And I suddenly realize something's not right here. It's just not right. At least that's how it comes across phenomenologically. Then I suddenly realize I never saw this before. I mean, I couldn't. The point about being on the side of the angels-- it was a total blind spot. I'd been so committed to this.
We have all these psychological explanations for what happens once people are committed to something. You have anchoring biases, you have confirmation biases, you have cognitive dissonance, you have all these psychological factors that make it very hard to move off of some ideological position that you're predisposed to accept. And that's true for all of us. I mean, if you're a member of the human species, you have these problems. We all have blind spots individually.
Ben (56:48):
Nothing nagging you at the moment?
Larry (56:51):
No, nothing revolutionary. I think my thought about transitivity is kind of revolutionary. I think that we are way too... I express the sentiment in my most recent book that I am cosmopolitan by nature. I'm capable of feeling moments of pride for the United States if we win more Olympic golds than someone else. But by and large, I never understand how a president can be elected by saying, "Put America first." And this is true on both sides. This is not just Donald Trump saying America first. Basically, Biden and others run on the theme, "It's all about the American worker." I'm thinking, "Yeah, well, if we shipped everything back to the American worker, we're taking away from people in the Philippines. We're taking away from poor people in India. We're taking away from people who are a whole lot worse off." I get the political appeal of it, but from a moral standpoint, I just don't get it. The same goes for the kind of xenophobia about immigrants and so on that dominates the world right now, particularly the Western world.
I think these are all blind spots. I think they're moral blind spots. I think people are just not capable yet of really taking on board the attitude that they should have towards people who are not one of their own-- whether that's race, religion, gender, sexual orientation. We just have these terrible in-group/out-group divisions; the other versus us. All of those lead to blind spots that I think are bad morally, socially, politically, et cetera. But I don't have anything revolutionary to say about those. I just think people have recognized that, and we're not there yet, and we need to work more to get people there, I believe.
Ben (58:48):
So I'm just going to make a quick reflection before going to that first question. I see there's something with children, particularly when talking to children. I was very struck, speaking with my son, by how children respond when they see people suffering in pictures, particularly when they're younger. Their question is, "Why are they different?" Because they haven't been taught that they are different. It's interesting that through that lens they see something which becomes a blind spot later, which they don't have earlier. I remember listening to school children being told about forced economic migrants, or refugees, crossing over the sea, and the real difficulty of explaining why that had come to be, because their questions of "why" had very unsatisfactory answers. So that's a small reflection. Maybe then coming to that first question, wrapping around how to think about doing good better, and why long-termism, or some aspects of it, may be being overweighted by some within the EA community. How might we think about that?
Larry (01:00:03):
Good. Maybe we'll end with this discussion of where you can go beyond long-termism. I will say this, and this is actually quite important, I think. I think it was Derek Parfit at the very end of Reasons and Persons who, in the course of a single paragraph, really put the issue of long-termism on the philosophical agenda. And I do think that was a blind spot. I think for most of human history we could basically affect ourselves, our family members, and our local community, and the scope of morality extended that far and not much further. My obligations were to my wife, to my children, to my family members, my neighbors. That's because I really couldn't do anything about people on the other side of the world.
Then as information changed, as technologies changed, as we came into contact with a larger global community, we were suddenly able to recognize that there's a lot more going on in the world; more interaction, more people that we might be able to help or hurt than just the people in our local neighborhoods. And we began to have an expansion of our global awareness, our global reach. But for a long time it continued to be focused on, as we call them, those people who are alive today. Those who were, as it were, both the agents of moral thinking and also the patients-- the people to whom we had obligations-- were the people alive today. Those were the people you could hurt. So a lot of people in moral philosophy-- and Derek made this very clear-- had the view that there's "no harm, no foul." You can't be violating someone's rights unless you somehow hurt them or leave them worse off than they otherwise would be.
When it came to future generations, there's nothing you typically do that's going to hurt them or leave them worse off than they would otherwise be-- especially once Derek taught us about the nonidentity problem. Suppose you could bring into existence future generations at level 200, or instead-- completely different people, for the reasons he talks about, given the nonidentity problem-- at level 500. Well, the people at level 200 will be glad to be alive at level 200; that's an arbitrary number, meaning a decent life. The people at level 500-- also an arbitrary number-- are leading much better lives, but they're different people. It's not bad for the people at level 200 to be alive at level 200 with good lives rather than not exist at all, which is the alternative.
So if the thought, which held for most of human history, is, "You never act wrongly if you don't actually harm someone-- that is, leave them worse off than they would otherwise be," then we can do whatever we want to future generations as long as we make sure they have lives worth living. That was a blind spot. That was a hole in our moral thinking. It took the genius of Derek Parfit to wake us up to the fact that, no, we can drastically affect the quality of life for generations to come. Indeed, we can affect whether or not there will even be people existing for generations to come. And merely asking the question, "Do we harm anyone, leave anybody worse off, if we pick this alternative rather than that alternative?" leaves out a whole domain of moral philosophy.
So the first point to note is that there was a giant blind spot in our moral thinking when it came to the topic of future generations: how the world might go, and whether there should even be future generations. Our standard intuitive way of thinking about these matters was a poor fit. It wasn't designed for this, because our evolution took place in a setting where everybody you could affect was someone you could reach out and touch; it had nothing to do with future generations. So that was a big blind spot. But now the interesting thing is that philosophers-- this is all in the last 50 years or so-- have rushed in to fill this void. One of the things that has resulted is long-termism driven by expected utility calculations. One of the leading works on this-- not yet published as a book-- was a thesis by a former student of mine, Nick Beckstead, who wrote on this topic. It has been hugely influential in the effective altruism community, and it is a brilliant thesis.
But it takes expected utility theory to be the fundamental principle of rationality-- how we should make decisions-- and then it drives forward in that way. Now, three thoughts. There remains a kind of blind spot throughout all of these discussions, which place human beings front and center of all this thinking about long-termism. I see no reason to do that. If you're really just concerned about the expected utility of the world, the total amount of happiness that exists in the universe, it'd be a lot cheaper to genetically alter cockroaches so that they can experience pleasures. Maybe they already do, but genetically enhance them so they can experience pleasures a little bit more. Just ensure that there are billions of cockroaches all over the universe. Put cockroaches in spaceships and send them out rather than human beings. It'd be a lot cheaper.
There was a great book many years ago, before your day probably, written by a guy named Jonathan Schell. It was called The Fate of the Earth, and it was about nuclear disaster. I used to teach it in my class. Powerful book. He basically said that after the great nuclear Armageddon, when Russia and the United States have launched all their nuclear weapons and destroyed the world, the only things that are going to be left are grasses and cockroaches. That was a slight rhetorical flourish, but the question is, if all you care about is the total amount of happiness in the world, if that's what's driving you, then this focus on humanity as the center of happiness doesn't make sense. Because lots of beings-- squirrels, rats, mice, maybe cockroaches, perhaps genetically enhanced-- might be capable of having positive experiences, and it might be easy to duplicate them in vast numbers at a very cheap rate. Whereas humans are expensive to maintain, and to send through space to colonize.
So this kind of focus on humans in long-termism doesn't make any sense to me. I think that's a blind spot. It's an anthropocentric vestige. But it may reflect something very important. It may reflect that there are values that matter; certain kinds of perfectionist values, artistic values-- the kinds of things you do if you're doing theater, or art, or poetry, which we've talked about here-- that have a kind of value that can't be put on the lollipops-for-life scale. How many licks of a lollipop are the equivalent of a Bertolt Brecht play? Maybe no number. Maybe that's why we focus on humans rather than cockroaches. But again, that doesn't fit with the do-the-most-good approach of the effective altruist and long-termism, if it's really just a matter of adding up how much good exists in the world. It's giving weight to certain things and saying no amount of this outweighs that. Lexical priorities… So that's one issue. And I do think there's maybe an undue focus, given their antecedent concerns, on the human side of long-termism.
The other thing that doesn't make a lot of sense to me: I believe we live in a vast universe. Set aside the multiverse question-- there's good reason to believe, if you believe Roger Penrose, that we have multiverses all over the place. But set aside the multiverse; still, the universe is huge. There might be all sorts of sentient beings out there that have high quality lives, far more beings than we could even imagine. I suspect that's likely to be true. One of the main reasons people are so worried about long-termism is they're worried about extinction. They're worried about the idea that we're going to go extinct and there might be nothing like this kind of high quality of life ever again, ever in the universe, if we go extinct. I don't get that. Life will evolve. It will show up again. If not here on earth, it will show up on other planets. It probably already has.
So this kind of extreme angst about the future of humanity seems to reflect an overinflated sense of our own importance. I don't mind caring about our future, but I'm not willing to sacrifice the lives of people in need now. Here and now on this earth, right here, they need us. And we're going to say, "Eh…" We're too worried about future humans. Why future humans rather than future squirrels, rather than future cockroaches, rather than just... There are probably lots of beings scattered throughout the universe who already have high-level existence. Our presence or absence in the universe is probably an insignificant contribution to the overall greater good. So why not focus on things like justice or equality, other values, rather than just more, more, and more? So I have three beefs, basically, with long-termism. I think it's something we should pay attention to... There was a huge hole before. I want to make that clear. Derek told us there's a huge hole. We need to think about the future and we need to think about that from a moral perspective. That actually runs through all my work. I have arguments through tons of my work that basically say, "We have a moral obligation to pay attention to the future."
It's not just about doing the best we can for people who are alive now. It also matters who will be alive in the future. So I get that. I think that hole needs to be filled. But I'm not going to drive all of our thinking about morality by this model that depends on a whole series of premises that are questionable at best, and most likely false. I'm going to want something more nuanced. I'm going to want something more complex. I'm going to want a more pluralistic way of balancing things off. I'm never going to want to give the overwhelming priority to human beings for humanity's sake that this movement seems to have largely incorporated when they think about long-termism. And there's this incredible focus on us. It just reminds me-- I said we're kind of bringing this to a close, so I'll give you the last word. But we still haven't gotten over the … conception of the universe, notwithstanding the hundreds of years we are post-Copernicus.
The whole thing about that … conception of the universe is, "God created the world and He created us and we're at the center of everything. It's really all about God, but we're at the center of it all." Then it came as a huge shock to learn maybe we're not at the center of it all. Maybe there's a sun and it's at the center of our solar system, and even that isn't particularly important, because there are all these other things out there. This caused untold damage to the human psyche, to think, "Oh my God, we're not the center of the universe." But when you talk about holes in our thinking, we still think of ourselves as the center of the universe. We can't help it. It infuses our thinking everywhere, and maybe that's unavoidable. Maybe it's an inevitable reflection of the human condition, which leads to solipsism and subjectivity, because each of us is the center of his or her own universe, right? When we die, our universe in a very real way goes out of existence. But we put so much weight on that, both as individuals and as a species. I think what we are supposed to learn from Copernicus is that it was a mistake to think we're the center of the universe. And it still is.
Ben (01:12:56):
Wow, that's fascinating. I'm going to ask you a couple of last questions to round that up as our sort of closing remarks. But I will offer a couple of reflections. One is, I'm really glad we did start with some work on expected value and transitivity, because it comes full circle back to understanding how you're thinking about long-termism and things like that. So that's really thought provoking. And also that the future was a blind spot, and that Derek Parfit sort of mentioned it almost in passing-- although there was more to it than that-- and that that's important. But you don't necessarily want to rely on a theory whose axioms seem not to be completely true, or true in all circumstances. I do think then, on EA, if EA is more simply a project for trying to increase human welfare, maybe that is something which people can agree on, even if where the EA project might be going is still subject to a lot of debate.
So I'd be interested in whether that is a simpler project to think about: if you do want to improve human welfare, how should you do it better? Then my last sub-question after that is whether you'd like to offer any final remarks, perhaps advice about being good in the world, or life advice, or even thinking about the life of a philosopher. Anything you'd like to add as final comments. So: whether EA is a project for improving welfare, and how maybe we should think about doing good in the world.
Larry (01:14:39):
Good. So let me start with the simple point. I said a long time ago, and I still believe this: I do think that EA as a movement and the leaders of EA are on the side of the angels in one clear respect. They are concerned. They are moral beings who are concerned with acting morally in the world. They do have a larger global scope than what's just best for me, and you have to admire that. In fact, that's the thing they're often most criticized for: that they're going to require the greatest possible sacrifices from individuals for the greater good. Even if you think that's a mistake-- and I'll say a word about that in a second-- you have to admire people who take that approach to life seriously and put others before themselves.
You also have to admire the grain of truth in this. Suppose you want to do good in the world, and-- at an earlier time, when it cost this amount-- you could spend something like $30,000 to address a single case of AIDS in Africa, or you could spend that $30,000 on oral rehydration tablets. In the one case you might save one life, and in the other case you might save 20 lives. Well, if what you care about is saving innocent lives, it's hard not to think 20 lives has a certain pull on you that one life doesn't. That's important to me, even though I don't want to just become a bean counter about doing the most good. Still, there's something valuable about that approach. More of the good is better than less of the good. It has a powerful appeal even if we can't let it drive the bus.
So I do think there's a lot to be said for effective altruists, in that they tend to have a scope beyond just themselves, beyond just their country, beyond just people like them. They look towards the future. I'm a person who thinks we always should be looking towards the future. We shouldn't just be living in the past. There's so much to admire about effective altruism as a movement, and there's much to be said for the idea that, other things being equal, we want to do more good rather than less. I still think that's an important component of being a good human being and of acting morally. So I can sign onto a lot of what they sign onto. Many of the critiques I make in my most recent book really are critiques that effective altruists could fully accept: "Yeah, you're right. This just shows that we have to do things differently, not because the goal was wrong, but because we weren't accomplishing our goal by doing the things we are doing now."
So there's a lot that I'm still in agreement with about effective altruism. But, at the end of the day, effective altruism is still about doing the most good. You mentioned the issue of whether we might limit the scope to human welfare: insofar as we care about human welfare, we want to do the most good with respect to human welfare, and we can leave it as an open question how much we should care about animal welfare, or how much we should care about alien welfare. But at least for the part where we're focusing on human welfare-- and that seems to be a reasonable thing to focus on, in some cases anyway-- can't we agree that we should do the most good at least there? I agree that we should always pay attention to how much good we're doing. It's always a relevant factor, pragmatically and also morally. But it's only one factor. That's going to be my claim again and again.
I've taught many students over the years. I'm coming to the end of my career. I'm retiring. I've had countless students in my office over the years who are struggling with the question of, "How should I lead my life?" This is extremely controversial, but being the pluralist that I am, I believe in a balanced life. Now, you can find balance in a number of ways. But just as I'm a pluralist about my moral values, I'm a pluralist about what's involved in being a good person and what's involved in leading a worthwhile human life. I'm signed up in the camp of, "We only have one life to lead."
It would be great if there were reincarnation and we led innumerable lives. It would be great if there were life after death, et cetera. I'm on board for all of that if it's there, but I'm not leading my life assuming any of that is true. I'm leading my life thinking I have one and only one life to lead. How am I going to lead it? I would like it to be true that I led a life that, at the end of the day, lets me look at myself in the mirror and feel good about the life I lived. For me, that involves things like personal integrity. How do you treat other human beings? Do you treat people with respect, et cetera? I believe that personal, face-to-face confrontation matters. I believe the things in my life...
I'm wearing a shirt right now. I'm wearing this shirt on purpose today. It says, "I have been: Mister, Professor, and Doctor..." and if you turn it around to the back, it says, "But my favorite title is Grandpa." I now have five grandchildren. I understand that having children is not for everybody, but for someone like me, or for my father, for my parents, the amount of joy and pleasure and pride that I get out of my children and grandchildren-- there's nothing in my professional life that compares to it. For all the accomplishments that I've had-- and I've had a few; for all the impact on the world that some people say I've had, which is major-- none of that compares. It all pales in comparison to the relationship I've had with my wife, going back to when we met on the hunger hike decades ago.
I put a tremendous amount of weight on the value of human relations. Parent and child, love towards significant others-- these are tremendous sources of value in a human life. I pity my peers who are at the absolute pinnacle of intellectual achievement but whose lives are lacking in so many other ways. So Derek Parfit, who I've mentioned before-- he was my mentor. He was my closest friend in philosophy for 40 years. I still think about him almost every day. I loved Derek, but I would never have wanted to be Derek. And I wouldn't want anybody I care about to be Derek. He accomplished unbelievable things as a world class philosopher. But his life was, to my mind, missing so much of what makes a life valuable. So I wouldn't recommend that life. He didn't regret the life he had. And I have other philosopher friends like Derek; single-minded, focused on their philosophy, which comes first, second, and third, and they wouldn't have traded their lives for anything. I get that.
I don't see that as the most valuable human life. One way of thinking about this: I have a model which I use in my Rethinking the Good, which I... I'm blanking on the current name. It was called the Gymnastics Model for many years and now I've given it a new name-- a model for moral ideals, that's it. Often we think about, "Who's the greatest gymnast? The best all-around gymnast?" The greatest all-around gymnast is not the gymnast who excels at the parallel bars but stinks at everything else. I don't care how good you are at the parallel bars or how good you are on the mat. If you are not good at anything else, you're not a great all-around gymnast. Similarly, when you think about the virtues, I don't care how much Genghis Khan might have the virtue of loving his mother-- pretend that's a virtue.
He might love his mother an unbelievable amount, but he's not a virtuous person if he rapes and pillages and kills and steals and does all these other things. It doesn't matter how accomplished he is along that one dimension if he's abysmal along the other dimensions. If you want to be a virtuous person, a truly virtuous person, you have to score pretty highly along a whole bunch of dimensions. Similarly, I think, for a human life: if you really want to have a great human life-- and this is what I try to urge on my students-- you can't be narrowly focused on any one of the aspects that make a human life valuable. That's my view. I have colleagues who don't share it, but you're interviewing me right now and you're asking my view. When I give my advice to the people I care about...
Nobody can do everything. You can't be everything. We live in an age of specialization. I don't do every academic field. I don't even do every field within philosophy. I don't even do every field within moral philosophy. I specialize. One of the reasons I'm as good, or not as good, as I am in moral philosophy is that I spent 15 years working on my first book and 30 years working on my second book. That means a lot of hedgehogging rather than being a fox. So I get the appeal of focusing on something and trying to do it well. But when it comes to living a good human life, it can't just be narrowly focused on one thing. There has to be space for these other things that allow a human being to flourish. That's what I believe, and that's certainly how I've tried to live my life.
There's an old expression from the tradition I grew up with that says, "If you are not for you, then who will be? If you're only for you, then what are you?" Basically, that means we do have the right and the permission to look out after our own wellbeing and that of those we love. I believe that. But if we only look out after ourselves and those we love, and we don't take time to attend to the larger world of which we're a part, our lives are not good lives.
Ben (01:26:01):
That's amazing. I'd sum that up as: be pluralistic, be balanced, be able to look at yourself in the mirror, and be happy with what you see. So on thinking about that, I think this has been one of the most amazing conversations of my life. So, Larry Temkin, I thank you very much.
Larry (01:26:26):
It's very kind of you to say. Thanks, Ben. I really appreciate it.