Then Do Better

Ruth Chang: Making hard choices, philosophy, agency, commitment, Derek Parfit | Podcast

Ruth Chang is a prominent philosopher known for her work in decision theory, practical reason, and moral philosophy. She is a professor at the University of Oxford, holding the Chair of Jurisprudence. She is well known for her theory of "hard choices," where she argues that many choices are not determined by objective reasons but instead involve values that are incommensurable. Her TED Talk on the subject has reached 10 million views.

Chang challenges the traditional framework of decision-making, which views choices as being simply better, worse, or equal. She introduces the idea that some choices are "on a par," meaning they are qualitatively different yet in the same neighborhood of value. This perspective suggests that the balance scale often used in decision-making wobbles without settling, reflecting the complexity and richness of our values.

The conversation explores how this understanding can be applied to career decisions, illustrating the importance of identifying what truly matters to us and recognizing that our agency allows us to commit to paths that align with our values, even in the face of hard choices. Ruth discusses the importance of commitment and the role it plays in rational agency, highlighting how it can guide our decisions and bring meaning to our lives.

The episode also touches on the implications of this theory for public choice situations and AI development. Ruth emphasizes the need for AI systems to account for hard choices and incorporate human input in decision-making processes. This approach could ensure that AI aligns with human values and contributes positively to society.


Further, Ruth reflects on her experiences with influential philosophers like Derek Parfit and shares insights on the state of philosophy as a discipline, particularly the challenges it faces regarding diversity and representation. She offers her perspective on philosophical movements like effective altruism, emphasizing the need for depth and complexity in philosophical discourse.

The episode concludes with Ruth sharing her "A.U.T.H.O.R." framework for making choices and becoming the author of one's life, encouraging listeners to embrace hard choices as opportunities for agency and self-expression. This insightful conversation invites listeners to rethink their approach to decision-making and consider the profound impact of values and commitment in shaping their lives.

To become the author of your life, ascertain what matters, understand how alternatives relate to what matters, tally up pros and cons, and then open yourself up to the possibility of commitment. Realize yourself by making new reasons for your choices.

In facing hard choices, if you can't commit, it's okay to drift—dip your toe into an option. This way, you gather the information necessary to discover where you can truly stand behind a path.

Transcript below, video above or on YouTube. Available wherever you listen to podcasts.




Contents

  • 00:36 Understanding Hard Choices

  • 04:37 Applying Hard Choices to Careers

  • 08:55 Rational Agency and Commitment

  • 18:37 AI and Hard Choices

  • 25:35 Philosophical Influences and Effective Altruism

  • 45:34 Current Projects and Life Advice


Transcript (errors are possible as this has been AI-aided in generation)

Ben: Hey everyone, I'm super excited to be speaking to Ruth Chang. Ruth is a philosopher and Chair and Professor of Jurisprudence at Oxford University. Ruth, welcome. 

Ruth: Thanks so much for having me. 

Ben: I'm trying to decide who is better, Taylor Swift or Paul McCartney? How do you think I should think about that decision?

I guess traditionally we've argued for three positions: Taylor is better than Paul, Taylor is worse than Paul, or Taylor is equal to Paul. But from what I've read, your thinking about hard choices suggests that might not be the right framework. Perhaps we should think about it differently.

What do you think? 

Ruth: So that's a great way to enter into my world. I'm interested in trying to understand the very structure of normativity and value. Okay, so as you just said, we think that when there are reasons to do things, or things are valuable, we can only array them in one of three ways: better, worse, or equal.

And I think that's a mistake, and I think there's a diagnosis for why we make that mistake that's perfectly reasonable. When we're trying to tame the external world by measuring quantities of stuff, we find "more, less, or equal" the right framework within which to operate in understanding the external world.

But the external world also includes values, reasons to do things, and we need to make a distinction between non-evaluative properties and evaluative properties, or non-normative properties and normative properties. The normative properties in the world aren't like the non-normative properties, so they can't be represented as quantities.

And once you recognize that values can be so different in quality, that they also have different amounts and so on, that they're much more complicated than things like length and weight, which can be fully represented simply as quantities, then you start to think maybe this framework, what I call the trichotomous framework, better, worse, or equal, is not the right framework for thinking about how to live, how to deal with values and reasons.

So think about Taylor Swift and Paul McCartney. If you're a certain kind of person, and let's say it's a choice about how to spend your hard-earned bonus on a concert ticket, you might think that's a case where it's a hard choice. Now, it's important to figure out what we mean by a hard choice.

What is a hard choice? If you go out on the street, most people will say a hard choice is a choice where I just don't know enough, I'm uncertain. Or they might say a choice is hard because I can't measure the values of the options. But I think both of those common answers are mistaken.

The lack of measurability is a kind of surface symptom of something deeper, and that deeper thing is that the value of going to a Taylor Swift concert and the value of going to a Paul McCartney concert are qualitatively different. And so the choice is hard because the options are what I call on a par.

So one way I think about this: most people, when they make decisions, take a balance scale out of their back pocket. They think, okay, I put one alternative on one side of the scale and the other alternative on the other, and then you wait and see how the balance scale settles.

And on the trichotomous framework, of course, it's going to tip one way or the other, or balance evenly. But I want to say that in most of the interesting choices we face, the balance scale is going to wobble. It will never settle. And that's because the options are qualitatively different.

And yet they're in the same neighborhood of value with respect to what matters in the choice between them overall. So they're on a par.

Ben: And so how might we apply that to thinking about careers? I can see that these choices are hard, might be on a par. And you've spoken about this: maybe I'm thinking about being a yoga teacher, or maybe I'm thinking about being a lawyer. I guess you had to choose at some point between being a lawyer and being a philosopher.

And I guess, if you think about society, they're paying lawyers more money than yoga teachers in general. So that's one element. But it strikes me that this is a hard choice. I was reading some of your work, and you had this notion of, I guess, commitment, a willful commitment, potentially having agency in your choice, being something to take into account. So if people are thinking about some of these difficult-to-measure and hard choices, like in a career or something like that, how should they think about that framing?

 

Ruth: Okay. There are two points that you raise. One is whenever you face a choice, hard or easy, the number one thing that you have to figure out is what matters. And you have to frame a choice in terms of what turns on this choice, what matters to me, or to society, or whatever, in choosing between A and B.

That fact, that all choices are relative to what I call a covering consideration, is still extremely under-theorized in philosophy. And quite frankly, a huge part of theorizing about decision making is focused on the surface phenomena: things like, what are the circumstances?

What are the background social structures that create a social choice architecture? Tell me more about the alternatives. Which is better for me? And so on. But really, we can take all of the questions that have been swirling around the surface phenomena in decision making and recast them in terms of features of what matters in the choice, that is, features of the covering considerations that matter in the choice. If you really understand what it is that makes for a good career, right, goodness of a career is your covering consideration when you're choosing between being a lawyer and a philosopher, you will immediately see that the trichotomous framework is silly.

Because there are lots of qualitatively different ways of having a good career, and we shouldn't try to force them into one of the three boxes or onto the balance scale, because that's just a mistake, right? So, one way to think about approaching hard choices in choices between careers, or how to spend your life: the first thing you have to do is sit down and think hard about the covering considerations that matter to the choice. Having said that, second, if the choice is hard, if the alternatives are on a par, if they're qualitatively different with respect to what matters in the choice and yet in the same overall neighborhood of value, then we're stuck, it seems, because the reasons that are relevant, given what matters in the choice, have run out. So that's it. You're out of reasons, you're out of values. So now what? You might think the thing to do is to just flip a coin. But if, as I think, these kinds of hard choices are ubiquitous, all over our lives, that gives us a picture of life that is mostly random.

We don't seem to have any authorship or agency over our lives. The world just throws us a bunch of things that are on a par, and so let's flip a coin and go one way as opposed to the other. That strikes me as unsatisfactory. So what I suggest instead is that we need to expand our understanding of what it is to be a rational agent.

It isn't simply recognizing the reasons and the values that the world throws at us, figuring out how they relate, and then responding appropriately. Rationality essentially, and centrally, involves this other capacity: a capacity to put our very selves behind something, to stand behind it and thereby endow it with value or normativity. This capacity might sound a little strange, but I think we do it, most of us, all the time. Here you and I are, sitting and chatting about philosophy, but guess what? There's a bunch of other stuff we could be doing.

We could be writing checks to Oxfam. We could be saving lives, and many other things. I could be having some tea. You could go play with your child. So what explains, and what justifies, our being in the choice situation where right now we're chatting about philosophy? Maybe the choice you face is between asking me one question as opposed to another, and the choice I face is how much detail I should get into.

And, what should I say? Why are we in these choice situations, and what justifies our being in them? If you zoom out for a minute and think, wow, our entire lives are filled with these moments in time in which we're in a choice situation, but we could be in so many others.

You might wonder what explains why we're in the ones we're in, and what justifies our being in the ones we're in. And here I think we have to appeal to the idea of our agency, or our commitment to a certain path, that makes certain choice situations the right ones for us. You at one point committed to creating this podcast, and that commitment then makes certain choice situations pop up and be more salient for you than others. Your agency is what makes it true that you're justified in being in the choice situation you're in. And we can't explain why you're justified in being in the choice situation where you're contemplating, what should I ask her next?

In any other way that gives you agency over your life, right? If you ask someone on the street, how come I'm in this situation, the standard answer will be entirely passive. It will be given in terms of causation: the world just causes you to find certain things salient, and then you form an intention, and then there are norms of structural rationality. Given that you formed this intention to make a podcast, if there aren't countervailing reasons, then you're structurally, rationally forced to be in these kinds of choice situations. But notice, you drop out of the story. Where are you in this story?

It's just stuff happening to you. So probably the deepest thing I want to say is that rational agency shouldn't be understood as this essentially passive set of capacities, passive in the very deep sense that there's no room for us to actually create reasons for ourselves or to add normativity to things.

Ben: That's very attractive to me because it strikes against determinism. It brings in agency as well, in this kind of willful commitment idea. I was also wondering how you might apply that to public choice situations. Here in the UK, we have a budget on healthcare, and one dilemma which comes up quite often is this idea of, do we give money to the preterm baby and save their life at a cost of something like half a million or a million dollars? Or do we spend more money on diabetic patients, who tend to cost twenty to forty thousand dollars? And there's a complicated way that they adjust this into a kind of dollars-per-life-year figure to get some sort of comparable measure.

But then, actually, here in the UK, we've said we don't think that's quite the only way, and the kind of narrow thing about just cost-benefit and expected value doesn't chime with what everyone thinks. And so then we also ask people how they think that calculus should be done, and we weight it by that.

So that's why we do spend some money on preterm babies. If you were just narrowly thinking about dollars per life, you might always choose the diabetic patient. So is this way of thinking actually helpful in that decision? Or is it, as you commented, that we just haven't quite got the theories and structures to really help out in making some of those tough choices?

Ruth: I think a lot of the work that is currently being done in healthcare policymaking, bioethics, and so on is extremely important, because what it helps you do is zero in on the kinds of factors that are relevant in determining what we should do overall in these difficult cases.

And that's important because you've got to understand what's at stake. And what is it that matters in these cases? Doing the right thing? No, it's probably not just morality. It's probably some plurality of goods that matter, and it's very hard. To me, it's interesting that the philosophy of healthcare work I've seen doesn't actually try to list the things that actually matter, because it's controversial. But you need to do it.

There's no shortcut. You've got to roll up your sleeves and have the arguments about the things that matter. And that will then constrain the merits and demerits of the alternatives. Having said that, the kind of case that you raise seems to me probably a hard case. And if my theory is right, the thing to do is to understand that there isn't a right answer out there that you have to try to figure out.

But instead, you have to actually shift your decision-making protocol to recognizing that there's a hard case here, and then realizing: we as a hospital, what we can do is commit to treating preterm babies, or commit to half of the money going to preterm babies and half going to diabetics. But recognize that there's no right answer, and that if we can stand behind spending our money in this way as opposed to that way, that gives us a kind of identity and clarity of purpose, right?

I think this is probably happening in the US legal system, at least. Different jurisdictions have different characters, right? They stand for different things. In this jurisdiction (actually, for the example I'm about to give, district courts are more plausible than appellate ones), they care about indigent circumstances and vulnerability in committing the crime, while over here, this jurisdiction just cares about the economic costs of the crime, and so on.

That's a little crude, but there's a way in which a judge who has authority over a jurisdiction can create the character of her court by committing to certain values over others in the hard cases that she faces.

Ben: That's a lot of food for thought. I can definitely see those being hard cases, and how you might think about that. It brings to mind that currently there's a lot of talk about how, now and in the future, we might use AI to help us do decision making. It strikes me that this is something we might want to think about in terms of what we put into algorithms and choices. Even if we're using AI somehow to help humans make choices, this idea that there might be, for hard choices, a different way of thinking about it seems quite important. Do you think this is relevant to AI? And what is your thinking about how we might have AI aligned, making sure that AI is helping and enriching humanity as opposed to the other way around?

Ruth: Absolutely. You put your finger on, I think, probably the most important application of this idea to the near future, and that is AI. It's coming like a juggernaut down the pike. And if you look at machine learning algorithms, and even most symbolic systems, they're built on trichotomy.

Right now, a machine learning protocol will give you only three good results in the context of decision making: this is better than that, it's worse, or they're equally good, and then everything else is a bad case, where I just throw it out or do something else with it. That means that if, in fact, you and I are facing a hard choice about whether to spend the hospital's money on preterm babies as opposed to diabetics, and we get an algorithm to help us, it's already designed to force the choice into one of the three categories: it's better to treat preterm babies, it's worse, or it's equal, so toss a coin. But we know that the choice is a hard one. So if we could redesign our machines to allow for four good positive outcomes, to allow that there are some hard choices, that is, in addition, a kind of new and interesting way of getting the human in the loop in machine processing.

So the machine hits a hard choice, and it sends up a flag and says: humans, I need some input. And it's at these points that a committee can review the information and actually make a commitment, or do this other thing, which I think is okay, which I call drifting. But let's stick with the commitment case: commit to preterm babies, let's commit to spending money on them, and send that information back to the machine. And now the machine adjusts its algorithm so that what was a hard case, preterm baby or diabetic, now becomes an easy case: treating the preterm baby is better than treating diabetes. It adjusts its algorithm the minimal amount to make it true that that's the right outcome, which would reflect the commitment of the hospital to put the money on preterm babies.

So, two things, right? We need AI to have four, and not three, good outputs in order to match human values, because you and I face hard choices all the time. Why are we building tools, or decision makers, that don't face hard choices? They need to face hard choices too. And second, these machines cannot ever be fully autonomous, because when they hit a hard choice, the human has to come back into the loop and provide some input. This idea means that you have to have small AI, AI that carves at the joints. You need a design for each joint, and figuring out what those joints are is hard work. Someone's got to do that. But I think that's the only way we're going to get value alignment.

And it's just sheer lunacy to think that without putting hard choices into machines we're going to get value alignment. It's just not going to happen. There are going to be many people who are harmed by machines deciding A is better than B when, in fact, A is on a par with B.
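The four-outcome protocol Ruth describes could be sketched roughly as below. This is only a toy illustration under my own assumptions, not any real system: the names (`Verdict`, `compare`, `decide`, `ask_human`) are hypothetical, and a single numeric "parity band" is a crude stand-in for the genuine qualitative difference between on-a-par options.

```python
from enum import Enum

class Verdict(Enum):
    BETTER = "better"
    WORSE = "worse"
    EQUAL = "equal"
    ON_A_PAR = "on a par"  # the fourth good outcome Chang argues machines need

def compare(score_a: float, score_b: float,
            tolerance: float = 0.05, parity_band: float = 0.3) -> Verdict:
    """Four-outcome comparator. Scores within `tolerance` count as equal;
    scores that differ but fall within the same broad `parity_band`
    neighborhood are reported as on a par rather than forced into
    better/worse."""
    gap = score_a - score_b
    if abs(gap) <= tolerance:
        return Verdict.EQUAL
    if abs(gap) <= parity_band:
        return Verdict.ON_A_PAR
    return Verdict.BETTER if gap > 0 else Verdict.WORSE

def decide(score_a: float, score_b: float, ask_human):
    """Human-in-the-loop: parity cases raise a flag and are escalated to
    people rather than decided by the machine. `ask_human` stands in for
    a committee reviewing the case and making a commitment."""
    verdict = compare(score_a, score_b)
    if verdict is Verdict.ON_A_PAR:
        return ask_human(score_a, score_b)  # flag goes up; humans commit
    return "A" if verdict in (Verdict.BETTER, Verdict.EQUAL) else "B"
```

In this sketch, the committee's commitment could then be fed back, say by biasing the scores for that pair of options, so that the former hard case becomes an easy one: roughly the "minimal adjustment" Ruth mentions.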

Ben: I think there is room, potentially, for small AI, or small AI startups. They will probably always be in the minority versus the juggernauts, but at least then these systems could be developed, and if they do turn out to be better for these uses, that will still be useful. It also strikes me, coming back to individual choice, that we should probably not feel as bad as many of us do when faced with hard choices. If we commit, or maybe you can touch on drifting, both of those are plausible ways of trying to weigh up these choices, and we should perhaps not feel quite so bad about making them, whether we commit to one option or even chop and change and drift. Would that be a way of thinking about it, or not?

Ruth: Yeah, so we shouldn't think of hard choices as these horrible things that we all have to face in life. They're amazing things that we face in life, because they're the junctures at which we get to express our agency. We get to stand behind something and add value to it. We get to direct our lives in a certain direction as opposed to another.

The standard picture of rationality says: here's your job as a rational agent. You've got to wake up and figure out all the reasons and values that the world throws at you, figure out how they relate, and then respond appropriately. That's been the standard picture for millennia, and it leaves no room for us as agents. I call it the passivist view of rationality. On the activist view, the picture is this: you wake up and the world throws a bunch of hard choices at you, and each of those is this precious thing where you get to express your agency and commit to one thing as opposed to another, add value to it, and in that way craft your life, right? You justify living like this as opposed to that.

Even though that alternative path is not better, worse, or equal to the path that you've committed to, they're just different. They're on a par. 

Ben: That makes a remarkable amount of sense to me, on a par. We've talked a little bit about this phrase, what matters. And that's perhaps a good segue into Derek Parfit, who I think you studied with.

I was podcasting with David Edmonds and reading his biography of Derek Parfit. And it struck me that Parfit influenced the thinking, or in fact helped the thinking, of so many philosophers and wider thinkers. And there was a particular point in time when he was with Amartya Sen, who went on to win the Nobel Prize and had a lot of influence within economics as well.

And I'm wondering a couple of things. One: what do you think might be most misunderstood about Derek Parfit? And second: what was it like listening to and debating with Parfit and Sen, to give a sense of what it is to be a philosopher during our times?

Ruth: That was a very heady time, and I feel extremely lucky to have been in Oxford during that period.

So there was something that was called the Star Wars Seminar. It was Ronnie Dworkin, Jerry Cohen, Amartya Sen, Derek Parfit, and I think I'm missing someone. Let's see, who would I be missing? Anyway, an amazing lineup, all in a row in All Souls' old library, and the room jam-packed. It really makes a difference having a bunch of very accomplished and interesting thinkers all working together, not working on the same idea, but working in a way that they can feed off one another. And I think that was very important. I think one of the most important things that we academics can do for our students is to just give them a sense of what philosophy at the highest levels is like. Too much philosophy today is just people writing down and publishing things that are thoughts on the way to something worth saying, but not really worth saying themselves. And I know there's so much pressure to publish, but I think it's ruining the profession.

If we could all chill and just write down things that are worth saying, I think the profession would be much improved. And I think that generation of amazing thinkers did exactly that. They just wrote down things that were worth saying. So it was a great model for how to do philosophy.

As for Derek, I feel so fortunate that I was able to study under him. He was someone who gave me a sense of what philosophy at the highest levels is like, and we want to impart that to our students. But of course, the flip side is that you spend your life falling short. Still, it's important to know what that is,

instead of thinking, oh, this is what philosophy is: just writing down anything I happen to think that has a little bit of rigor. So that's, I think, the most precious thing I learned from Derek. And of course, I am constantly falling short, which makes for not a great life in some ways.

But if you accept that you are what you are, and you do the best that you can while knowing what it is to do philosophy at the highest levels, I think that's a better situation. It's better for you, and it makes you a better philosopher than someone who doesn't even know what that is. And Derek, it was just amazing, just spending eight, fourteen hours with him, talking. To be frank, most of our discussions were not about my work. He wanted to talk about the things he was interested in, and I was all too happy to oblige. So it was a way of cutting your teeth as a young person, being critical and thinking about interesting ideas at a pretty high level.

And it was great fun. He really was indefatigable. As far as how he's misunderstood: I think it's a shame for him to be slotted into a certain type of philosophy, which I think is not fair. I think he admired and was glad for effective altruism, but that's not who he was as a philosopher.

So people who want to paint him as this flat-footed quasi-utilitarian, I just don't think that's right. It's convenient to take someone who's a big figure and use him as a foil, but I don't think it's accurate.

Ben: His work's just that much more complex than, like you say, effective altruism or quasi-utilitarian thinking.

Ruth: And that's not who he was as a person either. I was just reminiscing about lunches we'd have together, and he would just suddenly burst into tears. He was a feeling person, and he believed in respect and duties and obligations and all that stuff. And he was extremely empathetic, right? You can be an old-world British socialist who's got empathy for a group or some abstract entity. He wasn't like that, and I sometimes worry that people paint him as that. He was empathetic individually, for individuals. And he could occupy other people's shoes.

I think the thing that was most remarkable about him, maybe the thing I admired most, was that philosophy wasn't about himself. There was no ego in what he did. He had, I think, a healthy ego, but when he did philosophy, it was all about the ideas. That's all he cared about.

And it wasn't about putting other people down and showing that he was more clever than them. So he was a role model for how to be a philosopher in that respect.

Ben: How to concentrate on what matters, there. That theme we keep coming back to. That's a good segue into whether you have any thoughts on how good effective altruism is, or its state today?

I guess some people would say that Derek Parfit is a kind of grandfather of that movement, although you could take maybe only a handful of his pages to get some of that. I don't know if you have any thoughts on that.

Ruth: Are you asking me to say what I think about effective altruism?

Ben: Maybe. I was going to do this as underrated or overrated, or maybe something else. But yes: is effective altruism an underrated philosophical idea or an overrated one? It's probably hard to boil it down to one thing. Or any thoughts about where the movement is?

I guess it's been more influential with young university-goers than perhaps people might've thought. But in recent years, it's also attracted quite a lot of criticism, maybe because of this perception of it being a kind of narrow cost-benefit utilitarian idea. I'm just intrigued what you make of it as a movement, and how it's doing.

Ruth: There's the movement, and then there's the philosophy. As far as the philosophy goes, my own view is that it's worth taking an idea and exploring the heck out of it, right? Go down all the alleys; that's what we should do. But then there's also a kind of judgment about which cluster of alleys you're looking down, which ones are worth spending a huge amount of time on. And I think the philosophy of effective altruism is an alley that's worth going down, right? Go down those alleys. That's great.

As far as the activism side, I guess I'm a little worried as to why this cluster of philosophical ideas has got so much traction, especially among young people. Why do they find it so attractive? I worry that the reasons are not the right reasons for a theory to gain traction: having something neat and fairly clean, in some ways not terribly complex, that gives you a sense of virtuousness. If that's what's going on, those are very bad reasons to cotton on to a philosophical theory and champion it.

Ben: Yeah, that makes sense. 

Ruth: Yeah, if those are the reasons, then things will peter out, right? Because the philosophical theories that have insight to them, those are the ones that need to live on, even if they're messy and complicated and not easily summarizable.

That's what we need to be championing. 

Ben: And with that, with the sort of reasons part, how much weight do you put on intentionality when we come to choices and thinking? 

Ruth: So according to some studies, about 40 percent of the actions an average person performs in a day are automatic.

There's no intentionality at all. You just turn off the alarm, you make the call, right? And sometimes it feels like we're just on autopilot. We just go to work; this is what we have to be doing; we have a script that we're following. I'm trying to fight against that. I think intentionality, as you mean it, could be two things. One is something close to what I think of as commitment: this opportunity we have to actually put ourselves behind a certain course of action instead of just drifting into it. The other is reflecting and thinking about which path we should be taking, and we should certainly be doing that. I think no one thinks we shouldn't be doing that.

I want to say that there's something else we should be doing that is not cognitive but actually volitional. So yes, we need to think and contemplate whether this course of action is better than that one. But we also have to understand that our role as rational agents is to throw ourselves behind something and create value for ourselves, and thereby become the authors of our lives by adding value to things for ourselves, right?

We make it true that being a philosopher is better for us than being a lawyer. 

Ben: And it leads to your point that hard choices are an opportunity. They can be great because you can bring your commitment and intentionality to them and create value for yourself.

Ruth: One thing is that most people who have written to me about their hard choices, trying to get some help, they're in a position where they can't commit.

So it's all well and good for me to say, oh yes, in hard choices you have this capacity to ground new will-based reasons, to create value for yourself. But for an ordinary person facing a hard choice, usually they're not in a position to make the commitment. So in that case, what do you do?

Like, you're torn between being a yoga instructor and a lawyer. If you can't throw yourself behind one option as opposed to the other, what I tell people is to go ahead and drift, in the sense of dipping your toe into one of the options as opposed to the other, in the hopes, in the broader context, of seeing whether you can commit to that option.

So in my own case: should I be a philosopher? Should I be a lawyer? I was really torn. It seemed very clear to me that dipping my toe into law was a kind of safe thing to do because I'd be financially secure. Philosophy is a very dangerous, risky profession. So I dipped my toe into law, and I found myself unable to commit to being a lawyer.

So in law school, I took as much philosophy as I could. I couldn't commit to the way of being a successful law student. To be a successful law student, you have to do what's called issue spotting: they give you a scenario and then you've got to spot a bunch of issues. And what I would always do is pick one issue and try to think about it philosophically, just that one issue.

And so I'd miss the other dozen issues. So I wasn't a very good law student. I just couldn't do it. I couldn't stand behind doing that. And when I practiced law, I kept thinking, I really want to go to graduate school. So that was a case of drifting into one option, dipping my toe in, seeing whether I could commit to it, and discovering I couldn't.

And that's a way to get information about what you can commit to. It gets you in a position where you can then throw yourself behind a path that looked too scary five years ago.

Ben: Okay, I've got a couple more big ideas that I hear around, and I'd be interested in your opinion on whether they're overrated or underrated. Then we'll finish off with a couple of current projects and perhaps some overall life advice questions.

So one is, there's some thinking around so-called existential risk, or tail risks of bad things happening. Some people talk about man-made pandemics, or I guess nuclear war, or rogue artificial general intelligence. Do you think those are underrated ideas or overrated?

Should we be spending more time thinking about them or less time or maybe is it about the right amount of time that people are spending thinking about existential risk problems? 

Ruth: Existential risk is very sexy, but it's way off. I think we should be spending more time now trying to solve immediate problems. 

Ben: I guess there are two parts to sustainability: meeting the needs of today as well as meeting the needs of tomorrow. But if your tomorrow is perhaps a thousand or two thousand years in advance, or, as some people talk about, a million years in advance, it does seem a little bit far away. We touched on the ideas of pluralism, kind of putting weight on a few things. I guess some people mean something a bit different by it. But do you think that's an overrated or underrated idea, this idea that perhaps we should be more pluralist in our lives?

Ruth: It depends on what you mean, but in a kind of broad-brush way, yes, let's be more pluralist. And one way we can be more pluralist is to have different communities that are very fixated on one single idea or one religion or way of being and so on, so long as there's peace, right? And so long as one recognizes that mine isn't better than yours; they're all on a par.

Ben: Great.

Ruth: We just happen to commit to this religion, and they've committed to this other religion or this other way of being.

Ben: And that's okay, right? 

Ruth: That's okay. 

Ben: Okay. And then last one on this would be a universal basic income, some sort of UBI. Good idea, bad idea, neutral. We don't know enough about it.

Ruth: I don't know enough about it, but other people must. On the face of it, it seems to me really intriguing. And I know there have been little pilot studies in small places. Just speaking personally, without knowing anything about the topic, I love it. Like, I'd love to try it and just see how we evolve as humans.

Ben: Great. Okay. And then the last one, which I guess sometimes comes up in your work as well: what do you think about the value of transformative experiences, in terms of big experiences that we have which might change our views, or perhaps also how we should go about being more open to transformative experiences or not?

Ruth: I think of transformative experiences as of a piece with ordinary experiences. You could have an ordinary experience that makes accessible to you some value that wasn't accessible to you before, or that changes the weights of values that were accessible to you. Transformative experiences are ordinary experiences on steroids: they have this funny effect of changing a huge swath of your value profile in a way that you couldn't have anticipated, perhaps. Whether the epistemology is as people think, one has to argue about that. But you can have experiences in life that change what you consider valuable and how valuable you consider it. Those are, as it were, small-scale transformations, but you can also have these big-scale transformations. And transformative experiences understood as the big-scale ones, I think, are interesting. But I think they should be understood in the broader context of the different ways in which our value profiles can change.

Ben: Great. And the final couple of questions would be: are there any current projects that you're working on that you want to share? And the last one would be life advice, or advice that you have. But maybe starting with current projects you're working on, or future ones that are interesting you at the moment.

Ruth: So the two projects that I'm working on now have to do with trying to locate and propose changes for what I consider two fundamental misunderstandings about value that are currently embedded in AI design. And I think that unless those two mistakes are corrected for, we're never gonna get alignment.

So they're not just random mistakes, because any axiologist will tell you AI, when it deals with values, gets a thousand things wrong. But these are two things wrong that I think are absolutely central to human-machine alignment, value alignment. And of course, as a philosopher, you can just say stuff, but no computer scientist is ever going to listen to you.

Especially if you have a proposal for how to fix what you claim are fundamental mistakes in AI design, you'd better speak their language. So I have prevailed upon Kit Fine to help create a mathematical model that could fit what I think are the ways to fix these two fundamental mistakes.

So that's one project. 

Ben: And just to say what the two mistakes are, because one was about hard choices: that there should be four ways of comparing rather than three.

Ruth: Yeah. 
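As a toy illustration (not from the conversation; all names here are invented), the four-way structure Ben is referring to, the standard trichotomy of better/worse/equal plus Chang's fourth relation of parity, might be sketched like this:

```python
from enum import Enum


class Comparison(Enum):
    """Four comparative relations rather than the standard three."""
    BETTER = "better"
    WORSE = "worse"
    EQUAL = "equal"
    ON_A_PAR = "on a par"  # qualitatively different, same neighborhood of value


def compare(a: float, b: float, same_kind: bool) -> Comparison:
    """Trichotomous comparison only settles things within a kind.

    Across qualitatively different kinds whose values sit in the same
    neighborhood, neither beats the other and they are not equal either:
    the options are on a par, and the trichotomy has no answer.
    """
    if not same_kind:
        return Comparison.ON_A_PAR
    if a > b:
        return Comparison.BETTER
    if a < b:
        return Comparison.WORSE
    return Comparison.EQUAL


# Within one kind, the usual scale applies:
print(compare(3.0, 2.0, same_kind=True))       # Comparison.BETTER
# Across kinds (philosopher vs. lawyer), it doesn't:
print(compare(3.0, 2.0, same_kind=False))      # Comparison.ON_A_PAR
```

A decision system built on only the first three outcomes treats every comparison as settleable by tallying; the fourth outcome is where, on Chang's view, the hard choice (and the opportunity for commitment) lives.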

Ben: And what was the second mistake? Or is it that it has to hand back to humans? Is that related?

Ruth: No, the other mistake I haven't talked about yet has to do with an assumption that the way you achieve a value is through a non-evaluative proxy. If you're trying to build an AI to hire the best candidate, right? Best candidate, that's evaluative. How do you achieve that? You put in the reward function some non-evaluative goal, like: sort all the resumes that look the most similar to the resumes of the people we've already hired.

And all machine learning is like that. And that's a fundamental mistake, I think, about what it is to achieve an evaluative goal. You can't do it through a non-evaluative proxy, and certainly not at scale. You can jimmy up a small AI that will work, but that will be useless. This goes back to what I was saying before about how I think we need to have small AI that carves at the joints, and getting the right joints is absolutely crucial.
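To make the proxy point concrete, here is a hypothetical toy sketch (the names and data are invented, and this is not any real hiring system): the reward is textual similarity to past hires, a non-evaluative stand-in for the evaluative goal "best candidate":

```python
from difflib import SequenceMatcher

# Resumes of people already hired (the training signal for the proxy).
hired_resumes = ["python ml statistics", "python ml databases"]


def proxy_score(resume: str) -> float:
    """Non-evaluative proxy: similarity to past hires, not candidate quality."""
    return max(SequenceMatcher(None, resume, h).ratio() for h in hired_resumes)


candidates = {
    "clone": "python ml statistics",      # near-duplicate of a past hire
    "outlier": "haskell category theory",  # possibly excellent, but dissimilar
}

# Rank candidates by the proxy, descending.
ranked = sorted(candidates, key=lambda c: proxy_score(candidates[c]), reverse=True)
print(ranked)  # ['clone', 'outlier']
```

The proxy rewards resemblance to past hires rather than the value it was meant to track, so the near-duplicate always outranks the dissimilar candidate regardless of who is actually best. That is the structural gap between an evaluative goal and its non-evaluative proxy.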

So those are the two mistakes: value proxies and making no room for hard choices. The second thing I'm working on now is related to something we talked about earlier, and it has to do with one way we can get meaning in our lives. So it's about how you achieve meaning in life.

And it's not about achieving things or having great relationships, although those things could be important ways of having meaning. I think it's about committing to, or putting yourself behind, certain of what I call well-formed choice situations as opposed to others. Like, you and I get meaning in our lives by putting ourselves behind well-formed choice situations that have to do with being thinkers or talking about ideas. Other people get meaning in their lives by putting themselves behind being wolves of Wall Street.

And the idea is that you can contrast the case where you put yourself behind a certain set of choice situations, as having more value for you than others, with the case where you just passively drift. Look, your father worked on Wall Street. His father worked on Wall Street. You went to fancy schools, and Wall Street firms came a-calling and handed you a job on a silver platter. But you just drift into a path; you don't stand behind that path. You haven't actually added value to that path. You're just blindly drifting. And a life like that, I think, doesn't have the kind of meaning that I'm interested in.

Ben: That's fascinating. Oh I look forward to reading more about that.

And I guess the final question then is: do you have any advice that you'd want to share with listeners? It can be advice about choices, although I guess we've talked about that, or life advice about career or how to live a flourishing life. It sounds like you alluded to it with your current project around being committed to choices and the like, but I don't know if you'd sum that up in a particular piece of advice you'd like to share.

Ruth: I have this kind of cute recipe for making choices and being the author of your life, right? A: ascertain what matters. U: understand how the alternatives relate with respect to what matters. T: tally up the pros and cons of the alternatives with respect to what matters. And then here we tend to draw a line. That's it! After we've tallied up, we can figure out what we should do.

But we'll recognize that often we don't get an answer. So what we do is go back to A-U-T again and try to figure out where we went wrong; we think we just have to be more careful. Instead, I think we need to move on and go to H, which is to home in on the fact of parity, right? There are hard choices. Then O is to open ourselves up to the possibility of making a commitment to one of the options. And R: by opening ourselves up to the possibility of commitment, and then committing or drifting, we realize ourselves, making new reasons for ourselves if we've committed, and not making new reasons for ourselves if we drift. So that's how we can become the authors of our own lives.

Ben: Great.

That's a really neat little recipe. A U T H O R. Author. Great. So with that, Ruth, thank you very much. 

Ruth: American spelling! 

Ben: Yeah! 

Ruth: If I were British, I'd have to add a U. Anyway. Okay. Alright. 

Ben: Thank you. 

Ruth: Thank you for having me.