Transcript (only lightly edited)
I am super excited to be speaking to Kanjun Qiu. Kanjun is co-founder and CEO of Generally Intelligent, an AI research company. She works on meta science ideas, often with Michael Nielsen, a previous podcast guest. She's a VC investor and co-hosts her own podcast for Generally Intelligent. She's part of building the Neighborhood, which is an intergenerational campus in a square mile of central San Francisco. And she is all-round amazing. Kanjun, welcome.
Kanjun (00:30):
Thank you. I'm really excited to be here.
Ben (00:33):
Everyone is talking about OpenAI's chatbot, ChatGPT. The bot has some major flaws but can also produce some amazing writing and code. You can add on top of that the recent advances in AI art generation, also via OpenAI or Stable Diffusion, and then other more technical advances such as DeepMind's work on protein folding and the like. I'd be interested: what do you make of the current state of AI? Where do you see AI language models going, and where do Generally Intelligent and its own mission fit into this ecosystem?
Kanjun (01:10):
Yeah. It's a good time to be asking this question because when I first started Generally Intelligent with Josh, my co-founder, a few years ago, we definitely didn't expect things to go this fast. I mean, we expected things to go quite fast. But I think this is faster in terms of progress than we've ever seen. It's really exciting and a little bit scary. We can get into the safety stuff later. But ChatGPT is something we've roughly known would be coming for a while. What's really remarkable about it is that you change the interface from something freeform and unconstrained that humans have to prompt into a chat interface, and suddenly tons of people figure out how to do much more creative things.
Something I've been thinking about is the Xerox PARC moment of AI, where we have all of this interesting development in capability. So one question is, "Okay, what are the interfaces that might allow humans to be able to get a lot more out of even existing models?" In terms of where models are going-- At Generally Intelligent, we work on building general purpose agents that can be safely deployed in the real world. I think we kind of timed it well. We think we will have general purpose agents that can hopefully be safely deployed in the real world in the not too distant future.
So I can go into that. Our focus really is about studying generalization in reinforcement learning and how these models are able to generalize, and whether we can get a better, clearer theoretical understanding of how to construct these models in a way that is more predictive. Like, can we know ahead of time, given this training data, these parameters, this training procedure, this type of model, that you'll end up with a model that has these behaviors? So that really is the hope: that we can get to something that is more controllable. It's kind of like building bridges or nuclear power plants. Neither of these things is exactly safe, but we've made them safe because we understand how they work. So I think the same thing has to be true of neural networks.
Ben (03:38):
And so for a person outside the tech world, or even outside the AI world, what do you think is most misunderstood about where we are with AI? One impression I have is what you alluded to: it has gone a lot quicker than expected. Another element we can segue into is the sort of rogue powerful AI, the so-called alignment problem and safety, which I think is generally not something that crosses the mind of the person in the street. And obviously there are a lot of strange and detailed technical elements to some of this. But I'd be interested in what you think is perhaps most misunderstood when you speak to the person in the street.
Kanjun (04:18):
I have a few. One is that we tend to use words that we use to describe humans in order to describe AI. For example, the word 'understand.' So when I talk about whether or not a human understands something, I'm kind of referring to a mental process in their head that I have some sense of, like, "It happens in my head too." So I have some model of what's going on in their head. A lot of people say, "Oh, these models are just statistical. They don't really understand anything." In some sense that's true. Certainly they're not understanding in the way that humans are understanding. They don't have the same mental process in their head as we have in ours. But I think it is maybe foolish to expect these models to have the same mental processes, and to say, "Okay, unless they have the same mental processes as humans, they can't be intelligent or capable and be able to do things that humans can do." So I think we should just be careful about using human descriptors to describe these models as a way to say, "They can't do X because they're not like humans in X." What else is misunderstood?
I think on the flip side some people say, "Oh, we already have something that's general. Everything is solved." I think that's probably not true. It seems like as we scale up models we get a lot more capabilities for free. But I think if you ask any researcher in the field, there are a few results still remaining on the way to something that's a lot more general.
Ben (06:08):
That makes a lot of sense. I think that initial observation you make about how we humanize things has been quite a human trend for a long time. So we humanize trees, and we have built whole religions and cultures around that. So I can understand why we would do it with AI. But perhaps it's the same mistake to think that trees are like humans; it could be a similar type of thing. How worried are you about the alignment problem and rogue AI? How real a thing is AI safety, and should we be working on it? There are some people who are dedicating their whole careers to it and others who seem quite blasé and just say, "Look, this is seemingly a human construct." Is there a small but real risk, or is it overstated?
Kanjun (07:00):
I think the risk is that we don't know. None of us really know how these models are going to evolve or what kind of capabilities they're going to express. The issue with complex systems-- and I would say a neural network is a complex system-- is that they end up with emergent behavior. So will they end up with behavior where they're trying to deceive us at some point? Maybe. It's hard to say, "No, absolutely they're not going to do that." But you can construct the problem in ways where you can study whether or not they're likely to end up having that behavior. And I think it's really good for people to study that.
Ben (07:42):
I guess that makes sense because I'm never quite sure how much we really understand the brain itself. Maybe below 50%, maybe even as little as 10 or 20%. I'm going to segue into something which I've been reading: some work that you've done on something which isn't very well understood in the world, which I guess is around trauma or anxiety. I'm interested in neural networks and in how the brain works, because we don't really understand it. I'm particularly interested in this technique called EMDR, which works on eye movement and essentially seems to have a reprogramming effect. It stands for Eye Movement Desensitization and Reprocessing therapy. If you look at controlled trials, it really seems to work for some people. Not everyone, but actually pretty good results in trauma and things like that. As with quite a lot of neuroscience, we don't really know how it works. So I'd be interested in what you think AI or neural networks have perhaps told us about trauma or anxiety or the brain, and maybe vice versa, in terms of learnings or thoughts you have about how it might be helpful or not in this kind of area.
Kanjun (09:01):
Yeah. So get ready for the rabbit hole. I have a bunch of thoughts on trauma. I think maybe the core idea is-- and I'll write an essay on this at some point-- this idea that trauma is overfitting and therapy is actually a process of retraining. So let me give some examples. I've done a ton of therapy on myself, I think. I grew up in China. I moved here when I was six by myself; my parents had moved here when I was much younger. I had a lot of abandonment issues that resulted in all sorts of weird behaviors that were not well suited to my current environment. Those behaviors were rooted in fear. I think basically as young children we copy a lot of beliefs from our caretakers, and in particular we copy beliefs where our caretakers are scared.
The theory behind social copying like this is that copying is a really efficient pre-filter on potential beliefs or potential behaviors, because presumably other people around you have those beliefs and behaviors because they're adaptive. That's kind of one theory of why we copy so much. So we copy all of this into our brains, and now we have all these fears that our parents and grandparents and family have. I had internalized a lot of those fears. We call something trauma for really two reasons. One is that something that seems really bad happened. That's kind of the medical definition of trauma. But I want to use trauma to refer to this other phenomenon, where we call it trauma when it doesn't seem like it's well suited to our daily life or our current life.
So post-traumatic stress disorder is a trauma, and PTSD is fine if you're at war. You do actually want to be super jumpy and really careful about whether or not a shell is coming. But then you migrate back into the real world, which is outside the data distribution of the environment you were just in, and now you're out of distribution. So you've been overfit to the previous environment-- and I'm abusing some terms here-- but now you're not generalizing to this new environment that you're in. I think this is true of most people. We don't actually generalize that well from childhood to adulthood. We freeze a lot of these old beliefs because they were adaptive: in an environment that was much more dangerous than our current environment, they worked really well. But our current life, this environment, is quite safe, and having these fears is actually not very useful and causes a lot of maladaptive behavior. And so trauma is overfitting.
So therapy is retraining. I think what I've observed in therapy is that there are really three processes going on. There is the process of activating or accessing a network; a subnetwork of some sort. There's the process of giving it new training data from your current life. And then there's the process of updating it; so actually having that memory reconsolidation process happen. It seems to me that all therapy techniques are good at one to three of these three elements of retraining. EMDR, as you mentioned, is actually quite good at all three. I think that's part of why it's so effective. With EMDR, you are flicking your eyes back and forth, or you're doing this bilateral tapping where you're tapping different sides of your body. Somehow that seems to reduce the fear response so that you can more easily access a network.
Where I would say EMDR is not as effective is in giving your mind new training data from your modern day. Your therapist actually has to prompt you to do that. So if you're working with a therapist who doesn't understand this frame of, "Okay, now you need to see how this old memory is not adaptive and you can update it"-- if your therapist is not doing that, if they're not giving you training data from your modern day, then you may not see updating. And so that might be why it fails for some people. I've done EMDR on a lot of people, and it seems fairly consistent: if you do show them the training data and their modern life is good, then they'll update.
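To make the overfitting framing concrete, here is a minimal sketch in Python. The "environments", thresholds, and retraining loop are all invented for illustration; it shows ordinary distribution shift and fine-tuning, not any actual model of therapy:

```python
# A toy illustration of "trauma is overfitting, therapy is retraining":
# fit a model to a harsh old environment, watch it fail after the
# distribution shifts, then update it on data from the new environment.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def environment(threshold, n=1000):
    """One feature (say, loudness of a noise); label 1 = 'treat as danger'.
    What genuinely counts as dangerous depends on the environment."""
    X = rng.uniform(-5, 5, size=(n, 1))
    y = (X[:, 0] > threshold).astype(int)
    return X, y

# 1. Learn in a dangerous old environment (low threshold: most things are threats).
X_old, y_old = environment(threshold=-3.0)
model = SGDClassifier(loss="log_loss", random_state=0).fit(X_old, y_old)

# 2. The environment shifts (high threshold: few things are threats).
#    The old policy is now badly out of distribution.
X_new, y_new = environment(threshold=+3.0)
print("accuracy in new environment:", model.score(X_new, y_new))  # poor

# 3. "Retraining": access the same model and conservatively update it
#    with training data from the current environment.
for _ in range(50):
    model.partial_fit(X_new, y_new)
print("after retraining:", model.score(X_new, y_new))  # much better
```

The three therapy elements map loosely onto the three steps: accessing the network (reusing the same model), new training data (samples from the current environment), and updating (the incremental `partial_fit` calls).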
Ben (13:23):
This is fascinating to know, and that's the sort of learning that you get from understanding ideas of neural networks and training data. It might in some ways bring us closer to some animal behavior type of model. But I hadn't heard it articulated that way, and it also makes sense of why some of these techniques may fail or not.
Kanjun (13:44):
I kind of have this whole theory of why therapies work or don't work, because I've experimented so much on myself and designed new techniques combining existing ones, since they're not all good at all three things. So cognitive behavioral therapy is fantastic at getting new training data from your modern day because it's asking you things like, "What are the costs of what you're doing, or what are the costs of that belief?" So it's basically a data accumulation mechanism. You can acquire more data using CBT procedures. But it's not very good at accessing these very old beliefs. The kind of complaint about CBT is that it often works at the surface level, where it's actually really hard to get to these deep old beliefs. And then it's pretty good at helping you update. It has a bunch of different methods to help you update.
Like, you write down the costs, you look at the list of costs, you implement habits into your life, et cetera. So it's particularly good for getting training data, but it's not good for access. Now you look at something like Internal Family Systems, where you're talking to different parts inside your body, and these different parts are kind of-- the way I think about it is they're like old memories that are stored. IFS is really good for access. The whole point is that you're accessing some of these parts and able to go into them, but it's not very good at getting training data from your modern day. So often when people do IFS, they go into those parts, and if they're not prompted properly or they haven't prepared their conscious mind with new beliefs from today, then they'll just believe whatever that part is believing. They'll believe, "Oh, I'm three years old and I'm really scared and it's good to be scared." And you're like, "Well, actually no. You're not three years old anymore." You kind of have to bring them out of that purposefully.
So combining IFS and CBT, or somatic therapy and CBT-- these combinations make a therapy method much more effective. I think this framework actually makes therapy debuggable. So if a person is for some reason not able to update in a situation that they're feeling frustrated by, you can say, "Okay, well, is the problem in access? Is it in training data? Or is it in updating?" There are techniques that you can combine to get all three to happen. Then to your point about neural networks and how they relate to the mind: I think basically both neural networks and the mind are learning systems. And learning systems-- modeling systems-- have some shared properties. Overfitting is one of them.
Ben (16:33):
That's amazing. So you definitely have to write that as an essay. It sounds like you should actually have a whole new startup investigating that. You may not get to AGI, but you'd have solved the therapy problem, which seems to me almost as great.
Kanjun (16:50):
Actually, I think an important part of safety is figuring out what values to align to. So understanding humans is an important part of that.
Ben (16:59):
Exactly. It leads me to think that... Do you think animals go through trauma then? Some form?
Kanjun (17:06):
Almost certainly. Yeah. So you'll see some dogs that were abused when they were young by a previous owner, and now they're really jumpy even though they're with a new owner. These learning systems update very conservatively in humans and living creatures, which makes sense because the real world is fairly unforgiving. If you update too quickly, then you might die.
Ben (17:32):
Sure. So this is the paired learning response; it's really strong. Okay. So a slight left-field pivot then: do you think dinosaurs felt trauma?
Kanjun (17:44):
Almost certainly. Again, it's a learning system. Unless dinosaurs never learned anything new, which is plausible but unlikely.
Ben (17:56):
That's quite a good one, because you posed a question on your website which I guess we have no answer to, which is: they existed for 165 million years or so, give or take, and they did not seem to advance to the levels that we have advanced to. But obviously they seem to have felt trauma, and they have some of these learning mechanisms. I guess we can extend this to other animals which have been around for a long time. Maybe you could do mammals like rats, or you could do insects as well. What's your current thinking on this?
Kanjun (18:30):
So the question is basically-- I was reading a lot about dinosaurs and I was like, "Why is it that for hundreds of millions of years we had these creatures that evolved a little bit but didn't seem to improve or change dramatically? Versus in the last 1 or 2 million years on Earth, we have had this crazy exponential change in the nature of the intelligence of animals on this planet. Is there something that causes this kind of change in evolution?" My current best hypothesis is that environmental constraints force the evolution of new skills or capabilities. So basically you want to be in an environment where intelligence is rewarded. If the environment is too abundant-- and at that time there was a lot of oxygen in the air; maybe it was very abundant as an environment, in terms of food-- then there's no benefit to being smarter. And here today, the environment's not as abundant-- I think, we think. I'm not sure, no one knows. But if that were true, then constraints, maybe.
Ben (19:46):
Yeah. Okay. I like that theory. I've got one slightly downstream of that, which is that dinosaurs didn't have hands as much. I think the second thing which comes alongside is language, and that didn't develop. But why didn't those things develop? There is some evidence, or there are some theories, that the human animal was forced-- for instance in ice ages or when there was scarcity-- to develop these things to survive. Therefore that would happen with a bit of luck. So they kind of intertwine. I guess the follow-on from that, though, is I was pondering: when you go back just a little bit in time, I guess on this time scale, to the Romans and the Greeks, why did they not invent more of the advances we have today? They certainly seem to have had some of the capabilities-- and they did invent some things, like Roman concrete, which we still can't quite seem to copy and which seems to be a really good material.
But they didn't invent some of those things. You can see this is going to segue into meta science at some point, because they also didn't invent some things which didn't really need other types of technology. The one I always think about is the randomized controlled trial. So you test one arm and you test another arm and you compare them. That didn't need any extra science, and they certainly seemed to have the capabilities for it. In fact, you could have gone back 2000 years earlier and they would've had the capabilities for that. But it didn't develop-- or maybe it did and it just didn't hold, which is also kind of interesting. Have you ever thought about that? Why did the Greeks and...
Kanjun (21:28):
Yeah. I thought about it. Quite a lot.
Ben (21:31):
Is that essentially a meta science question? Because some of these ideas, like the randomized controlled trial or how they did it, weren't holding, and so they didn't transmit.
Kanjun (21:41):
Right. So I'll talk about randomized controlled trials first and then we can go to the tie to evolution. I think the reason we have randomized controlled trials now is because we have the statistics, the mathematical foundation, to be able to evaluate these two groups. Are they actually the same or not the same? I tried this trial. I randomized: one is the control group and one is the test group. Did the test group outperform the control group? There are a lot of statistical techniques that are needed to really answer that question. So I could see that maybe they would have tried randomized controlled trials for really small effects a long time ago, but it's not very deterministic. You get some people in this group where it works, and some people in the other group where it works. Then they might throw up their hands and be like, "I don't know what to conclude about this." So I could see that the reason RCTs didn't exist before is because we didn't have the mathematical foundation to be able to look at the results, say something about them, and get information out of them. And I think that's true.
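To make the statistical point concrete, here is a minimal sketch of the kind of analysis an RCT leans on, with invented numbers. A small, noisy effect "works" for some people in both groups, so without a test like this you might indeed throw up your hands:

```python
# A minimal sketch of the statistics an RCT relies on (invented data).
# The treatment has a small true effect buried in noise, so many
# individuals in BOTH groups do well; the test pulls out a conclusion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=200)    # randomized control arm
treatment = rng.normal(loc=0.3, scale=1.0, size=200)  # small true effect of +0.3

# Two-sample t-test: is the difference in means larger than chance explains?
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"difference in means: {treatment.mean() - control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value says the difference is unlikely to be chance, even
# though the effect is invisible at the level of individuals.
```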
Ben (22:50):
Geometry I find a lot harder, but maybe that's just me. Euclid invented geometry, which seems to be quite a bit harder than the stats behind certain RCTs. But it's true, they hadn't seemingly invented the stats. But I'm not sure-- like, the ability to see that.
Kanjun (23:11):
Trigonometry doesn't build on so many other fields of mathematics, whereas statistics does. But to come back to the question of why the Romans and Greeks-- why their civilization didn't accumulate in the way that ours does technologically: I think it actually comes back to this process of variation and selection, which is true in evolution and also true in science. In evolution, we were just talking about constraints. I think the reason why constraints are interesting is because in evolution you're doing a lot of variation. Then what the constraints do is enable selection. The tougher the constraints are, the narrower the selection is. In science at that time, in the Roman and Greek era, maybe the way that people thought about knowledge is that knowledge came more from authorities.
There was not this idea of evidence being a thing. It was not until the Royal Society in the 1600s, I think, that they had this motto: nullius in verba, which means take no one's word for it. Which implies that before that, people took other people's word for it. So people weren't varying and evaluating ideas, and they weren't able to test and select new ideas to adopt as a culture. The church had some top-down ideas-- many of which were wrong-- and so no one was able to change them. But now we have this process of science, which is quite remarkable. In the ideas of science, any grad student, if they're more correct than a Nobel Prize winner, the field actually acknowledges that they're more correct. So it's quite a remarkable thing to be able to have the ideas change not from the establishment, the authority, but from the outside; from people with no authority at all.
Ben (25:22):
It can take some time, but it does eventually happen. I think about ulcers and how they figured that out, but the discoverers weren't believed for some time. But eventually the science does seem to win out, which, like you mentioned, is a kind of remarkable thing.
Kanjun (25:40):
Actually, in some fields it happens really quickly. In physics, there's this idea of superconductivity, where Brian Josephson was this 22-year-old who had published work on superconductivity. John Bardeen, who's the only person who has ever won two Nobel Prizes in physics, said, "No, you're totally wrong." People pretty quickly realized Bardeen was wrong and Josephson, the 22-year-old, was right. And the physics community pretty quickly, I think, came to this conclusion. So it's not true in all communities. Some communities rely a little bit more on authority, but it happens sometimes.
Ben (26:21):
Isn't that true about-- was it Linus Pauling and DNA as well?
Kanjun (26:24):
Right. Pauling was wrong.
Ben (26:27):
He was wrong and he admitted it quite quickly, and so did the community. They said, "Look, this is obviously right." So I think that's true as well. You speaking about the fields of mathematics in ancient Greece and all of these other fields leads me to something. I've just been reading it in the last couple of days, and I believe you were part of this conversation, as well as, actually, ChatGPT. And that was Michael Nielsen's recent notes on the idea of fields or communities as a unit of advancing progress, or of thinking about progress. I thought this was a really interesting idea. This is reflecting on the fact that you said stats has to build on quite a lot of other fields, whereas some fields may develop out of, not quite nowhere, but may not have to build on so many other things.
I think of this in creative literature, art fields and performing art fields. To what extent are you building on what goes before, and to what extent do you take from a whole sideways field and make a kind of new field of it? It seems to me that that is actually one interesting way of thinking about how advanced we are. And now any individual human will find it very difficult to have even a surface knowledge of all of the fields, and certainly not an in-depth knowledge of more than maybe 5, 10 or 20, which is something an AI could start to do. So I was interested in what you think about fields as a unit, and whether AI or some development putting these fields together to create new fields is maybe one of the key questions that we should look at.
Kanjun (28:12):
Yeah. I'm actually quite confused about the-- I think the underlying question here is kind of like, "What is the structure of knowledge?" or something like that. There's this one model of knowledge as being a thing that is like a tower block. You've got some blocks at the bottom and then you put some... This is not a good analogy; there's maybe a better one. Like a computer program where you're calling a lot of previous functions. So you have a function that calls another function, and that function calls an older function, and that function calls an older function, and each of those functions is like a discovery. So there's that model, where by definition the outermost function is dependent on all of these discoveries that came before it and can't be simplified. It has to call all of these preexisting functions. That's one model, but that model is not really true.
It often feels like when an idea is first developed in a new field, the person who developed it actually doesn't understand it as well as the people who follow them. I think there was a Feynman quote-- or maybe a Hamming quote-- about how Einstein didn't understand his own ideas as well as the people who came after him. There's this kind of reconsolidation process where the understanding is simplified, and it may be simplified even more over time. So it's no longer true that this function is dependent on all the previous discoveries. In fact, you might end up with a new, fresh function that doesn't depend on any other previous discoveries and that still captures all of the information. So it's actually not clear to me that humans can't-- in some fields at least-- understand everything. Maybe the reason we can't understand it yet is because it's not yet simplified to its final form.
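As a toy illustration of these two models of knowledge, with invented "discoveries": in the first, the latest result can only be stated by calling everything that came before it; after reconsolidation, the same result is restated in a self-contained form:

```python
# Two models of the structure of knowledge, as code (invented example).

# Model 1: each discovery calls the discoveries that came before it,
# so the outermost result depends on the whole chain.
def discovery_a(x):       # an early foundational result
    return x + 1

def discovery_b(x):       # builds directly on A
    return 2 * discovery_a(x)

def discovery_c(x):       # builds on B, and transitively on A
    return discovery_b(x) ** 2

# Model 2: after "reconsolidation", the same result is restated in a
# fresh, self-contained form that no longer calls its predecessors.
def discovery_c_simplified(x):
    return (2 * (x + 1)) ** 2

# The simplified form captures all the same information.
assert all(discovery_c(x) == discovery_c_simplified(x) for x in range(100))
```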
Ben (30:18):
I guess that might be true, because I'm going to segue from the creative arts, and that does seem to be true in the creative arts. Although it's argued about, the very simplified precis is that you might have a play or a poem or whatever it is, and the audience or later artists make much more of that creative work than the original creator did. The original creator obviously has their view and vision. But those who come after make it even so much more than what it was. It's out of the original creator's hands, particularly once time has passed. I guess you can think about this with Shakespeare today. Obviously, he had a view, and we don't quite know what his view was on all of his work. But it has been taken to another level by so many more artists and creators. And arguably it is greater today than it was in his own time because of that. And that feels, although it's still argued about, quite well established within art: that your creation is not just your own, that the very greatest creations become bigger than what the original creator thinks, and that they might be interpreted more deeply than even the original creator can imagine. Therefore artists [inaudible 31:30] who are at the peak of their art will often slightly step away from commenting on what they think their art is, because they realize that their answer may not be the best answer and might actually narrow the interpretation by imposing this idea that the author knows best.
Kanjun (31:50):
That's really interesting. I'm really curious about this process. So here's what's happening-- in your view, what's happening? Let's take Shakespeare. These people who have built upon it: is it that they've added additional meaning? They're interpreting it in different ways, so a single sentence, if you read it yourself, might only have a little bit of meaning, but if you read it in the context of everyone else's interpretations, it has a lot more meaning. Is it that they've added more meaning? Or is it that the people who come after have found simplifying patterns in his work that mimic this reconsolidation process of ideas in science that I was talking about? What is it that's making it richer?
Ben (32:36):
So actually, for the most complex work it's both elements, and I would add a third. To segue into something like cave art: with cave art, we find new... Obviously, we don't actually even know the original creators of that art, or whether they even viewed it as art. But we find patterns now. So young children put their hands in the mud or in wet cement, and so we riff on the now of the culture, and then obviously we add our own meaning into what we see in the now. Then, because so many people have commented on cave art and made their own mud art and, I guess, a kind of meta art sense from that, it's also made the whole field richer. So you've definitely got those elements, and they add together. Then you've got people who draw on those various things to create more meta pieces.
So it definitely builds that way as well as wide. I'm only thinking out loud; I'm sure there must be more, particularly in something as rich as Shakespeare, because it then becomes so pivotal to other things and actually segues into things like culture. So it's now a key element in how British people think about themselves. If it wasn't for Shakespeare, we wouldn't think the way we do, I'm pretty sure. Obviously that would be arguable; there's probably a PhD in that. And there are even words and catchphrases from Shakespeare's plays which have now gone into different nations' vernaculars, which didn't exist beforehand, but which have also made alive something which was already there-- crystallized it, potentially, in a simplifying form, or in some form where people get, "Yes, that is what that was about and that's what that phrase means."
Kanjun (34:28):
That's really interesting. Yeah. This is another thing I'm quite confused about, which is, "What is meaning?" I think an interesting thought experiment that my co-founder, Josh, gave is: "Let's look up at the sky, pick any star, and let's imagine that a hundred thousand years from now humans or some descendants of humans live on that star." Now, we have two scenarios. In one scenario, they have no memory of Earth or where they came from. They don't know how they ended up on that star, but they're there and they don't know anything about their history. In the other scenario, they look up at the night sky and they point at Earth and say, "Hey..." I mean, maybe they can't see Earth, but in its general direction. Like, "Hey, that's where we came from." There's some merchant on that star who's like, "I know that my distant, distant ancestors came from Earth." In one of those scenarios it feels like there's a lot more meaning. That merchant feels a much deeper sense of meaning than in the other scenario, where they're kind of disconnected from where they came from. I don't really understand why. Somehow meaning is tied to this sense of context and history, and kind of where things came from and why they are the way they are. But I don't know what it is.
Ben (35:50):
I can't answer the why-- almost obviously, because if I could, I would've made some genius breakthrough in the human condition, I guess. But it is definitely true. So you think about money or cultural symbolism, or particularly art. But take your example of the star. Say I said, "Oh, I went onto the internet the other day and I sold Kanjun that star," one of these internet star-naming things, and 10 million descendants later, Kanjun's descendants arrived on the star which supposedly she owned. Again, you would've imbued that with so much more meaning. Except in today's day and age, what does it mean, selling someone a piece of paper saying, "You sponsored such and such a star"? There is no ownership in our legal culture of what that star might be.
That's actually why I am not so worried about what some people are worried about in terms of some aspects of AI-generated art, because-- I don't know what the proportion is, but for some significant portion of art, the value is in the so-called eye of the beholder. So the meaning we give it, and its time and place and things-- that's part of its value. Obviously part of the value is in the techniques and everything which went into it, which is obviously going to be different in AI art. But there's definitely this part that humans bring, this kind of human value part, which is not seemingly part of the physical natural laws but seems to be part of the human natural laws.
Kanjun (37:21):
Yeah.
Ben (37:22):
That leads me to think, then: out of all of the things you are confused about or interested in, what do you think are the most important questions in science or meta science that we should be seeking to understand at the moment?
Kanjun (37:38):
We wrote a whole essay on that.
Ben (37:43):
Glad you noticed.
Kanjun (37:47):
Yeah. Talking about the essay, I want to come back to this idea of meaning later because maybe we can riff on it.
Ben (37:52):
Yeah. Essay first, then back to meaning. Something might...
Kanjun (37:57):
So talking about the meta science essay: originally, some of the questions that motivated us were questions like, "Why is it that a funder says they want to do high-reward research and yet they end up funding relatively low-risk incremental work? What is causing that?" Clearly there's an intention to do something different, and yet every new funder kind of gets sucked back into the existing ecosystem. They don't diverge very much from the existing processes and norms and results. So why? That's so weird, right? Isn't it weird that you try to do something totally different and you end up being exactly the same? In many fields that's not true. In art, if I try to do something totally different, presumably at some point I'll end up with something totally different.
So that really was the beginning of us trying to understand, "What is going on here? Why is it that we can't fund things in totally different ways?" There are lots of different ideas expressed in the essay, but one of them is this idea that the space of social processes of science is very underexplored, in that most funders have very similar social processes to existing funders. Even if you start a new funder, you might still have peer review; you might still approve grants without anonymizing the names or anything; you approve grants based on how good other people think those grants are, and based on how successful they seem like they might be.
Michael in his head already had a giant list, which I started to add to, of potentially totally different programs that you could run as a funder. For example, high variance funding: you only fund things where the peer reviewers really disagree with each other. Or funding an open source institute. That's something that typically wouldn't be considered science, but if you're a funder, you could fund a whole institute that's working on open source projects, and that is an important piece of infrastructure for science. I think at some point we talked about a traveling institute of scientists where you go around the world... A young genius immigration program where you immigrate people into the country-- find people across the world who seem like they might be really capable in some particular way. So it was like, "This is infinite." We just kept going. And so we were like, "Okay, well, that's interesting." Eventually we found a frame for this, which is the beginning of the essay: let's say you come across some aliens and they do science. "Would those aliens have discovered mathematics?" Seems likely. "Would they have discovered atoms?" Unless we're totally wrong about atoms, seems likely. "But would they have PhD programs or tenure or...?"
Ben (41:21):
And have randomized controlled trials.
Kanjun (41:23):
They might have randomized controlled trials or we may discover that actually... One thing about RCT... Sorry, go ahead.
Ben (41:31):
We haven't discovered it.
Kanjun (41:34):
Well, one thing about RCTs that's interesting is that we only do RCTs in certain situations. We don't do RCTs in physics, because we can understand mechanistically why the phenomenon is happening. We only do RCTs in situations that are so complex we've kind of given up on explaining the result mechanistically. And so we just divide into two different groups and see which one does better probabilistically. Maybe it is a method that we'll always have, because there are some things that are always beyond our understanding. But to me it's very unsatisfying.
Ben (42:16):
Yeah. I hadn't heard it described like that, but you're right. Complex sciences, biological sciences, anything to do with humans, so social sciences-- that's true. I will try and say: so what do you think is the best idea? Or you can do both: what was the worst idea in your meta science paper? Because maybe that's the one we should go for, on the grounds that actually it should be a lottery and you don't know. You might say there was also the failure audit, ten-year insurance, you had your traveling scientists, you had long-short prizes, the anti-portfolio, the interdisciplinary institute, you had the funder as detector, and a variety more. What's the thing which you think was probably Michael going, "Oh, that's just really awful, but we should just keep it in there because you never know. It might be right"? Or otherwise, "This is mine and this should be right at the top because it's the best idea in it"? What do you think?
Kanjun (43:10):
Maybe I'll answer a slightly different question which is like, "What are the most interesting ideas?" And then...
Ben (43:19):
Or you could go: what's the most high-risk idea? What's the idea that's most unlikely anyone's going to go for, but probably is the one they want to try?
Kanjun (43:27):
Well, all of the ideas are a little bit out there. I think the most interesting ideas are a little bit less about the funding programs, because those programs are just given as examples of underlying generating functions. What's much more interesting is looking at the generating functions. So we have this general idea of latent potential. As a funder, when you allocate money, you can give it to people where there's latent potential; where their potential is not yet unlocked by another funder or another way of doing discovery. So your goal should be to find latent potential. I think this is actually a relatively new idea. Most funders don't think about their job as finding latent potential. Just like in finance, your job is finding edge.
As a funder, too, your job is finding edge. It's not very effective, I guess, to allocate more money just where everyone else is allocating money. So this idea of latent potential is kind of the core idea. The way we got to a lot of this part of the essay is that we were trying to understand, "What are different ways of excavating latent potential? How do you find it?" So the anti-portfolio idea, for example, came from Bessemer Venture Partners, which has on their website a list of all of the great ideas that they missed. If a funder had a list of all the great ideas that they missed-- the theory about latent potential here is that there's something in the incentives or motivation of the program manager or the people making the funding decisions that maybe is overly risk-averse or doesn't have a feedback loop, and there's latent potential in that feedback loop and in shifting that incentive structure.
I think maybe another idea, which is on the list or maybe we don't have it, is a Nobel Prize for funders. That gets at a similar generator of latent potential, which is changing the way that the program manager behaves. There's a totally different way of thinking about latent potential, which is around how you might construct new fields, or how we might 10x the rate of field production. That is a question that generates a lot of potential program ideas. The communities idea is maybe one of them. I used to run this big group house, and one idea I had was, "What if you started lots of group houses and funded lots of group houses and just got interesting people to live in them?" You'd end up with these interesting communities, and maybe that would increase the rate of field production. Or another thing: maybe there's a field shutdown process, where at some point a field feels too incremental and you have to shuffle people into a different field. So you get cross-disciplinary work as well as this kind of death of fields, and maybe that would result in new fields. There are lots of ideas: if you think about this question, you'll end up with lots of different ideas.
Ben (46:48):
Well, people should definitely read the essay-- or rather, as I like to say, the open source book.
Kanjun (46:53):
Right.
Ben (46:54):
I'm going to riff on two of those things. One you mentioned latterly, and we might come back to the Neighborhood idea as well. I think that's really interesting, because surely 10, 20, 30, 50, maybe even 80% of the value in universities is the social capital of bringing people together. Why not recreate that and see what happens? The other one you said is about investing with edge, because that's really interesting in financial markets. There is this school of thought that you should try to identify where you have skill, and then play where you have skill and not play where you don't. This is true of a lot of sport and other games. Therefore, even if you get lucky, it will have been viewed as a mistake if you played where you didn't have skill; even if you won, that is also judged to be a mistake. Whereas if you have skill and you played a good bet and it didn't go your way-- which in markets happens a lot-- that is actually the correct process, because you played where you had skill. You had edge; the edge just didn't come off. I think this is very true of funders. They probably have idiosyncratic skill-- because it's a social science, they probably do have that-- and then they should play to that and not where they feel they don't. I think that's a really...
Kanjun (48:14):
Every funder is started by a different person, and that person has different networks, et cetera. So I think their edge could be different. They could end up investing in really different programs, but they don't. Another idea that we don't talk about that much in the essay is this idea of institutional antibodies. So another reason why-- and this is the entire second part of the essay; it's about bottlenecks to change. "Why does change not happen? We have all these program ideas. We just came up with a hundred of them off the top of our heads. Why doesn't anyone do them? That's so strange." That really got into the second question of, "Okay, well, actually there are a lot of bottlenecks," and one is institutional antibodies, where you try to do something different and existing institutions actually lambast you for it. They really don't like that. There are reasons for it. One is maybe they feel threatened. Maybe Harvard feels threatened by a new funding institution. I think Harvard said something really negative about the Thiel Fellowship when it first came out, because it was a threat. And by definition, if someone's finding edge somewhere else and it's a little bit competitive, it will be somewhat of a threat to some existing powerful institution.
Ben (49:29):
Don't we call those types of things-- So you call them antibodies, but it strikes me that what we're really talking about, or what a layman might think about, is culture, or at least part of that culture. There's something interesting about institutions which have a very long history and the culture that they develop. I know you're very interested in institutions and culture broadly. So I guess my question is: is one of the problems with old institutions the way that culture has ossified, whether it's about competition or not? And therefore you need new institutions, or maybe new arms of old institutions. And maybe you could look at it through the lens of-- or you might want to comment on-- the culture that you are trying to build at your firm, or in startups in general, because that seems to be potentially one of the competitive edges that startups have: they don't have to deal with a legacy culture, whatever that might be. Except there's this observation that old institutions seemingly have this problem, this bottleneck, which you see in funders.
You can see it in a lot of old institutions, and in fact in startup language. Jeff Bezos has this idea that there are two kinds of companies: he wants to be a day one company, and he says if you're a day two company, you're dead. 'Dead' is probably slightly overused, but it's saying that your culture and everything is ossified and you can't do all of these things that you want to do to innovate. But it strikes me that that seems to be a very human thing. On the other hand, cultural and institutional knowledge, when you take the long cycle of history, has been incredibly valuable for making progress, at least up until this point. So I'm not entirely sure; something about it has been preserved for very good reasons. So yeah, your thoughts on culture.
Kanjun (51:18):
A lot of thoughts on this. Okay, there are a few categories of thoughts. One is this idea of old institutions overfitting. Second is, I think the phenomenon of institutional antibodies is speaking to something broader than culture. The way I think about culture is that culture is a set of beliefs held by the people in that culture, and it is also reinforced by a set of systems. So kind of just going back to the point of institutions-- I think there was a third thing that you talked about around-- I forgot. Maybe we should cut out the...
Ben (52:11):
The history of culture also being important for traditional knowledge, on that flip side. But knowledge over time.
Kanjun (52:20):
That's right. So knowledge versus culture and history. I'll talk about the institutional antibodies thing first. I think that goes beyond just the beliefs. I think it's actually more about the competitive system dynamic. So it's about the broader ecosystem of institutions and what happens when there's a competitive dynamic in general. Whenever there's a competitive dynamic, you're kind of taking away the power of an existing institution, and that institution is going to retaliate, because they're old, and part of why they're old and still exist is because they have some kind of power. So I think that's actually more of an 'institution as an organism' thing, where the organism is being threatened, and less about the culture of that institution in particular.
I think there are some environments, like the startup environment-- and we were inspired a lot by the startup environment-- where threats happen all the time and the existing institutions retaliate, but it doesn't matter that much. There are some mechanisms in place, like antitrust, that prevent existing institutions from retaliating too much. So some ecosystems of institutions, like the startup ecosystem, are relatively healthy, because new institutions form all the time, old ones die when they need to die and don't die when they don't need to die, and you have this outside party that enforces antitrust laws. So that's addressing how institutional antibodies are broader than the culture of the institution.
But now, if we talk about institutional culture: I think your question is something like, "Why is it that existing institutions can't implement new ideas?" Like Harvard-- Harvard can't implement the Thiel Fellowship. I think it's because these two institutions have beliefs that are directly in conflict with each other. So that's one reason why the existing institution can't do it. The Thiel Fellowship says, "University or college is not useful for some people." And Harvard says, "I am a college; you should come. It's clearly useful for you." They have core underlying beliefs that just can't coexist. So in this case, it's really hard for those two beliefs to exist under the same institution unless you have two really different cultures.
Ben (55:04):
I get that. But Harvard, I guess, could do a failure audit or ten-year insurance or high variance funding, which also seem to be bottlenecked in this culture. But I can see that sometimes you just can't take it on because it's not part of your set of beliefs.
Kanjun (55:19):
Yeah. I think Harvard could do a lot of these other things. We'd love for them to do those things. We just feel like people have not been very imaginative in the types of things that they could try.
Ben (55:30):
Why do you think we've lacked imagination?
Kanjun (55:34):
I think it's not very safe.
Ben (55:38):
I guess that harks back to culture. I understand your point about antibodies maybe being wider, but I do wonder about this culture thing: you're not safe to have maverick ideas. Maybe they're maverick, maybe they're not even maverick. I'm jumping around, but I guess that leads me to think about how you create a positive maverick, then, if you've got institutional antibodies as a problem. Maybe you've also got culture as a problem, although maybe you've got these new institutions. And-- I cut you off before on the knowledge question as well. You can return to that one too.
Kanjun (56:14):
That's okay. We'll come back to that one. So this problem of safety, I think, is really interesting, both from an institutional perspective and an individual perspective. As humans, if we feel unsafe as individuals, we generally won't take risks. There's some survival instinct that prevents us from doing that. As for institutions, I think a lot of funders don't feel safe because their funding is coming from some outside party. It's coming from the government, or it's coming from someone who's wealthy, or someone else who wants to see success. They want to see that their funding is going to a good place. So if I ran a funding institution like that, I would need to be like, "Okay, I need to be able to show success because I'm not the source of this funding." But look at funders run by a high net worth individual. For example, the Arnold Foundation funded Brian Nosek, who started the Center for Open Science, which we also talk about in the second part of our essay. Brian was rejected by the NSF, the NIH-- literally everyone-- for years, until the Arnold Foundation found him.
And I think part of why the Arnold Foundation was able to do something strange, or fund something strange, is because John Arnold was involved in it, and he was the supplier of all the money, so he felt very safe. When I interact with funders of different types, where the source of their money is held by the decision makers versus not, they end up behaving in really different ways. When a funder's money is held by the decision maker, the funder takes a lot more risk, because they feel safer. They can spend their own money however they want. They have no one to report to.
Ben (58:10):
That's really interesting. Slightly adjacent to that, it would suggest that you might believe in all of the work done on psychological safety in teams-- Google sponsored research on it, building on Amy Edmondson's work. So this idea that when you're feeling safe, you will point out bad decisions and also venture riskier, bolder ideas. And when you're not, you don't point out something which you think is obviously bad, because you're worried it sounds stupid, and you also don't take risks on new ideas, because you're just not feeling safe in your team.
Kanjun (58:42):
Yeah. I think psychological safety is really useful in situations where you want higher variance-- you want people to take risks. And you may actually not want it in situations where you don't want people to take risks.
Ben (58:54):
Maybe. Although people also point out stupid things you're doing, so it's not just the risk side. That adds to your point: a team has been doing something for 10 years, and a new member of the team thinks, "Well, they've been doing it for 10 years," and even though they think another way is obviously better, they don't suggest it. Another thing, then: I observe a strong streak of creativity in your work and life. There's setting up a Neighborhood...
Kanjun (59:22):
Sorry, we haven't talked about knowledge yet.
Ben (59:25):
Okay. Let's go on knowledge and then we'll go-- Maybe knowledge into creativity is a good way of going. So holding knowledge...
Kanjun (59:30):
Okay. Yeah, we can do that.
Ben (59:32):
Holding knowledge on an institutional level. Or let's hold knowledge and then see where this goes as well. So, a strong streak of creativity in your life: setting up the Neighborhood, which you obviously had feelings about before. I really wanted to talk about all of the dancing as well. The dancing comes in because I was going to loop back to somewhere early in the conversation, about meaning and value, because I think dancing has that. Actually, dancing might be quite a good example, because there's also knowledge held in dances. The way that we dance, and that talent, is also developed through time. Actually, we can segue: sometimes institutions hold onto that knowledge, although sometimes it's groups and community groups. I think of things like, I guess, Brazilian dance or Capoeira, which is held in a community and then has come out. Maybe not exactly a dance, though, although you could call it that. So how important is creativity? And you can talk about knowledge and institutions as well.
Kanjun (01:00:37):
Yeah. I guess to answer that question directly, I think for me, creativity drives basically everything I do. I only realized this a few years ago. I never really identified as creative, personally. I had a really close group of friends in high school, and a few years ago, talking to one of them, I was like, "I think I might be a creative person." And he was like, "Duh." Everyone knew that. It was really obvious. I was like, "That's strange." But anyway, I think like...
Ben (01:01:11):
All of those people understand you better than you understand yourself.
Kanjun (01:01:19):
Yeah. I guess it points to how powerful stories of yourself can be and how limiting they are. I believe in human creativity, and that if we can unlock human creativity, there's this extraordinary potential in humanity. I think that's why I'm so interested in artificial intelligence. It seems to me like fire or electricity: one of the greatest tools for unlocking human creativity that we've ever encountered. So I think all of my projects-- AI, working on meta science, the Neighborhood, the fund, the podcast-- are all about human potential: understanding human potential, unlocking human potential. Can we set up systems and excavate ideas that can unlock human potential?
To your point about knowledge, there's something about knowledge I'm very confused about, which is I don't really understand... It seems like there's a benefit from institutions and cultures holding old knowledge, and then there's a point at which, in many situations, it's not useful to hold that knowledge. So institutions like Harvard might have really outdated knowledge or beliefs that they're holding. And this is why I was like, "Maybe there's an analogy to overfitting, where times have changed and you actually should be dropping some of your beliefs." But I'm also confused, because it seems actually good for some institutions to hold the belief that universities are good, because they're good for some people, and good for other institutions, like the Thiel Fellowship or New Science, which is a new funder out here, to hold the belief that universities are not useful, because they're not useful for some people.
So now you have institutions that hold different beliefs, cultures that hold different beliefs, and so you get a lot more variance or diversity, and you're able to 'service' many more people, because different beliefs, different institutions are a good fit for different people. But I'm kind of confused about this dynamic of what causes an institution to grow based on its beliefs. Like, "How come the Thiel Fellowship is still so small?" I'm sure there are more people it could service-- I don't know. I'm a bit puzzled about this relationship between knowledge and culture and diversity.
Ben (01:04:01):
Sure. So my reflection on that is that that form of knowledge is actually much closer to art than we would like to acknowledge. And so, as we were riffing on earlier, art has a time and place and culture, and can be interpreted by one group of people in one time and place in one way. And a group just adjacent to that, in another country or another time, can interpret that very same piece very differently. Plausibly both get a lot of value from it, or one group gets more value and the other less. But we pretend to ourselves that this knowledge is closer to physical natural science, like you said, the physics, like how the atom works. Well, actually, institutional knowledge is not like how the atom works. To your point, I don't think the alien would view the Thiel Fellowship or Harvard the same way.
To the alien it might be, "Well, you might do it that way or you might not." It's not like the atom would be. Therefore it is closer to art, with all of the complexities around art. That's probably why you'd need a randomized controlled trial to ascertain it. But even then I actually think it would give you a false read, because as time changes and as people change, the value of that will also change; which riffs back to your earlier thing about creativity and art. It seems to me to be a lot closer to that, but I'm not sure.
Kanjun (01:05:21):
That's really interesting. Yeah, this idea that the beliefs expressed in art are dependent on particular things in order to be effective or useful or meaningful. So they're dependent on how everyone else is interacting with each other; some of the other norms and beliefs that exist. You can't take a belief fully out of context because it's part of this dependent system of beliefs and behaviors.
Ben (01:05:49):
Correct. And the same is therefore true of institutional knowledge like that. I'm going to finish off on the creativity and the dance element, because I'd be interested to know what you think non-dancers do not understand about people who dance.
Kanjun (01:06:10):
I guess I was a non-dancer until college. Then I started doing competitive Ballroom dance, which is a very structured dance style. That was very comforting to me, because before doing Ballroom I was like, "I can't dance. I'm super clumsy." I'm still super clumsy. I don't have any spatial awareness. I'm not paying attention. I'm always in my head. So Ballroom was this really interesting way of getting to understand the connection with a partner and with my body and with the floor and with the air, the space around me, in an environment where it didn't feel like I was just failing the entire time and doing a bad job. I think there's something interesting here about non-dancers, where a lot of people say, "I can't dance."
I think it's more because of this lack of positive feedback, this feedback cycle where people are not getting any positive feedback for their attempts to dance. So then they end up with this belief that they can't dance. So for me, I did Ballroom first, and then after 2012 I started to strip away some of the structure of Ballroom. I got tired of it and I was like, "I really want to do something where I can be a lot more expressive and still have this partner connection, because that adds so much complexity to the dance. But where I can make things up or improv or try new things, discover new things." And so I slowly got into, first, West Coast Swing and then Fusion. Fusion is this really interesting partner dance style where it's literally what it says: a fusion of all styles. You're just making things up the entire time, combining contact improv with ballroom, with swing, with all of these different styles, and making up new styles all the time.
It's basically constructivist movement with a partner to music, and I love that. Some people come to Fusion and they say, "I'm not very good at it. How do you get good?" I do think that the structured training of Ballroom, and this understanding of connection and how to hold my body and how to connect my body to itself, was really helpful. I guess a piece of advice I give new dancers is that an easy way to hack this sense of connection is to imagine that you are moving through molasses. So air is no longer air, it's molasses, and it's very, very viscous. And now you're basically always pushing against some force that's pushing back. This imitates a lot of the sense of internal connection that you get in dance when you've done dance for a long time.
Ben (01:09:22):
Wow. That's really fascinating. So think of it like molasses, or I guess like those slow Tai chi movements. And also, if you are a non-dancer and you think you can't dance, that's obviously false in some fundamental way. I reflect that every three or four year old can dance in some form. Therefore it's not an ability you must lose; you must somehow decide you've lost it. I've learned a lot.
Kanjun (01:09:53):
Dance is inside of you. It just needs to be unlocked.
Ben (01:09:57):
And somehow-- I'm not quite sure-- the fusion dance seems to me a kind of long loop analogy to general intelligence. This idea of you mixing a lot of things; it draws on everything that you are, but the structure helps, and you are forming new things in real time with a partner or the environment. Obviously that's a very human thing, but somehow, in this whole conversation, it seems to me a distant cousin of what we were talking about on the connections of all of these other types of things.
Kanjun (01:10:30):
That's actually really interesting, because in AI we often talk about-- So if you're training an AI system in some simulated environment, we often talk about multi-agent simulations. So you want multiple AIs in the environment. Why do you want multi-agent simulations? Well, some people say, "Okay, well, you maybe get interesting emergent behavior." But I think that's actually much less interesting. What other agents cause is a more diverse set of data, of outcomes in the environment. Otherwise the environment is static; other agents are modifying it. So you're much more likely to encounter new states of the environment when other agents are modifying it than when they're not. So I don't like dancing by myself, because I kind of get stuck in local minima of movement. Whereas if there's somebody else, they're always introducing new states of the environment that cause me to have to figure out how to react in new ways and discover new things about myself and my own sense of movement. I do think there is this parallel to multi-agent simulations.
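To make that intuition concrete, here is a minimal sketch in Python; the 5x5 gridworld, the random-walk policy, and the step count are illustrative assumptions, not details from the conversation. It just counts how many distinct world states a learner encounters when other agents are, or are not, also moving.

```python
# Toy sketch: how many distinct world states does an agent encounter
# when other agents are also perturbing the environment?
import random

GRID = 5  # 5x5 torus; the "world state" is the tuple of all agent positions

def distinct_states(n_agents, steps=2000, seed=0):
    rng = random.Random(seed)
    positions = [(0, 0)] * n_agents
    seen = set()
    for _ in range(steps):
        # every agent takes one random step (wrapping at the edges)
        positions = [
            ((x + rng.choice([-1, 0, 1])) % GRID,
             (y + rng.choice([-1, 0, 1])) % GRID)
            for (x, y) in positions
        ]
        seen.add(tuple(positions))  # record the joint state of the world
    return len(seen)

# Alone, the agent can only ever see 25 distinct states; with other agents
# moving too, the same number of steps visits far more (up to 25**3 here).
print(distinct_states(n_agents=1))
print(distinct_states(n_agents=3))
```

The point of the sketch is just that the joint state space grows with each additional agent, so the learner keeps encountering configurations it has never seen, which is Kanjun's "new states of the environment."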
Ben (01:11:34):
To draw another, maybe tenuous, parallel: that friction or dance between two entities, in dance or in something else, seems more likely to create a new field, or a field where we don't even know it's a field, because you've had a novel interaction within dance. Plausibly, you do it enough times and people like it, and that's actually a new dance form. At that moment in time no one would have thought it was a new dance form, because it was the first time it had ever happened between those parties. And if that analogy holds, that would be other AI agents or whatever making those new fields take form.
Kanjun (01:12:08):
Yeah, that's interesting.
Ben (01:12:12):
So how about we play a short version of underrated, overrated, and then you can talk about a couple of your current projects and things. You can make a comment, you can pass, or you can just say underrated or overrated. I have some of these ready. Underrated or overrated: having agency?
Kanjun (01:12:36):
Depends on what environment you're in. I think vastly underrated in the vast majority of the world. Slightly overrated in the rationality community; speaking as someone who is very close to it. I met my co-founder at a rationality workshop. Agency is kind of the underlying description of human capability. I think of it as, "What is our ability to change the world from one state to any arbitrary state?" The farther away that state is, the more agency we have, or something like that.
Ben (01:13:15):
I could definitely see that. It's like whenever I meet a long-termist or something like that, I definitely think they overrate existential risks, but that's probably because everyone else underrates existential risks, so you probably meet somewhere in the middle. Although, on that one, that's why I can never quite get my head around the totality of EA, or effective altruism, because they seem to put so little weight on art and creativity, which just cannot jive with their world view, even though they try to make exceptions for it. So they vastly underrate it, which might mean I overrate it a little bit. Okay, next one, city planning; overrated, underrated?
Kanjun (01:13:55):
Dramatically underrated. I think all cities should be razed, except for some, and then built from scratch.
Ben (01:14:02):
Would it be from a centralized planner, or would you somehow let-- I guess you could call it market forces, or people choosing where to be? How much zoning would you have there, from, I guess, nothing to everything? Or are you just in the middle?
Kanjun (01:14:19):
Yeah, I would basically... Okay, maybe not raze cities. I live in San Francisco, which is the most frustrating city in the world. A good friend of mine studied zoning in lots of different cities. My vote would be basically almost no zoning. Tokyo is one of the most interesting cities in the world partly because its zoning laws are very, very loose. So basically anyone can build anything anywhere. That's not fully true, but people have a lot of agency over what they're able to do with their space, and so you see this really crazy stuff; really interesting outgrowths. To me, Tokyo is one of the most beautiful places on earth. It's kind of this melting pot of humanity, a place where people really can express themselves in the environment.
I think San Francisco is the opposite of that. Everyone's expression is completely shut down because you're not allowed to do anything. So I wouldn't do central planning necessarily, but maybe a little bit of it. And much narrower streets; human scale, planning human scale cities. I think we could actually probably convert existing cities, if there were enough motivation, into much more human scale cities, where we build onto the roads, or on the roads we have parks and restaurants and pop-up shops and things like that.
Ben (01:15:47):
Yeah. So I can see that. And San Francisco is ridiculous like that, isn't it? It must be one of the highest GDP per capita places in the world, along with maybe five other places. But one quirky thing about Tokyo which I'm sure you know-- in fact about Japanese planning in general-- is that they think about their buildings with only 30 to 80 year lifespans. So because of that you get this idea of renewal. You might not think of it that way, because you think of the really old temples and wooden buildings which have obviously lasted a thousand or even two thousand years. But actually those are the exceptions.
Kanjun (01:16:22):
The Shinto temple also gets replaced. So that's just really interesting. I think this is actually a very underrated thing, which is death. In Western culture we really underrate death. Institutions should die. It's part of why we have all these problems in science: because institutions can't die. Buildings should die.
Ben (01:16:42):
For those listening before the 12th of January 2023: my next performance lecture is all about death. In modern day society we don't really talk about it enough. Whereas even going back just 50 years, but certainly one or two hundred years, the death of everything-- whether buildings, institutions or particularly people-- was a much talked about thing.
Kanjun (01:17:08):
Actually, I just made a connection, which is that I think there are lots of things with tradeoffs. So longevity versus death has very clear tradeoffs. Similarly, institutional history versus a brand new institution with no knowledge has very clear tradeoffs. So I just wanted to make the point that there are different points in the space that you can choose, with different tradeoffs.
Ben (01:17:32):
Yeah. I hadn't thought of them as opposites, but that's exactly right, given what we talked about in terms of institutions, death and renewal and all of that. Okay. Underrated, overrated; a couple more left. Innovation labs-- I'm thinking ARPA, or here in the UK we now have ARIA. Underrated or overrated: innovation agencies?
Kanjun (01:18:00):
Probably neutrally rated. It's not really rated. They're useful and they're not that useful if you don't do new things.
Ben (01:18:14):
Okay. Very good. All right, last two. High frequency trading or in particular high frequency trading algorithms?
Kanjun (01:18:24):
That's hilarious. I guess the context is that I was a high frequency trader in college, to pay for college. We'd always tell people... People are like, "What's the purpose? Are you doing something good for the world?" Everybody in high frequency trading says, "We're providing liquidity to the markets. That's the answer. That's the good that we're doing in the world." So I think probably that purpose is overrated from within the community. I think most likely we should have much less high frequency trading.
Ben (01:18:59):
This is interesting because there's a debate among market economists. So one of the functions of markets, supposedly, is to find prices; whatever the correct price is. And they have no idea why we need so many trades to actually find out what the correct price is, because you don't need that in a lot of other forms of markets, but you seem to need it in stocks. There is also a debate as to whether high frequency trading actually provides liquidity or not. But if it funded your college, then in some ways it must be massively underrated, because at the second or third order it has allowed you to produce all of this other amazing stuff. So that is a bet definitely worth having taken, if you grant that validity. Okay, the last one is, I guess, broad ranging: diversity in tech, or in any domain. But I guess I'm interested in diversity in technology, as it always comes up.
Kanjun (01:19:58):
What exactly do you mean by diversity?
Ben (01:20:02):
So I can clarify. I'm probably thinking of people diversity, but you could take it further. And, riffing back on our tradeoffs, there's this idea of lots of different things obviously being good for ideas and other things, versus narrow focus, which potentially might allow you to go faster on one thing. You could take the kind of women in tech angle in any domain as well. Or you could take it broadly.
Kanjun (01:20:32):
Yeah. I think diversity in general is probably underrated, I suspect, in Silicon Valley style cultures. The Silicon Valley style culture is very maximizing, and capitalism in general is rather maximizing. So I think there are probably lots of things that don't get funded or don't get done even though they might be really interesting or useful. The arts I think are a good example: there's not very much art here, I think for that reason. It's very maximizing, it's very utilitarian. People are like, "Why do art?"
Ben (01:21:12):
[Inaudible 01:21:12] in San Francisco. There can be great people as well, but not that much into art. In fact, I was reading just recently: Julian Gough wrote the end narrative in the original Minecraft, but supposedly didn't sign any contracts and has now made it open source, or essentially Creative Commons licensed. He has this whole long post riffing on how he has nothing against the people who did try to make him sign contracts, but that was a capitalist system, and his system, his art, clash with it in these very interesting ways when you want to make or create things; whether you're talking about financial capital or, to use finance speak, these other bits of capital: social, intellectual, relationship, human, and all of those types of things.
Kanjun (01:22:04):
That's really interesting. I'll definitely read that. There are a lot of clashes, and I like capitalism. It's very good at an important thing-- this idea is Michael's-- which is aligning what is true with what is good with power. When those things are misaligned, you end up with very corrupt states. In capitalism it actually goes moderately well, but it also has a lot of tradeoffs. I wonder if there actually exist much better systems, once we have more intelligence to be able to align things; systems that can have a lot more diversity of ideas and types of things that people can be doing, in addition to aligning what is true, what is good, and power. So maybe aligning what is true, what is good, power, and beauty; something like that. That beauty piece is definitely missing.
Ben (01:23:01):
Yeah, that's the next system, obviously. My day job is obviously well within the capitalist system as well. I spoke on an earlier podcast with someone called Jacob Soll, Jake Soll, who traced the history of these ideas; how the early capitalists like Adam Smith traced a lot of their thought back to Cicero, these Romans and groups. One of the ways the early capitalists thought about the world was that it was an industrializing sort of world, and that states and governments were also meant to make and create markets. But they also had this idea of aligning the good and the incentive. I remember Amartya Sen has this anecdote, or analogy, for how he thought the early capitalists saw it working. It's that if you are being chased down the street by someone who wants to knife you for your money, or because you look wrong, and you suddenly throw a lot of money in the air behind you, they stop and go for the money instead.
So it is actually meant to align people to something which works for the greater good. So it's actually a moral cause. The early capitalists actually thought it saved us from our baser nature, by aligning us through incentives. I had never really heard that. Then I went back and read some of these early capitalists, back to Cicero, and it is seemingly true, at least in my interpretation of what they were saying and thinking then. And we've evolved it to this state. So it's interesting that, to your point, it will likely evolve again, and how it evolves from here is still very open.
Kanjun (01:24:38):
I think that's really interesting. How are we doing? Are we doing okay on time? Can I do a slight diversion?
Ben (01:24:46):
Go for your diversion.
Kanjun (01:24:49):
Okay. I think this is really interesting in that capitalism has successfully aligned us toward satisfying people's desires and wants, and somewhat away from our baser nature. Violence has decreased a lot. There are goals to pursue that are not tribe-against-tribe kinds of goals. It actually feels like we're pretty ripe for some kind of transformation, because AI is getting better really fast; way faster than most people in the world see. I think people can see ChatGPT and see all its flaws, but ChatGPT is just the tip of the iceberg, even of research that we've done already. So there's still a ton of low hanging fruit in terms of capabilities. So I think all of this stuff will happen faster than we think it will. And maybe that means a system like capitalism needs to be augmented somehow.
Ben (01:25:57):
That's an agree for me. So I actually work a lot in the biological sciences, and I bore everyone. I don't have the technical details because I'm not really a coder. But what DeepMind have done with protein folding, in biological and computational biology-- I mean, they're a couple of generations in, but it's early in the journey. It is so mind boggling, having studied it 10, 20 years ago. It is the equivalent of magic, or a form of magic; it seems that far away. Therefore the downstream effects are just very hard for us to imagine, because even people in the field can only just start imagining what it will do. Related to your point: typically we've had to use controlled trials because we don't really know the mechanisms. This is some of the beginnings of actually starting to understand; we've got a linking of the biological mechanisms of something. So it's not that we're at zero. But between zero and a hundred, we're closer to 10 or 20 than we are to a hundred, for sure. You can just see, on the edges of where I have domain expertise, that these things are opening up in ways which are very hard to imagine, and which therefore I think could be both very exciting and scary at the same time. Riffing on the capitalism...
Kanjun (01:27:17):
Just one quick thing there, which is one of the things I'm most excited about in AI. Most people talk about automation, about AI doing what people are doing. That's maybe interesting, but I'm most interested in systems that are able to monitor very complex systems. Because the model is interacting with the economy, or with all of these people simultaneously, it's building up some model of the system that we can then interpret and inspect. That might lead us to a much deeper mechanistic understanding of how complex human systems work, in the way that AlphaFold allows us to better understand mechanistically how protein folding works. But you were going to say, riffing on capitalism.
Ben (01:28:00):
I'm going to riff on that and then I'll come back to capitalism. Are you most interested in human systems, like markets and social things; or things like weather systems and other complex physical phenomena; or essentially both, because you think it will probably apply to both?
Kanjun (01:28:16):
More human systems. I think weather and other systems like that we can evaluate and measure using instruments. We can measure how good the weather is somewhere, and then we can make better algorithms and collect more data. That doesn't require as much 'intelligence.' But what's interesting about human systems is that humans are full agents that need to be modeled. You need to model the human agent and also somehow take what they're saying, communicating in language, and turn it into something useful. That's just huge amounts of data. So you need something that's able to understand all of that data linguistically, and maybe beyond linguistically, like, "What is their body language, et cetera?", and then turn it into some useful behavior. I think for a system to be able to do that, it needs to actually have a pretty good model not just of individual humans, but also of groups of humans interacting with each other. And that kind of model is a really interesting model to inspect from the perspective of, "How does a complex human system work? How does an institution or a culture work?" I can imagine deploying some AI chatbot within an institution. If it's actually learning from the people inside of it, then it will end up learning a lot of things about that institution. It would be really fascinating to inspect what the fingerprint of this institution is versus that other institution.
Ben (01:29:39):
Yeah. And because group behavior is different from the individual behavior, it is this uniquely or very interesting place to assess that.
Kanjun (01:29:48):
Exactly. You can also think about democracy. Democracy is an algorithm that has really terrible inputs right now; it's just binary for every person. We use that terrible input because it's the highest resolution that we could reasonably aggregate from people so far. But with AI systems, maybe that doesn't have to be true. It doesn't have to stay at that resolution.
Ben (01:30:11):
That's the best worst system we have, or something. What is the quote? "Democracy is really bad; it's just better than anything else we have," or something like that.
Kanjun (01:30:21):
Right, right. Exactly.
Ben (01:30:23):
Okay. One final riff on the capitalism theme, and then I'll ask you about your current projects and advice. If you go back to the seventeen hundreds and eighteen hundreds-- at least on my reading of it-- you had thinkers like Malthus who basically thought, "We could never really get rich as a large population." And if you had looked at the preceding thousand, two thousand, four thousand, ten thousand years of human history, you would have broadly agreed with them, which many people did. There were seemingly these kinds of limits, whether physical limits or all of these others.
And then we essentially had the industrial revolution. You can talk about corporate labs, innovation, corporate forms, industrialization, energy, technology; innovation which seemed to break the Malthusian trap. And so we are, at least in aggregate, really wealthy in a way that the person in 1850 or 1800 or even 1900 would have been completely astonished by. It seems like magic. But on the other hand, I think they would be very surprised-- in fact, my reading of what they were writing at the time is that they would be very surprised-- that we cannot split the pie better. So they would basically say, "There's no way we could have grown this pie this big. That's incredible. What did you guys do? That's like magic."
But on the other hand: "How dumb could you be, that you could grow this pie but you haven't-- You've got more than enough, but there's no way that you can split it? Surely splitting it was the easier problem and growing it was the hard problem." That's really interesting. You read the work and you go, "Wow, they just thought the really difficult thing was growing the pie. They thought splitting the pie was going to be really easy." They assumed, "Oh, when we're really wealthy, no one will work, or they'll split it," and all that. It seems to be the reverse. And we see this with the AI that will come: it looks like we are going to be rich, in whatever form, at least in aggregate. But on splitting the pie we have not got any closer, really. Maybe at the edges; some ideas, welfare states, some insurance and all of that. But it's nowhere near what the thinkers of 1850 thought we would get. And they had no belief that we would ever get this wealthy.
I find that a really interesting dichotomy about how we got it wrong; so I think we probably have the future equally wrong. But I always think this about capitalism: they didn't think we would be this rich, and they just didn't think we'd be this stupid in terms of splitting the pie. It's just incredible.
Kanjun (01:32:58):
I think this is a really big and important problem, especially when it comes to AI being deployed. I want to harken back to our points about death. There was a study about how, if you increased the estate tax to basically a hundred percent, or even like 95%, then a lot of our income inequality problems would be solved. This is quite interesting. One hypothesis they had is that accumulation effects are basically what result in the wealth gap, not what you're able to get in a single generation. Once you accumulate a lot of wealth, and it's getting rich fund investment, it's a little bit harder to lose than it is to build from scratch. And once your rate of increase gets much higher than someone could ever earn in their day or lifetime, now you're just accumulating wealth at a much higher rate than anyone could even gain it starting from zero. So yeah, this point of death, or the estate tax-- Maybe the hypothesis here would be that it's not actually about splitting the pie; it's about disrupting accumulation. It's not a pie, it's actually...
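A toy calculation makes the accumulation point concrete. The return rate, starting fortune, and savings figures below are illustrative assumptions, not numbers from the study mentioned:

```python
# Toy sketch: compounding inherited wealth versus saving from scratch.
def wealth_after(years, initial, annual_return=0.07, annual_savings=0.0):
    w = initial
    for _ in range(years):
        w = w * (1 + annual_return) + annual_savings  # grow, then add savings
    return w

# An inherited fortune compounding at 7% with no further income...
inherited = wealth_after(40, initial=10_000_000)
# ...versus starting from zero and saving $50k a year at the same return.
from_scratch = wealth_after(40, initial=0, annual_savings=50_000)

print(f"inherited:    ${inherited:,.0f}")    # roughly $150M
print(f"from scratch: ${from_scratch:,.0f}")  # roughly $10M
```

Under these assumptions the inherited fortune ends up more than an order of magnitude ahead, purely through compounding; which is the "disrupting accumulation" point.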
Ben (01:34:24):
The pies that you continuously make-- because obviously the analogy falls down there. But you're right in that.
Kanjun (01:34:29):
The pies that are always growing; self-growing pies.
Ben (01:34:34):
Great. Like, "We've invented self-growing pies, magic, but we haven't..." And there is something in that, because in the UK, in Britain, the landowners have been rich for a very, very long time. Land is still basically the majority of Britain's wealth, which is very weird. So for a thousand years they have been among the richest people, and that wealth has not had the renewal; whereas most other long-dated family fortunes, in technology or trade or whatever, have cycled through, and you see that. But the landowners have not, because it's accumulated wealth which you don't give up. Okay. So one last cheeky question and then we'll finish. We had a slightly cheeky question from Twitter about how you can use your name as a verb. I have seen on your website that you can definitely use it as a kind of noun, because you have kanjectures, of which we talked about a couple. But do you also see yourself as a verb, an action; so to "kanjun" as well as to have kanjectures?
Kanjun (01:35:36):
So this question is from Aram, one of my best friends, who was my housemate for many years. The joke is that I love to steal people's food. Food is much tastier when it's someone else's than when it's mine. So often I'll go out with friends, I won't order very much, and I'll just steal food from them. So Aram coined the verb "to kanjun," meaning to steal someone's food. However, there's another thing that I do, which is that whenever I talk to somebody-- and you didn't encounter it that much here, although you would have if we'd had more time to riff-- I kind of intensely question them, trying to understand them. Like, "Why did you make these decisions? Why do you think this? What do you think about this?" So "to kanjun" also means-- this is like my default state encountering anyone-- to intensely question someone about their life and their thoughts. So it can mean either; take your pick.
Ben (01:36:44):
That's great. So actually that's two verbs and a noun. I will riff back to our earlier thing and posit that's because food from somebody else's plate has more meaning to you than food from your own.
Kanjun (01:37:00):
I love it. Yes.
Ben (01:37:02):
The calories are obviously the same; the value of it is more. I don't often do this, but I'm open: you can ask me a question if you would like to.
Kanjun (01:37:15):
Oh, I already asked you some questions, but I have lots of questions about your relationship to art. The culture I'm in, in Silicon Valley and EA, has maybe a non-valuing of art. I'm really curious about your reaction to that, from your perspective as somebody who's a playwright and to whom art is clearly very important. Why do you think there's this devaluing of art, and what do you think is lost in a culture like this?
Ben (01:37:54):
So in my view, this stems quite clearly from utilitarian-led thinking, which you can trace through Peter Singer all the way back to Jeremy Bentham. And then you can also see it in the work of Derek Parfit, who was viewed as one of the greatest living British-- actually global-- philosophers; he died a few years ago. It's encoded in his work "Reasons and Persons." So to cut a long answer short: their shortcut for this-- something you'll know, but for listeners-- comes down to expected utility theory, or expected value. This is a shortcut for trying to value something, particularly stocks or cost-benefit analyses or things with cash flows; things that you can count.
There are a lot of paradoxes where expected value doesn't work. The classic one is the St. Petersburg paradox. But take something like a 51/49 bet: 51% of the time you get a lot of value, but 49% of the time you lose everything. Let's say you either double the value of the world or you destroy it. Play that game for long enough and you're going to destroy the world, yet strict expected value basically tells you to keep playing. That is because there are a lot of things which you can't put into an expected value calculation. Now, the fudge for this-- so-called, in my view-- in expected utility, which can work for some things, is that you have this idea of utility, with which you try to adjust the value to bring it closer back to humanness. But this is really hard for us to do, and actually we mostly fail to do it except in very simple cases.
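A small simulation makes the 51/49 point concrete; the round count and trial count here are illustrative assumptions. The expected value of repeated double-or-nothing play keeps growing, yet essentially every run ends in total ruin:

```python
# Toy sketch of the 51/49 bet: positive expected value, near-certain ruin.
import random

def play(rounds, rng):
    value = 1.0
    for _ in range(rounds):
        if rng.random() < 0.51:
            value *= 2.0   # 51%: double the value of the world
        else:
            return 0.0     # 49%: lose everything
    return value

rng = random.Random(0)
trials = [play(100, rng) for _ in range(10_000)]
ruined = sum(v == 0.0 for v in trials) / len(trials)

# Each round multiplies the expectation by 0.51 * 2 = 1.02, so strict
# expected value says keep playing; surviving 100 rounds has probability
# 0.51**100, which is vanishingly small.
print(f"analytic expected value: {1.02 ** 100:.2f}x")  # about 7.2x
print(f"runs ending in ruin:     {ruined:.1%}")        # effectively 100%
```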
Then even in complex cases, where you can take some idea of what humans might think, it's not clear that there's a consensus. So a classic one from healthcare economics: it's very expensive to save a preterm baby. It costs something like half a million to a million dollars or more, whereas you could spend that saving the lives of diabetics at maybe $20,000 or $30,000 each. On first order expected value you always save the diabetic. Then you start thinking about utilities: "Oh, maybe there's more life, more to society, in something like the baby," and you can try to fudge it. Or you can ask people, and you find that the majority of people in the street say, "No, we should save some babies." So that's just one way of getting the kind of disparity which doesn't play into expected value.
But because expected value is such a core part of how their thinking has grown up, they can't do things which are hard to measure. Or, to our point, things like art or creativity, which are not only hard to measure, potentially impossible, but are also not time-invariant: they move across time, people, space, and culture. And because that doesn't fit into the framework-- and I guess some of them do get there if they think really hard about it, or they do fudges, or they might think of themselves as slightly pluralist-- it really doesn't fit, and therefore all of that falls down. So while you can just about do expected value over malaria nets, you definitely can't do it over art. And the second order effects of the power of art, which are really hard to reason about, you just can't put into your calculations. Yet they seem to be at really crucial tipping points for the world.
So two or three examples. Maybe it's emergent behavior, talking about that. Greta: is that emergent behavior or not? But for instance, the white Texan lawyer for Martin Luther King Jr became a lawyer for minority rights because he listened to Louis Armstrong play jazz, and he said, "I've heard genius in a black man. The only thing I know is that I need to fight for equality." He's on the record as saying that. That's the interesting formative power of art. You think of something like...
Kanjun (01:42:01):
Art is like a way to see the humanity in anyone else.
Ben (01:42:05):
Exactly that. And at all of these important social progress points you had an artistic narrative which changed the system or made the system better. Some of that goes to our deepest myths. I call them myths because they're things that are true only because humans believe them to be true. Like money: the tree doesn't care about money, the dog doesn't care about money, the alien probably wouldn't care about money, depending on whether we had to interact. Humans only care about money because we've made it so. So that's one of our greatest myths, and essentially it's an act of creativity; in fact, kind of an act of art that runs across humanity. Because that doesn't fit into expected value theory very well without some sort of pluralism, they can't get at it so easily, and therefore it falls by the wayside. That's why you then get these critiques from outside about how they don't understand the system. They do obviously understand the power of story and art, but because you can't really weigh it up between individual stories, they tend not to invest in it very much. So that's an answer to that. But I feel fairly certain this is the root of it. When I speak to a lot of EAs, that seems to be true, and if you read their influential philosophy, that also seems to be true.
Kanjun (01:43:27):
Yeah, I think that's true. Peter Singer explicitly says funding the arts is not as good as saving lives in other countries. One thing I've been puzzled by is this question of measurement. One of the things Michael and I have talked a lot about is funders trying to measure research results and how that results in short-term incremental decisions. And measurement is something I deal with a lot in my day-to-day life. I used to run a startup. Startups are all about measurement, because it's all about the short term. But running a research lab, I can't measure anything; there's very little I can measure. So I've had to completely change my perspective on measurement.
So I think there's this really interesting point about utilitarianism. Startup founders can be utilitarian because they're actually not losing that much by being super measurement oriented. But when you're doing research, or you're in the arts, or you're trying to make real long-term change in the world, then things are really hard to measure. The problem is that when I talk to an EA-- and I would consider myself somewhat EA, in terms of being interested in the philosophy while also finding some things quite problematic-- they would say something like, "Well, the issue is that your utility function just hasn't captured all of the things that you're missing." There was a time when I thought, "Yeah, that seems true. We should be able to capture other things in the utility function." But now I'm at a point where I'm wondering, "Actually, that may not be true at all. Some of these things may not be measurable; maybe not for a very, very long time." So using expected utility as a framework is just going to lead to the wrong actions; to non-optimal actions in the long term. So I think your point about art is super interesting. Old me would have said, "Okay, can you count art in the utility function?" The now me is like...
Ben (01:45:37):
And they believe you can; I'm arguing you cannot. What's the phrase? "Not everything that can be counted counts, and not everything that counts can be counted." I did quite a long podcast with a philosopher called Larry Temkin, who has this critique of EA. For those listening, it is a little bit nerdy and it does go on for three hours. But his main point is that his work on moral philosophy, and something called the transitivity or intransitivity problem, shows that the axioms behind expected utility theory do not hold for moral choices. They hold for maths: obviously three is bigger than two, two is bigger than one, and three will always be bigger than one. And it holds for heights. So it holds for maths; it does not hold for social or moral reasoning.
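One classic, concrete illustration of transitivity failing in social reasoning is Condorcet's voting paradox; a standard textbook example, not Temkin's own argument, and the voters below are hypothetical. Each individual's ranking is perfectly transitive, yet the majority preference cycles:

```python
# Condorcet cycle: three transitive voters, one intransitive majority.
rankings = [          # each list is one voter's order, best first
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    # count voters who rank x above y
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# Prints True three times: A beats B, B beats C, and C beats A, so the
# group's "is preferred to" relation is not transitive.
```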
He has a 500 page book explaining all of this, but you can take it as read. When he first posited it, a lot of people thought, "Hmm, that doesn't seem right, because that's crazy." And now a majority of thinkers actually think it's true. So my point of view is: if this is right, and it seems like it is right, then one of the fundamental axioms cannot hold, and you have to use expected utility with caution. That doesn't mean it can't be a useful tool; it's obviously a useful tool. But to apply it to all moral reasoning you need to proceed with caution, and it seems it is currently applied with not much caution at all. Larry Temkin explains this in his book, actually. It's very interesting, if anyone wants to go back and listen to that episode.
Kanjun (01:47:24):
That's really interesting. I'll definitely listen to it.
Ben (01:47:27):
Yeah, I'll send you the link. Alright, so let's end on-- You could also feel free to ask me another question. But: current projects that you are working on that you might want to mention; I think we have mentioned quite a lot of them. You could also offer any advice that you have, either for startups-- it could be what you're looking for in startups-- for women founders, from your own journey, or something in AI. So yeah, current projects and any thoughts or advice you have.
Kanjun (01:47:58):
What would you like for me to give advice on? Advice is always very personal.
Ben (01:48:06):
Okay. So let's do advice on, if you are thinking about working in AI or at a startup, what you should be looking to do. Because you already gave me really good [inaudible 01:48:17].
Kanjun (01:48:21):
Join Generally Intelligent.
Ben (01:48:23):
Yeah. Okay. So that's number one: they are hiring. If you're interested, you should look them up on the website and go and join them. But maybe you are abroad, so that might be tricky for you. So if you're not going to get hired at Generally Intelligent, what's the next best thing?
Kanjun (01:48:41):
We also hire remote engineers.
Ben (01:48:45):
The next best thing. If you want to work on AGI, join Kanjun.
Kanjun (01:48:51):
That's right.
Ben (01:48:53):
So that's that.
Kanjun (01:48:55):
What else?
Ben (01:48:58):
Well, you can maybe look at current projects.
Kanjun (01:49:01):
My current projects?
Ben (01:49:02):
Or advice.
Kanjun (01:49:04):
Sure. Yeah. I guess I can briefly talk about current projects. So Generally Intelligent we've already talked about-- well, we are hiring, and we're very interested in inspecting systems to get a deeper understanding of what's going on, and using that to figure out whether we can get good guarantees around robustness, safety, et cetera, as capabilities increase. Other projects-- I guess if you're a startup founder, I have a fund investing in folks, called Outside Capital. We do a fair amount of AI investment. Or you might be interested in investing in the fund. What else?
Ben (01:49:49):
You want to talk about the Neighborhood at all?
Kanjun (01:49:52):
Maybe. It's not something I advertise too much. But the Neighborhood is a really fun project. The reason why we ended up doing it-- Jason Benn, my former housemate and coworker, is the one driving most of it; I'm just helping him. We used to have this house called The Archive, and it was a great house. I think it changed our lives in really important ways. Maybe this is the piece of advice I would give, which is something like: you really become the people around you. I really become the people around me; the, like, five people I spend the most time with. I think choosing those people very carefully is quite important. For me, I've chosen them in particular ways, such that they are people I really look up to and would want to become more like, because people tend to internalize the beliefs and the limitations of the people around them.
So I want people around me who are expanding my idea of what I can be capable of, not reducing it. The Archive was a 25 person house that I co-ran for five years. One of the big things-- Michael actually made this comment that The Archive seemed like a self-actualization machine: someone comes in, and a year later they are just much more actualized. I think that was a really interesting thing that we did culturally, accidentally, just because of the people that we were. We were really trying to understand how we become better versions of ourselves and discover what our potential could be. I think very seriously asking that question, and having people around us asking those questions, was really transformative. We shut down The Archive in the middle of Covid.
The Neighborhood is like a scaled up, more sustainable, for-30-somethings version of it, where essentially we want something that is like a university campus for modern adult living, where you can have your best friends around you and have this lively intellectual culture that we found at The Archive. And you kind of live in an area of a city where you just run into people that you know and want to be around. So in the Neighborhood-- I live at the southern tip of it-- I run into a lot of people, which is really interesting, and have spontaneous interactions in a way that in a normal city I wouldn't have.
Ben (01:52:26):
That's amazing. I really think that's an underrated thing. I wish in my twenties I had been within walking distance of my close friends and all that. The only piece of advice, listening to that, is I would design it, if at all possible-- and it might not be-- to take you all the way to the end of life. So when you're 60, 70, 80, that would be great.
Kanjun (01:52:50):
That is the goal. We'll see what kind of institutions we need to build. In a lot of ways we're constructing a new type of institution, a new way of living. It's not really centralized; it's a set of decentralized institutions. So we're planning schools, and then eventually there will be end-of-life types of things, for when you're growing older, when you want to leave a legacy, et cetera.
Ben (01:53:15):
A mini town within the town. Well, that's great. That advice sounds really great: choose your friends carefully.
Kanjun (01:53:26):
Yes.
Ben (01:53:27):
And with that, Kanjun, thank you very much.
Kanjun (01:53:30):
Thank you. This is super fun.