Michael Nielsen: metascience, how to improve science, open science | Podcast
Michael Nielsen is a scientist at the Astera Institute. He helped pioneer quantum computing and the modern open science movement. He is a leading thinker on metascience and how to improve science, in particular the social processes of science. His latest work, co-authored with Kanjun Qiu, is ‘A Vision of Metascience: An Engine of Improvement for the Social Processes of Science’ (open source book link). His website notebook is here, with further links to his books, including books on quantum computing, memory systems, deep learning, open science and the future of matter.
I ask: What is the most important question in science or metascience we should be seeking to understand at the moment?
We discuss his vision for what a metascience ecosystem could be, what progress could look like, and ideas for improving the culture of science and its social processes.
We imagine what an alien might think about our social processes and discuss failure audits, high-variance funding, whether organisations really fund ‘high risk’ projects if not many of them fail, and how we might measure this.
We discuss how these ideas might not work and might be wrong; the difficulty of (the lack of) language for newly forming fields; and how an interdisciplinary institute might work.
The possible importance of serendipity and agglomeration effects; what to do about attracting outsiders, and funding unusual ideas.
We touch on the stories of Einstein, Katalin Kariko (mRNA) and Doug Prasher (molecular biologist turned van driver) and what they might tell us.
We discuss how metascience can be treated as a research field and also as an entrepreneurial discipline.
“..."How good a use of the money is this, actually? Would it be better to repurpose that money into more conventional types of things or not?" It's difficult to know exactly how to do that kind of evaluation, but hopefully, metascientists in the future will in fact think very hard and very carefully about how to do those kinds of evaluation. So that's the metascience research discipline.
As an entrepreneurial discipline, somebody actually needs to go and build these things. For working scientists it's often remarkably difficult to do that, because it doesn't look like a conventional activity; this isn't science as normally construed. Something that I found really shocking-- you may be familiar with it, and hopefully many listeners may be familiar with it: the replication crisis in social psychology. Most famously, in 2015 there was a paper published in which 100 well-known experiments in social psychology were replicated. I think it was 36% of the significant findings that were found to replicate, and typically the effect size was roughly halved.
So this was not a great look for social psychology as a discipline, and it raised a lot of questions about what was going on. That story I just told is quite well known. What is much less well known is that, going back many decades, people had been making essentially the same set of methodological criticisms: talking about the file drawer effect, talking about p-hacking, talking about all these kinds of things which can lead to exactly this kind of failure. And there are some very good papers written-- I think the earliest I know is from the early sixties; certainly in the 1970s and 1980s you see these kinds of papers. They point out the problems, they point out the solutions. “Why did nothing happen?” "Well, because there's no entrepreneurial discipline which actually allows you to build out the institutions which need to be built out if anything is actually to change."
We discuss how decentralisation and new institutions may help, and the challenge funders face in wanting to wait until ideas become clearer.
We discuss the opportunity that developing nations such as Indonesia might have.
We chat about rationality and critical rationality.
Michael gives some insights into how AI art might be used and how we might never master certain languages, like the languages of early computing.
We end on some thoughts Michael might give his younger self:
The one thing I wish I'd understood much earlier is the extent to which there's an asymmetry in what you see, which is that you're always tempted not to make a jump because you see very clearly what you're giving up and you don't see very clearly what you're going to gain. So almost all of the interesting opportunities on the other side of that jump are opaque to you now. You have a very limited kind of vision into them. You can get around that a little bit by chatting with people who maybe are doing something similar, but it's so much more limited. And yet I know when reasoning about it, I want to treat them as if my views of the two are somehow parallel, but they're just not.
Available wherever you get podcasts. Video above or on YouTube and transcript below.
PODCAST INFO
Apple Podcasts: https://apple.co/3gJTSuo
Spotify: https://sptfy.com/benyeoh
Anchor: https://anchor.fm/benjamin-yeoh
Transcript: Michael Nielsen and Ben Yeoh (only lightly edited)
Ben
Hey everyone. I'm super excited to be speaking to Michael Nielsen. Michael is a scientist at the Astera Institute. He helped pioneer quantum computing and the modern open science movement. He is a leading thinker on metascience and how to improve science, in particular the social processes of science. His latest co-authored work is “A Vision of Metascience: An Engine of Improvement for the Social Processes of Science.” Michael, welcome.
Michael
Thank you so much for having me on the podcast, Ben.
Ben (00:33):
So first question, big question of science. What do you think is the most important question we should be seeking to understand around science or metascience today?
Michael (00:45):
Okay. Science and metascience are such large subjects that it's impossible to give a really comprehensive answer to that. I'll just stick with science itself. I think there's this really interesting transition happening in science at the moment. For the longest time, the way we made new objects in the world was, in some sense, by inspired tinkering. And over the last 20 or 30 years we've been going through a transition where we actually have a theory of the world that's pretty darn good, and we're starting to use that theory to inform how we build new objects. That's a little unclear, so what I mean is this: atom by atom, we are starting to do assembly.
We're not doing it ad hoc; we're actually able to design things from first principles. I think this is a really large change in technology. Over the next hundred or two hundred years, we're going to figure out how to do this kind of first-principles design, whereas before there were always just external objects lying around in the world. Maybe you discover copper or iron in the Iron Age and you make use of it, but there's no principled underlying theory. So I think that's actually a very exciting thing which is underlying a lot of changes in science over the last 20 or 30 years, and I expect it will be a really large force over the next, probably, centuries.
Ben (02:28):
You sketch with Kanjun a really dizzying vision of what a science ecosystem could be. And I understand your point is there's a lot of variety that we could expect to see in a really healthy ecosystem. Then right at the start of your essay or book, you have this interesting provocation around what an alien would think, when presumably an alien would understand the laws of gravity, understand the laws of physics. Although if they've time traveled, maybe there'd be something else on gravity. But you make the point that around the social processes of science, there might be some elements that an alien finds the same. Maybe something like a controlled trial: you would've thought that design would also stick in an alien culture. And some things an alien might find really very different. So when you talk about some of these first-principle building blocks, are you thinking of the principles that an alien would hold to be true? And in which case, which of those principles do you think we've discovered already?
Michael (03:33):
Yeah. Scientists certainly often obsess quite a bit about things like their citations and their number of publications and their h-index and all these other kinds of things. And it's pretty difficult to imagine that an alien scientist or an alien society would have rediscovered those. They don't seem like platonic elements of the world just out there waiting to be discovered. Although it also seems likely to me that alien societies might have some pretty strange constructs of their own that we would also laugh at. In terms of what's actually fixed, I would frame that question as a question about fundamental metascientific principles: are there particular ways in which it's good to organize the social processes of science to enable discovery? So an example might be, for instance, that in the 16th century Francis Bacon had a certain number of interesting ideas, and a couple stand out. One was that there should be extremely limited deference to authority.
At the time of course, there was still a great deal of deference to the church, to the state, to Aristotle, to the other great thinkers of antiquity. And Bacon said, "No, that's not right. Really, we should get away from that. In fact, nature is the final arbiter." If you do an experiment and it contradicts what Aristotle says, so much the worse for Aristotle. You don't simply defer just because the great man of 2,000 or 1,500 years earlier said so. There's a whole bunch of notions there. But this notion of deference to nature rather than to authority, and freedom of inquiry, those are examples of metascientific principles where I think it's reasonably likely that other alien civilizations, to use your framing, might well have discovered things quite similar to that, as kind of best practices for how to organize the process of science.
Ben (05:43):
And how well do you think we understand how we've made progress in science? Because there are essays out there saying, "Well, we don't really understand that much about how we've made progress. It's all been a little bit haphazard." But then some people say, "Well, no, actually we do understand some of these processes." Do you think we've understood very well the processes by which we've made progress? Or has it barely been touched, completely haphazard, and we don't really quite know what we've been doing?
Michael (06:15):
We certainly have lots of rules of thumb. Economists will tell you that total factor productivity is somehow strongly related to the practice of science. People try and write papers relating patents to productivity growth and things like this. Certainly, trained as a theoretical physicist, it's hard not to feel like we're still in the pre-Aristotelian phase of this. These are very gross ways of thinking; they just seem very coarse. We don't have anything like a detailed theory of how to design institutions to support discovery. But people are making little bits and pieces of progress, which is fun. I think of it as a proto-field, a proto-field now with a long history. You can find people very explicitly proposing this sort of investigation going back a century or so. And then you have people like Bacon, and in fact many others in the 16th and 17th century, who in some sense were doing what I would call metascience.
Ben (07:31):
So still the early days, which is very exciting. So how should we go about improving the culture of science or these social processes? Perhaps one way of asking that, because it's a big part of your book, is to pick up on your section of high-variance ideas: things that people or funders could support. So maybe I'll ask you, at least in that section, what do you think is the best idea, and what do you think is the worst idea? What's the one that you would just not do, and maybe no one else should either?
Michael (08:06):
So certainly the processes which we use now in universities, and more broadly in the research culture, a lot of those seem pretty arbitrary and ad hoc. If you think about the NIH panel system or things like this, in many cases those kinds of systems weren't designed after a lot of careful reflection and thought. They were just done in a very ad hoc, contingent manner by people who rushed to get things done, which is terrific. But when you don't have any process for actually upgrading those systems later on, you can get really stuck in quite a rut. And I'm far from being alone in this: a lot of people have considerable problems with the notion of essentially committee-based peer review of proposals for scientific projects.
You have five or 10 or sometimes more of your peers sitting in judgment of your proposal, and you get these kinds of averaging effects, where the thing that is pretty acceptable to everybody is the thing that gets funded. And the idea that is radically polarizing, where some people love it and some people hate it, is very, very difficult to get funded. That will tend to die under that kind of process. And that process wasn't designed to be the best. It's just something that made quite a bit of sense early on as something to try out, but it's now a little bit frozen. I've heard so many people refer to it as the gold standard. I don't know in what sense it's the gold standard; maybe the sense in which it's the thing more people complain about than anything else.
Ben (10:03):
Right. Okay. So high variance would perhaps weigh against it. All right, I'm going to spit out a few of these ideas and then we can maybe talk about why they don't happen, or which one is the worst or best, through that high-variance lens. I was very interested in: the failure audit, tenure insurance, an institute for traveling scientists, long/short prizes (and prizes in general), having an anti-portfolio, and an interdisciplinary institute. Out of those, maybe I'm just going to pick one first, and then you can dwell on which of those ideas you really hate or really like. I'm really interested in why science hasn't really got a failure audit built into it, because a lot of other industries have one. In some it's even mandated by regulation or law that you have to go and look: if your bridge failed, you'd look and examine why exactly that happened.
And actually, even in one of my domains, investing, you're always looking at your investing mistakes: what worked and what didn't, particularly what didn't. And yet, at least on the funding side, you haven't got that. So, like your high-variance point: do funders go looking back and ask, “Well, the committee all gave this an average score and we put it through. Was that a failure or not?” So that does raise the question of why something like a failure audit hasn't happened. You talk about some of the institutional bottlenecks that may explain it. So how good an idea do you think this is? Why hasn't it happened, and what should we do?
Michael (11:33):
Just to fill in some background: we propose a large number of different ideas, more just as grist for the mill than anything else, in this long essay with my collaborator, Kanjun Qiu. One of those ideas, and it's just a paragraph, is the idea of a failure audit. It's that, for example, if a funder claims that it's serious about high-risk, high-reward research, then they should actually evaluate what fraction of their projects fail. And if the failure rate is not high enough, there should be some kind of restructuring. Maybe the program officer gets fired, maybe the director's job is under threat: some kind of consequences, if they really claim to be serious about this kind of high-risk model. Whether that's a good idea or not is another question, but let's just assume for the sake of argument that it is.
So just a couple of points about that. One of many motivating factors for this was reading a report from the European Research Council where they're doing a retrospective analysis of what they funded. Throughout this report, they talk very assertively about how they're engaged in such high-risk work. But they also claim at the same time that 79% of the projects-- I think it's 79%, about 80% of the projects that they fund, are extremely successful. I don't know what definition of risk they're using. But when 80% of the things that you try work not just well but really well, and you're also trying to claim that you are really engaging in highly risky behavior, well, it seems to me like there's some kind of mismatch. I can't speak to why they do this, but it does seem a little bit confusing. This is certainly a common thing when I talk to individuals at many funders. They will talk a lot about wanting to encourage a lot of risk, but they're not assessing in any sensible way the extent to which they actually are buyers of such risk. And when that's the case, there's no real feedback mechanism encouraging them to move towards the behavior which they claim to want. So this is just a simple idea, and there are many variations of it which one could try, but the point is to have some kind of mechanism for aligning people's stated intentions and the actual outcomes.
Now, when I've talked to people at individual funders about this, about trying some variation of it, they're often fascinated by the idea. I've talked for hours to people at some funders about it, but they won't do it. And I think for a really quite natural reason, which is that they're nervous about having consequences in this kind of way. It's the most natural thing in the world to want to avoid those consequences. Of course, I don't like having my nose rubbed in my failures either. But it does leave this gap between what they think they're doing, or what they say they're doing, and what they're actually doing.
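As a rough illustration of the kind of check a failure audit might run (a sketch added here for illustration, with hypothetical numbers; neither the essay nor the conversation specifies this calculation): suppose a funder claims a genuinely high-risk posture, say each project should fail with probability 0.5, yet observes something like the ~80% success figure mentioned above.

```python
from math import comb

def prob_at_most(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p): the chance of seeing k or fewer
    failures among n projects if each truly fails with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical audit: the funder claims each project fails with
# probability 0.5, but only 21 of 100 funded projects failed.
print(prob_at_most(21, 100, 0.5))  # vanishingly small (~1e-8): the outcomes
                                   # are inconsistent with the stated risk appetite
```

The point of the sketch is only that the mismatch Michael describes is quantifiable: stated risk appetite implies a distribution of failures, and observed failure counts can be checked against it.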
Ben (15:01):
Maybe we need to pay them more to offset that, but with consequences. So you don't think it's because they actually want to see success and they're not interested in high risk? You think they are interested in high risk, but they just don't particularly want to be audited on it?
Michael (15:20):
At the individual level I'm quite certain many are interested in high risk. There is of course this issue around, “What exactly does one mean by this?” You talk to individuals about what specifically they're looking for, and it turns out that what they're looking for is actually quite different from what the scientists in their potential pool of fundees think is high risk. Having that kind of mismatch creates something of a communication problem. I live in San Francisco, in Silicon Valley, and I have some friends who work in startup and technology land. It's interesting there to talk about risk-seeking behavior on the part of the venture capitalists. Certainly I've chatted with friends and acquaintances who've been pitching some idea for a startup company, and they eventually realize that what they're pitching is not ambitious enough to get the attention of the top VCs. So there, basically, the good they're selling is not ambitious enough for the VCs. In the case of many of the funders, they're saying they want to purchase risk, but they're only buying safety, because eventually the scientists just start offering safe proposals.
Ben (16:50):
That's what's happening. There's something slightly akin to this in the investing world, one of my primary domains. If you claim you are a value investor and you actually define value, then when you're audited at the end of the year your own end investors will ask you, "How did you make your money?" Hopefully you made some. But if you didn't make it doing value, and you bought something else-- in investing you can buy growth and all sorts of other things-- well, they say you failed. “You might have made us money, so you might have had successful projects, but you didn't do what you said. And we can audit and check you.” And you see that check in very many different domains. This goes back to your VC example: VCs are expected to chase these hundred-x, thousand-x return types of businesses. Typically, a café business is not going to do that, even though you might have an okay return. So you wouldn't expect to see a café business in a typical VC portfolio, and if you're pitching one, you're not likely to get funded, because it doesn't match what they're looking for. But there is an audit on that, both a success and a failure audit. So I thought it was really interesting that science funding didn't seem to have that.
Michael (18:00):
Not to my knowledge. I don't think I've ever talked to a funder that said they really do a pointy version of this. Many of them do retrospectives; that's quite common. But they tend to be very soft in terms of the consequences, actually for an interesting reason, which is that those documents serve two purposes. They are evaluations for the funder, in terms of how they're looking at themselves, and you can view them potentially as self-improvement documents. But they also tend to be marketing documents as well. In particular, it may be a document which is being used to communicate with government and with the wider public. Then there's a very interesting situation where, to the extent it's a marketing document, they're trying to use it to raise money or to ensure the continued supply of money. That's obviously a difficult spot from which to be really brutally honest with yourself about your own failures, absent really strong controls to ensure that you are honest in that way.
Ben (19:04):
I guess it's hard to put into that language of risk, and you need something maybe simple. Like the person proposing the project has got to say, "Well, I honestly believe there's only a 4% chance of this project working." And you assess, "Well, I funded 10 projects at a 4% chance, which actually means probably all of them will fail. If I got one out at the end, that was really lucky." But it's probably quite hard for the originator of the project to honestly say, "Well, this has only a 4% chance of working," or something like that. Interestingly, I do a lot of analysis of early-stage drug development, and we actually know that before you enter phase one, you are talking around a 1% chance, give or take a little bit within the error. So you kind of know what the average is when you go into that funding without having to state it. If you get one in a hundred right, you're doing about average. But we have a language of risk there, which I don't see when I look at a lot of projects. Maybe that's because it's hard to do. But I guess the originator of the project or the funders should probably put a percentage chance on it, and if they funded something they should say, "Well, I honestly thought this had a 10% chance of working."
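As a quick check of the arithmetic in Ben's hypothetical (a sketch added for illustration, not part of the conversation): with 10 independent projects, each given an honest 4% chance, "probably all of them will fail" holds up, and a single success is a somewhat lucky, roughly one-in-three, outcome.

```python
# Hypothetical portfolio from the conversation: 10 independent projects,
# each with an honestly stated 4% chance of working.
p, n = 0.04, 10

p_all_fail = (1 - p) ** n        # ~0.665: more likely than not, every project fails
p_any_success = 1 - p_all_fail   # ~0.335: roughly a 1-in-3 chance of any success
expected_successes = n * p       # 0.4 expected successes across the portfolio

print(p_all_fail, p_any_success, expected_successes)
```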
Michael (20:12):
So in the conversation we're having, we are putting a lot of emphasis on risk and working and failure. I guess I just want to emphasize: that's a particular set of views you could have around this. Those views may or may not be right. We're, for the sake of the conversation, assuming that they are.
Ben (20:31):
Yeah, this might be completely wrong.
Michael (20:33):
It might be wrong. But once you have doubled down, or decided that you want to pursue a particular thesis, having mechanisms in place to ensure it's not just words but something that's actually being done is obviously a valuable thing. The one really large funder that seems to do this quite well is DARPA. I don't know of a really formal estimate of the failure rate of DARPA's programs, but people pretty close to the agency have estimated rates of around 80 to 85% failure, which is truly remarkable given the scale at which they operate. And yet, I think most people view DARPA as having been really quite an outstanding success.
Ben (21:25):
And that's one of the things I'd emphasize from reading your essay: you put out so many ideas, some of which you yourself think might not be any good or might not be correct. But the whole point is that there seem to be so many ideas out there which we haven't really tested at all, and you make the point that these might really be the early days for that. So I might just touch on a couple more ideas and then move on to exploring this idea of the funder as a detector. I was very interested in your interdisciplinary institute, because I sense a lot of universities or research institutes think of themselves as interdisciplinary, and maybe some universities do have that different type of scientist meeting. But the vision you gave of something truly interdisciplinary, I didn't see out there in the world. And when I read it I thought, "Well, that is actually surprising, given there's a lot of talk about how great it is to be interdisciplinary and how much you learn from far domains and near domains, from mixing people, and from agglomeration effects." But to your point, and you say this of a lot of these things, it hasn't really been done in the real world. So maybe assess the idea: do you think an interdisciplinary institute is actually a good idea? And what would you do if you thought it was?
Michael (22:42):
So again, this is just another-- it's sort of an amusing idea. It's simply to point out that in fact, programmatically, you can try and engineer serendipity. Say you've got 30 disciplines, and you simply hire three people at each of the 30-choose-2 intersections, whatever that is-- I can't remember, it's about 450-odd-- to work at the intersection of each pair of those disciplines. Most of those researchers are going to fail. A few of them, though, will be doing something that is not supported anywhere else in the world and that happens to have a lot of latent value, and they may pay for all the rest. So again, it's really a mechanism-design point of view, where you say, "Let's take seriously the idea that maybe we have a systematic problem with funding interdisciplinary work. Let's see how to do it in a scalable way, and design the mechanism to achieve that end." And then there's a metascientific question, which is, "Is the value created actually sufficient to justify the upfront investment?" We don't know the answer to that question. I think you'd probably need to do a lot of work to make a prediction about it, but it's at least plausible as an interesting way of going.
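For concreteness, here is the combinatorics being estimated from memory above (a sketch added for illustration; the exact figure isn't in the conversation):

```python
from math import comb

disciplines = 30
intersections = comb(disciplines, 2)  # "30 choose 2" = 435 distinct pairs
hires = 3 * intersections             # 3 researchers per pair = 1305 hires

print(intersections, hires)  # 435 1305
```

So "about 450-odd" is close: 435 pairwise intersections, and roughly 1,300 researchers if you hired three at each.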
Ben (24:06):
And you could test some aspects of it.
Michael (24:09):
You could certainly do it at a low level relatively easily. The standard way a lot of interdisciplinary work gets started is by people pitching their particular interdisciplinary work. I was involved in quantum computing beginning in the early 1990s. At that point it was very much a field that was seen as interdisciplinary: it was at the intersection of computer science and physics, and didn't really have a natural home in either place. And so a certain number of quantum computing people would go to universities and essentially pitch an interdisciplinary kind of project. So that's a bottom-up approach. This thing we've been talking about is much more top-down, where you're saying, "Let's seek to create work at the intersection of every possible pair of disciplines." It might be that the bottom-up approach is better, looking for things which are bubbling up. Again, that's something that actually needs to be tested. If you wanted to do something which didn't involve quite as many resources as funding all of those intersections, you could simply sample from that set, for example, and you'd get some interesting information.
Ben (25:19):
Yeah. Actually, it would probably be pretty interesting even with a smaller number of pairs.
Michael (25:22):
Even with 10 or 20 pairs, just randomly chosen. Think about, actually, a good example: much of the modern deep learning revolution has of course been enabled by GPUs. GPUs were created, basically-- my understanding is-- for video games, just because modern displays had gotten such high resolution that they needed dedicated chips to drive them. They're also very good at doing linear algebra in general. And so circa the latter part of the [inaudible 25:58], or whatever we call that decade, there was a relatively small number of people who were both familiar with AI or with neural networks, and also had some experience of GPU programming. And those people had an interesting competitive advantage in getting into neural nets. I just don't think that the intersection of video game programming and artificial intelligence is something you could have predicted was going to be a good spot to be at.
Ben (26:29):
Yeah. Hard to get that right top-down. And maybe that brings me to one of your last ones, at least in this section, which was the anti-portfolio. You gave some quite interesting examples in the essay. For instance, one of the key scientists behind mRNA pretty much didn't get funded within the public university system. And then the university later celebrated her and said, "Well, actually you should definitely have been funded more." And there are these other near misses or complete misses that we can see, sometimes a scientist now working as a driver rather than being in science. I had this overwhelming sense of, "Oh my gosh. How much have we missed out on?" So I'm not sure if the anti-portfolio would really help with that, but I thought it was quite an interesting idea, borrowed maybe from investing and VC: "Well, these were the mistakes; we could have invested and we didn't."
But then the next step being, "Well, why did we miss those, and why didn't we invest further?" Because with some of it, like mRNA, you could have called it: everyone thought, "We're not really ever going to get a drug out of this. Very low chance, so let's not really fund it." But to your earlier point, if you are funding high risk, this is exactly the kind of thing we should probably state: "Government or public funders supported this for a while precisely because it had such a low chance of success," while the people in the field were telling you, "If we make this work, it's going to have a really high impact." I just thought it was really interesting how you sketch that out, and we missed it. Do you think that's a systems problem? Because you can always have individual misses, and you make this point. But the way you sketch it out, it does seem to be more of a systems problem. How certain are you of that, and what do you think we should be doing about it?
Michael (28:12):
So one of the most famous stories that you learn as a young scientist is that of Albert Einstein stuck in the Swiss patent office, unable to get an academic job. It's told as a funny story: in this one year, 1905, he did plausibly four or five things that are Nobel Prize worthy, which is pretty good for a guy who couldn't get an academic position, including, I should say, reconceiving the notions of space, time, energy and mass, which is not bad for a side hustle. The thing that is omitted from this story is anything about what changed in the university system as a result. Did people say, "Oh, maybe we made a bit of a mistake here? Maybe this guy should have been able to get a good job?" Of course, after having done that he was quickly made a professor and had a long academic career. But as far as I know, there was no systematic postmortem done to ask, "Why did we miss this person? Why was he not offered jobs before that?" That's a spectacular example. You mentioned Katalin Kariko, one of the key scientists behind mRNA, who in fact lost her job, or was demoted, I guess, by the University of Pennsylvania. She had basically a lot of problems getting to the point where she could pursue that work, and was repeatedly turned down for NIH grants.
The University of Pennsylvania and the NIH will now happily take a lot of credit for her work. But as far as I know, they haven't done a serious postmortem internally. These are very spectacular, very relatable examples. But if you've worked in science, you will know a lot of examples like this, and there is then no systematic reform. There is no way of conducting a postmortem which actually leads to systemic change. Of course, any system at all is going to miss some people; that's fine. The question that I think needs to be asked is, "How do you change in response? Do you just accept that that's the price?" Maybe that's the right response to the Kariko example, but maybe it's not. Doug Prasher, the scientist who discovered green fluorescent protein, which later won the Nobel Prize-- at the time the Nobel was awarded for that work, he was working as a shuttle bus driver at a car dealership, which is fine work but probably not the best use of his talents. There are many such stories, and just no way for them to feed back and cause systemic change. That's the issue, not that mistakes are made.
Ben (31:20):
Yeah. And I worry that although science is meant to be a discipline where outsiders with good ideas can be proved right by the science, it's not as strong as we would believe. And you have all of this evidence, whether it's minorities or outsiders, or your CV doesn't come from a prestigious university, or your idea is just way outside the mainstream, where the system just doesn't seem to give it a look. So I'm not exactly sure, but I do sense that might be a problem. Then it's interesting, because you go on to make this analogy of the funder as a kind of detector system, looking for intellectual dark matter out there. And that model suggests, "Well, if it's out there, maybe there could be more ways of finding that type of dark matter." When I was thinking about that, I was reflecting that in some ways you're also incentivizing more dark matter to form, in ways which won't work in the current systems we have. How useful is that analogy for thinking, "Are funders really just looking for stuff which is out there, and if we change some of the system, could we find more of it?" Because to your point, that dark matter, like in the universe, may be so much bigger than what we can currently see.
Michael (32:40):
Okay. Let me just give a really practical example. A friend of mine, Adam Marblestone, has this notion of what he calls a focused research organization. A focused research organization is basically, think tens of millions of dollars, an independent organization which has one very specific goal: it will create a particular type of tool or a particular type of data set. So, for example, Cultivarium is an example of a focused research organization, and what Cultivarium is trying to do is develop synthetic biology for non-model organisms. A lot of the work that's done in synthetic biology at the moment is typically done using E. coli as the particular organism which is modified.
So we have great tools for doing it in E. coli, but we don't have great tools for doing synthetic biology in a lot of other organisms. And so what they're trying to do at Cultivarium is this: they have a very specific list of tools that they want to create, so that they can be used in certain other types of organism. So this is a very specific kind of crossover between engineering and science. In many ways, this is something that is extremely likely to succeed, because they just have kind of a checklist: "We need to do it here, here, here, here, and here. Here are some of the bottlenecks. Here's what the outputs are going to be. We're going to exist for a certain fixed period of time. It's going to cost--" I have no idea how much it's going to cost, but let's say 30 million dollars or something like that.
This is a type of scientific problem that has certainly been solved in the past, but it's always been solved in a very bespoke fashion. Something like the Large Hadron Collider, the Human Genome Project, or LIGO, the gravitational wave detector: these are all examples of projects where you had a very specific outcome very clearly in mind, and a very specific process intended to get you to that goal. You just needed a large enough organization and the right set of resources to do it. But those were funded in a bespoke fashion; basically, people had to go off and make an individual case. The clever thing about the focused research organizations is that they're trying to do it in a scalable way. They've created this container, Convergent Research, which seeks out people who have ideas for things which fit this general template.
And it just turns out, from what they tell me, that they talk to a large number of scientists about this, and most of the scientists don't immediately have anything to say. They're not used to this kind of container; there was no funding vehicle for it previously. So if they had an idea which would be a good match for a focused research organization, it's not something they've developed, because there was no avenue for taking it further. It was a form of intellectual dark matter, this very nascent thing, and you got on with doing other things. You got your NIH R01 grant, or whatever was available, and you did that kind of work. So what they're essentially doing with the focused research organizations is building a kind of detector to search out this intellectual dark matter.
Hopefully people have many ideas, very nascent in most cases, for focused research organizations that might actually be a much better use of their talents than doing more conventional projects. So that's just to sketch a very specific example of a person who identified a particular type of knowledge that, at the moment, most funders have nothing they can do with. They're not set up for it at all. But he created this template, and now they're systematically searching it out. I like to think of it as almost an antenna for eliciting this kind of information. Most people of course say, "Oh, I don't have any ideas like that." But a few people say, "Oh, actually I have this very half-baked idea." And then they may go off for three months or six months, think about it a lot more, and then come back with a really solid proposal. It's very early days for the focused research organizations; I think the first two were funded last year. So we'll know whether it's a good model in five or 10 years. But certainly I think it's very interesting as an example of expanding the range of things which funders look to fund.
Ben (37:22):
Yeah, that makes a lot of sense to me. So correct me if I'm wrong, but I think in the essay you make two points about metascience as a field. First, that it should be treated as a research field, and I could see that, related to philosophy of science, history of science and so on; this particular element of metascience could definitely be seen as a research field. And then your other point is that it should be seen as an entrepreneurial discipline: you actually have to trial things out, scale them, try these new social processes out. I'm interested in both, but that entrepreneurial part seems particularly challenging: to say, "Well, we've got this new field, and now we've got to try a lot of things and then scale them out, not just at the experimental level but at the meta level." Do you think this is the only way we should pursue it? How strongly do you feel it is an entrepreneurial field, and what is it about it that makes you think, "Okay, this is what we've got to do. We've got to trial these small things, there are loads of ideas out there we don't know about, and then we scale"? That's kind of what entrepreneurs do, and therefore you're saying it's an entrepreneurial field. Have I got that right, in terms of what you're saying and arguing?
Michael (38:35):
Well, there are certainly at least two components-- actually, I would say three; we might come back to the third. One is just studying the processes people use: how well do they work, how can they be improved, what's not working at all, these kinds of questions. So, taking an evaluative approach. I mentioned, for example, the focused research organizations before. At some point in the future-- it's going to take a little while-- it's going to be very good to do an evaluation of those, and to start to think about questions like, "How good a use of the money is this, actually? Would it be better to repurpose that money into more conventional types of things or not?" It's difficult to know exactly how to do that kind of evaluation, but hopefully, metascientists in the future will in fact think very hard and very carefully about how to do those kinds of evaluation. So that's the metascience research discipline.
As an entrepreneurial discipline, somebody actually needs to go and build these things. For working scientists it's often remarkably difficult to do that, because it doesn't look like a conventional activity; this isn't science as normally construed. Something that I found really shocking-- you may be familiar with it, and hopefully many listeners may be familiar with it: the replication crisis in social psychology. Most famously, in 2015 there was a paper published in which 100 well-known experiments in social psychology were replicated. I think it was 36% of the significant findings that were found to replicate, and typically the effect size was roughly halved.
So this was not a great look for social psychology as a discipline, and it raised a lot of questions about what was going on. That story I just told is quite well known. What is much less well known is that, going back many decades, people had been making essentially the same set of methodological criticisms: talking about the file drawer effect, talking about p-hacking, talking about all these kinds of things which can lead to exactly this kind of failure. And there are some very good papers written-- I think the earliest I know is from the early sixties; certainly in the 1970s and 1980s you see these kinds of papers. They point out the problems, they point out the solutions. “Why did nothing happen?” "Well, because there's no entrepreneurial discipline which actually allows you to build out the institutions which need to be built out if anything is actually to change."
So it's just academics doing what academics do: studying a problem, figuring out what needs to happen, maybe. But then they don't actually have the kind of infrastructure necessary to go and really make the changes. It turns out there's just a lot of institution building required. Fortunately, that has happened in the modern era, but it required a tremendous amount of energy and foresight and intelligence, and also, I should say, funding from some unusual sources. A lot of the money for this work actually came from the Arnold Foundation, run by the hedge fund operator John Arnold.
Ben (42:18):
It came from this different source of funding, which I think is interesting. It struck me that these institution builders may well not be working scientists, but they might well be metascientists; the disciplines needed for institution building are potentially a little bit different. Actually, you see that in some other slow-to-change, or slower-to-change, industries. So for instance, law firms have traditionally been run by lawyers, but now they're slowly evolving. They realize that, "You know what? Being really good at law does not necessarily mean you're good at running your own organization." And they've started to bring in people who might be better at that. Accountants are the same: they might be very good at accounts, but is that the same as running an organization, small or large? And it struck me that science has a similar problem. It should maybe be attracting not just those who've studied physics, but people who could have studied anything and are interested in this, to help build those institutions. And that's the entrepreneurial part. Do you think that can happen, with people who understand just some of the science building those organizations as a whole different discipline? Or does it have to come from scientists themselves?
Michael (43:33):
I think, just looking at examples where this has happened successfully in the past, there are a certain number of people largely from outside of science who have contributed successfully. But with that said, most of the examples begin with a scientist who is very close to the particular problems, understands them well, feels them very acutely, has an existing network, and begins to develop solutions from there. A classic pattern is that they need to leave their home institution; sometimes they need to leave science entirely. They often end up seeking very unconventional sources of funds. Actually, I should say the default thing that happens under those circumstances is that it just fails, because there is no support mechanism; it's done in this very strange, very bespoke fashion.
Brian Nosek, who was the person behind some of this work on the replication crisis, started the Center for Open Science as an entrepreneurial kind of organization to build out a lot of the required infrastructure. He took a leave of absence from his university position, partially just because the kinds of people they needed to hire were quite difficult to hire conventionally in an academic environment. So that's an example where it was done successfully. But when I talk to people who have ideas like this, they don't have any model for how to do it successfully.
Ben (45:17):
Yeah. I was talking to innovation historian Anton Howes, and he has this idea that back in history, you essentially get people who were disgruntled but couldn't let it go, and from that you get all of these kinds of things happening. He's got a lot of historic examples, more of individual inventor-scientists forming these things, but scaled slightly differently it does seem to apply today. And you think about homes for other types of thinkers: I think of the Santa Fe Institute on complexity. There was no home for these, I guess you'd call them slightly maverick, scientists and thinkers, although maybe not today. They couldn't find homes in their home institutions, and so homes had to be built for them.
The other thread of work I see through the essay and your previous work, which I feel is quite important although it doesn't necessarily touch on this exactly, is decentralization, and also open science. Entrepreneurial stuff seems to work better decentralized. We can talk about how much you can do top-down versus bottom-up, like we said on pairing disciplines. And then this idea of open science: even going back in history, how much you could build on what was public knowledge, versus whether it was shared or kept as patents. The idea is that if you are thinking of just the glory of science, as opposed to any profit motive, then building on other people's knowledge, or knowledge out there, might be quite useful for speeding things up; that's a kind of open science idea. How important do you think those two threads are to expanding the field of metascience, particularly the entrepreneurial part? Or are they really separate threads which may or may not happen, and don't have to intersect with how metascience develops?
Michael (47:00):
I think they are sort of separate. The point about decentralization is really just that you want change to be able to come from anywhere. In particular, you don't want gatekeepers who are able to inhibit change. With the ideas of science, we have a pretty good balance there. There are certainly a large number of Nobel Prizes awarded to people who, back when they were doing the original work, were the outsider, the grad student or whatever, and maybe the famous expert was saying, "No, this can't be done." Certainly there are examples where science is not so receptive to outside ideas, but overall it has a really remarkable track record of accepting that kind of thing. An example given in the essay which I rather like is the determination of the structure of the DNA molecule, done by Watson, Crick, Franklin, and arguably her student Gosling as well.
They were very much scrappy outsiders. They were at well-known institutions, but they were almost completely unknown. The other person in this race at the time was Linus Pauling, who was the most famous chemist in the world. He was a Nobel Prize winner-- I don't know whether he'd won his second Nobel Prize by that point, but he'd certainly won one, and it wasn't just any Nobel Prize; it was for a truly spectacular piece of work. Pauling actually announced that he'd found the structure first, before them, but he was wrong. The remarkable thing is that Pauling accepted this immediately. They pointed out-- I think it was Watson who pointed out to him-- the error that he'd made, and then showed Pauling the structure that they'd found, and Pauling just looked at it and realized that he'd goofed.
And this kind of situation, where somebody with a good idea can essentially emerge almost immediately victorious over the incumbent, is obviously a very healthy thing. It's possible in science at the level of ideas; it is much harder at the level of institutions. If you have a much better idea for how the NIH should be dispersing its funding: good luck. The only way you can be taken seriously is if you are already in power. If you're at the level of the director of the NIH, or you have a Nobel Prize or something like that, sure, you can get taken seriously, maybe. But you're not going to do it as a grad student like Watson or Crick. So that ability to do decentralized upgrades to the social processes and institutions of science is something we just have no mechanism for at the moment. And I think that's the reason for much of what I see as sclerosis.
Ben (50:10):
And so is your solution new institutions, or can you do it with new arms of old institutions?
Michael (50:16):
A few things. I mean, there are some patterns that work decentralized already; I won't talk about them now. One thing that would help a tremendous amount is a much more serious discipline of metascience which is able to do evaluations that are dispositive. I mentioned before this example of the replication crisis in social psychology. That is an example where an outsider was actually able to change an entire discipline, because the strength of the evidence was so good. But it was also rather a peculiar situation: in the key paper-- arguably the key paper-- that they wrote, 270 authors replicated a hundred papers over multiple years of work. So it wasn't just "get a data source, write some Python scripts, generate a few nice graphs."
This was a very serious kind of project, where they had asked what level of evidence would be so overwhelmingly convincing that even people who hated the conclusion would be forced to accept it-- or really, to change their minds; that's the relevant question. So even somebody who was initially hostile would change their mind. At the moment we just don't have very strong techniques for doing that. This is an example where it was done, but what you would like is many more such examples, and in particular a lot of people working on developing techniques which are strong enough to do that. I think there are a few people doing that. Pierre Azoulay, an economist at MIT, has I think also done some very strong work. I don't think it quite meets this bar, but it's getting close to the point where the evidence is so strong for certain processes that you might actually start to think, "Oh, I was wrong before in what I believed was the right way to do things." So that's the bar. We introduce this term, decisive metascientific result, meaning a result which is so compelling that it would actually cause somebody initially skeptical to change their mind.
Ben (52:46):
Skeptics changing their minds. That's a good hurdle to cross. So I'm going to see if I can pin you down on a metascience question or idea. What would be the one thing that you would bet on most, or most want to change, in metascience? Or what experiment do you think would be most valuable to run? Or we could do it like the high-variance question: what do you think is the best idea in metascience, and what do you think is the worst idea? And maybe we should go from the worst, because then we'll be... Anyway, best and worst?
Michael (53:17):
Going to the bottom of the barrel, you can always go further down. I'm certainly very fond of the idea of people thinking much more explicitly about increasing the rate at which new fields are founded. Actually, some funders do a certain amount of this work. They think explicitly in terms of trying to support fields in their relatively early days, but never that early. Something funny happens at the beginning of new fields very often: it's often very hard for people to get support for their work. I was involved in quantum computing in the relatively early days; I started working on it in 1992. Even things like "What journal do you publish in?" were surprisingly hard questions. If you submit to a lot of journals that you might think are relevant, they'll just say, "I'm sorry, what is this?"
So it requires some editors to actually be a little bit friendly. If you try and arrange something like a scientific meeting, people are like, "What is this?" So there are all these very interesting barriers. But at the same time, if you look retrospectively at the papers which were being published, they're enormously important-- not all of them, but many of those papers are incredibly important. So there's this very interesting mismatch, where the work being done is often of incredibly high value relative to the resources being put into it, but the barriers are much higher than for much more conventional work. I've used the example of quantum computing because I saw it very firsthand, but from reading histories and talking to people, it seems to be quite a common feature across many fields.
There's often a lot of really important low-hanging fruit in the early days, and yet, surprisingly often, it's very difficult for pioneers to obtain even extremely minimal resources. So certainly I think there's a lot of room for funders to think about this question of how we can accelerate the rate at which new fields are being produced. One of the things they can do is to think much more seriously about the question, "What are the very early signals which currently we're not able to detect at all?" If you have a really good story, a really compelling account of how this is all going to play out and so on, that's actually a sign that maybe you're a little bit later on in the whole process. Very few fields start that way. Alan Turing, inventing computing in the 1930s, certainly didn't anticipate video games or anything like that; he was actually trying to solve a logic problem. So many fields are born in very strange, almost entirely illegible ways. This is one of the reasons why funders very often have trouble funding a lot of that work. But it's also an opportunity for them. So that's something I would love to see done.
Ben (56:41):
I can definitely see new fields. I've seen it a tiny bit, from afar, in science, but actually more in the arts, when you're talking about new ways of working. The interesting thing I've seen in the very early days-- I would call it something that's about a quarter baked. So the person is working on something else as their main thing, and they've got this quarter-baked idea, and typically almost everyone in the world, or in that room, won't understand what the person's talking about. It's quarter baked, so they don't even have the language to describe it. Quantum computing is quite a good example: you have to invent the terms and the language, which are derived from other things but form their own unique thing. Yet what I see, at least in the art world-- and I think tangentially I see it in some science as well-- is that this quarter-baked idea is part obsession.
You still have to do other things, because you've got to do other things, but it nags at the person and it doesn't go away at the back of their mind. And then the successful ones I see somehow get a piece of, usually, chance funding or time or something, and they work it into enough of a language that someone else can potentially get it. And someone with status or something says, "You know what? This is worth a bet. I can barely understand it, but I think I understand enough of it to know that it's true as opposed to completely wacky." You see that more easily in art, and I guess the stakes are lower in art. Then if it develops, it develops into a language. You see this in art: they develop a whole other language, and people go, "Okay, that is a compelling vision," which in its very early days everyone thought was crazy, or couldn't even understand, or maybe thought wasn't art. Sometimes you could see it was just too early for its time. So I give a lot of credence to this quarter-baked or fifth-baked idea thing, where they're trying to express something in a language that you can't quite understand. And I can completely see it from a funder's point of view: if they can't understand it, how are they going to fund it? Except that maybe that's the signal, that they're really obsessed about it. You can't understand it, but maybe they're onto something and they're trying to invent something new. There seem to be tentative signs around it, still a very low probability of success, but these are the kinds of meta-science signals which I think could be explored.
Michael (58:58):
I think the difficulty there, the challenge for a funder, is that the natural tendency is to want to wait and see whether or not things become clearer. But of course they don't necessarily become clearer, because if the person is not being supported at all, the demands of everyday life mean that they will mostly do other things. So that's really quite a significant barrier. To your point about sponsors: the English physicist David Deutsch wrote what is arguably the first really serious paper, or one of the first serious papers, about quantum computing. It was published in 1985, communicated to the journal by Roger Penrose, who has since won the Nobel Prize. My understanding is that Penrose was actually very skeptical of the paper, but in a friendly enough fashion to be willing to have an opinion.
And really, the community of physicists had almost no opinion at all for 15 years. Even in the late nineties, the most common conversation I would have with other physicists about quantum computing was, "Is this physics at all?" There was no interest, just, "Is this physics at all?" I got that question from many people who now work full-time on quantum computing. But I think it's really to Penrose's credit, and to your point, it's an example of how having this kind of friendly, maybe somewhat skeptical, but ultimately supportive sponsor can be very helpful.
Ben (01:00:42):
Great. There are two other things in the essay which you sort of look at askance which I'm quite interested in. One is essentially science in non-European, non-American types of institutions; I guess India, China, maybe Russia to some degree, which you only fleetingly talk about, but I was quite intrigued by. Do you think there are any particular lessons from that? There's a whole other kind of sibling ecosystem. Is there anything we can learn from it, or do you think it will go on a different track, and should it go on a different track rather than converge?
Michael (01:01:19):
Well, you will know from the essay that we certainly hope it will go on a different track, or at least that there won't just be mindless duplication of the existing systems. There's obviously a large research ecosystem in the United States, in the UK, in Europe and many other countries. But countries that have a lot of scale in terms of population and whose economies are growing very rapidly are in a different position. China and India are the two most obvious examples, but you also have places like Brazil.
Well, there are many other places, Indonesia for example, which match this kind of description. They have a really interesting opportunity. Historically, because their economies have been relatively small, small relative to their population at least, they have tended not to put much of their GDP into developing a research ecosystem. Now they're at a point where they are in fact very rapidly developing that ecosystem. And the question is, "Do they do the same kinds of things? Do they duplicate the NIH? Do they duplicate UKRI? Or do they try and do something more adventurous?" I certainly hope that they'll take the opportunity they have to study these systems, identify what they think are some shortcomings, and then maybe make some big bets about doing things in somewhat different ways.
They can probably, to some extent, have their cake and eat it too. Nothing says they can't spend 70% of their research budget in a way that looks relatively similar to the UK or the United States, but then also have a big chunk which is spent in very unconventional ways; a way of saying, "Well, maybe we can actually do this much, much better." As far as I know this is not happening; I don't know of any large-scale initiative to do it. What is a little bit encouraging is that a lot of the innovation you do see actually comes from small countries that have just decided, "Let's try a little bit of a random experiment. We're not going to beat the United States on scale; maybe we can do it in some other way." Some of the foundations in Denmark do very interesting experiments. The New Zealand Health Research Council did, I think, the world's first large-scale experiment with randomized funding, or lotteries, where you just give some of the money to applicants chosen at random. That kind of innovation I think is fairly natural in places which are already somewhat peripheral. But I certainly hope that India and China will do some experiments in that way. They have this opportunity.
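The lottery mechanism Nielsen mentions is simple enough to sketch in a few lines. Here is a minimal, hypothetical version (the eligibility test, grant size, budget, and seed are all illustrative; the actual New Zealand Health Research Council scheme differs in its details):

```python
import random

def funding_lottery(applications, budget, grant_size, is_eligible, seed=None):
    """Screen applications for basic eligibility, then draw winners at random."""
    rng = random.Random(seed)
    eligible = [app for app in applications if is_eligible(app)]
    n_grants = min(budget // grant_size, len(eligible))
    return rng.sample(eligible, n_grants)

# Hypothetical example: a $300k pot funds three $100k grants, drawn at
# random from the pool of applications that pass a basic quality screen.
apps = [{"id": i, "meets_criteria": i % 2 == 0} for i in range(10)]
winners = funding_lottery(apps, budget=300_000, grant_size=100_000,
                          is_eligible=lambda app: app["meets_criteria"], seed=1)
print(sorted(w["id"] for w in winners))
```

The design choice worth noticing is that reviewer judgment is confined to a pass/fail screen; beyond that threshold, chance replaces ranking, which removes the incentive to write maximally legible, conservative proposals.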
Ben (01:04:13):
And I think about these mid-size countries. So Denmark's already well on its way; Singapore maybe. But take places like South Korea; I think about their social housing as an example. They didn't bother with wired phones, the joke being that those are stock phones, and went immediately to mobile. They skipped all of that infrastructure because it was a generation of technology they didn't need anymore. I simplify the story a little. And I wonder whether something similar might happen in those countries which are already there, and, like you say, there's a variety. The other thing you look at askance is Hume's is-ought: you mention that all of this meta-science does not tell society, or anyone, what should be studied; whether that should be climate, physics, quantum computing, how the economy is run, or anything like that. Do you have a sense of where we are on that balance? Do you have any projects where you think, "Well, these are really important science questions"? Do you think the mechanisms by which science currently decides this are any good? What should we be exploring here? Or is this something far enough away from meta-science that it should leave well alone and just stick to developing its own field?
Michael (01:05:34):
So I guess my-- This is partially a personality thing. I certainly have very strong opinions about the way the world ought to be, but I also feel that's not my personal comparative advantage professionally. It's more of a personal thing, something to chat about with friends and to have opinions about as a civic actor. A very concrete example: in the United States, for somewhat arbitrary historic reasons, there are several big funders. There's the National Institutes of Health (NIH), which does biomedical research, typically with a human focus. There's the National Science Foundation (NSF), which does basic research. There's DARPA, which does defense-oriented work. There's the Department of Energy, which does a lot of high-energy physics. The way those organizations' budgets change is in fact determined largely at the political level.
So the NIH, for example, has actually grown consistently at a rate roughly 1% per annum faster than the NSF, if you look over a long enough time period. And I think this is most likely a reflection of politics and political constituencies. Taxpayers are, perhaps understandably, rather more interested in research which seems like it might be connected with improving their health than in the more speculative basic research which the NSF tends to fund. That might be a bad choice, and there are certainly some 'is' questions which can be asked. For example: what fraction of the extension or change in human health span has actually been due to discoveries made at the NSF versus discoveries made at the NIH?
It's quite plausible that that's a reasonable research question, or might be turned into one with a lot more thought than I've just put into it. And you can imagine trying to disaggregate it in a variety of ways. That might then feed into decisions at the political or the values level, but I'm not going to try and wade into that. I think you want to develop good techniques which can be used to answer that kind of question. But then how one reasons about it afterwards is an entirely separate question, and to some extent I want it to remain separate. You might decide that it's tremendously important to have a very strong defense research establishment, in which case you would perhaps be strongly in favor of increasing DARPA's budget. But perhaps you're not, and that's not a question that can be settled by studying the way the world is. It's a question where you need to bring some of your own values. But to the extent that you can decouple those two things, I think both sides actually benefit strongly.
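That 1% differential is worth making concrete, because it compounds. A back-of-the-envelope illustration, assuming hypothetical equal starting budgets rather than actual NIH or NSF figures:

```python
# How a 1% annual growth differential compounds over time.
# Hypothetical equal starting budgets; not actual NIH/NSF figures.
for years in (10, 30, 50):
    ratio = 1.01 ** years
    print(f"After {years} years, the faster-growing budget is {ratio:.2f}x the other")
# After 10 years: 1.10x; after 30 years: 1.35x; after 50 years: 1.64x
```

Over the multi-decade life of a funding agency, even a small persistent political advantage ends up dominating the relative scale of the two agencies.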
Ben (01:08:47):
Well, maybe delving into that just a little bit more. The effective altruism movement, or EA, does have a strong sense of this. There's an interesting part of the EA movement which is very concerned with progress and with new institutions, which is an interesting overlap. You had some really interesting initial reflections on EA in a blog; EA by its nature solicits criticism, so you thought about that. You had conversations with some EAs, and they're obviously worried about existential risk, pandemics, biosecurity, and how you can think about utility and doing good. Having gone through all of that, which is already on the record, have you changed your mind at all about EA and how they think? Or have you arrived at roughly the same place, still with an open mind, but thinking, "Well, they have a way of thinking which is good"?
Michael (01:09:50):
Actually, I'm not sure I'm quite understanding the question, Ben. Can you...?
Ben (01:09:54):
Yeah. I guess, having had all of these conversations around effective altruism and what they're thinking: do you think that is a good way of thinking about the world, and has your view changed since your blog and those conversations on EA?
Michael (01:10:04):
Okay. So I think the connection you're drawing to your previous question is that, in some sense, they have a set of values which they're using to drive their giving in a whole lot of ways.
Ben (01:10:15):
And essentially they've decided this is how things ought to be. So that's one worldview. I'm a little bit too old; EA hadn't been invented when I was at university, so I've come to it later, and it's like, "Oh, I find that quite intriguing. It's not my way." And obviously you've done some work on it. So I was intrigued, partly because that's the connection: yes, they have a way, and you've done some thinking which might have changed. So the question is, "Does that EA way of doing it resonate, at least in part?" It's a way of thinking about the world, and it's a new movement as well; utilitarians were there before, of course, and you had the various other things. But I find it interesting because it's a new movement about how to think of the world, and at least some of them have quite strong views on how to do this: maximizing good, or maximizing utility.
Michael (01:11:04):
Yeah. It's a really interesting question. They are using their values to drive their giving. I suppose you can break your question down into two pieces. One is, "Are they just mistaken about certain facts about the way the world is?" And actually, I probably think they are. But then there's also the question, "How do I align with them on values?" And it just so happens that, for personal reasons, I don't particularly align with them on values. I like many EAs a great deal; I have some close EA friends, but it's not for me. There's a big, long, personal conversation to be had about why that's the case.
I don't know that it's necessarily so interesting why I have the particular set of values I have. That's of interest to my close friends and family, and probably not to many other people. The question you might find more interesting is, "How do I think they're mistaken about the way the world is?" Certainly, with EA as actually practiced, when they're oriented towards research, I think they don't appreciate nearly enough the value of illegible, very early stage work. Most of the things which they seem strongly oriented to support are very big, very legible goals. The one that people perhaps talk about the most and think about the most is AI safety.
It's a big scary monster which is very easy to describe and to think about; it is a big goal. But historically, a lot of the most significant work on progress has not been legible in advance. I mentioned Turing before as an example: solving a logic problem, he didn't realize it was going to become maybe the most important industry of the 20th and 21st centuries. He wasn't working towards a big goal. And this is such a common pattern across research. I tend to think that the EA organizations systematically undervalue that. When I talk to EA friends, they're perfectly aware of this. I don't really understand why they're not more indexed on it, though.
Ben (01:14:05):
Yes, it's hard to... I guess there's the corporate truism that what you can't measure doesn't get managed. But there's an aphorism above that: not everything that can be counted counts, and not everything that counts can be counted. If you only try to do the things that you can measure, you actually go even more wrong, because of those uncounted things.
Michael (01:14:36):
I love the book "Seeing Like a State" by James Scott, where he talks about the way in which what he called the high modernists have often caused problems when they're in governance. They will adopt some goal as being good for the locals in some part of the country they're governing, but they're ignorant of so much local knowledge which is actually important to the functioning of those systems. And so, with completely good intentions, they sometimes make things go very badly backwards. I think what you describe is the government version of this: figure out what you can measure, use it to manage, and then sometimes be very surprised when it's in fact a disaster.
Ben (01:15:22):
Some of the disasters in international aid can be traced back to those roots. I guess I partly ask because I see EAs building new institutions and new organizations, and some of them will be quirky, and that's probably a good thing.
Michael (01:15:38):
That I'm totally in favor of. Actually sort of...
Ben (01:15:42):
But it's also separate. In that sense it almost doesn't matter that they may have got their version of the world wrong, because they're a new organization doing things in a new way, and therefore will hopefully discover something in that dark intellectual matter, even if it's not the thing we thought was there.
Michael (01:15:59):
Yeah. I think there's an interesting question about scale: what fraction of all philanthropic giving do you want to be EA? The interesting thing, to my mind anyway, is that there's not really a natural regulator for that. Nothing sets the scale, or at least nothing really strongly sets the scale. It's just a question of the extent to which these ideas become fashionable. So maybe it ends up as 50% of all philanthropic giving; maybe it ends up as 5%. I don't know what it is at the moment; I think it's maybe on the order of 1% or something, so it's actually pretty tiny. So you might say, "Well, okay, if it's tiny it's fine for it to grow." But if there's nothing obviously stopping the growth, and there will certainly be many internal drivers, lots of people whose careers and self-image are now bound up with it, that doesn't seem so great. Of course, this is the story of so many things which don't have a natural mechanism limiting their scale.
Ben (01:17:15):
Yeah. And you don't have the tradeoff audit. You see this today in terms of, "Well, how much should we be working on climate versus deep poverty versus pandemics?" There's no necessarily natural cap on working on any of those, so it's really tricky.
Michael (01:17:32):
Whenever it's politics that's deciding, a natural sort of oligarchy tends to form. You would think that if you had two pretty promising approaches to solving some problem, and one of them had, let's say, 90% of the funding and the other had 10%, and you did a serious evaluation and concluded the 90%-funded one was only slightly more likely to be successful, you'd get a rebalancing. But that's often not what happens. What really matters is politics, and many more of the people on the 90%-funded approach are in a position to influence future flows of capital. So it actually gets even more lopsided. It's just a natural way in which oligarchies form, and you see that pattern absolutely everywhere. But it's such a problem in philanthropic funding, this kind of rich-get-richer effect.
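That lopsiding dynamic can be illustrated with a toy simulation; this is a minimal sketch, not a model of any real funder. Each new grant goes to an approach with probability proportional to a power of its current funding share, with an exponent above 1 standing in for the extra political influence that incumbency buys. All numbers are hypothetical:

```python
import random

# Toy "rich get richer" funding model (illustrative only).
# Two rival approaches start at a 90/10 split. Each new grant goes to an
# approach with probability proportional to (funding share) ** ALPHA.
# ALPHA > 1 models incumbents' disproportionate influence over future capital.
random.seed(42)
ALPHA = 1.5
funding = [90.0, 10.0]
for _ in range(10_000):
    shares = [f / sum(funding) for f in funding]
    weights = [s ** ALPHA for s in shares]
    winner = 0 if random.random() < weights[0] / sum(weights) else 1
    funding[winner] += 1.0
print(f"Final split: {funding[0] / sum(funding):.0%} / {funding[1] / sum(funding):.0%}")
# With ALPHA = 1.5 the split drifts well past 90/10 towards monopoly,
# even though nothing about the approaches' underlying merit has changed.
```

With the exponent set to exactly 1 the expected split stays put; it is the super-linear influence term that produces the lopsiding described above.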
Ben (01:18:38):
Great. Okay. Well, let's wrap up with a couple of questions and then your current projects and advice. So maybe just a couple of quick underrated/overrated things first. Rogue AI risk, AI safety: do you think underrated or overrated?
Michael (01:18:53):
I have no idea.
Ben (01:18:55):
No idea. We'll pass. Okay. Critical rationality or rationality as a movement?
Michael (01:19:03):
Well, I think those two things are used to mean very different things actually.
Ben (01:19:07):
Okay. Yeah. That's probably true.
Michael (01:19:08):
Critical rationality I tend to associate with Karl Popper and with David Deutsch and a few others. Rationality, in many of its modern incarnations, is a slightly different branch of the tree.
Ben (01:19:20):
Expected value, probability and that type of thing.
Michael (01:19:21):
Yeah, exactly. And EA actually for that matter which is a different group of people completely. Just having clarified the terms, what was the...
Ben (01:19:32):
Let's go through critical rationality because we covered...
Michael (01:19:34):
I think probably underrated.
Ben (01:19:37):
Yeah. Maybe that's part of a...
Michael (01:19:39):
It's funny I can't read Popper-- That's not quite true. I just don't respond to him. But I still think underrated.
Ben (01:19:51):
I found Popper, and actually Deutsch, quite hard. I think I get it, but I'm not sure I do. And I speak to other people who think they get it; I'm not sure they get it either, but maybe that's where it is. So I'd probably say mildly underrated too, but I'm very uncertain, because maybe I just completely don't understand. One more on this, although it could be quite a long answer; you can give a short one if you'd like. Memory systems? Obviously you've done a huge amount of work on that. Maybe I'll split them, because you've got the card-based spaced repetition systems and then you've got memory palaces. I get the overall sense that you probably think memory is more important than people think, so overall underrated. That's my impression from you, but I'm not entirely sure, because you've written a lot on it. Is that your view, and what should we know about memory systems?
Michael (01:20:48):
So certainly underrated. It's a combination of, I guess, technology and science that really points out what people can do with very minimal effort... Actually, it feels like one of those products that promises you the world with no effort. You know, "Lose 25 pounds in three weeks while eating only hamburgers."
Ben (01:21:16):
Just by saying this mantra three times.
Michael (01:21:18):
Yeah, exactly. It turns out, though, that with memory systems you actually can have a much, much better long-term memory for relatively minimal investment, and it's due to some quirks in the way the human brain works which have been known to psychologists for more than a century. There are thousands of papers about it. People have now built systems. They're a little bit aversive in some ways; they're like bicycles. It takes some work to get good at using them, and most people just give up because they don't see the effect. If you stick with it and master them, they can be very useful if you're doing the kind of work that benefits from a really good long-term memory. Not everybody is, but for the people who are, I think they're absolutely wonderful.
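The quirk in question is the spacing effect: reviews spread out at increasing intervals produce far more durable memory than massed practice. A minimal sketch of the scheduling idea behind such systems, using a simple Leitner-style box scheme (the interval values are illustrative; real tools such as Anki use more elaborate algorithms like SM-2):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative Leitner-style intervals: days until next review, by box.
INTERVALS = [1, 3, 7, 21, 60, 180]

@dataclass
class Card:
    prompt: str
    answer: str
    box: int = 0
    due: date = field(default_factory=date.today)

def review(card: Card, remembered: bool) -> None:
    """Promote a remembered card to a longer interval; reset a forgotten one."""
    card.box = min(card.box + 1, len(INTERVALS) - 1) if remembered else 0
    card.due = date.today() + timedelta(days=INTERVALS[card.box])

card = Card("spacing effect", "memory improves when reviews are spread out")
review(card, remembered=True)   # box 0 -> 1: next review in 3 days
review(card, remembered=False)  # forgotten: back to box 0, a 1-day interval
```

The whole trick is that each successful recall roughly multiplies the next interval, so a handful of reviews can protect a memory for years; the "bicycle" part is simply trusting the schedule long enough to see that happen.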
Ben (01:22:06):
Yeah. For me personally, I find the spaced repetition ones a little bit more useful, because they can interact more with other forms of whatever you're learning. The memory palace ones are great for long lists of things but actually less practically useful. They can be: if you've got a long list of historical places and you've got your history exam, well, why not learn them that way and you'll learn them perfectly? But that's less practically relevant, although not always. Whereas with the spaced repetition ones, amongst other ways of working, you can actually form new, well, maybe new-to-myself, ideas just through that. But that's a personal thing. So, last couple of questions then. What are some of the current projects or things you're working on, or what are you most excited about?
Michael (01:22:59):
So I'm on vacation at the moment which is part of the reason why I'm here.
Ben (01:23:02):
Very exciting.
Michael (01:23:03):
Little break in London. I don't know what I'll do next. We'll see. I've just finished this book that took way longer than I thought. I'm just enjoying goofing around: reading about religion, trying to understand religion better, trying to understand art better, and trying to understand a little bit about the latest technical progress in AI. So maybe a project related to one of those, but I don't know. And then there are some follow-up things from the meta-science work that I will almost certainly do. But I just want to clear my head a bit before making a decision.
Ben (01:23:35):
Sure. Well, if you're looking at art, is it particularly visual art, and do you have a view on where AI art will go? I'd say I'm less worried than some people about AI art, because in my view art has-- I don't know in what percentages, but there's this term in the philosophy of art, the beholder's share. It points to the fact that art is valuable because someone receives it; it's not just that you've got paint and creation and then it's all great. I see this very much in my theater practice: essentially, theater is not really theater unless it has an audience. And that's true of quite a lot of art. I think people worry because it's new, because we're going to generate so much stuff, and because of where it's come from; but I have not quite seen that part borne out. That's my own personal view; a lot of people take the other side. But if you're thinking about it, do you have a view or a thought there?
Michael (01:24:33):
I have many, probably far too many. What's a particularly interesting thing? Okay. So maybe one concern and one observation. The concern is essentially capture of the commons. It used to be that anybody could buy a set of paints and just start to paint; if you instead need technology which is somebody's IP, then they've inserted themselves as an intermediation layer, and historically that always causes some problems. There are always some tradeoffs. I don't know how that will turn out. Most of the conversation I hear about it is from people who have some motivated reason either to be very, very keen on this, possibly because they may own some of it, or to be very, very anti because they're worried they will be put out of a job by it.
Neither of those necessarily-- It leads to interesting perspectives, but it doesn't necessarily lead to a particularly clear perspective unless those people are really unusually honest. So a lot of the conversation I hear about it just seems to have "I'm a VC and wow, I'd love to own some of this" as the subtext very often. And you don't actually get very interesting thoughts from that, or not a lot of clear thoughts. The interesting observation: I do just enjoy seeing what some of the artists working with this are doing. I admire so much people like Matisse or Cezanne or Picasso, who were able to discover new ways of seeing. Picasso probably particularly; to some extent Rembrandt, though that's a slightly different thing that he was doing.
And I wonder if we're going to see the same kind of genius with the new AI art systems. That would be very exciting. If you look at the way the systems work, it's very tempting to think, "Well, they're not really going to be creating new ways of seeing in the same kind of way." Once you begin to understand cubism, it's really remarkable: you get what they were trying to do, and you realize that you've expanded the way you can see the world. Somebody discovered that, and it's just incredible. I guess I'm cautiously optimistic that maybe the same thing will happen with AI art. Brian Eno, the musician and composer, has this really interesting observation that it took hundreds of years for us to figure out what possibilities are latent inside the grand piano.
And today, an instrument as rich as the grand piano is being invented every day, and nobody will ever master it. That is kind of sad in some sense: to have all these latent possibilities which will never be explored. So that's also a potential outcome for the AI art systems; maybe it turns out the system changes sufficiently rapidly that nobody ever masters it. Certainly, I think you see this with software systems at the moment. The rate of change in something like JavaScript frameworks is so rapid that nobody ever gets really good. A friend who's a dancer and a programmer commented to me that she finds it irritating when people have been programming for five years and think they're really, really good; they're a senior software engineer at Google or Meta or whatever. She's been dancing for 28 years and feels like she's just getting the hang of it. There is something to be said for that kind of deep art, and if the AI systems are changing sufficiently rapidly, it might be that nobody ever masters them. That would be a little bit sad. That's a very long answer.
Ben (01:28:50):
Yeah. That's a really interesting observation; I hadn't known that. I guess I can take the other side and say it's quite nice that there will always be opportunity. But the other observation, then, is that we'll never even master the programming language C. There will be no real Picasso of C, something so elegant and beautiful; we're beyond and past it. And I don't know whether the current languages are better. Is English really that much better than Latin? You've got heights of Latin expression which we've had, and heights we'll have in English, which we might never get in these programming languages that have been around for such a short time. I hadn't thought about it like that, but that could well be true.
Michael (01:29:35):
Somebody who was at Google pretty early on knew Jeff Dean, one of the people who famously helped build Google's early systems, at that time, and commented just unselfconsciously that he poured out code. I just thought that was such a lovely way of describing somebody: a real master pouring out code.
Ben (01:30:02):
Yeah. Maybe this circles back: I think this act of creativity in the arts and humanities is much closer to where we are in science, coding and software than it seems at first glance when you speak to actual scientists, particularly in that messy layer before you make something legible and put it into an equation. All of the thinking before that seems to me to have much more in common with what dancers do, with what artists do, with what performers do, than you might have thought. So I do think that ties around, and I think that might even be true in what we discover on the meta layer when we think about these things. That act of imagination you need for something which isn't quite discovered yet, that part of whatever makes us human, is still quite mysterious and seems to dwell in this creative part, or blob, or however we do it. In any case, last question: do you have any advice or thoughts for people? Maybe young people thinking about their careers, or someone wanting to make the leap into a new organization, or something within meta-science or open science. Do you have any advice or thoughts about what you would do, what you would tell them?
Michael (01:31:21):
I'm certainly interested... They say that the advice you give others is the advice you wish you'd given your younger self. That's probably true. Paul Buchheit, the creator of Gmail, has this lovely equation: advice equals limited life experience plus overgeneralization. That's certainly true too. The one thing I wish I'd understood much earlier is the extent to which there's an asymmetry in what you see: you're always tempted not to make a jump, because you see very clearly what you're giving up and you don't see very clearly what you're going to gain. Almost all of the interesting opportunities on the other side are opaque to you now; you have a very limited kind of vision into them. You can get around that a little bit by chatting with people who are maybe doing something similar, but it's so much more limited. And yet I know that when reasoning about it, I want to treat my views of the two sides as somehow parallel, but they're just not.
Ben (01:32:31):
Yeah. Well, I guess that might suggest one needs to try out more things to actually know.
Michael (01:32:39):
That, I think, is probably generically true. One way in which it's not is that it depends on what kind of safety net you have. Some people, and I am certainly one of them, have pretty reasonable safety nets, and that enables me to try new things. But the great majority of people in the world do not, and so they're, I think, justifiably extremely cautious. But the size of the safety net people think they need does tend to expand as well. If your safety net includes driving a Mercedes and so on, you can probably do a little bit better than that. A friend of mine is a science fiction writer. Science fiction writers do not make much money; even famous science fiction writers do not make much money, for the most part. He was trying to decide whether it was fair on his daughter that he had chosen to be a science fiction writer.
Because it meant that he couldn't afford to send her to the best high schools and things like that; maybe she might get a scholarship, so there were certainly some opportunities. He said he was on a little boat somewhere off Northwestern Australia with a collection of his science fiction buddies, who are just an incredible group of people. And he realized there that while she might miss some opportunities, she would also have opportunities like that, opportunities she just would not have if he had chosen a more conventional and higher-paying line of work. So I think of living that kind of life as a safety net of its own. It's certainly something you're providing for you and your family and your friends. I don't know whether that's clear or not; hopefully it's clear.
Ben (01:34:58):
I would interpret that as saying there are these immeasurable, uncountable elements which are actually very valuable. And if you only measure things in dollars, then you might miss some of that vast positiveness, or wealth, or safety net that...
Michael (01:35:18):
Yeah. She was getting an expanded conception of the world, I guess, in some really interesting way. And I could see it would be valuable. You can't eat that, unfortunately. That's the flip side. But you actually don't need to make that much money to be able to eat.
Ben (01:35:35):
Great. Well, on that note, thank you very much.
Michael (01:35:38):
Thanks so much, Ben. This was fun.