Ajeya Cotra

Path: Grantmaking, Research
University major: Electrical Engineering, Computer Science
Cause area: Global Priorities Research, AI Safety
Job title: Senior Research Analyst

Majored in:

Ajeya received a B.S. in Electrical Engineering and Computer Science from UC Berkeley.

There she co-founded the Effective Altruists of Berkeley student group and taught a course on effective altruism.

Current role:

Ajeya Cotra is a Senior Research Analyst at Open Philanthropy.

Background:

Ajeya joined Open Philanthropy in July 2016 as a Research Analyst.

Listen to a podcast (transcript below):

Ajeya’s background [03:23, transcript]

Ajeya Cotra: I’m a senior research analyst at Open Phil, and like you said, Open Phil is trying to give away billions of dollars. We’re aiming to do it in the most cost-effective way possible according to effective altruist principles, and we put significant amounts of money behind a number of plausible ways of cashing out what it means to be trying to do good — whether that’s trying to help the poorest people alive today, or trying to reduce factory farming, or trying to preserve a flourishing long-term future. We call these big-picture schools of thought ‘worldviews’, because they’re kind of like a mash-up of philosophical commitments, empirical views, and heuristics about how to go about achieving things in the world. I’m looking into questions that help Open Phil decide how much money should go behind each of these worldviews, and occasionally, within one worldview, what kind of big-picture strategy that worldview should pursue. We call these ‘worldview investigations’.

Ajeya Cotra: This is closely related to what 80,000 Hours calls ‘global priorities research’, but it’s on the applied end of that — compared with the Global Priorities Institute, which is more on the academic end of that.

Robert Wiblin: We’ll get to that in just a minute, but how did you end up doing this work at Open Phil?

Ajeya Cotra: I found out about effective altruism 10 or 11 years ago now, whenever Peter Singer’s book The Life You Can Save came out. I was in high school at the time, and the book mentioned GiveWell, so I started following GiveWell. I also started following some of the blogs popping up at the time that were written by effective altruist folks — including you, Jeff Kaufman, Julia Wise, and a bunch of others. I was pretty sold on the whole deal before coming to college, so I really wanted to do something EA-oriented with my time in college and with my career. So I co-founded EA Berkeley, and was working on that for a couple of years, still following all these organisations. I ended up doing an internship at GiveWell, and at the time, Open Phil was budding off of GiveWell — it was called ‘GiveWell Labs’. So I was able to work on both sides of GiveWell and Open Phil. And then I got a return offer, and the next year I came back.

Ajeya Cotra: I was actually the first research employee hired specifically for Open Phil, as opposed to sort of generically GiveWell/Open Phil/everything. So I got in there right as Open Phil was starting to conceptually separate itself from GiveWell. This was in July 2016.

Robert Wiblin: Had you been studying stuff that was relevant at college, or did they choose you just because of general intelligence and a big overlap of interests?

Ajeya Cotra: I mean, I had been, in my own time, ‘studying’ all the EA material I could find. I was a big fan of LessWrong, reading various blogs. One thing I did that put me on Open Phil’s/GiveWell’s radar before I joined was that I was co-running this class on effective altruism. UC Berkeley has this cool thing where undergrads can teach classes for credit — like one or two credits, where normal classes are like four credits — so having to put together that class on effective altruism was a good impetus to do a deep dive into stuff. And GiveWell gave us a grant: our class was going to give away $5,000, and the students were going to vote on the best charity to give it to.

Ajeya Cotra: But in terms of the actual subject matter I was focused on in university, not really. It was computer science — technically an electrical engineering and computer science degree — but I didn’t really do anything practical, so it was kind of a math degree. Being quantitatively fluent is, I think, good for the work I’m doing now, but I’m not doing any fancy math. We have people from all sorts of backgrounds at Open Phil: something quant-y is pretty common, philosophy is pretty common, economics is pretty common.

Robert Wiblin: Yeah, there’s a funny phenomenon where people study very advanced maths, and then on a day-to-day basis, it really does seem to make a huge contribution to their ability to think clearly, just through their willingness to multiply two numbers together on a regular basis.

Ajeya Cotra: Yeah, totally, totally.

Robert Wiblin: That’s the level of analysis you’re doing. But for some reason, it seems like maybe in order to be comfortable enough to do that constantly, you need to actually train up to a higher level.

Ajeya Cotra: That’s my line. I tell people it’s probably good to study something quantitative because it gives you these vague habits of thought. I’m not sure exactly how much I believe it. I think philosophy does a lot of the same thing for people, in a somewhat different flavour: it’s more logic and argument construction, which is also super important for this kind of work.

What do you like and dislike most about your job? [02:45:18, transcript]

Ajeya Cotra: Likes… obviously the mission, and I think my colleagues are just incredibly thoughtful and kind people that I feel super value-aligned with. And that’s awesome. And then dislikes: it comes back to the thing I was saying about how it’s a pretty siloed organisation. Each particular team is quite small, and within each team, people are spread thin. So there’s one person thinking about timelines and one person thinking about biosecurity, and it means the collaboration you can get from your colleagues, and even the feeling of being on a team and the encouragement you can get from them, is more limited. Because they don’t have their head in what you’re up to, and it’s very hard for them to get their head in it. And so people often find that others don’t read the reports they worked really hard on as much as they would like, except for their manager or a small set of decision makers who are looking to read that thing.

Ajeya Cotra: And so I think that can be disheartening. And then in terms of my particular job, all this stuff I was saying… It’s very stressful putting together this report, in a lot of the ways that we were talking about earlier. And just feeling responsible for coming to a bottom-line number without a lot of feedback or a lot of diffusion of responsibility that comes from a bunch of people putting in the numbers. And like…

Robert Wiblin: That seems particularly hard.

Biggest challenges with writing big reports [02:17:09, transcript]

Ajeya Cotra: One thing that’s really tough is that academic fields that have been around for a while have an intuition or an aesthetic that they pass on to new members about what counts as a unit of publishable work. It’s sometimes called a ‘publon’. What kind of result is big enough? What kind of argument is compelling enough and complete enough that you can package it into a paper and publish it? And I think with the work that we’re trying to do — partly because it’s new, and partly because of the nature of the work itself — it’s much less clear what a publishable unit is, or when you’re done. And you almost always find yourself in a situation where there’s a lot more research you could do than you naively assumed going in. And it’s not always a bad thing.

Ajeya Cotra: It’s not always that you’re being inefficient or going down rabbit holes if you choose to do that research and just end up doing a much bigger project than you thought you were going to do. I think this was the case with all of the timelines work that we did at Open Phil: my report and then other reports. It was always the case that we came in thinking it would be smaller. I thought I would do a fairly simple evaluation of arguments made by our technical advisors, but then complications came up, and it became a much longer project. And I don’t regret most of that. So it’s not as simple as saying, just really force yourself to guess at the outset how much time you want to spend and then just spend that time. But at the same time, there definitely are rabbit holes, and there definitely are things you can do that eat up a bunch of time without giving you much epistemic value. So standards for that seemed like a big, difficult issue with this work.

Robert Wiblin: Okay. So yes. So this question of what’s the publishable unit and what rabbit holes should you go down? Are there any other ways things can go wrong that stand out, or mistakes that you potentially made at some point?

Ajeya Cotra: Yeah. Looking back, I think I did a lot of what I think of as defensive writing, where basically there were a bunch of things I knew about the subject that were definitely true, and I could explain them nicely, and they leaned on math and stuff, but those things were only peripherally relevant to the central point I wanted to make. And then there were a bunch of other things that were hard and messy, and mostly intuitions I had, and I didn’t know how to formalise them, but they were doing most of the real work. One big example: of the four things we talked about, the most important one by far is the 2020 computation requirement, i.e. how much computation it would take to train a transformative model if we had to do it today. But it was also the most nebulous and least defensible.

Ajeya Cotra: So I found myself wanting to spend more time on hardware forecasting, where I could say stuff that didn’t sound stupid. And so as I sat down to write the big report, after I had an internal draft… I had an internal draft all the way back in November 2019. And then I sat down to write the publishable draft and I was like, okay, I’ll clean up this internal draft. But I just found myself being pulled to writing certain things, knowing that fancy ML people would read this. I found myself being pulled to just demonstrating that I knew stuff. And so I would just be like… I’d write ten pages on machine learning theory that were perfectly reasonable intros to machine learning theory, but actually this horizon length question was the real crux, and it was messy and not found in any textbook. And so I had to do a lot to curb my instinct toward defensive writing, and my instinct to put stuff in there just because I wanted to dilute the crazy speculative stuff with a lot of facts and show people that I knew what I was talking about.

Robert Wiblin: Yeah. That’s understandable. How did the work affect you personally, from a happiness or job satisfaction or mental health point of view? Because I think sometimes people throw themselves against problems like this, and it causes them to feel very anxious, because they don’t know whether they’re doing a good job or a bad job, or they don’t feel they’re making progress, or they feel depressed because they worry that they haven’t figured it out yet and they feel bad about that.

Ajeya Cotra: Yeah. I had a lot of those emotions. I think the most fun part of the project was the beginning parts, where my audience was mostly myself and Holden. And I was reading these arguments that our technical advisors made and basically just finding issues with them, and explaining what I learned. And that’s just a very fun way to be… You have something you can bite onto, and react to, and then you’re pulling stuff out of it and restating it and finding issues with it. It’s much more rewarding for me than looking at a blank page, no longer writing something in response to somebody else, where you have to just lay it all out for somebody who has no idea what you’re talking about. And so I started writing this final draft — the draft that eventually became the thing posted on LessWrong — in January of 2020.

Ajeya Cotra: And I gave myself a deadline of March 9th to write it all. And in fact, I spent most of January and half of February really stressed out about how I would even frame the model. A lot of the stuff we were talking about, about these four parts, with the first part being how much computation it would take to train a transformative model if we had to do it today: all of that came out of this angsty phase. Before, I was just like, how much computation does it take to train TAI, and when will we get that? But that had this important conceptual flaw that I ended up spending a lot of time on, which is: no, that number is different in different years, because of algorithmic progress.
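To make that framing concrete, here is a minimal sketch of the four-part model Ajeya describes: a 2020 training-compute requirement that shrinks with algorithmic progress, and an affordable training run that grows as hardware gets cheaper and spending rises. Every constant and function name below is an invented placeholder for illustration, not a figure or method from her report.

```python
# Minimal sketch of the four-part timelines framing (all constants are
# invented placeholders, NOT estimates from the report).

REQUIRED_2020_FLOP = 1e34     # hypothetical: FLOP to train a transformative
                              # model if we had to do it with 2020 algorithms
HALVING_TIME_YEARS = 3.0      # hypothetical: algorithmic progress halves the
                              # requirement this often
FLOP_PER_DOLLAR_2020 = 1e17   # hypothetical: 2020 hardware price-performance
HARDWARE_GROWTH = 1.25        # hypothetical: yearly FLOP/$ improvement
SPEND_2020_DOLLARS = 1e8      # hypothetical: largest 2020 training-run budget
SPEND_GROWTH = 1.2            # hypothetical: yearly growth in that budget


def required_flop(year: int) -> float:
    """The requirement is a function of the year, not a single number:
    algorithmic progress shrinks it as time passes."""
    return REQUIRED_2020_FLOP * 0.5 ** ((year - 2020) / HALVING_TIME_YEARS)


def affordable_flop(year: int) -> float:
    """Largest affordable training run: spending times hardware
    price-performance, each compounding yearly."""
    t = year - 2020
    return (SPEND_2020_DOLLARS * SPEND_GROWTH**t
            * FLOP_PER_DOLLAR_2020 * HARDWARE_GROWTH**t)


# First year in which the affordable run meets the shrinking requirement.
print(next(y for y in range(2020, 2101) if affordable_flop(y) >= required_flop(y)))
```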

Ajeya Cotra: And so I was trying to force myself to just write down what I thought I knew, but I had a long period of being like, "This is bad. People will look at this, and if they're exacting, rigorous people, they'll be like, this doesn't make sense; there's no such thing as the amount of computation to train a transformative model." And I was very hung up on that stuff. And I think sometimes it's great to be hung up on that stuff, and in particular, I think my report is stronger because I was hung up on that particular thing. But sometimes you're killing yourself over something where you should just say, "This is a vague, fuzzy notion, but you know what I mean". And it's just so hard to figure out when to do one versus the other.

Robert Wiblin: Yeah. I think this is a familiar problem: often the most important things can't be rigorously justified, and you just have to state your honest opinion, all things considered, given everything you know about the world and your general intuitions. That's the best you can do. And trying to do something else is just a fake-science thing where you're going through the motions of defending yourself against critics.

Ajeya Cotra: Yeah. Like physics envy.

Robert Wiblin: Yeah. Right. I think…

Ajeya Cotra: I had a lot of physics envy.

Robert Wiblin: Yeah. I'm just more indignant about that now. I'm just like, look, I think this; you don't necessarily have to agree with me, but I'm just going to give you my number, and I'm not going to feel bad about it at all. And I won't feel bad if you don't agree, because this, unfortunately, is the state-of-the-art process we have for estimating: just saying what we think. Sometimes you can do better, but sometimes you really are pretty stuck.

Ajeya Cotra: Yeah. And I think just learning the difference is really hard. Because I do think this report has made some progress toward justifying things that were previously just intuitions we stated. But then there were many things where I hoped to do that, and I had to give up. I think also, doing a report that is trying to get to a number on an important decision-relevant question is a ton of pressure, because you can be really good at laying out the arguments and finding all the considerations and stuff, but your brain might not be weighing them right. And how you weigh them, the alchemy going on in your head when you assign weights to lifetime versus evolution versus things in between, makes a huge difference to the final number.
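As a toy illustration of that point: the anchor names below (lifetime, evolution, and a neural-net anchor in between) come up in the discussion, but every number is an invented placeholder, and the weighted-geometric-mean summary is just one simple way to combine anchors, not the report's actual method.

```python
# Toy illustration of how anchor weights drive the bottom line (all numbers
# are invented placeholders, NOT the report's estimates or method).

# Hypothetical log10(training FLOP) implied by each anchor.
ANCHORS = {"lifetime": 27.0, "neural net": 34.0, "evolution": 41.0}


def summary_flop(weights: dict) -> float:
    """One simple way to combine anchors: a weighted geometric mean,
    i.e. a weighted average in log space."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    log_flop = sum(weights[name] * ANCHORS[name] for name in ANCHORS)
    return 10**log_flop


# Shifting a few tenths of weight between anchors moves the answer by
# orders of magnitude -- that's the "alchemy" doing most of the work.
print(summary_flop({"lifetime": 0.5, "neural net": 0.4, "evolution": 0.1}))  # ~1e31
print(summary_flop({"lifetime": 0.1, "neural net": 0.4, "evolution": 0.5}))  # ~1e37
```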

Ajeya Cotra: And if you feel like your job is to get the right number, that can be really, really scary and stressful. So I've tried to reframe it: my job is to lay out the arguments and make a model that makes sense, where how the inputs get turned into outputs is clear to people. That way, the next person who wants to come up with their views on timelines doesn't have to do all the work I did; they just need to put in their numbers. My job is not to get the ultimate right numbers. I think reframing it that way was really important for my mental health.

Robert Wiblin: Yeah. Because that’s something you actually have a decent shot at having control over, whether you succeed at that. Whereas being able to produce the right number is to a much greater degree out of your hands.