The Nonlinear Library: EA Forum

By The Nonlinear Fund



Category: Education



Description

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org

Episodes
EA - Books: Lend, Don't Give by Jeff Kaufman
00:26
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Books: Lend, Don't Give, published by Jeff Kaufman on March 22, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 22, 2023
EA - Announcing the European Network for AI Safety (ENAIS) by Esben Kran
05:47
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the European Network for AI Safety (ENAIS), published by Esben Kran on March 22, 2023 on The Effective Altruism Forum. TLDR; The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET! Why did we create ENAIS? ENAIS was founded by European AI safety researchers and field-builders who recognized the lack of interaction among various groups in the region. Our goal is to address the decentralized nature of AI safety work in Europe by improving information exchange and coordination. We focus on Europe for several reasons: a Europe-specific organization can better address local issues like the EU AI Act, foster smoother collaboration among members and the free travel within Schengen also eases event coordination. About the network ENAIS strives to advance AI Safety in Europe, mitigate risks from AI systems, particularly existential risks, and enhance collaboration among the continent's isolated AI Safety communities. We also aim to connect international communities by sharing insights about European activities and information from other hubs. We plan to offer infrastructure and support for establishing communities, coworking spaces, and assistance for independent researchers with operational needs. Concretely, we organize / create: A centralized online location for accessing European AI safety hubs and resources for field-building on the enais.co website. The map on the front page provides direct access to the most relevant links and locations across Europe for AI safety. A quarterly newsletter with updated information about what field-builders and AI safety researchers should be aware of in Continental Europe. A professional network and database of the organizations and people working on AI safety. Events and 1-1 career advice to aid transitioning into AI Safety or between different AI Safety roles. Support for people wanting to create a similar organization in other regions. We intend to leverage the expertise of the network to positively impact policy proposals in Europe (like the EU AI Act), as policymakers and technical researchers can more easily find each other. In addition, we aim to create infrastructure to make the research work of European researchers easier and more productive, for example, by helping researchers with finding an employer of records and getting funding. With the decentralized nature of ENAIS, we also invite network members to self-organize events under the ENAIS banner with support from other members. What does European AI safety currently look like? Below you will find a non-exhaustive map of cities with AI Safety researchers or organizations. The green markers indicate an AIS group, whereas the blue markers indicate individual AIS researchers or smaller groups. You are invited to add information to the map here. Vision The initial vision for ENAIS is to be the go-to access point for information and people interested in AI safety in Europe. We also want to provide a network and brand for groups and events. The longer-term strategy and vision will mostly be developed by the people who join as directors with guidance from the board. 
This might include projects such as policymaker communication, event coordination, regranting, community incubation, and researcher outreach. Join the network! Sign up for the network here by providing information on your interests, openness to collaboration, and location. We will include you in our database (if you previously filled in information, we will email you so you may update your information). You can choose your level of privacy to not appear publicly and only to m...
Mar 22, 2023
EA - Free coaching sessions by Monica Diaz
00:53
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Free coaching sessions, published by Monica Diaz on March 21, 2023 on The Effective Altruism Forum. I’m offering free one-on-one coaching sessions to autistic people in the EA community. I’m autistic myself and have provided direct support to autistic people for over 9 years. My sessions focus on self-discovery, skill-development, and finding solutions to common challenges related to being autistic. It can also be nice to talk to someone else who just gets it. Send me a message if you're interested in free coaching sessions, want to learn more, or just want to connect. You can also book a 30-minute introductory meeting with me here: Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 22, 2023
EA - Whether you should do a PhD doesn't depend much on timelines. by alex lawsen (previously alexrjl)
06:33
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Whether you should do a PhD doesn't depend much on timelines., published by alex lawsen (previously alexrjl) on March 22, 2023 on The Effective Altruism Forum. I wrote this as an answer to a question which I think has now been deleted, so I copied it to my shortform in order to be able to link it in future, and found myself linking to it often enough that it seemed worth making a top-level post, in particular because if there are important counterarguments I haven't considered I'd like to come across them sooner rather than later! I'd usually put more thought into editing a top-level post, but the realistic options here were to not post it at all, or to post it without editing. Epistemic status: I've thought about both how people should think about PhDs and how people should think about timelines a fair bit, both in my own time and in my role as an advisor at 80k, but I wrote this fairly quickly. I'm sharing my take on this rather than intending to speak on behalf of the whole organisation, though my guess is that the typical view is pretty similar. BLUF: Whether to do a PhD is a decision which depends heavily enough on personal fit that I expect thinking about how well you in particular are suited to a particular PhD to be much more useful than thinking about the effects of timelines estimates on that decision. Don’t pay too much attention to median timelines estimates. There’s a lot of uncertainty, and finding the right path for you can easily make a bigger difference than matching the path to the median timeline. Going into a bit more detail - I think there are a couple of aspects to this question, which I’m going to try to (imperfectly) split up: How should you respond to timelines estimates when planning your career? How should you think about PhDs if you are confident timelines are very short? In terms of how to think about timelines in general, the main advice I’d give is to try to avoid the mistake of interpreting median estimates as single points. Taking this Metaculus question as an example, which has a median of July 2027, that doesn’t mean the community predicts that AGI will arrive then! The median just indicates the date by which the community thinks there’s a 50% chance the question will have resolved. To get more precise about this, we can tell from the graph that the community estimates: Only a 7% chance that AGI is developed in the year 2027. A 25% chance that AGI will be developed before August of next year. An 11% chance that AGI will not be developed before 2050. A 9% chance that the question has already resolved. A 41% chance that AGI will be developed after January 2029 (6 years from the time of writing). Taking these estimates literally, and additionally assuming that any work that happens post this question resolving is totally useless (which seems very unlikely), you might then conclude that delaying your career by 6 years would cause it to have 41/91 = 45% of the value. If that’s the case, if the delay increased the impact you could have by a bit more than a factor of 2, the delay would be worth it. Having done all of that work (and glossed over a bunch of subtlety in the last comment for brevity), I now want to say that you shouldn’t take the Metaculus estimates at face value though.
The reason is that (as I’m sure you’ve noticed, and as you’ve seen in the comments) they just aren’t going to be that reliable for this kind of question. Nothing is - this kind of prediction is really hard. The net effect of this increased uncertainty should be (I claim) to flatten the probability distribution you are working with. This basically means it makes even less sense than you’d think from looking at the distribution to plan for AGI as if timelines are point estimates. Ok, but what does this mean for PhDs? Before I say anything about how a PhD decision intera...
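To make the delay arithmetic quoted above concrete, here is a minimal sketch (not from the original post; it only reuses the probabilities stated in the summary):

```python
# Illustrative sketch of the expected-value calculation described above.
# The numbers are the ones quoted from the Metaculus question in this episode;
# the "work after resolution is worthless" assumption is the post's deliberately
# extreme simplification, not a realistic claim.

def value_fraction_after_delay(p_already_resolved: float,
                               p_resolves_after_delay: float) -> float:
    """Fraction of career value retained if you delay, counting only the
    not-yet-resolved probability mass and assuming only work done before
    resolution matters."""
    p_not_yet_resolved = 1.0 - p_already_resolved
    return p_resolves_after_delay / p_not_yet_resolved

fraction = value_fraction_after_delay(p_already_resolved=0.09,
                                      p_resolves_after_delay=0.41)
print(f"value retained after a 6-year delay: {fraction:.0%}")   # ~45% (41/91)
print(f"break-even impact multiplier: {1 / fraction:.1f}x")     # a bit over 2x
```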
Mar 22, 2023
EA - Design changes and the community section (Forum update March 2023) by Lizka
12:04
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Design changes & the community section (Forum update March 2023), published by Lizka on March 21, 2023 on The Effective Altruism Forum. We’re sharing the results of the Community-Frontpage test, and we’ve released a Forum redesign — I discuss it below. I also outline some things we’re thinking about right now. As always, we’re also interested in feedback on these changes. We’d be really grateful if you filled out this (very quick) survey on the redesign that might help give us a sense of what people are thinking. You can also comment on this post with your thoughts or reach out to forum@centreforeffectivealtruism.org. Results of the Community-Frontpage test & more thoughts on community posts A little over a month ago, we announced a test: we’d be trying out separating “Community” posts from other kinds by creating a “Community” section on the Frontpage of the Forum. We’ve gotten a lot of feedback; we believe that the change was an improvement, so we’re planning on keeping it for the near future, with some modifications. We might still make some changes like switching from a section to tabs, especially depending on new feedback and on how related projects go. Outcomes Information we gathered We sourced user feedback from different places: User interviews with people at EA Global and elsewhere (at least 20 interviews, different people doing the interviewing) Responses to a quick survey on how we can improve discussions on the Forum (45 responses) Metrics (mostly used as sanity checks) Engagement with the Forum overall (engagement on the Forum is 7% lower than the previous month, which is within the bounds we set ourselves and there’s a lot of fluctuation, so we’re just going to keep monitoring this) Engagement with Community posts (it dropped 8%, which may just be tracking overall engagement, and again, we’re going to keep monitoring it) There are still important & useful Community posts every week (subjective assessment)(there are) The team’s experience with the section, and whether we thought the change was positive overall Outcomes and themes: The responses we got were overwhelmingly positive about the change. People told us directly (in user interviews and in passing) that the change was improving their experience on the Forum. We also personally thought that the change had gone very well — likely better than we’d expected as a ~70% best outcome. And here are the results from the survey: The metrics we're tracking (listed above) were within the bounds we’d set, and we were mostly using them as sanity checks. There were, of course, some concerns, and critical or constructive feedback. Confusion about what “Community” means Not everyone was clear on which posts should actually go in the section; the outline I gave before was unclear. I’ve updated the guidance I had originally given to Forum facilitators and moderators (based on their feedback and just sitting down and trying to get a more systematic categorization), and I’m sharing the updated version here. Concerns that important conversations would be missed Some people expressed a worry that having a section like this would hide discussions that the community needs to have, like processing the FTX collapse and what we should learn from it, or how we can create a more welcoming environment for different groups of people. 
We were also pretty worried about this; I think this was the thing that I thought was most likely going to get us to reverse the change. However, the worry doesn’t seem to be realizing. It looks like engagement hasn’t fallen significantly on Community posts relative to other posts, and important conversations have been continuing. Some recent posts on difficult community topics have had lots of comments (the discussion of the recent TIME article currently has 159 comments), and Community posts have...
Mar 21, 2023
EA - Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters by Pablo
37:24
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters, published by Pablo on March 21, 2023 on The Effective Altruism Forum. Future Matters is a newsletter about longtermism and existential risk. Each month we collect and summarize relevant research and news from the community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, listen on your favorite podcast platform and follow on Twitter. Future Matters is also available in Spanish. A message to our readers This issue marks one year since we started Future Matters. We’re taking this opportunity to reflect on the project and decide where to take it from here. We’ll soon share our thoughts about the future of the newsletter in a separate post, and will invite input from readers. In the meantime, we will be pausing new issues of Future Matters. Thank you for your support and readership over the last year! Featured research All things Bing Microsoft recently announced a significant partnership with OpenAI [see FM#7] and launched a beta version of a chatbot integrated with the Bing search engine. Reports of strange behavior quickly emerged. Kevin Roose, a technology columnist for the New York Times, had a disturbing conversation in which Bing Chat declared its love for him and described violent fantasies. Evan Hubinger collects some of the most egregious examples in Bing Chat is blatantly, aggressively misaligned. In one instance, Bing Chat finds a user’s tweets about the chatbot and threatens to exact revenge. In the LessWrong comments, Gwern speculates on why Bing Chat exhibits such different behavior to ChatGPT, despite apparently being based on a closely-related model. (Bing Chat was subsequently revealed to have been based on GPT-4). Holden Karnofsky asks What does Bing Chat tell us about AI risk? His answer is that it is not the sort of misaligned AI system we should be particularly worried about. When Bing Chat talks about plans to blackmail people or commit acts of violence, this isn’t evidence of it having developed malign, dangerous goals. Instead, it’s best understood as Bing acting out stories and characters it’s read before. This whole affair, however, is evidence of companies racing to deploy ever more powerful models in a bid to capture market share, with very little understanding of how they work and how they might fail. Most paths to AI catastrophe involve two elements: a powerful and dangerously misaligned AI system, and an AI company that builds and deploys it anyway. The Bing Chat affair doesn’t reveal much about the first element, but is a concerning reminder of how plausible the second is. Robert Long asks What to think when a language model tells you it's sentient []. When trying to infer what’s going on in other humans’ minds, we generally take their self-reports (e.g. saying “I am in pain”) as good evidence of their internal states. However, we shouldn’t take Bing Chat’s attestations (e.g. “I feel scared”) at face value; we have no good reason to think that they are a reliable guide to Bing’s inner mental life. LLMs are a bit like parrots: if a parrot says “I am sentient” then this isn’t good evidence that it is sentient. But nor is it good evidence that it isn’t — in fact, we have lots of other evidence that parrots are sentient. 
Whether current or future AI systems are sentient is a valid and important question, and Long is hopeful that we can make real progress on developing reliable techniques for getting evidence on these matters. Long was interviewed on AI consciousness, along with Nick Bostrom and David Chalmers, for Kevin Collier’s article, What is consciousness? ChatGPT and Advanced AI might define our answer []. How the major AI labs are thinking about safety In the last few weeks, we got more information about how the lead...
Mar 21, 2023
EA - Where I'm at with AI risk: convinced of danger but not (yet) of doom by Amber Dawn
10:29
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Where I'm at with AI risk: convinced of danger but not (yet) of doom, published by Amber Dawn on March 21, 2023 on The Effective Altruism Forum. [content: discussing AI doom. I'm sceptical about AI doom, but if dwelling on this is anxiety-inducing for you, consider skipping this post] I’m a cause-agnostic (or more accurately ‘cause-confused’) EA with a non-technical background. A lot of my friends and writing clients are extremely worried about existential risks from AI. Many believe that humanity is more likely than not to go extinct due to AI within my lifetime. I realised that I was confused about this, so I set myself the goal of understanding the case for AI doom, and my own scepticisms, better. I did this by (very limited!) reading, writing down my thoughts, and talking to friends and strangers (some of whom I recruited from the Bountied Rationality Facebook group - if any of you are reading, thanks again!) Tl;dr: I think there are good reasons to worry about extremely powerful AI, but I don’t yet understand why people think superintelligent AI is highly likely to end up killing everyone by default. Why I'm writing this I’m writing up my current beliefs and confusions in the hope that readers will be able to correct my misconceptions, clarify things I’m confused about, and link me to helpful resources. I also personally enjoy reading other EAs’ reflections about cause areas: e.g. Saulius' post on wild animal welfare, or Nuño's sceptical post about AI risk. This post is far less well-informed, but I found those posts valuable because of their reasoning transparency more than their authors' expertise. I'd love to read more posts by ‘layperson’ EAs talking about their personal cause prioritisation. I also think that 'confusion' is an underrepresented intellectual position. At EAGx Cambridge, Yulia Ponomarenko led a great workshop on ‘Asking daft questions with confidence’. We talked about how EAs are sometimes unwilling to ask questions that would make them less confused for fear that the questions are too basic, silly, “dumb”, or about something they're already expected to know. This could create a false appearance of consensus about cause areas or world models. People who are convinced by the case for AI risk will naturally be very vocal, as will those who are confidently sceptical. However, people who are unsure or confused may be unwilling to share their thoughts, either because they're afraid that others will look down on them for not already understanding the case, or just because most people are less motivated to write about their vague confusions than their strong opinions. So I’m partly writing this as representation for the ‘generally unsure’ point of view. Some caveats: there’s a lot I haven’t read, including many basic resources. And my understanding of the technical side of AI (maths, programming) is extremely limited. Technical friends often say ‘you don’t need to understand the technical details about AI to understand the arguments for x-risk from AI’. But when I talk and think about these questions, it subjectively feels like I run up against a lack of technical understanding quite often.
Where I’m at with AI safety Tl;dr: I'm concerned about certain risks from misaligned or misused AI, but I don’t understand the arguments that AI will, by default and in absence of a specific alignment technique, be so misaligned as to cause human extinction (or something similarly bad.) Convincing (to me) arguments for why AI could be dangerous Humans could use AI to do bad things more effectively For example, politicians could use AI to devastatingly make war on their enemies, or CEOs could use it to increase their profits in harmful or reckless ways. This seems like a good reason to regulate AI development heavily and/or to democratise AI control, so that it’s har...
Mar 21, 2023
EA - Estimation for sanity checks by NunoSempere
06:01
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Estimation for sanity checks, published by NunoSempere on March 21, 2023 on The Effective Altruism Forum. I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate for a quantity of interest, and which I also feel warmly about but which aren’t the subject of this post. In this post, I explain why I like quantitative sanity checks so much, and I give some examples. Why I like this so much I like this so much because: It is very defensible. There are some cached arguments against more quantified estimation, but sanity checking cuts through most—if not all—of them. “Oh, well, I just think that estimation has some really nice benefits in terms of sanity checking and catching bullshit, and in particular in terms of defending against scope insensitivity. And I think we are not even at the point where we are deploying enough estimation to catch all the mistakes that would be obvious in hindsight after we did some estimation” is both something I believe and also just a really nice motte to retreat when I am tired, don’t feel like defending a more ambitious estimation agenda, or don’t want to alienate someone socially by having an argument. It can be very cheap, a few minutes, a few Google searches. This means that you can practice quickly and build intuitions. They are useful, as we will see below. Some examples Here are a few examples where I’ve found estimation to be useful for sanity-checking. I mention these because I think that the theoretical answer becomes stronger when paired with a few examples which display that dynamic in real life. Photo Patch Foundation The Photo Patch Foundation is an organization which has received a small amount of funding from Open Philanthropy: Photo Patch has a website and an app that allows kids with incarcerated parents to send letters and pictures to their parents in prison for free. This diminishes barriers, helps families remain in touch, and reduces the number of children who have not communicated with their parents in weeks, months, or sometimes years. It takes little digging to figure out that their costs are $2.5/photo. If we take the AMF numbers at all seriously, it seems very likely that this is not a good deal. For example, for $2.5 you can deworm several kids in developing countries, or buy a bit more than one malaria net. Or, less intuitively, trading 0.05% chance of saving a statistical life for sending a photo to a prisoner seems like a pretty bad trade–0.05% of a statistical life corresponds to 0.05/100 × 70 years × 365 = 12 statistical days. One can then do somewhat more elaborate estimations about criminal justice reform. Sanity-checking that supply chain accountability has enough scale At some point in the past, I looked into supply chain accountability, a cause area related to improving how multinational corporations treat labor. One quick sanity check is, well, how many people does this affect? You can check, and per here1, Inditex—a retailer which owns brands like Zara, Pull&Bear, Massimo Dutti, etc.—employed 3M people in its supply chain, as of 2021. So scalability is large enough that this may warrant further analysis. 
Once this simple sanity check is passed, one can then go on and do some more complex estimation about how cost-effective improving supply chain accountability is, like here. Sanity checking the cost-effectiveness of the EA Wiki In my analysis of the EA Wiki, I calculated how much the person behind the EA Wiki was being paid per word, and found that it was in the ballpark of other industries. If it had been egregiously low, my analysis could have been short...
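The Photo Patch comparison above can be reproduced in a few lines; this is an illustrative sketch, with the roughly $5,000-per-life AMF figure (implied by the 0.05% number in the excerpt) treated as an assumption:

```python
# Quick sanity-check arithmetic along the lines described above (illustrative
# sketch, not code from the original post).

cost_per_photo = 2.5          # USD, Photo Patch Foundation cost per photo
cost_per_life_saved = 5_000   # USD, assumed AMF ballpark implied by the excerpt
life_expectancy_years = 70

chance_of_life_saved = cost_per_photo / cost_per_life_saved            # 0.0005
statistical_days = chance_of_life_saved * life_expectancy_years * 365  # ~12.8

print(f"{chance_of_life_saved:.2%} of a statistical life per photo")   # 0.05%
print(f"≈ {statistical_days:.1f} statistical days per photo")          # ≈ 12-13 days
```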
Mar 21, 2023
EA - My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" by Quintin Pope
52:37
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Objections to "We’re All Gonna Die with Eliezer Yudkowsky", published by Quintin Pope on March 21, 2023 on The Effective Altruism Forum. Note: manually cross-posted from LessWrong. See here for discussion on LW. Introduction I recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered. Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments. As an AI "alignment insider" whose current estimate of doom is around 5%, I wrote this post to explain some of my many objections to Yudkowsky's specific arguments. I've split this post into chronologically ordered segments of the podcast in which Yudkowsky makes one or more claims with which I particularly disagree. All bulleted points correspond to specific claims by Yudkowsky, and I follow each bullet point with text that explains my objections to the claims in question. I have my own view of alignment research: shard theory, which focuses on understanding how human values form, and on how we might guide a similar process of value formation in AI systems. I think that human value formation is not that complex, and does not rely on principles very different from those which underlie the current deep learning paradigm. Most of the arguments you're about to see from me are less: I think I know of a fundamentally new paradigm that can fix the issues Yudkowsky is pointing at. and more: Here's why I don't agree with Yudkowsky's arguments that alignment is impossible in the current paradigm. My objections Will current approaches scale to AGI? Yudkowsky apparently thinks not, and that the techniques driving current state of the art advances, by which I think he means the mix of generative pretraining + small amounts of reinforcement learning such as with ChatGPT, aren't reliable enough for significant economic contributions. However, he also thinks that the current influx of money might stumble upon something that does work really well, which will end the world shortly thereafter. I'm a lot more bullish on the current paradigm. People have tried lots and lots of approaches to getting good performance out of computers, including lots of "scary seeming" approaches such as: Meta-learning over training processes. I.e., using gradient descent over learning curves, directly optimizing neural networks to learn more quickly. Teaching neural networks to directly modify themselves by giving them edit access to their own weights. Training learned optimizers - neural networks that learn to optimize other neural networks - and having those learned optimizers optimize themselves. Using program search to find more efficient optimizers. Using simulated evolution to find more efficient architectures. Using efficient second-order corrections to gradient descent's approximate optimization process. 
Applying biologically plausible optimization algorithms inspired by biological neurons to training neural networks. Adding learned internal optimizers (different from the ones hypothesized in Risks from Learned Optimization) as neural network layers. Having language models rewrite their own training data, and improve the quality of that training data, to make themselves better at a given task. Having language models devise their own programming...
Mar 21, 2023
EA - Forecasts on Moore v Harper from Samotsvety by gregjustice
49:46
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forecasts on Moore v Harper from Samotsvety, published by gregjustice on March 20, 2023 on The Effective Altruism Forum. [edited to include full text] Disclaimers The probabilities listed are contingent on SCOTUS issuing a ruling on this case. An updated numerical forecast on that happening, particularly in light of the NC Supreme Court’s decision to rehear Harper v Hall, may be forthcoming. The author of this report, Greg Justice, is an excellent forecaster, not a lawyer. This post should not be interpreted as legal advice. This writeup is still in progress, and the author is looking for a good venue to publish it in. You can subscribe to these posts here. Introduction The Moore v. Harper case before SCOTUS asks to what degree state courts can interfere with state legislatures in the drawing of congressional district maps. Versions of the legal theory they’re being asked to rule on were invoked as part of the attempts to overthrow the 2020 election, leading to widespread media coverage of the case. The ruling here will have implications for myriad state-level efforts to curb partisan gerrymandering. Below, we first discuss the Independent State Legislature theory and Moore v. Harper. We then offer a survey of how the justices have ruled in related cases, what some notable conservative sources have written, and what the justices said in oral arguments. Finally, we offer our own thoughts about some potential outcomes of this case and their consequences for the future. Background What is the independent state legislature theory? Independent State Legislature theory or doctrine (ISL) generally holds that state legislatures have unique power to determine the rules around elections. There are a range of views that fall under the term ISL, ranging from the idea that state courts' freedom to interpret legislation is more limited than it is with other laws, to the idea that state courts and other state bodies lack any authority on issues of federal election law altogether. However, “[t]hese possible corollaries of the doctrine are largely independent of each other, supported by somewhat different lines of reasoning and authority. Although these theories arise from the same constitutional principle, each may be assessed separately from the others; the doctrine need not be accepted or repudiated wholesale.”1 The doctrine is rooted in a narrow reading of Article I Section 4 Clause 1 (the Elections Clause) of the Constitution, which states, “The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof.”2 According to the Brennan Center, this interpretation is at odds with a more traditional reading: The dispute hinges on how to understand the word “legislature.” The long-running understanding is that it refers to each state’s general lawmaking processes, including all the normal procedures and limitations. So if a state constitution subjects legislation to being blocked by a governor’s veto or citizen referendum, election laws can be blocked via the same means. And state courts must ensure that laws for federal elections, like all laws, comply with their state constitutions. 
Proponents of the independent state legislature theory reject this traditional reading, insisting that these clauses give state legislatures exclusive and near-absolute power to regulate federal elections. The result? When it comes to federal elections, legislators would be free to violate the state constitution and state courts couldn’t stop them. Extreme versions of the theory would block legislatures from delegating their authority to officials like governors, secretaries of state, or election commissioners, who currently play important roles in administering elections.3 The doctrine, which governs the actions of state cou...
Mar 20, 2023
EA - Some Comments on the Recent FTX TIME Article by Ben West
08:13
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Comments on the Recent FTX TIME Article, published by Ben West on March 20, 2023 on The Effective Altruism Forum. Background Alameda Research (AR) was a cryptocurrency hedge fund started in late 2017. In early 2018, approximately half the employees quit, including myself and Naia Bouscal, the main person mentioned in the TIME article. At the time, I had considered AR to have failed, and I think even the people who stayed would have agreed that it had not achieved what it had wanted to. Later in 2018, some of the remaining AR staff started working on a cryptocurrency exchange named FTX. FTX grew to become a multibillion-dollar company. In late 2022, FTX collapsed. It has since been alleged that FTX defrauded their investors by misrepresenting the relationship between AR and FTX, and that this effectively led to them stealing customer deposits. The recent TIME article doesn’t make a very precise argument; here is my attempt at steelmanning/clarifying a major argument made in that article, which I will then respond to: Some EAs worked at AR before FTX started Even though those EAs (including myself) quit before FTX was founded and therefore could not have had any first-hand knowledge of this improper relationship between AR and FTX, they knew things (like information about Sam’s character) which would have enabled them to predict that something bad would happen This information was passed on to “EA leaders”, who did not take enough preventative action and are therefore (partly) responsible for FTX’s collapse Personal Background I worked at Alameda Research (AR) for about three months in early 2018. I was not involved in stealing FTX customer funds, and hopefully people trust me about that claim, if only because I quit before FTX was founded. To make my COI clear: I left the company I founded to join AR; doing so was very costly to me; AR crashed and burned within a few months of me joining; I blamed this crashing and burning largely on Sam. People who know I had a bad experience at AR are sometimes surprised that I’m not on the “obviously Sam was obviously 100% evil” bandwagon. I’ve been wanting to write something but found it hard because there weren’t specific things I could react to, it was just some vague difference in vibes. So I appreciate the TIME article sharing some specific things that “EA Leaders” allegedly knew which the author suggests should have caused them to predict FTX’s fraud. My Experience at AR at a High Level I thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had just simply bought and held bitcoin. But my opinion of Sam overall was more positive than the sense I get from the statements in the TIME article. (This is not very surprising, given that the TIME article consists of statements that were probably intentionally selected to be the worst possible thing the journalist could find someone to say about Sam.) It's hard to convey nuance in these posts, and I'm sure someone is going to interpret me as trying to defend Sam here. 
This is not what I’m trying to do, but I do think it’s worth trying to share my reflections to help others refine their models. Adding my personal experience to supplement some statements from the article But one of the people who did warn others about Bankman-Fried says that he openly wielded this power when challenged. “It was like, ‘I could destroy you,’” this person says. “Will and Holden would believe me over you. No one is going to believe you.” I don’t want to speak for this person, but my own experience was pretty different. For example: Sam was f...
Mar 20, 2023
EA - Save the Date April 1st 2023 EAGatherTown: UnUnConference by Vaidehi Agarwalla
03:01
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Save the Date April 1st 2023 EAGatherTown: UnUnConference, published by Vaidehi Agarwalla on March 20, 2023 on The Effective Altruism Forum. We're excited to officially announce the very first EA UnUnConference! APPLY HERE. Naming What We Can, the most impactful post ever published on April 1st, has already volunteered to host a Q&A. We’re calling in the producers of the TV hit Impact Island, and would like to invite Peter Barnett to launch his new book What The Future Owes Us. The X-risk-Men Incubation Program is running an enlightening talks session. Location: Mars in Gathertown Date: April 1st, 2023, 24 hours and 37 minutes starting at 12:00pm UTC (or “lunch time” for British people) The case for impact Over the years, humanity realized that Unconferences are a great twist on traditional conferences, since the independence gives room for more unexpected benefits to happen. For that reason, we’re experimenting with the format of an UnUnconference. This means we’ll actively try not to organize anything, therefore (in expectation) achieving even more unexpected benefits. We encourage you to critique our (relatively solid, in our opinion) theory of change in the comments! We understand this is not the most ambitious we could be. Although we fall short of the dream of EAGxMars, we believe this Ununconference is a proof-of-concept that will help validate the model of novel, experimental conferences and possibly redefine what impact means for EA events for years to come. This team is well placed to unorganize this event because we have previously successfully not organized 10^10 possible events. What to expect All beings welcomed, that includes infants, face mice, gut microbiome, etc. Expect to have the most impactful time Make more impact than everyone on earth could ever do combined Network with the best minds in ultra-near-termist research Never meet your connections again after the event Certificates of £20 worth of impact just for £10! No success metrics No theory of change No food, no wine, no suffering Check out our official event poster! Pixelated lightbulb that looks like Mars as a logo for an unconference (DALL-E) Get involved Take a look at the conference agenda and add sessions to your calendar Comment on this post with content suggestions and anti-suggestions Sign up for an enlightening talk Unvolunteer for the event UnUnvolunteer for the event (your goal will be to actively unorganize stuff) UnUnUnvolunteer for the event (your goal will be to actively ununorganize stuff) .. And so on. We think at least 5 levels of volunteers will be necessary for this event to be a complete success, to minimize risk of not falling into the well-known meta trap. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 20, 2023
EA - Tensions between different approaches to doing good by James Özden
18:03
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tensions between different approaches to doing good, published by James Özden on March 19, 2023 on The Effective Altruism Forum. Link-posted from my blog here. TLDR: I get the impression that EAs don't always understand where certain critics are coming from e.g. what do people actually mean when they say EAs aren't pursuing "system change" enough? or that we're focusing on the wrong things? I feel like I hear these critiques a lot, so I attempted to steelman them and put them into more EA-friendly jargon. It's almost certainly not a perfect representation of these views, nor exhaustive, but might be interesting anyway. Enjoy! I feel lucky that I have fairly diverse groups of friends. On one hand, some of my closest friends are people I know through grassroots climate and animal rights activism, from my days in Extinction Rebellion and Animal Rebellion. On the other hand, I also spend a lot of time with people who have a very different approach to improving the world, such as friends I met through the Charity Entrepreneurship Incubation Program or via effective altruism. Both of these somewhat vague and undefined groups, “radical” grassroots activists and empirics-focused charity folks, often critique the other group with various concerns about their methods of doing good. Almost always, I end up defending the group under attack, saying they have some reasonable points and we would do better if we could integrate the best parts of both worldviews. To highlight how these conversations usually go (and clarify my own thinking), I thought I would write up the common points into a dialogue between two versions of myself. One version, labelled Quantify Everything James (or QEJ), discusses the importance of supporting highly evidence-based and quantitatively-backed ways of doing good. This is broadly similar to what most effective altruists advocate for. The other part of myself, presented under the label Complexity-inclined James (CIJ), discusses the limitations of this empirical approach, and how else we should consider doing the most good. With this character, I’m trying to capture the objections that my activist friends often have. As it might be apparent, I’m sympathetic to both of these different approaches and I think they both provide some valuable insights. In this piece, I focus more on describing the common critiques of effective altruist-esque ways of doing good, as this seems to be something that isn’t particularly well understood (in my opinion). Without further ado: Quantify Everything James (QEJ): We should do the most good by finding charities that are very cost-effective, with a strong evidence base, and support them financially! For example, organisations like The Humane League, Clean Air Task Force and Against Malaria Foundation all seem like they provide demonstrably significant benefits on reducing animal suffering, mitigating climate change and saving human lives. For example, external evaluators estimate the Against Malaria Foundation can save a human life for around $5000 and that organisations like The Humane League affect 41 years of chicken life per dollar spent on corporate welfare campaigns. It’s crucial we support highly evidence-based organisations such as these, as most well-intentioned charities probably don’t do that much good for their beneficiaries. 
Additionally, the best charities are likely to be 10-100x more effective than even the average charity! Using an example from this very relevant paper by Toby Ord: If you care about helping people with blindness, one option is to pay $40,000 for someone in the United States to have access to a guide dog (the costs of training the dog & the person). However, you could also pay for surgeries to treat trachoma, a bacterial infection that is the top cause of blindness worldwide. At around $20 per ...
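The rough arithmetic behind that guide-dog versus trachoma comparison is simple enough to write out; this illustrative sketch treats the truncated figure as roughly $20 per trachoma surgery, which is an assumption:

```python
# Back-of-the-envelope comparison from Toby Ord's example quoted above
# (illustrative only; the per-surgery cost is an assumption based on the
# truncated "$20 per ..." figure).

cost_guide_dog = 40_000       # USD, training the dog and the person (US)
cost_trachoma_surgery = 20    # USD, assumed cost per trachoma surgery

ratio = cost_guide_dog / cost_trachoma_surgery
print(f"one guide dog ≈ {ratio:,.0f} trachoma surgeries")  # ≈ 2,000
```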
Mar 20, 2023
EA - Scale of the welfare of various animal populations by Vasco Grilo
10:14
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scale of the welfare of various animal populations, published by Vasco Grilo on March 19, 2023 on The Effective Altruism Forum. Summary I Fermi-estimated the scale of the welfare of various animal populations from the relative intensity of their experiences, moral weight, and population size. Based on my results, I would be very surprised if the scale of the welfare of: Wild animals ended up being smaller than that of farmed animals. Farmed animals turned out to be smaller than that of humans. Introduction If it is worth doing, it is worth doing with made-up statistics? Methods I Fermi-estimated the scale of the welfare of various animal populations from the absolute value of the expected total hedonistic utility (ETHU). I computed this from the product between: Intensity of the mean experience as a fraction of that of the worst possible experience. Mean moral weight. Population size. The data and calculations are here. Intensity of experience I defined the intensity of the mean experience as a fraction of that of the worst possible experience based on the types of pain defined by the Welfare Footprint Project (WFP) here (search for “definitions”). I assumed: The following correspondence between the various types of pain (I encourage you to check this post from algekalipso, and this from Ren Springlea to get a sense of why I think the intensity can vary so much): Excruciating pain, which I consider the worst possible experience, is 1 k times as bad as disabling pain. Disabling pain is 100 times as bad as hurtful pain, which together with the above implies excruciating pain being 100 k times as bad as hurtful pain. Hurtful pain is 10 times as bad as annoying pain, which together with the above implies excruciating pain being 1 M times as bad as annoying pain. The intensity of the mean experience of: Farmed animal populations is as high as that of broiler chickens in reformed scenarios. I assessed this from the time broilers experience each type of pain according to these data from WFP (search for “pain-tracks”), and supposing: The rest of their time is neutral. Their lifespan is 42 days, in agreement with section “Conventional and Reformed Scenarios” of Chapter 1 of Quantifying pain in broiler chickens by Cynthia Schuck-Paim and Wladimir Alonso. Humans and other non-farmed animal populations is as high as 2/3 of that of hurtful pain. 2/3 (= 16/24) such that 1 day (24 h) of such intensity is equivalent to 16 h spent in hurtful pain plus 8 h in neutral sleeping. Ideally, I would have used empirical data for the animal populations besides farmed chickens too. However, I do not think they are readily available, so I had to make some assumptions. In general, I believe the sign of the mean experience is: For farmed animal populations, negative, judging from the research of WFP on chickens. For humans, positive (see here). For other non-farmed animal populations, positive or negative (see this preprint from Heather Browning and Walter Weit). Moral weight I defined the mean moral weight from Rethink Priorities’ median estimates for mature individuals provided here by Bob Fischer. For the populations I studied with animals of different species, I used those of: For wild mammals, pigs. For farmed fish, salmon. For wild fish, salmon. For farmed insects, silkworms. For wild terrestrial arthropods, silkworms. 
For farmed crayfish, crabs and lobsters, mean between crayfish and crabs. For farmed shrimps and prawns, shrimps. For wild marine arthropods, silkworms. For nematodes, silkworms multiplied by 0.1. Population size I defined the population size from: For humans, these data from Our World in Data (OWID) (for 2021). For wild mammals, the mean of the lower and upper bounds provided in section 3.1.5.2 of Carlier 2020. For farmed chickens and pigs, these data from OWID (for 2014). F...
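For readers who want to see the structure of the estimate, here is a minimal sketch of the method described above (my reconstruction with placeholder inputs, not the author's actual spreadsheet):

```python
# Sketch of the scale estimate described above: |ETHU| ≈ mean experience
# intensity (as a fraction of the worst possible experience) × mean moral
# weight × population size. Pain-intensity ratios follow the summary; the
# example inputs below are placeholders, not the post's actual numbers.

EXCRUCIATING = 1.0                  # worst possible experience
DISABLING = EXCRUCIATING / 1_000    # excruciating = 1k × disabling
HURTFUL = DISABLING / 100           # disabling = 100 × hurtful
ANNOYING = HURTFUL / 10             # hurtful = 10 × annoying

def welfare_scale(mean_intensity: float, moral_weight: float, population: float) -> float:
    """Absolute expected total hedonistic utility, in arbitrary units."""
    return mean_intensity * moral_weight * population

# Humans: mean intensity 2/3 of hurtful pain (16 h hurtful-equivalent per 24 h
# day), moral weight taken as 1 (assumption), population ~8 billion (placeholder).
humans = welfare_scale((2 / 3) * HURTFUL, 1.0, 8e9)
print(f"human welfare scale ≈ {humans:.2e} (arbitrary units)")
```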
Mar 19, 2023
EA - Potential employees have a unique lever to influence the behaviors of AI labs by oxalis
07:39
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Potential employees have a unique lever to influence the behaviors of AI labs, published by oxalis on March 18, 2023 on The Effective Altruism Forum. (Cross posted from my personal blog) People who have received and are considering an offer from an AI lab are in a uniquely good spot to influence the actions of that lab. People who care about AI safety and alignment often have things they wish labs would do. These could be requests about prioritizing alignment and safety (eg. having a sufficiently staffed alignment team, having a public and credible safety and alignment plan), good governance (eg. having a mission, board structure, and entity structure that allows safety and alignment to be prioritized), information security, or similar. This post by Holden goes through some lab asks, but take this as illustrative, not exhaustive! So you probably have, or could generate, some practices or structures you wish labs would have in the realm of safety and alignment. Once you have received an offer to work for a lab, that lab suddenly cares about what you think far more than when you are someone who is just writing forum posts or tweeting at them. This post will go through some ways to potentially influence the lab in a positive direction after you have received your offer. Does this work? This is anecdata but I have seen offer holders win concessions, and I have heard recruiters talk about how these sorts of behaviors influence the lab’s strategy. We also have reason to expect this works given that hiring good ML and AI researchers is competitive, and that businesses have changed aspects about themselves in the past partially to help with recruitment. Some efforts for gender or ethnic diversity or environmental sustainability are taken so that hiring from groups who care about these things doesn’t become too difficult. One example is that Google changed its sexual harassment rules and did not renew its contract with the Pentagon over mass employee pushback. Of course some of this stuff they may have intrinsically cared about or done to appease the customers or the public at large, but it seems employees have a more direct lever and have successfully used it. The Strategy There are steps you can take at different stages of your hiring process. The best time to do this is when you have received an offer, because then you know they are interested in you and so will care about your opinion. Follow up call(s) or email just after receiving offer In the follow up call after your offer you can express any concerns before you join. This is a good time to make requests. I recommend being polite, grateful for the offer, and framing these as “Well, look I’m excited about the role but I just have some uncertainties or aspects that if they were addressed would make this is a really easy decision for me” Some example asks: I want the safety/alignment team to be larger I want to see more public comms about alignment strategy I would like to see coordination with other labs on safety standards and slower scaling, as well as independent auditing of safety and security efforts I want an empowered, independent board Theory of change: They might actually grant requests! I have seen this happen. If they don’t, they will still hear that information and if enough people say it, they may grant it in the future. 
This also sets you up for the next alternative, described below. When you turn down an offer: If you end up turning down the offer, either to work at another AI lab or some other entity, you should tell them why you did. If you partially turned them down because of concerns about their strategy or that they didn’t fulfill one of your asks, tell them! The most direct way to do this is to email your recruiter. E.g. write to the recruiter something like: “Thanks for this offer. I decided to turn it down...
Mar 18, 2023
EA - Researching Priorities in Local Contexts by LuisMota
14:58
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Researching Priorities in Local Contexts, published by LuisMota on March 18, 2023 on The Effective Altruism Forum. Summary This post explores two ways in which EAs can adapt priorities to local contexts they face: Trying to do the most good at a global level, given specific local resources. Trying to do the most good at a local level. I argue that the best framing for EAs to use is the first of these. I also explore when doing good at the local level might be the best way to do the most good from a global perspective, and suggest a way to explore this possibility in practice. Introduction Effective Altruism is a global movement that aims to use resources as effectively as possible with the purpose of doing good. Members of this global community face different realities and challenges, which means that there is no one-size-fits-all path to making the world a better place. This requires local groups to adapt EA research and advice to their specific contexts. Currently, there is limited guidance on how to do this, and many approaches have been adopted. Research done with this purpose is known as local priorities research, and includes projects like local charity evaluation and local career advice. However, the exact goal of such an adaptation process has often been unclear, in a way that can come at the cost of doing the most good from a global perspective. This post seeks to improve the local group prioritization framework. I break down the current usage of local priorities research into two different approaches: one seeks to do the most good impartially in light of the local context, and the other aims to do the most good for the local region. I make the case that EA groups should focus on the first approach, and discuss various ways in which this could influence local group prioritization research. Existing concepts in priorities research To begin, it's useful to start this discussion with the definition of global priorities research (GPR). The definition I'll use throughout this post is the following, adapted from the definition of the term used by the Global Priorities Institute: Global Priorities Research is research that informs use of resources, seeking to do as much good as possible. “Resources” here includes things like talent, money, and social connections. The agents who have these resources can also vary; ranging from individuals trying to decide what to do with their careers, organizations defining which projects to work on, or community builders trying to figure out what the best directions for their group are. On the other hand, local priorities research (LPR) is the term frequently used to refer to research aimed at adapting priorities to local situations. The essential idea behind this concept is that, as one post puts it, it is “quite similar [to GPR], except that it’s narrowed down to a certain country”. That post defines it as follows. While GPR is about figuring out what are the most important global problems to work on, LPR is about figuring out what are the most important problems in a local context that can best maximise impact both locally and globally. 
This term is used to describe many research activities, including: local cause area prioritization, charity evaluation, high-impact local career pathway research, and giving and philanthropy landscape research. Some examples of projects within local priorities research include EA Singapore's cause prioritization report, which identifies AI safety and alternative proteins as Singapore's comparative advantages; the Brazilian charity Doebem, which aims to identify the best health and development charities in Brazil; and EA Philippines's cause prioritization report, which identifies 11 potential focus areas for work in the country, ranging from poverty alleviation in the Philippines to building the EA movem...
Mar 18, 2023
EA - Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space by david reinstein
04:12
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Unjournal: Evaluations of "Artificial Intelligence and Economic Growth", and new hosting space, published by david reinstein on March 17, 2023 on The Effective Altruism Forum. New set of evaluations The Unjournal evaluations of "Artificial Intelligence and Economic Growth", by prominent economists Philippe Aghion, Benjamin F. Jones, and Charles I. Jones, are up. You can read these on our new PubPub community space, along with my discussion of the process, the insights, the 'evaluation metrics', and the authors' response. Thanks to the authors for their participation (reward early-adopters who stick their necks out!), and thanks to Philip Trammel and Seth Benzell for their detailed and insightful evaluations. I discussed some of the reasons we 'took on' this paper in an earlier post. The discussion of AI's impact on the economy, what it might look like (in magnitude and in its composition), how to measure and model it, and what conditions lead to "growth explosions", seems especially relevant to recent events and discussion. "Self-correcting" science? I'm particularly happy about one outcome here. If you were a graduate student reading the paper, or were a professional delving into the economics literature, and had seen the last step of the equations pasted below (from the originally published paper/chapter), what would you think? The final step in fact contains an error; the claimed implication does not follow. From my discussion: ... we rarely see referees and colleagues actually reading and checking the math and proofs in their peers’ papers. Here Phil Trammel did so and spotted an error in a proof of one of the central results of the paper (the ‘singularity’ in Example 3). ... The authors have acknowledged this error ... confirmed the revised proof, and link a marked up version on their page. This is ‘self-correcting research’, and it’s great! Even though the same result was preserved, I believe this provides a valuable service. Readers of the paper who saw the incorrect proof (particularly students) might be deeply confused. They might think ‘Can I trust this paper’s other statements?’ ‘Am I deeply misunderstanding something here? Am I not suited for this work?’ Personally, this happened to me a lot in graduate school; at least some of the time it may have been because of errors and typos in the paper. I suspect many math-driven papers also contain flaws which are never spotted, and these sometimes may affect the substantive results (unlike in the present case). By the way, the marked up 'corrected' paper is here, and the corrected proof is here. (Caveat: while Philip and the authors have agreed on the revised corrected proof, it might benefit from independent verification.) New (additional) platform: PubPub We are trying out the PubPub platform. We are still maintaining our Sciety page, and we aim to import the content from one to the other, for greater visibility. Some immediate benefits of PubPub... It lets us assign 'digital object identifiers' (DOIs) for each evaluation, response, and summary. It puts these and the works referenced into the 'CrossRef' database. Jointly, this should (hopefully) enable indexing in Google Scholar and other academic search engines, and 'bibliometrics' (citation counts, etc.; a small CrossRef lookup sketch follows below). It seems to enable evaluations of work hosted anywhere that has a DOI (published, preprints, etc.) 
It's versatile and full-featured, enabling input from and output to a range of formats, as well as community input and discussion. It's funded by a non-profit and seems fairly mission-aligned. More coming soon, updates: The Unjournal has several more impactful papers evaluated and being evaluated, which we hope to post soon. For a sense of what's coming, see our 'Direct Evaluation track' focusing on NBER working papers. Some other updates: We are pursuing collaborations wit...
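To make concrete what registering evaluations with CrossRef enables, here is a minimal sketch (an assumption-laden illustration, not part of the original post) that looks up a work's metadata and citation count through CrossRef's public REST API. It assumes Python with the third-party requests library, and the DOI shown is a placeholder rather than an actual Unjournal record.

```python
import requests


def crossref_metadata(doi: str) -> dict:
    """Fetch basic metadata for a DOI from the public CrossRef REST API."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "title": msg.get("title", [""])[0],
        "type": msg.get("type"),
        # 'is-referenced-by-count' is CrossRef's citation-count field
        "citations": msg.get("is-referenced-by-count", 0),
    }


# Placeholder DOI for illustration only; replace with a real registered DOI.
print(crossref_metadata("10.1000/example-doi"))
```

Once an evaluation has a DOI registered in CrossRef, lookups of this kind are what downstream indexing and bibliometrics services build on.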
Mar 18, 2023
EA - Why SoGive is publishing an independent evaluation of StrongMinds by ishaan
10:35
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why SoGive is publishing an independent evaluation of StrongMinds, published by ishaan on March 17, 2023 on The Effective Altruism Forum. Executive summary We believe the EA community's confidence in the existing research on mental health charities hasn't been high enough to use it to make significant funding decisions. Further research from another EA research agency, such as SoGive, may help add confidence and lead to better-informed funding decisions. In order to increase the amount of scrutiny on this topic, SoGive has started conducting research on mental health interventions, and we plan to publish a series of articles starting in the next week and extending out over the next few months. The series will cover literature reviews of academic and EA literature on mental health and moral weights. We will be doing in-depth reviews and quality assessments on work by the Happier Lives Institute pertaining to StrongMinds, the RCTs and academic sources from which StrongMinds draws its evidence, and StrongMinds' internally reported data. We will provide a view on how impactful we judge StrongMinds to be. What we will publish From March to July 2023, SoGive plans to publish a series of analyses pertaining to mental health. The content covered will include: Methodological notes on using existing academic literature, which quantifies depression interventions in terms of standardised mean differences, numbers needed to treat, remission rates and relapse rates, as well as the "standard deviation - years of depression averted" framework used by Happier Lives Institute. Broad, shallow reviews of academic and EA literature pertaining to the question of what the effect of psychotherapy is, as well as how this intersects with various factors such as number of sessions, demographics, and types of therapy. We will focus specifically on how the effect decays after therapy, and publish a separate report on this. Deep, narrow reviews of the RCTs and meta-analyses that pertain most closely to the StrongMinds context. Moral weights frameworks, explained in a manner which will allow a user to map dry numbers such as effect sizes to more visceral subjective feelings, so as to better apply their moral intuition to funding decisions. Cost-effectiveness analyses which combine academic data and direct evidence from StrongMinds to arrive at our best estimate of what a donation to StrongMinds does. We hope these will empower others to check our work, do their own analyses of the topic, and take the work further. How will this enable higher impact donations? In the EA Survey conducted by Rethink Priorities, 60% of EA community members surveyed were in favour of giving "significant resources" to mental health interventions, with 24% of those believing it should be a "top priority" or "near top priority" and 4% selecting it as their "top cause". Although other cause areas performed more favourably in the survey, this still appears to be a moderately high level of interest in mental health. Some EA energy has now gone into this area - for example, Charity Entrepreneurship incubated Canopie and the Mental Health Funder's Circle, and played a role in incubating the Happier Lives Institute. They additionally launched Kaya Guides and Vina Plena last year. We also had a talk from Friendship Bench at last year's EA Global. Our analysis will focus on StrongMinds. 
We chose StrongMinds because we know the organisation well. SoGive’s founder first had a conversation with StrongMinds in 2015 (thinking about his own donations), having seen a press article about them and considered them a potentially high-impact charity. Since then, several other EA orgs have been engaging with StrongMinds. Evaluations of StrongMinds specifically have now been published by both Founders Pledge and Happier Lives Institute, and Str...
Mar 18, 2023
EA - The illusion of consensus about EA celebrities by Ben Millwood
03:48
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The illusion of consensus about EA celebrities, published by Ben Millwood on March 17, 2023 on The Effective Altruism Forum. Epistemic status: speaking for myself and hoping it generalises. I don't like everyone that I'm supposed to like: I've long thought that [redacted] was focused on all the wrong framings of the issues they discuss, [redacted] is on the wrong side of their disagreement with [redacted] and often seems to have kind of sloppy thinking about things like this, [redacted] says many sensible things but has a writing style that I find intensely irritating and struggle to get through; [redacted] is similar, but not as sensible, [redacted] is working on an important problem, but doing a kind of mediocre job of it, which might be crowding out better efforts. Why did I redact all those names? Well, my criticisms are often some mixture of: half-baked (I don't have time to evaluate everyone fairly and deeply, and don't need to in order to make choices about what to focus on); based on justifications that are not very legible or easy to communicate; not always totally central to their point or fatal to their work; kind of upsetting or discouraging to hear; often not that actionable. I want to highlight that criticisms like this will usually not surface, and while in individual instances this is sensible, in aggregate it may contribute to a misleading view of how we view our celebrities and leaders. We end up seeming more deferential and hero-worshipping than we really are. This is bad for two reasons: it harms our credibility in the eyes of outsiders (or insiders, even) who have negative views of those people, and it projects the wrong expectation to newcomers who trust us and want to learn or adopt our attitudes. What to do about it? I think "just criticise people more" in isolation is not a good solution. People, even respected people in positions of leadership, often seem to already find posting on the Forum a stressful experience, and I think tipping that balance in the more brutal direction seems likely to cost more than it gains. I think you could imagine major cultural changes around how people give and receive feedback that could make this better, mitigate catastrophising about negative feedback, and ensure people feel safe to risk making mistakes or exposing their oversights. But those seem to me like heavy, ambitious pieces of cultural engineering that require a lot of buy-in to get going, and even if successful may incur ongoing frictional costs. Here are smaller, simpler things that could help: Write a forum post about it (this one's taken, sorry). Make disagreements more visible and more legible, especially among leaders or experts. I really enjoyed the debate between Will MacAskill and Toby Ord in the comments of Are we living at the most influential time in history? – you can't come away from that discussion thinking "oh, whatever the smart, respected people in EA think must be right", because either way at least one of them will disagree with you! There's a lot of disagreement on the Forum all the time, of course, but I have a (somewhat unfair) vibe of this as: the famous people deposit their work into the forum and leave for higher pursuits, and then we in the peanut gallery argue over it. 
I'd love it if there were (say) a document out there that Redwood Research and Anthropic both endorsed, that described how their agendas differ and what underlying disagreements lead to those differences. Make sure people incoming to the community, or at the periphery of the community, are inoculated against this bias, if you spot it. Point out that people usually have a mix of good and bad ideas. Have some go-to examples of respected people's blind spots or mistakes, at least as they appear to you. (Even if you never end up explaining them to anyone, it's probably goo...
Mar 17, 2023
EA - Getting Better at Writing: Why and How by bgarfinkel
05:05
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting Better at Writing: Why and How, published by bgarfinkel on March 17, 2023 on The Effective Altruism Forum. This post is adapted from a memo I wrote a while back, for people at GovAI. It may, someday, turn out to be the first post in a series on skill-building. Summary If you're a researcher,[1] then you should probably try to become very good at writing. Writing well helps you spread your ideas, think clearly, and be taken seriously. Employers also care a lot about writing skills. Improving your writing is doable: it’s mostly a matter of learning guidelines and practicing. Since hardly anyone consciously works on their writing skills, you can become much better than average just by setting aside time for study and deliberate practice. Why writing skills matter Here are three reasons why writing skills matter: The main point of writing is to get your ideas into other people’s heads. Far more people will internalize your ideas if you write them up well. Good writing signals a piece is worth reading, reduces the effort needed to process it, guards against misunderstandings, and helps key ideas stick. Writing and thinking are intertwined. If you work to improve your writing on some topic, then your thinking on it will normally improve too. Writing concisely forces you to identify your most important points. Writing clearly forces you to be clear about what you believe. And structuring your piece in a logical way forces you to understand how your ideas relate to each other. People will judge you on your writing. If you want people to take you seriously, then you should try to write well. Good writing is a signal of clear thinking, conscientiousness, and genuine interest in producing useful work. For all these reasons, most organizations give a lot of weight to writing skills when they hire researchers. If you ask DC think tank staffers what they look for in candidates, they apparently mention “writing skills” more than anything else. "Writing skills" was also the first item mentioned when I recently asked the same question to someone on a lab policy team. GovAI certainly pays attention to writing when we hire. Even if you just want to impress potential employers, then, you should care a great deal about your own writing. How to get better at writing If you want to get better at writing, here are four things you can do: Read up on guidelines: There are a lot of pieces on how good writing works. The footnote at the end of this sentence lists some short essays.[2] The best book I know is Style: Lessons in Clarity and Grace. It’s an easy-to-read textbook that offers recipe-like guidance. I would recommend this book over anything else.[3] Engage with model pieces: You can pick out a handful of well-written pieces and read them with a critical mindset. (See the next footnote for some suggestions.[4]) You might ask: What exactly is good about the pieces? How do they work? Where do they obey or violate the guidelines recommended by others? Get feedback: Flaws in your writing—especially flaws that limit comprehension—will normally be more evident to people who are coming in cold. Also, sometimes other people will simply be better than you at diagnosing and correcting certain flaws. Comments and suggest-edits can draw your attention to recurring issues in your writing and offer models for how you can correct them. 
Do focused rewriting: The way you’ll ultimately get better is by doing focused rewriting. Pick some imperfect pieces—ideally, pieces you’re actually working on—and simply try to make them as good as possible.[5] You can consciously draw on writing guidelines, models, and previous feedback to help you diagnose and correct their flaws. The more time you spend rewriting, the better the pieces will become. Crucially, you’ll also start to internalize the techniques you...
Mar 17, 2023
EA - Announcing the 2023 CLR Summer Research Fellowship by stefan.torges
04:37
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the 2023 CLR Summer Research Fellowship, published by stefan.torges on March 17, 2023 on The Effective Altruism Forum. We, the Center on Long-Term Risk, are looking for Summer Research Fellows to help us explore strategies for reducing suffering in the long-term future (s-risk) and work on technical AI safety ideas related to that. For eight weeks, fellows will be part of our team while working on their own research project. During this time, they will be in regular contact with our researchers and other fellows. Each fellow will have one of our researchers as their guide and mentor. Deadline to apply: April 2, 2023. You can find more details on how to apply on our website. Purpose of the fellowship The purpose of the fellowship varies from fellow to fellow. In the past, we have often had the following types of people take part in the fellowship: People very early in their careers, e.g. in their undergraduate degree or even high school, who have a strong focus on s-risk and would like to learn more about research and test their fit. People seriously considering changing their career to s-risk research, who want to test their fit or seek employment at CLR. People with a strong focus on s-risk who aim for a research or research-adjacent career outside of CLR and who would like to gain a strong understanding of s-risk macrostrategy beforehand. People with a fair amount of research experience, e.g. from a partly or fully completed PhD, whose research interests significantly overlap with CLR’s and who want to work on their research project in collaboration with CLR researchers for a few months. This includes people who do not strongly prioritize s-risk themselves. There might be many other good reasons for completing the fellowship. We encourage you to apply if you think you would benefit from the program, even if your reason is not listed above. What we look for in candidates We don’t require specific qualifications or experience for this role, but the following abilities and qualities are what we’re looking for in candidates. We encourage you to apply if you think you may be a good fit, even if you are unsure whether you meet some of the criteria. Curiosity and a drive to work on challenging and important problems; Ability to answer complex research questions related to the long-term future; Willingness to work in poorly-explored areas and to learn about new domains as needed; Independent thinking; A cautious approach to potential information hazards and other sensitive topics; Alignment with our mission or strong interest in one of our priority areas. Priority areas You can find an overview of our current priority areas here. However, if we believe that you can somehow advance high-quality research relevant to s-risks, we are interested in creating a position for you. If you see a way to contribute to our research agenda or have other ideas for reducing s-risks, please apply. We commonly tailor our positions to the strengths and interests of the applicants. Further details We encourage you to apply even if any of the below does not work for you. We are happy to be flexible for exceptional candidates, including when it comes to program length and compensation. Program dates: The default start date is July 3, 2023. Exceptions may be possible. 
Program length & work quota: The program is intended to last for eight weeks in a full-time capacity. Exceptions, including part-time work, may be possible. Location: We prefer summer research fellows to work from our London offices, but will also consider applications from people who are unable to relocate. Compensation: Unfortunately, we face a lot of funding uncertainty at the moment. So we don’t know yet how much we will be able to pay participating fellows. Compensation will range from £1,800 to £4,000 per month, de...
Mar 17, 2023
EA - Legal Assistance for Victims of AI by bob
01:50
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Legal Assistance for Victims of AI, published by bob on March 17, 2023 on The Effective Altruism Forum. In the face of increasing competition, it seems unlikely that AI companies will ever take their foot off the gas. One avenue to slow AI development down is to make investment in AI less attractive. This could be done by increasing the legal risk associated with incorporating AI in products. My understanding of the law is limited, but the EU seems particularly friendly to this approach. The European Commission recently proposed the AI Liability Directive, which aims to make it easier to sue over AI products. In the US, companies are at the very least directly responsible for what their chatbots say, and it seems like it's only a matter of time until a chatbot genuinely harms a user, either by gaslighting or by abusive behavior. A charity could provide legal assistance to victims of AI, similar to how EFF provides legal assistance for cases related to Internet freedom. Besides helping the affected person, this would hopefully: Signal to organizations that giving users access to AI is risky business Scare away new players in the market Scare away investors Give the AI company in question a bad rep, and sway public opinion against AI companies in general Limit the ventures large organizations would be willing to jump into Spark policy discussions (e.g. about limiting minors' access to chatbots, which would also limit profits) All of these things would make AI a worse investment, AI companies a less attractive place to work, etc. I'm not sure it'll make a big difference, but I don't think it's less likely to move the needle than academic work on AI safety. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 17, 2023
EA - Can we trust wellbeing surveys? A pilot study of comparability, linearity, and neutrality by Conrad S
20:47
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can we trust wellbeing surveys? A pilot study of comparability, linearity, and neutrality, published by Conrad S on March 17, 2023 on The Effective Altruism Forum. Note: This post only contains Sections 1 and 2 of the report. For the full detail of our survey and pilot results, please see the full report on our website. Summary Subjective wellbeing (SWB) data, such as answers to life satisfaction questions, are important for decision-making by philanthropists and governments. Such data are currently used with two important assumptions: Reports are comparable between persons (e.g., my 6/10 means the same as your 6/10) Reports are linear in the underlying feelings (e.g., going from 4/10 to 5/10 represents the same size change as going from 8/10 to 9/10). Fortunately, these two assumptions are sufficient for analyses that only involve the quality of people’s lives. However, if we want to perform analyses that involve trade-offs between improving quality and quantity of life, we also need knowledge of the neutral point, the point on a wellbeing scale that is equivalent to non-existence. Unfortunately, evidence on all three questions is critically scarce. We propose to collect additional surveys to fill this gap. Our aim with this report is two-fold. First, we give an outline of the questions we plan to field and the underlying reasoning that led to them. Second, we present results from an initial pilot study (n = 128): Unfortunately, this small sample size does not allow us to provide clear estimates of the comparability of wellbeing reports. However, across several question modalities, we do find tentative evidence in favour of approximate linearity. With respect to neutrality, we assess at what point on a 0-10 scale respondents say that they are 'neither satisfied nor dissatisfied' (mean response is 5.3/10). We also probe at what point on a life satisfaction scale respondents report to be indifferent between being alive and being dead (mean response is 1.3/10). Implications and limitations of these findings concerning neutrality are discussed in Section 6.2. In general, the findings from our pilot study should only be seen as being indicative of the general feasibility of this project. They do not provide definitive answers. In the hopes of fielding an improved version of our survey with a much larger sample and a pre-registered analysis plan, we welcome feedback and suggestions on our current survey design. Here are some key questions that we hope to receive feedback on: Are there missing questions that could be included in this survey (or an additional survey) that would inform important topics in SWB research? Are there any questions or proposed analyses you find redundant? Do you see any critical flaws in the analyses we propose? Are there additional analyses we should be considering? Would these data and analyses actually reassure you about the comparability, linearity, and neutrality of subjective wellbeing data? If not, what sorts of data and analyses would reassure you? What are some good places for us to look for funding for this research? Of course, any other feedback that goes beyond these questions is welcome, too. Feedback can be sent to casparkaiser@gmail.com or to samuel@happierlivesinstitute.org. 
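To illustrate why the neutral point matters for trade-offs between quality and quantity of life, here is a minimal sketch in Python. It is a hypothetical calculation, not taken from the report; it simply values extra life-years as the gap between a person's life-satisfaction score and a candidate neutral point, using the two pilot means reported above (1.3 and 5.3) as the candidates.

```python
def value_of_extra_years(life_satisfaction: float, neutral_point: float, years: float) -> float:
    """Wellbeing-adjusted value of extra years: (satisfaction - neutral point) * years."""
    return (life_satisfaction - neutral_point) * years


# A hypothetical person at 4/10 life satisfaction gaining 10 extra years of life:
for neutral in (1.3, 5.3):  # the two candidate neutral points from the pilot
    print(neutral, value_of_extra_years(4.0, neutral, 10))
# With a neutral point of 1.3 the extra years add wellbeing (+27.0);
# with a neutral point of 5.3 they subtract it (-13.0).
```

The sign flip in this toy example is exactly why analyses that trade off quality against quantity of life cannot proceed on comparability and linearity alone.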
The report proceeds as follows: In Section 1, we describe the challenges for the use of self-reported subjective wellbeing data, focusing on the issues of comparability, linearity, and neutrality. We highlight the implications of these three assumptions for decision-making about effective interventions. In Section 2, we describe the general methodology of the survey. For the following sections, see the full report on our website. In Section 3, we discuss responses to the core life satisfaction question. In Sections 4, 5, and 6, we describe how we will assess co...
Mar 17, 2023
EA - Survey on intermediate goals in AI governance by MichaelA
02:50
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Survey on intermediate goals in AI governance, published by MichaelA on March 17, 2023 on The Effective Altruism Forum. It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see Muehlhauser, 2020, 2021). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe are knowledgeable about longtermist AI governance, receiving 107 responses. We asked about: respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes), how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance, what other intermediate goals they’d suggest, how high they believe the risk of existential catastrophe from AI is, and when they expect transformative AI (TAI) to be developed. We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could: Broaden the range of options people can easily consider Help people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc. Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc. If you'd like to see a summary of the survey results, please request access to this folder. We expect to approve all access requests, and will expect readers to abide by the policy articulated in "About sharing information from this report" (for the reasons explained there). Acknowledgments This report is a project of Rethink Priorities–a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can be found in the linked "Introduction & summary" document. If you are interested in RP’s work, please visit our research database and subscribe to our newsletter. Here’s the definition of “intermediate goal” that we stated in the survey itself: By an intermediate goal, we mean any goal for reducing extreme AI risk that’s more specific and directly actionable than a high-level goal like ‘reduce existential AI accident risk’ but is less specific and directly actionable than a particular intervention. In another context (global health and development), examples of potential intermediate goals could include ‘develop better/cheaper malaria vaccines’ and ‘improve literacy rates in Sub-Saharan Africa’. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 17, 2023
EA - We are fighting a shared battle (a call for a different approach to AI Strategy) by Gideon Futerman
24:30
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We are fighting a shared battle (a call for a different approach to AI Strategy), published by Gideon Futerman on March 16, 2023 on The Effective Altruism Forum. Disclaimer 1: The following essay doesn’t purport to offer many original ideas, and I am certainly a non-expert on AI Governance, so please don’t take my word for these things too seriously. I have linked sources throughout the text, and have some other similar texts later on, but this should merely be treated as another data point in people saying very similar things; far smarter people than I have written on this. Disclaimer 2: This post is quite long, so I recommend reading the sections "A choice not an inevitability" and "It's all about power" for the core of my argument. My argument is essentially as follows: under most plausible understandings of how harms arise from very advanced AI systems, be these AGI or narrow AI or systems somewhere in between, the actors responsible, and the actions that must be taken to reduce or avert the harm, are broadly similar whether you care about existential harms from AI development, non-existential harms, or both. I will then further go on to argue that this calls for a broad, coalitional politics of people who vastly disagree on the specifics of AI systems' harms, because we essentially have the same goals. It's important to note that calls like these have happened before. Whilst I will be making a slightly different argument to theirs, Prunkl & Whittlestone, Baum, Stix & Maas and Cave & Ó hÉigeartaigh have all made arguments attempting to bridge near-term and long-term concerns. In general, these proposals (with the exception of Baum) have made calls for narrower cooperation between ‘AI Ethics’ and ‘AI Safety’ than I will make, and are all considerably less focused on the common source of harm than I will be. None go as far as I do in essentially suggesting all key forms of harm that we worry about are instances of the same phenomenon of power concentration in and through AI. These pieces are in many ways more research focused, whilst mine is considerably more politically focused. Nonetheless, there is considerable overlap in spirit, both in identifying that the near-term/ethics and long-term/safety distinction is overemphasised and not as analytically useful as is made out, and in the intention of all these pieces and mine to reconcile the two factions for mutual benefit. A choice not an inevitability At present, there is no AI inevitably coming to harm us. Those AIs that do will have been given their capabilities, and their power to cause harm, by developers. If the AI companies stopped developing their AIs now, and people chose to stop deploying them, then both existential and non-existential harms would stop. These harms are in our hands, and whilst the technologies clearly act as important intermediaries, ultimately it is a human choice, a social choice, and perhaps most importantly a political choice to carry on developing more and more powerful AI systems when such dangers are apparent (or merely plausible or possible). The attempted development of AGI is far from value neutral, far from inevitable and very much in the realm of legitimate political contestation. Thus far, we have simply accepted the right of powerful tech companies to decide our future for us; this is both unnecessary and dangerous. 
It's important to note that our current acceptance of the right of companies to legislate for our future is historically contingent. In the past, corporate power has been curbed, from colonial era companies, Progressive Era trust-busting, postwar Germany and more, and this could be used again. Whilst governments have often taken a leading role, civil society has also been significant in curbing corporate power and technology development throughout history. Acceptance of corporate dominance i...
Mar 17, 2023
EA - Donation offsets for ChatGPT Plus subscriptions by Jeffrey Ladish
04:20
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Donation offsets for ChatGPT Plus subscriptions, published by Jeffrey Ladish on March 16, 2023 on The Effective Altruism Forum. I've decided to donate $240 each to GovAI and MIRI to offset the $480 I plan to spend on ChatGPT Plus over the next two years ($20/month). I don't have a super strong view on ethical offsets, like donating to anti-factory farming groups to try to offset harm from eating meat. That being said, I currently think offsets are somewhat good for a few reasons: They seem much better than simply contributing to some harm or commons problem and doing nothing, which is often what people would do otherwise. It seems useful to recognize, to notice, when you're contributing to some harm or commons problem. I think a lot of harm comes from people failing to notice or keep track of ways their actions negatively impact others, and the ways that common incentives push them to do worse things. A common Effective Altruism argument against offsets is that they don't make sense from a consequentialist perspective. If you have a budget for doing good, then spend your whole budget on doing as much as possible. If you want to mitigate harms you are contributing to, you can offset by increasing your "doing good" budget, but it doesn't make sense to specialize your mitigations to the particular area where you are contributing to harm rather than the area you think will be the most cost effective in general. I think this is a decently good point, but it doesn't move me enough to abandon the idea of offsets entirely. A possible counter-argument is that offsets can be a powerful form of coordination to help solve commons problems. By publicly making a commitment to offset a particular harm, you're establishing a basis for coordination - other people can see you really care about the issue because you made a costly signal. This is similar to the reasons to be vegan or vegetarian - it's probably not the most effective from a naive consequentialist perspective, but it might be effective as a point of coordination via costly signaling. After having used ChatGPT (3.5) and Claude for a few months, I've come to believe that these tools are super useful for research and many other tasks, as well as useful for understanding AI systems themselves. I've also started to use Bing Chat and ChatGPT (4), and found them to be even more impressive as research and learning tools. I think it would be quite bad for the world if conscientious people concerned about AI harms refrained from using these tools, because I think it would disadvantage them in significant ways, including in crucial areas like AI alignment and policy. Unfortunately both can be true: 1) Language models are really useful and can help people learn, write, and research more effectively; 2) The rapid development of huge models is extremely dangerous and a huge contributor to AI existential risk. I think OpenAI, and to varying extent other scaling labs, are engaged in reckless behavior scaling up and deploying these systems before we understand how they work enough to be confident in our safety and alignment approaches. And also, I do not recommend people in the "concerned about AI x-risk" reference class refrain from paying for these tools, even if they do not decide to offset these harms. 
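A minimal sketch of the offset arithmetic described at the top of this post, assuming the stated $20/month subscription over 24 months, split evenly between the two recipient organisations:

```python
monthly_cost = 20                    # ChatGPT Plus, USD per month
months = 24                          # two years
total_spend = monthly_cost * months  # 480
offset_per_org = total_spend // 2    # 240 each to GovAI and MIRI
print(total_spend, offset_per_org)   # 480 240
```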
The $20/month to OpenAI for GPT-4 access right now is not a lot of money for a company spending hundreds of millions training new models. But it is something, and I want to recognize that I'm contributing to this rapid scaling and deployment in some way. Weighing all this together, I've decided offsets are the right call for me, and I suspect they might be right for many others, which is why I wanted to share my reasoning here. To be clear, I think concrete actions aimed at quality alignment research or AI policy aimed at buying more time are much mo...
Mar 17, 2023
EA - Some problems in operations at EA orgs: inputs from a dozen ops staff by Vaidehi Agarwalla
11:44
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some problems in operations at EA orgs: inputs from a dozen ops staff, published by Vaidehi Agarwalla on March 16, 2023 on The Effective Altruism Forum. This is a brief summary of an operations brainstorm that took place during April 2022. It represents the views of operations staff at 8-12 different EA-aligned organizations (approximately). We split up into groups and brainstormed problems, and then chose the top problems to brainstorm some tentative solutions. The aim of the brainstorming session was to highlight things that needed improvement, rather than to evaluate how good EA operations roles are relative to the other non-profit or for-profit roles. It’s possible that EA organizations are not uniquely bad or good - but that doesn’t mean that these issues are not worth addressing. The outside world (especially the non-profit space) is pretty inefficient, and I think it’s worth trying to improve things. Limitations of this data: Meta / community building (and longtermist, to a lesser degree) organizations were overrepresented in this sample, and the tallies are estimates. We didn’t systematically ask people to vote for each and every sub-item, but we think the overall priorities raised were reasonable. General Brainstorming Four major themes came up in the original brainstorming session: bad knowledge management, unrealistic expectations, bad delegation, and lack of respect for operations. The group then re-formed new groups to brainstorm solutions for each of these key pain points. Below, we go into a breakdown of each large issue into specific points raised during the general brainstorming session. Some points were raised multiple times and are indicated by the “(x n)” to indicate how many times the point was raised. Knowledge management Problems Organizations don’t have good systems for knowledge management. Ops staff don’t have enough time to coordinate and develop better systems. There is a general lack of structure, clarity and knowledge. Issues with processes and systems (x 4) No time on larger problems Lack of time to explore & coordinate Lack of time to make things easier ([you’re always] putting out fires) [Lack of] organizational structure Line management Capacity to cover absences [see Unrealistic Expectations] Covering / keeping the show running Responsibilities Working across time zones Training / upskilling Management training [see improper delegation] Lack of Clarity + Knowledge Legal Compliance HR Hiring Wellbeing (including burnout) Lack of skill transfer Lack of continuity / High turn-over of junior ops specialists Potential Solutions Lowering the bar - e.g. you don’t need a PhD to work in ops. Pick people with less option value. Ask people to be nice and share with others Best practice guides shared universally. [Make them] available to people before hiring so they can understand the job better before applying, so [there’s] less turn-over. Database? (Better ops Slack?) Making time to create Knowledge Management Systems - so less fire-fighting. People higher in the organization [should have] better oversight of processes/knowledge. Unrealistic expectations Problems Employers have unrealistic expectations for ops professionals. Ops people are expected to do too much in too little time and always be on call. 
Lack of capacity / too much to do (x2) [Lack of] capacity to cover absences [from above] Ops people [are expected to be] “always on call” Timelines for projects [are subject to the] planning fallacy, [and there are] last minute changes Ops team [are] responsible for all new ideas that people come [up] with - could others do it? Unrealistic expectations about coordination capacity skillset organizational memory Solutions Bandwidth (?) Increase capacity Have continuity [give ops staff the] ability to push back on too-big asks Recognition Create...
Mar 16, 2023
EA - [Linkpost] Why pescetarianism is bad for animal welfare - Vox, Future Perfect by Garrison
01:34
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Why pescetarianism is bad for animal welfare - Vox, Future Perfect, published by Garrison on March 16, 2023 on The Effective Altruism Forum. In my debut for Vox, I write about why switching to a pescetarian diet for animal welfare reasons is probably a mistake. I was motivated to reduce animal consumption by EA reasoning. I initially thought that the moral progression of diets was something like vegan > vegetarian > pescetarian > omnivore. But I now think the typical pescetarian diet is worse than an omnivorous one. (I was actually convinced in part by an EA NYC talk by Becca Franks on fish psychology.) Why? Fish usually eat other fish, and they're smaller on average than typical farmed animals. The evidence for their sentience is much stronger than I previously thought. I think my credence is now something like P(pig/cow sentience) = 99.99%, P(chicken/fish sentience) = 99% Given that there are ~30k fish species, generalizing about them is a bit tricky, but I think the evidence of fish sentience is about as strong as the evidence for chicken sentience, something I would guess more people accept. I also spend time discussing: environmental impacts of fishing consumer choice vs. systemic change shrimp welfare Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 16, 2023
EA - Announcing the ERA Cambridge Summer Research Fellowship by Nandini Shiralkar
04:51
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the ERA Cambridge Summer Research Fellowship, published by Nandini Shiralkar on March 16, 2023 on The Effective Altruism Forum. The Existential Risk Alliance (ERA) has opened applications for an in-person, paid, 8-week Summer Research Fellowship focused on existential risk mitigation, taking place from July 3rd to August 25th 2023 in Cambridge, UK, and aimed at all aspiring researchers, including undergraduates. To apply and find out more, please visit the ERA website. If you are interested in mentoring fellows on this programme, please submit your name, email and research area here, and we will get in touch with you in due course. If you know other people who would be a good fit, please encourage them to apply (people are more likely to apply if you recommend they do, even if they have already heard of the opportunity!) If you are a leader or organiser of relevant community spaces, we encourage you to post an announcement with a link to this post, or alternatively a printable poster is here. Applications will be reviewed as they are submitted, and we encourage early applications, as offers will be sent out as soon as suitable candidates are found. We will accept applications until April 5, 2023 (23:59 in US Eastern Daylight Time). The ERA Cambridge Fellowship (previously known as the CERI Fellowship) is a fantastic opportunity to: Build your portfolio by researching a topic relevant to understanding and mitigating existential risks to human civilisation. Receive guidance and develop your research skills, via weekly mentorship from a researcher in the field. Form lasting connections with other fellows who care about mitigating existential risks, while also engaging with local events including discussions and Q&As with experts. Why we are running this programme Our mission as an organisation is to reduce the probability of an existential catastrophe. We believe that one of the key ways to reduce existential risk lies in fostering a community of dedicated and knowledgeable x-risk researchers. Through our summer research fellowship programme, we aim to identify and support aspiring researchers in this field, providing them with the resources and the mentorship needed to succeed. What we provide A salary equivalent to £31,200 per year, which will be prorated to the duration of the summer programme. Mentorship from a researcher working in a related field. Complimentary accommodation, meal provisions during working hours, and travel expense coverage Dedicated desk space at our office in central Cambridge. Opportunity to work either on a group research project with other fellows or individually. Networking and learning opportunities through various events, including trips to Oxford and London. What we are looking for We are excited to support a wide range of research, from the purely technical to the philosophical, as long as there is direct relevance to mitigating existential risk. This could also include social science or policy projects focusing on implementing existential risk mitigation strategies. Incredibly successful projects would slightly reduce the likelihood that human civilisation will permanently collapse, that humans will go extinct, or that the future potential of humanity will be permanently reduced. 
A secondary goal of this project is for fellows to learn more about working on existential risk mitigation, develop relevant skills, and test their fit for further research or work in this field. Who we are looking for Anyone can apply to the fellowship, though we expect it to be most useful to students (from undergraduates to postgraduates) and early-career individuals looking to test their fit for existential risk research. We particularly encourage undergraduates to apply, to develop their research experience. We are looking to support proactive i...
Mar 16, 2023
EA - Offer an option to Muslim donors; grow effective giving by GiveDirectly
07:06
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Offer an option to Muslim donors; grow effective giving, published by GiveDirectly on March 16, 2023 on The Effective Altruism Forum. Summary In order to offer Muslim donors a way to give their annual religious tithing (zakat) to an EA-aligned intervention, GiveDirectly launched a zakat-compliant fund, delivered as cash to Yemeni families displaced by the civil war. Muslims give ~$600B/year in Zakat to the global poor, though much of this is given informally or to less-than-effective NGOs. Through this unconditional cash transfer option, we’re offering Muslims the opportunity to redirect a portion of their giving to a measurably high-impact intervention and introduce more Muslims to EA’s theory of effective giving. We invite readers to share thoughts in the comments and to share the campaign far and wide. Muslims are the fastest-growing religious group and give annually As Ahmed Ghoor observed, Muslims make up about 24% of the world population (1.8B people) and Islam is the fastest growing religion. Despite having a robust tradition of charitable giving, little has been done proactively to engage the Muslim community on the ideas of effective altruism. An important step to inclusion is offering this pathway for effectively donating zakat. Zakat is a sacred pillar of Islam, a large portion of which is given to the needy For non-Muslim readers: one of the five pillars of Islam, zakat is mandatory giving; Muslims eligible to pay it donate at least 2.5% of their accumulated wealth annually for the benefit of the poor, destitute, and others – classified as mustahik. Some key points: A major cited aim of Zakat is to provide relief from and ultimately eradicate poverty. It is generally held that zakat can only be given to other Muslims. A large portion of zakat is given informally person-to-person or through mosques and Islamic charities. Zakat is a sacred form of charity; it’s most often given during the holy month of Ramadan. Direct cash transfers are a neglected zakat option Zakat giving is estimated at $1.8B in the U.S. alone with $450M going to international NGOs, who mostly use their funds for in-kind support like food, tents, and clothing. Dr. Shahrul Hussain, an Islamic scholar, argues that cash transfers “should be considered a primary method of zakat distribution,” as, according to the Islamic principle of tamlīk (ownership), the recipients of the zakat have total ownership over the money, and it is up to them (not an intermediary third-party organization or charity) how it is spent. He also notes “the immense benefits of unconditional cash transfer in comparison to in-kind transfer." This is a simple, transparent means of transferring wealth that empowers the recipients. However, other than informal person-to-person giving, there are limited options to give zakat as 100% unconditional cash. GiveDirectly now allows zakat to be given as cash to Muslims in extreme poverty As an opportunity for Muslims to donate zakat directly as cash, GiveDirectly created a zakat-compliant fund to give cash through our program in Yemen. While GiveDirectly is a secular organization, our Yemen program and Zakat policy have been reviewed and certified by Amanah Advisors. In order to achieve this, we’re assured that 100% of donations will be delivered as cash, using non-zakat funds to cover the associated delivery costs. 
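As a rough illustration of the 2.5% rule described above, here is a minimal sketch in Python. The nisab threshold shown is a hypothetical placeholder (in practice it is derived from current gold or silver prices), and real eligibility rules are more detailed than this.

```python
def zakat_due(zakatable_wealth: float, nisab: float) -> float:
    """Zakat owed: 2.5% of zakatable wealth held for a year, if wealth meets the nisab threshold."""
    return 0.025 * zakatable_wealth if zakatable_wealth >= nisab else 0.0


# Hypothetical example: $40,000 of zakatable wealth against a placeholder nisab of $5,000
print(zakat_due(40_000, 5_000))  # 1000.0
```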
Donations through our page are tax-deductible in the U.S., and our partners at Giving What We Can created a page allowing donors to give 100% of their gift to GiveDirectly’s zakat-compliant fund, tax-deductible in the Netherlands and the U.K. Taken together, this provides a tax-deductible option for 8.6M Muslims across three countries. As a secular NGO, GiveDirectly may struggle to gain traction with Muslim donors. GiveDirectly is a credible option for zakat donors: we’ve...
Mar 16, 2023
EA - Write a Book? by Jeff Kaufman
00:23
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Write a Book?, published by Jeff Kaufman on March 16, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 16, 2023
EA - AI Safety - 7 months of discussion in 17 minutes by Zoe Williams
30:00
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety - 7 months of discussion in 17 minutes, published by Zoe Williams on March 15, 2023 on The Effective Altruism Forum. In August 2022, I started making summaries of the top EA and LW forum posts each week. This post collates together the key trends I’ve seen in AI Safety discussions since then. Note a lot of good work is happening outside what's posted on these forums too! This post doesn't try to cover that work. If you’d like to keep up on a more regular basis, consider subscribing to the Weekly EA & LW Forum Summaries. And if you’re interested in similar overviews for other fields, check out this post covering 6 months of animal welfare discussion in 6 minutes. Disclaimer: this is a blog post and not a research report - meaning it was produced quickly and is not to our (Rethink Priorities') typical standards of substantiveness and careful checking for accuracy. Please let me know if anything looks wrong or if I've missed key pieces! Table of Contents (It's a long post! Feel free to pick and choose sections to read, they 're all written to make sense individually) Key Takeaways Resource Collations AI Capabilities Progress What AI still fails at Public attention moves toward safety AI Governance AI Safety Standards Slow down (dangerous) AI Policy US / China Export Restrictions Paths to impact Forecasting Quantitative historical forecasting Narrative forecasting Technical AI Safety Overall Trends Interpretability Reinforcement Learning from Human Feedback (RLHF) AI assistance for alignment Bounded AIs Theoretical Understanding Outreach & Community-Building Academics and researchers University groups Career Paths General guidance Should anyone work in capabilities? Arguments for and against high x-risk Against high x-risk from AI Counters to the above arguments Appendix - All Post Summaries Key Takeaways There are multiple living websites that provide good entry points into understanding AI Safety ideas, communities, key players, research agendas, and opportunities to train or enter the field. (see more) Large language models like ChatGPT have drawn significant attention to AI and kick-started race dynamics. There seems to be slowly growing public support for regulation. (see more) Holden Karnofsky recently took a leave of absence from Open Philanthropy to work on AI Safety Standards, which have also been called out as important by leading AI lab OpenAI. (see more) In October 2022, the US announced extensive restrictions on the export of AI-related products (eg. chips) to China. (see more) There has been progress on AI forecasting (quantitative and narrative) with the aim of allowing us to understand likely scenarios and prioritize between governance interventions. (see more) Interpretability research has seen substantial progress, including identifying the meaning of some neurons, eliciting what a model has truly learned / knows (for limited / specific cases), and circumventing features of models like superposition that can make this more difficult. (see more) There has been discussion on new potential methods for technical AI safety, including building AI tooling to assist alignment researchers without requiring agency, and building AIs which emulate human thought patterns. 
(see more) Outreach experimentation has found that AI researchers prefer arguments that are technical and written by ML researchers, and that greater engagement is seen in university groups with a technical over altruistic or philosophical focus. (see more) Resource Collations The AI Safety field is growing (80K estimates there are now ~400 FTE working on AI Safety). To improve efficiency, many people have put together collations of resources to help people quickly understand the relevant players and their approaches - as well as materials that make it easier to enter the field or upskill...
Mar 16, 2023
EA - Reminding myself just how awful pain can get (plus, an experiment on myself) by Ren Springlea
25:16
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Reminding myself just how awful pain can get (plus, an experiment on myself), published by Ren Springlea on March 15, 2023 on The Effective Altruism Forum. Content warning: This post contains references to extreme pain and self-harm, as well as passing references to suicide, needles, and specific forms of suffering (but not detailed descriptions). Please do not repeat any of the experiments I've detailed in this post. Please be kind to yourself, and remember that the best motivation is sustainable motivation. Summary Out of curiosity, I exposed myself to safe, moderate-level pain to see how it changed my views on three particular topics. This article is mostly a self-reflection on this (non-scientific) experience. Firstly, I got a visceral, intense sense of how urgent it is to get it right when working to do the most good for others. Secondly, I gained strong support for the position that the most morally important goal is to prevent suffering, and in particular to prevent extreme suffering. Thirdly, I updated my opinion on the trade-offs between different intensities of pain, which I give in this article as rough, numerical weightings on different categories of pain. Basically, I now place a greater urgency on preventing intense suffering than I did before. I conclude with how this newfound urgency will affect my work and my life. My three goals I began this experiment with three main goals: To remind myself how urgent and important it is, when working to help others as much as I can, to get it right. Some people think that preventing intense pain (rather than working towards other, non-pain-related goals) is the most important thing to do. Do I agree with this? If I experience pain at different intensities, does this change the moral weight that I place on preventing intense pain compared to modest pain (i.e. intensity-duration tradeoff)? I think it is useful to test my intellectual ideas against what it is actually like to experience pain. This is not for motivation - I already work plenty in my role in animal advocacy, and I believe that sustainable motivation is the best motivation (I talk about this more at the end). My "experiment" I subjected myself to two somewhat-safe methods of experiencing pain: Firstly, I got three tattoos on different parts of my body - my upper arm, my calf, and my inner wrist. I had six tattoos already, so I was familiar with this experience. I got these tattoos all on one day (4/2/23) and in one location (a studio in London). Secondly, I undertook the cold pressor test. This is basically holding my hand in a tub of near-freezing water. This test is commonly used in scientific research as a way to induce pain safely. I also did this on one day (25/2/23) and in one location (my home in Australia). Please do not replicate this - the cold pressor test causes pain and can cause significant distress in some people, as well as physical reactions that can compromise your health. I wish I had a somewhat-safe way to experience pain that is more intense than these two experiences, but these are the best I could come up with for now. During both of these experiences, I recorded the pain levels in three ways: A short, written description of my thoughts and feelings. The McGill Pain Index Pain Rating Intensity (PRI) Score. 
This score is calculated from a questionnaire (which I accessed via a phone app) that asks you to choose words corresponding to how your pain feels. The words are then used to calculate the numeric PRI score. I chose to use this tool as there is a review paper listing the approximate PRI scores caused by different human health conditions, which lets me roughly compare my scores to different instances of human pain. This list is given below, so you can have some idea of what scores mean. The PainTrac...
Mar 16, 2023
EA - 80k podcast episode on sentience in AI systems by rgb
00:28
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80k podcast episode on sentience in AI systems, published by rgb on March 15, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 15, 2023
EA - Success without dignity: a nearcasting story of avoiding catastrophe by luck by Holden Karnofsky
00:31
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Success without dignity: a nearcasting story of avoiding catastrophe by luck, published by Holden Karnofsky on March 15, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 15, 2023
EA - What happened to the OpenPhil OpenAI board seat? by ChristianKleineidam
00:28
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What happened to the OpenPhil OpenAI board seat?, published by ChristianKleineidam on March 15, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 15, 2023
EA - FTX Community Response Survey Results by WillemSleegers
14:44
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Community Response Survey Results, published by WillemSleegers on March 15, 2023 on The Effective Altruism Forum. Summary In December 2022, Rethink Priorities, in collaboration with CEA, surveyed the EA community in order to gather “perspectives on how the FTX crisis has impacted the community’s views of the effective altruism movement, its organizations, and leaders.” Our results found that the FTX crisis had decreased satisfaction with the EA community, and around half of respondents reported that the FTX crisis had given them concerns with EA meta organizations, the EA community and its norms, and the leaders of EA meta organizations. Nevertheless, there were some encouraging results. The reduction in satisfaction with the community was significant, but small, and overall average community sentiment is still positive. In addition, respondents tended to agree that the EA community had responded well to the crisis, although roughly a third of respondents neither agreed nor disagreed with this. Majorities of respondents reported continuing to trust EA organizations, though over 30% reported they had substantially lost trust in EA public figures or leadership. Respondents were more split in their views about how the EA community should respond. Respondents leaned slightly towards agreeing that the EA community should spend significant time reflecting and responding to this crisis, at the cost of spending less time on our other priorities, but slightly towards disagreement that the EA community should look very different as a result of this crisis. EA satisfaction Recalled satisfaction Respondents were asked about their current satisfaction with the EA community (after the FTX crisis) and to recall their satisfaction with the EA community at the start of November 2022, prior to the FTX crisis. Satisfaction with the EA community appears to be half a point (0.54) lower post-FTX compared to pre-FTX. Note that the median satisfaction scores are somewhat higher, but similarly showing a decrease (8 pre-FTX, 7 post-FTX). Satisfaction over time As the 2022 EA Survey was launched before the FTX crisis, we could assess how satisfaction with the EA community changed over time. We fit a generalized additive model in which we regressed the satisfaction ratings on the day the survey was taken. These results show that satisfaction went down after the FTX crisis first started. It should be noted however that this pattern of results could also be confounded by different groups of respondents taking the survey at different times. For example, we know that more engaged respondents tend to take the EAS earlier. We therefore also looked at how the satisfaction changed over time for different engagement levels. This shows that the satisfaction levels went down over time, regardless of engagement level. Concerns Respondents were asked whether the FTX crisis has given them concerns with: Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.) Leaders of Effective Altruism Meta Organizations (e.g., Centre for Effective Altruism, Open Philanthropy, 80,000 hours, etc.) 
The Effective Altruism Community & Norms
Effective Altruism Principles or Philosophy
Majorities of respondents reported agreement that the crisis had given them concerns with EA meta organizations (58%) and with the EA community and its norms (55%), and just under half (48%) reported that it had given them concerns about the leaders of EA meta organizations. In contrast, only 25% agreed that the crisis had given them concerns about EA principles or philosophy, compared to 66% disagreeing. We think this suggests a somewhat reassuring picture where, though respondents may have concerns about the EA community in its current form, the FTX crisis has largely not caused respondents to become di...
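The survey write-up above describes fitting a generalized additive model in which satisfaction ratings are regressed on the day the survey was taken. The post does not say which software was used; purely as an illustration of that kind of fit, here is a minimal sketch using the pygam library, with a hypothetical file name and column names.

```python
# Hypothetical sketch of a GAM of satisfaction on survey day (not Rethink Priorities' actual code).
import pandas as pd
from pygam import LinearGAM, s

# Assumed columns: 'survey_day' (days since the survey launched) and 'satisfaction' (1-10 rating).
df = pd.read_csv("eas_2022_responses.csv")  # hypothetical file

X = df[["survey_day"]].to_numpy()
y = df["satisfaction"].to_numpy()

# One smooth term over survey day; the smoothing penalty is chosen by grid search.
gam = LinearGAM(s(0)).gridsearch(X, y)

# Predicted average satisfaction early vs. late in the survey window (e.g. before vs. after the FTX collapse).
print(gam.predict([[5], [40]]))
```

Splitting the data by engagement level, as the post describes, would simply mean repeating the same fit on each subset.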
Mar 15, 2023
EA - Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed" by Nathan Young
07:28
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Time Article Discussion - "Effective Altruist Leaders Were Repeatedly Warned About Sam Bankman-Fried Years Before FTX Collapsed", published by Nathan Young on March 15, 2023 on The Effective Altruism Forum.
There is a new Time article.
Seems certain (98%) we'll discuss it.
I would like us to try and have a better discussion about this than we sometimes do.
Consider if you want to engage.
I updated a bit on important stuff as a result of this article. You may disagree. I am going to put my "personal updates" in a comment.
Excerpts from the article that I think are relevant. Bold is mine. I have made choices here and feel free to recommend I change them.
Yet MacAskill had long been aware of concerns around Bankman-Fried. He was personally cautioned about Bankman-Fried by at least three different people in a series of conversations in 2018 and 2019, according to interviews with four people familiar with those discussions and emails reviewed by TIME. He wasn’t alone. Multiple EA leaders knew about the red flags surrounding Bankman-Fried by 2019, according to a TIME investigation based on contemporaneous documents and interviews with seven people familiar with the matter. Among the EA brain trust personally notified about Bankman-Fried’s questionable behavior and business ethics were Nick Beckstead, a moral philosopher who went on to lead Bankman-Fried’s philanthropic arm, the FTX Future Fund, and Holden Karnofsky, co-CEO of Open Philanthropy, a nonprofit organization that makes grants supporting EA causes. Some of the warnings were serious: sources say that MacAskill and Beckstead were repeatedly told that Bankman-Fried was untrustworthy, had inappropriate sexual relationships with subordinates, refused to implement standard business practices, and had been caught lying during his first months running Alameda, a crypto firm that was seeded by EA investors, staffed by EAs, and dedicated to making money that could be donated to EA causes. MacAskill declined to answer a list of detailed questions from TIME for this story. “An independent investigation has been commissioned to look into these issues; I don’t want to front-run or undermine that process by discussing my own recollections publicly,” he wrote in an email. “I look forward to the results of the investigation and hope to be able to respond more fully after then.” Citing the same investigation, Beckstead also declined to answer detailed questions. Karnofsky did not respond to a list of questions from TIME. Through a lawyer, Bankman-Fried also declined to respond to a list of detailed written questions. The Centre for Effective Altruism (CEA) did not reply to multiple requests to explain why Bankman-Fried left the board in 2019. A spokesperson for Effective Ventures, the parent organization of CEA, cited the independent investigation, launched in Dec. 2022, and declined to comment while it was ongoing. In a span of less than nine months in 2022, Bankman-Fried’s FTX Future Fund—helmed by Beckstead—gave more than $160 million to effective altruist causes, including more than $33 million to organizations connected to MacAskill. “If [Bankman-Fried] wasn’t super wealthy, nobody would have given him another chance,” says one person who worked closely with MacAskill at an EA organization. 
“It’s greed for access to a bunch of money, but with a philosopher twist.” But within months, the good karma of the venture dissipated in a series of internal clashes, many details of which have not been previously reported. Some of the issues were personal. Bankman-Fried could be “dictatorial,” according to one former colleague. Three former Alameda employees told TIME he had inappropriate romantic relationships with his subordinates. Early Alameda executives also believed he had reneged on an equity arrangement that would have left Bankman-Frie...
Mar 15, 2023
EA - Cause Exploration: Support for Mental Health Carers by Yuval Shapira
03:46
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cause Exploration: Support for Mental Health Carers, published by Yuval Shapira on March 14, 2023 on The Effective Altruism Forum. Tldr - I've been looking into support for mental health carers as a potential cause area for a while, and would love inputs about ITN and generally about the subject. Summary of key points: Mental health as an important cause area - Mental illness seems to cause a high amount of worldwide unhappiness and seems neglected. Carers as a potential solution - Most of the people suffering from mental health issues or illness are surrounded by family and friends, who can potentially have a high impact on the decrease or increase of their mental state. Also, there is a stigma around mental health, leading to cases being underreported and to individuals being unwilling to seek treatment. The carers could be the first and only ones to discover the issues before it is too late, and giving them the tools to provide support could be cheap and efficient. Carers as a potential cause area - Although the suffering of carers is (probably) not nearly as severe as that of the people suffering from mental health issues or illnesses, the scale of people it affects is wider and the neglectedness is probably higher. Elaboration: Mental health as an important cause area Depression is a substantial source of suffering worldwide. It makes up 1.84% of the global burden of disease according to the IHME (Institute for Health Metrics and Evaluation). The treatment of depression is neglected relative to other health interventions in low- to middle-income countries. Government and international aid spending on mental health represents less than 1% of the total spending on health in low-income countries. Carers as a potential solution A carer is someone who voluntarily provides ongoing care and assistance to another person who, because of mental health issues or psychiatric disability, requires support with everyday tasks. A carer might be a person’s parent, partner, relative or friend. The supporter has an impact on the sufferer and could be the first and only one to discover the problem. There are supports, guides and programs for high-income countries (the quality and amount of which improved due to COVID, though depression rates also rose), but few programs, and little high-quality study of programs, that approach improving mental health through carers in low- and middle-income countries. The Happier Lives Institute did screen programs listed on the Mental Health Innovation Network, and one of the programs is peer-based. Other interesting programs are StrongMinds Peer Facilitator Programs (which are cheaper, and the facilitators have a higher understanding of the participants) and Carers Worldwide. I believe research on programs such as these could be a path to potential effective interventions. Carers as a potential cause area The number of carers is higher than the number of people suffering from mental health difficulties, and their support is more neglected. Caring for a person suffering from mental health difficulties can hurt the supporter (secondary trauma, copycat suicide). Direct support for the carers, in addition to the secondary improvement for the people severely suffering, could dramatically improve the cost-effectiveness. 
Summary I believe there is a strong case to consider furthering the study of mental health carer support, and it should be a higher priority in the effective altruism community because of the potential scale, neglectedness, and cost-effectiveness of such programs. Thanks to @EdoArad and @GidiKadosh for helping me write this up, to @CE for inspiring me to write this a year ago, and @sella and @Dan Lahav for incentivizing me to look more deeply into this topic today Also thank you generally to everyone promoting mental health as a cause area :) This might be an un-updated text because I ...
Mar 15, 2023
EA - Shutting Down the Lightcone Offices by Habryka
24:12
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shutting Down the Lightcone Offices, published by Habryka on March 15, 2023 on The Effective Altruism Forum. Lightcone recently decided to close down a big project we'd been running for the last 1.5 years: An office space in Berkeley for people working on x-risk/EA/rationalist things that we opened in August 2021. We haven't written much about why, but Ben and I had written some messages on the internal office slack to explain some of our reasoning, which we've copy-pasted below. (They are from Jan 26th). I might write a longer retrospective sometime, but these messages seemed easy to share, and it seemed good to have something I can more easily refer to publicly. Background data Below is a graph of weekly unique keycard-visitors to the office in 2022. The x-axis is each week (skipping the first 3), and the y-axis is the number of unique visitors-with-keycards. Members could bring in guests, which happened quite a bit and isn't measured in the keycard data below, so I think the total number of people who came by the offices is 30-50% higher. The offices opened in August 2021. Including guests, parties, and all the time not shown in the graphs, I'd estimate around 200-300 more people visited, for a total of around 500-600 people who used the offices. The offices cost $70k/month in rent, around $35k/month on food and drink, and ~$5k/month on contractor time for the office. It also costs core Lightcone staff time, which I'd guess at around $75k/year.
Ben's Announcement
Closing the Lightcone Offices
@channel Hello there everyone, Sadly, I'm here to write that we've decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermist ecosystem, as well as change what products and services we build. Below I'll give context on the decision and other details, but the main practical information is that the office will no longer be open after Friday March 24th. (There will be a goodbye party on that day.) I asked Oli to briefly state his reasoning for this decision; here's what he says: An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/AI Alignment/Rationality ecosystem being roughly on track to solve the world's most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world. I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost by the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine has greatly declined, and I now stand before what I have helped build with great doubts about whether it all will be or has been good for the world. 
I respect many of the people working here, and I am glad about the overall effect of Lightcone on this ecosystem we have built, and am excited about many of the individuals in the space, and probably in many, maybe even most, future worlds I will come back with new conviction to invest and build out this community that I have been building infrastructure for for almost a full decade. But right now, I think both me and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability that building things in the space is harmful for the world that I can't really justify the level of effort and energy and money that Lightcone has been investing into doing things that pretty indiscriminately grow a...
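For readers who want the stated office costs above as a single figure, a quick back-of-the-envelope total, using only the numbers quoted in the post, comes to roughly $1.4M per year:

```python
# Annual cost of the Lightcone offices, using only the figures quoted in the post.
rent = 70_000            # per month
food_and_drink = 35_000  # per month
contractors = 5_000      # per month
staff_time = 75_000      # per year (author's own estimate of staff time)

annual_total = (rent + food_and_drink + contractors) * 12 + staff_time
print(f"${annual_total:,} per year")  # -> $1,395,000 per year
```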
Mar 15, 2023
EA - Exposure to Lead Paint in Low- and Middle-Income Countries by Rethink Priorities
04:57
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Exposure to Lead Paint in Low- and Middle-Income Countries, published by Rethink Priorities on March 14, 2023 on The Effective Altruism Forum. See the Rethink Priorities website for the full version of this report. This report is a “shallow” investigation, as described here, and was commissioned by GiveWell and produced by Rethink Priorities from November 2021 to January 2022. We updated and revised this report for publication. GiveWell does not necessarily endorse our conclusions. The primary focus of the report is to provide an overview of what is currently known about exposure to lead paint in low- and middle-income countries. Key takeaways Lead exposure is common across low- and middle-income countries (LMICs) and can lead to life-long health problems, a reduced IQ, and lower educational attainment. One important exposure pathway is lead-based paint (here defined as a paint to which lead compounds have been added), which is still unregulated in over 50% of countries globally. Yet, little is known about how much lead paint is being used in LMICs and to what extent it contributes to the health and economic burden of lead (link to section). Home-based assessment studies of lead paint levels provide evidence of current exposure to lead, but the evidence in LMICs is scarce and relatively low quality. Based on the few studies we found, our best guess is that the average lead concentration in paint in residential houses in LMICs is between 50 ppm and 4,500 ppm (90% confidence interval) (link to section). Shop-based assessment studies of lead-based paints provide evidence of future exposure to lead. Based on three review studies and expert interviews, we find that lead levels in solvent-based paints are roughly 20 times higher than in water-based paints. Our best guess is that average lead levels of paints currently sold in shops in LMICs are roughly 200-1,400 ppm (80% CI) for water-based paints and 5,000-30,000 ppm (80% CI) for solvent-based paints (link to section). Based on market analyses and small, informal seller surveys, we estimate that the market share of solvent-based paints in LMICs is roughly 30%-65% of all residential paints sold (the rest being water-based paints), which is higher than in high-income countries (~20%-30%) (link to section). There is also evidence that lead-based paints are frequently being used in public spaces, such as playgrounds, (pre)schools, hospitals, and daycare centers. However, we do not know the relative importance of exposure from lead paint in homes vs. outside the home (link to section). As many studies on the exposure and the health effects of lead paint are based on historical US data, we investigated whether current lead paint levels in LMICs are comparable to lead paint levels in the US before regulations were in place. We find that historical US-based lead concentrations in homes were about 6-12 times higher than those in recently studied homes in some LMICs (70% confidence) (link to section). We estimate that doubling the speed of the introduction of lead paint bans across LMICs could prevent 31 to 101 million (90% CI) children from exposure to lead paint, and lead to total averted income losses of USD 68 to 585 billion (90% CI) and 150,000 to 5.9 million (90% CI) averted DALYs over the next 100 years. 
Building on previous analyses done by LEEP (Hu, 2022; LEEP, 2021) and Attina and Trasande (2013), we estimate that lead paint accounts for ~7.5% (with a 90% confidence interval of 2-15%) of the total economic burden of lead. We would like to emphasize that these estimates are highly uncertain, as our model is based on many inputs for which data availability is scarce or even non-existent. This uncertainty could be reduced with more data on the use of paints in LMICs (e.g. frequency of re-painting homes) and on the average dose-resp...
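The report's headline figures above are expressed as subjective confidence intervals, for example lead paint accounting for ~7.5% of the total economic burden of lead with a 90% CI of 2-15%. A common way to work with such intervals is to treat each one as the 5th and 95th percentiles of a lognormal distribution and propagate them by Monte Carlo. The sketch below illustrates that general technique only; it is not Rethink Priorities' model, and the total-burden interval used is a made-up placeholder.

```python
# Illustrative Monte Carlo propagation of subjective 90% CIs (not RP's actual model).
import numpy as np

rng = np.random.default_rng(0)

def lognormal_from_ci(lo, hi, n=100_000):
    """Sample a lognormal whose 5th/95th percentiles are roughly (lo, hi)."""
    z = 1.645  # z-score of the 95th percentile
    mu = (np.log(lo) + np.log(hi)) / 2
    sigma = (np.log(hi) - np.log(lo)) / (2 * z)
    return rng.lognormal(mu, sigma, n)

total_burden = lognormal_from_ci(1e12, 6e12)  # hypothetical placeholder for the total burden of lead (USD)
paint_share = lognormal_from_ci(0.02, 0.15)   # 90% CI for lead paint's share, taken from the report

paint_burden = total_burden * paint_share
print(np.percentile(paint_burden, [5, 50, 95]))  # median and 90% interval for paint-attributable burden
```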
Mar 14, 2023
EA - GPT-4 is out: thread (and links) by Lizka
00:47
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-4 is out: thread (& links), published by Lizka on March 14, 2023 on The Effective Altruism Forum. GPT-4 is out. There's also a LessWrong post on this with some discussion. The developers are doing a live-stream ~now. And it's been confirmed that Bing runs on GPT-4. Also: Claude (Anthropic) PaLM API Here's an image from the OpenAI blog post about GPT-4: (This is a short post.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 14, 2023
EA - Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill) by Global Priorities Institute
06:29
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Longtermist institutional reform (Tyler M. John and William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum. This is a summary of the GPI working paper "Longtermist institutional reform" by Tyler M. John and William MacAskill (published in the 2021 edited volume "The Long View"). The summary was written by Riley Harris. Political decisions can have lasting effects on the lives and wellbeing of future generations. Yet political institutions tend to make short-term decisions with only the current generation – or even just the current election cycle – in mind. In "Longtermist institutional reform", Tyler M. John and William MacAskill identify the causes of short-termism in government and give four recommendations for how institutions could be improved. These are the creation of in-government research institutes, a futures assembly, posterity impact statements and – more radically – an ‘upper house’ representing future generations. Causes of short-termism John and MacAskill discuss three main causes of short-termism. Firstly, politicians may not care about the long term. This may be because they discount the value of future generations, or simply because it is easy to ignore the effects of policies that are not experienced here and now. Secondly, even if politicians are motivated by concern for future generations, it may be difficult to know the long-term effects of different policies. Finally, even motivated and knowledgeable actors might face structural barriers to implementing long-term focussed policies – for instance, these policies might sometimes appear worse in the short term and reduce a candidate's chances of re-election. Suggested reforms In-government research institutes The first suggested reform is the creation of in-government research institutes that could independently analyse long-term trends, estimate expected long-term impacts of policy and identify matters of long-term importance. These institutes could help fight short-termism by identifying the likely future impacts of policies, making these impacts vivid, and documenting how our leaders are affecting the future. They should also be designed to resist the political incentives that drive short-termism elsewhere. For instance, they could be functionally independent from the government, hire without input from politicians, and be flexible enough to prioritise the most important issues for the future. To ensure their advice is not ignored, the government should be required to read and respond to their recommendations. Futures assembly The futures assembly would be a permanent citizens’ assembly which seeks to represent the interests of future generations and give dedicated policy time to issues of importance for the long term. Several examples already exist where similar citizens’ assemblies have helped create consensus on matters of great uncertainty and controversy, enabling timely government action. In-government research institutes excel at producing high quality information, but lack legitimacy. In contrast, a citizens’ assembly like this one could be composed of randomly selected citizens that are statistically representative of the general population. John and MacAskill believe this representativeness brings political force – politicians who ignore the assembly put their reputations at risk. 
We can design futures assemblies to avoid the incentive structures that result in short-termism – such as election cycles, party interests and campaign financing. Members should be empowered to call upon experts, and their terms should be long enough to build expertise but short enough to avoid problems like interest group capture – perhaps two years. They should also be empowered to set their own agenda and publicly disseminate their resul...
Mar 14, 2023
EA - A BOTEC-Model for Comparing Impact Estimations in Community Building by Patrick Gruban
13:41
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A BOTEC-Model for Comparing Impact Estimations in Community Building, published by Patrick Gruban on March 14, 2023 on The Effective Altruism Forum. We are grateful to Anneke Pogarell, Birte Spekker, Calum Calvert, Catherine Low, Joan Gass, Jona Glade, Jonathan Michel, Kyle Lucchese, Moritz Hanke and Sarah Pomeranz for conversations and feedback that significantly improved this post. Any errors, of fact or judgment, remain entirely our own. Summary When prioritising future programs in EA community building, we currently lack a quantitative way to express underlying assumptions. In this post, we look at different existing approaches and present our first version of a model. It is intended for making back-of-the-envelope (BOTEC) estimations by looking at an intervention (community building or marketing activity) and thinking about how it might affect participants on their way to having a more impactful life. The model uses an estimate of the average potential of people in a group to have an impact with their lives, as well as the likelihood of them achieving it. If you’d like only to have a look at the model, you can skip the first paragraphs and directly go to Our current model. Epistemic Status We spent about 40-60 hours thinking about this, came up with it from scratch as EA community builders and are uncertain of the claims. Motivation As new co-directors of EA Germany, we started working on our strategy last November, collecting the requests for programs from the community and looking at existing programs of other national EA groups. While we were able to include some early on as they seemed broadly useful, we were unsure about others. Comparing programs that differ in target group size and composition as well as the type of intervention meant that we would have to rely on and weigh a set of assumptions. To discuss these assumptions and ideally test some of them out, we were looking for a unified approach in the form of a model with a standardised set of parameters. Impact in Community Building The term community building in effective altruism can cover various activities like mass media communication, education courses, speaker events, multi-day retreats and 1-1 career guidance sessions. The way we understand it is more about the outcome than the process, covering more than just activities that focus on a community of people. It could be any action that guides participants in their search for taking a significant action with a high expected impact and to continue their engagement in this search. The impact of the community builder depends on their part in the eventual impact of the community members. A community builder who wants to achieve high impact would thus prioritise interventions by the expected impact contribution per invested time or money. Charity evaluators like GiveWell can indicate impact per dollar donated in the form of lives saved, disability-adjusted life years (DALYs) reduced or similar numbers. If we guide someone to donate at all, donate more effectively and donate more, we can assume that part of the impact can be attributed to us. For people changing their careers to work on the world's most pressing problems, starting charities, doing research or spreading awareness, it’s harder to assess the impact. We assume an uneven impact distribution per person, probably heavy-tailed. 
Some people have been responsible for saving millions, such as Norman Borlaug, or might have averted a global catastrophe, like Stanislav Petrov. Existing approaches Marketing Approach: Multi-Touch Attribution In our strategy, we write: Finding the people that could be interested in making a change to effective altruistic actions, guiding them through the process of learning and connecting while keeping them engaged up to the point where they take action and beyond is a multi-step ...
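The model described above combines an estimate of participants' average impact potential with the likelihood of them achieving it, compared per unit of time or money invested. Since the excerpt does not spell out the formula, the following is only a hedged sketch of what such a BOTEC could look like; every parameter name and number here is invented for illustration and is not taken from the post.

```python
# Hypothetical BOTEC for comparing community-building interventions (illustrative numbers only).

def expected_impact(
    participants: int,
    avg_potential: float,         # expected lifetime impact units if fully realised
    p_realised: float,            # probability a participant realises that potential
    counterfactual_share: float,  # share of that impact attributable to the intervention
) -> float:
    return participants * avg_potential * p_realised * counterfactual_share

# Compare two made-up programs per organiser-hour invested.
retreat = expected_impact(30, avg_potential=100, p_realised=0.05, counterfactual_share=0.2) / 200
fellowship = expected_impact(60, avg_potential=100, p_realised=0.02, counterfactual_share=0.3) / 150

print(f"retreat: {retreat:.2f} impact units per organiser-hour")
print(f"fellowship: {fellowship:.2f} impact units per organiser-hour")
```

The value of a sketch like this is only that it makes the underlying assumptions explicit enough to be compared and debated.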
Mar 14, 2023
EA - Two University Group Organizer Opportunities: Pre-EAG London Summit and Summer Internship by Joris P
03:42
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two University Group Organizer Opportunities: Pre-EAG London Summit & Summer Internship, published by Joris P on March 13, 2023 on The Effective Altruism Forum. Summary CEA’s University Groups Team is excited to announce two new opportunities: A summer internship for university group organizers Dates: flexible, during the Northern Hemisphere summer Application deadline: Wednesday, March 22 Find more info & apply here! A university group organizer summit before EAG London Dates: Monday 15 May – Friday 19 May Application deadline: Monday, March 27 Find more info & apply here! Summer Internship What? CEA's University Groups Team is running a paid internship program for about 5 university group organizers! During the internship, you will work on a meta-EA project, receiving mentorship and coaching from CEA staff. We have a list with a number of project ideas, but also encourage you to think about other projects you'd like to run. This is your opportunity to think big, and see what it's like to work on meta-EA projects full-time! Why? Test out different aspects of meta-EA work as a potential career path Receive coaching and mentorship through CEA A competitive wage for part-time or full-time work during your break Consideration for extended work with CEA For who? You might be a good fit for the internship if you are: A university group organizer who is interested in testing out community building and/or EA entrepreneurial projects as a career path Highly organized, reliable, and independent Knowledgeable of EA and eager to learn more Make sure to read more and apply here! More info If you have any questions, including about whether you'd be a good fit, reach out to Jessica at jessica [dot] mccurdy [at] centreforeffectivealtruism [dot] org. Find more info & apply here! Initial applications are due soon: Wednesday, March 22 at 11:59pm Anywhere on Earth. Pre-EAG London University Group Organizer Summit What? Monday 15 May – Friday 19 May (before EAG London 2023), the CEA University Groups team is hosting a summit for university group organizers. The summit will kickstart renewed support for experienced university groups and foster better knowledge transfer across groups. Why? The summit has three core goals: Boost top university groups by facilitating knowledge transfer among experienced organizers. Improve advice for university groups by accumulating examples of effective late-stage group strategies. Facilitate connections between experienced organizers and newer organizers, with the hope that attendees will continue to share information and support each other. For who? All current university group organizers can apply for this summit! This event will be particularly well-suited for experienced organizers at established university groups. We’re also excited about this summit serving the next generation of organizers at established groups and ambitious organizers at new groups who are eager to think carefully about groups strategy. If you think this summit would plausibly be valuable for you, we encourage you to just go ahead and apply! More info If you have any questions, including about whether you'd be a good fit, reach out to us at unigroups [at] centreforeffectivealtruism [dot] org. Find more info & apply here! Applications are due soon: Monday, March 27th at 11:59pm Anywhere on Earth. Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 14, 2023
EA - Paper summary: Are we living at the hinge of history? (William MacAskill) by Global Priorities Institute
10:02
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper summary: Are we living at the hinge of history? (William MacAskill), published by Global Priorities Institute on March 13, 2023 on The Effective Altruism Forum. This is a summary of the GPI Working Paper “Are we living at the hinge of history?” by William MacAskill (also published in the 2022 edited volume “Ethics and Existence: The Legacy of Derek Parfit”). The summary was written by Riley Harris. Longtermist altruists – who care about how much impact they have, but not about when that impact occurs – have a strong reason to invest resources before using them directly. Invested resources could grow much larger and be used to do much more good in the future. For example, a $1 investment that grows 5% per year would become $17,000 in 200 years. However, some people argue that we are living in an unusual time, during which our best opportunities to improve the world are much better than they ever will be in the future. If so, perhaps we should spend our resources as soon as possible. In “Are we living at the hinge of history?”, William MacAskill investigates whether actions in our current time are likely to be much more influential than actions at other times in the future. (‘Influential’ here refers specifically to how much good we expect to do via direct monetary expenditure – the consideration most relevant to our altruistic decision to spend now or later.) After making this ‘hinge of history’ claim more precise, MacAskill gives two main arguments against the claim: the base rate and inductive arguments. He then discusses some reasons why our time might be unusual, but ultimately concludes that he does not think that the ‘hinge of history’ claim holds true. The base rate argument When we think about the entire future of humanity, we expect there to be a lot of people, and so we should initially be very sceptical that anyone alive today will be amongst the most influential human beings. Indeed, if humanity doesn’t go extinct in the near future, there could be a vast number of future people – settling near just 0.1% of stars in the Milky Way with the same population as Earth would mean there were 10^24 (a trillion trillion) people to come. Suppose that, before inspecting further evidence, we believe that we are about as likely as anyone else to be particularly influential. Then, our initial belief that any given person alive today is amongst the million most influential people ever would be 1 in 10^18 (1 in a million trillion). From such a sceptical starting point, we would need extremely strong evidence to become convinced that we are presently in the most influential era. Even if there were only 10^14 (one hundred trillion) people to come, then in order to move from this extremely sceptical position (1 in 10^8) to a more moderate position (1 in 10), we would need evidence about 3 million times as strong as a randomised controlled trial with a p-value of 0.05. MacAskill thinks that, although we do have some evidence that indicates we may be at the most influential time, this evidence is not nearly strong enough. The inductive argument There is another strong reason to think our time is not the most influential, MacAskill argues:
Premise 1: Influentialness has been increasing over time.
Premise 2: We should expect this trend to continue.
Conclusion: We should expect the influentialness of people in the future to be greater than our own influentialness. 
Premise 1 can be best illustrated with an example: a well-educated and wealthy altruist living in Europe in 1600 would not have been in a position to know about the best opportunities to shape the long-run future. In particular, most of the existential risks they faced (e.g. an asteroid collision or supervolcano) were not known, nor would they have been in a good position to do anything about them even if they were known. Even if they had th...
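Two of the numbers in the summary above are easy to verify: the value of $1 compounding at 5% for 200 years, and the 1-in-10^18 prior. The snippet below also reproduces the rough reasoning behind the "about 3 million times as strong as a p = 0.05 randomised controlled trial" claim, where treating a single such RCT as a Bayes factor of roughly 3-4 is my own assumption rather than a figure from the paper.

```python
# Checking the arithmetic quoted in the summary.

# $1 invested at 5%/year for 200 years.
print(1.05 ** 200)        # ~17,292, i.e. roughly $17,000

# Prior probability of being among the 10^6 most influential of 10^24 people.
print(1e6 / 1e24)         # 1e-18, i.e. 1 in a million trillion

# Bayes factor needed to move from a 1-in-10^8 prior to a 1-in-10 posterior.
prior_odds = 1e-8 / (1 - 1e-8)
posterior_odds = 0.1 / 0.9
bayes_factor = posterior_odds / prior_odds
print(bayes_factor)       # ~1.1e7; assuming a p=0.05 RCT is worth a factor of ~3-4,
                          # that is on the order of 3 million RCTs' worth of evidence
```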
Mar 14, 2023
EA - "Can We Survive Technology?" by John von Neumann by Eli Rose
02:24
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Can We Survive Technology?" by John von Neumann, published by Eli Rose on March 13, 2023 on The Effective Altruism Forum. This is an essay written by John von Neumann in 1955, which I think is fairly described as being about global catastrophic risks from emerging technologies. It discusses a bunch of specific technologies that seemed like a big deal in 1955 — which is interesting in itself as a list of predictions; nuclear power! increased automation! weather control? — but explicitly tries to draw a general lesson. von Neumann is regarded as one of the greatest scientists of the 20th century, and was involved in the Manhattan project in addition to inventing zillions of other things. I'm posting here because a) I think the essay is worth reading in its own right, and b) I find it interesting to see what the past's intellectuals thought of issues related transformative technology, and how their perspective differs/is similar to ours. Notably, I disagree with several of the conclusions (e.g. von Neumann seems to think differential technological development is doomed). On another level, I find the essay, and the fact of it having been written in 1955, somewhat motivating, though not at all in a straightforward way. Some quotes: Since most time scales are fixed by human reaction times, habits, and other physiological and psycho logical factors, the effect of the increased speed of technological processes was to enlarge the size of units — political, organizational, economic, and cultural — affected by technological operations. That is, instead of performing the same operations as before in less time, now larger-scale operations were performed in the same time. This important evolution has a natural limit, that of the earth's actual size. The limit is now being reached, or at least closely approached. ...there is in most of these developments a trend toward affecting the earth as a whole, or to be more exact, toward producing effects that can be projected from any one to any other point on the earth. There is an intrinsic conflict with geography — and institutions based thereon — as understood today. What safeguard remains? Apparently only day-to-day — or perhaps year-to-year — opportunistic measures, a long sequence of small, correct decisions. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 14, 2023
EA - Shallow Investigation: Stillbirths by Joseph Pusey
25:45
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Investigation: Stillbirths, published by Joseph Pusey on March 13, 2023 on The Effective Altruism Forum. This topic has the potential to be deeply upsetting to those reading it, particularly to those who have personal experience of the topic in question. If you feel that I’ve missed or misunderstood something, or could have phrased things more sensitively, please reach out to me. Throughout the review, words like “woman” or “mother” are used in places where some people might prefer “birthing person” or similar. This choice reflects the language used in the available literature and does not constitute a position on what the most appropriate terminology is. This report is a shallow dive into stillbirths, a sub-area within maternal and neonatal health, and was produced as part of the Cause Innovation Bootcamp. The report, which reflects approximately 40-50 hours of research, offers a brief dive into whether a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, it should be used to decide whether or not more research and work into a particular problem area should be prioritised. Executive Summary Importance: This problem is likely very important (epistemic status: strong) - stillbirths are widespread, concentrated in the world’s poorest countries, and decreasing only very slowly compared to the decline in maternal and infant mortality. There are more deaths resulting from stillbirth than those caused by HIV and malaria combined (depending on your personal definition of death - see below), and even in high-income countries stillbirths outnumber infant deaths. Tractability: This problem is likely moderately tractable (moderate) - most stillbirths are likely to be preventable, but the most impactful interventions are complex, facility-based, expensive, and most effective at scale, e.g. guaranteeing access to high-quality emergency obstetric care. Neglectedness: This problem is unlikely to be neglected (less strong) - although still under-researched and under-counted, stillbirths are the target of some of the largest organisations in the global health and development world, including the WHO, UNICEF, the Bill and Melinda Gates Foundation, and the Lancet. Many countries have committed to the Every Newborn Action Plan, which aims - amongst other things - to reduce the frequency of stillbirths. Key uncertainties Key uncertainty 1: Accurately assessing the impact of stillbirths, and therefore the cost-effectiveness of interventions aimed at reducing stillbirths, depends significantly on the extent to which direct costs to the unborn child are counted. Some organisations view stillbirths as having negative effects on the parents and wider communities but do not count the potential years of life lost by the unborn child; others use time-discounting methods to calculate a hypothetical number of expected QALYs lost, and still others see it as completely equivalent to losing an averagely long life. 
Differences in the weighting of this loss can alter the calculated impacts of stillbirth by several orders of magnitude, and this is likely the most important consideration when evaluating a stillbirth-reducing intervention. Key uncertainty 2: Interventions which reduce the risk of stillbirth tend to be those which also address maternal and neonatal health more broadly; therefore, it is very difficult to accurately assess the cost-effectiveness of these interventions solely in terms of their impact on stillbirths, and more complex models which take into account the impacts on maternal, neonatal, and infant health are likely more accurate in assessing the overall cost-effectiveness of interventions. Key uncertainty 3: A large proportion of the data around interventions to reduce stillbirths comes from high-income countries, but most still...
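Key uncertainty 1 above notes that some organisations use time-discounting methods to calculate a hypothetical number of expected QALYs lost per stillbirth. To make concrete why that modelling choice moves results by orders of magnitude, here is a minimal sketch of discounted life-year weighting; the life expectancy and discount rates are arbitrary example values, not figures from the report.

```python
# Illustrative only: life-years attributed to a stillbirth under different weighting choices.

def discounted_life_years(life_expectancy: int, discount_rate: float) -> float:
    """Present value of one life-year per year over `life_expectancy` years."""
    return sum(1 / (1 + discount_rate) ** t for t in range(life_expectancy))

life_expectancy = 70  # arbitrary example value

print(discounted_life_years(life_expectancy, 0.00))  # no discounting: 70.0 life-years
print(discounted_life_years(life_expectancy, 0.03))  # 3% annual discounting: ~30 life-years
print(0.0)                                           # counting no direct loss to the unborn child
```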
Mar 13, 2023
EA - On taking AI risk seriously by Eleni A
01:46
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On taking AI risk seriously, published by Eleni A on March 13, 2023 on The Effective Altruism Forum. Yet another New York Times piece on AI. A non-AI safety friend sent it to me saying "This is the scariest article I've read so far. I'm afraid I haven't been taking it very seriously". I'm noting this because I'm always curious to observe what moves people, what's out there that has the power to change minds. In the past few months, there's been increasing public attention to AI and all sorts of hot and cold takes, e.g., about intelligence, consciousness, sentience, etc. But this might be one of the articles that convey the AI risk message in a language that helps inform and think about AI safety. The following is what stood out to me and made me think that it's time for philosophy of science to also take AI risk seriously and revisit the idea of scientific explanation given the success of deep learning: I cannot emphasize this enough: We do not understand these systems, and it’s not clear we even can. I don’t mean that we cannot offer a high-level account of the basic functions: These are typically probabilistic algorithms trained on digital information that make predictions about the next word in a sentence, or an image in a sequence, or some other relationship between abstractions that it can statistically model. But zoom into specifics and the picture dissolves into computational static. “If you were to print out everything the networks do between input and output, it would amount to billions of arithmetic operations,” writes Meghan O’Gieblyn in her brilliant book, “God, Human, Animal, Machine,” “an ‘explanation’ that would be impossible to understand.” Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 13, 2023
EA - How bad a future do ML researchers expect? by Katja Grace
00:27
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How bad a future do ML researchers expect?, published by Katja Grace on March 13, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 13, 2023
EA - It's not all that simple by Brnr001
13:02
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: It's not all that simple, published by Brnr001 on March 13, 2023 on The Effective Altruism Forum. TL;DR: I feel that recently the EA forum became pretty judgmental and unwelcoming. I also feel that the current discourse about sex misses two important points and, in a huge part of it, lacks maturity and is harmful. Let me attempt to address it. Trigger warning: point 2 involves a long description of personal stories connected to sex; some of them were difficult and may be triggering. It also may not be very well structured, but I preferred to write one long post instead of three short ones. This is obviously a burner account, but when you see those stories you’ll be able to see why. For the record, they don’t involve people from the community. I'm a woman (it's going to matter later on). Acceptable dating and sexual behaviors vary between classes and cultures. The devil is in the detail, and rules you live by and perceive as “obvious” may not be so clear to anybody else. Also, the map of the US is not in a shape of geode. People vary in gender and sexual orientation. They vary in their level of sexual desire. They have different kinks, ways of expressing sexuality and levels of self-awareness. Different needs. Various physiological reactions to sexually tense situations. Various ways of presenting themselves when it comes to all of the above. People come from different cultures – regions, countries, social classes and religions. As a result, dating cultures vary around the world. Sexual behaviors also. Acceptable levels of flirting, jokes and touch, and the way consent is asked for and expressed, sometimes just vary. Problems, and what e.g. sexism looks like, also have various shapes and forms. There are some common characteristics, but details matter, to a huge extent. Many people in the recent discussions stated that various nuances are obvious and should be intuitively followed by everyone. I think that's problematic and leads to abuse. Believing that the values and behaviors associated with your culture and class are the only right ones, and that everybody should know, understand and follow them, is fundamentally different from assertively vocalizing your boundaries and needs. The second is a great, mature behavior. The first feels a bit elitist, ignorant and has nothing to do with safety, equality and being inclusive. Additionally, I want to draw your attention to one thing. I have a strong belief (correct me if I’m wrong) that the vast majority (if not all) of the sexual misconduct cases which were described over the last couple of days in the articles or here, on the forum, come from either the US or the UK. The EA crowd is definitely not limited to those. So my honest question would be – is it EA that has a problem with sexual misconduct? Or is it Anglo-Saxon culture that has a problem with sexual misconduct? Or maybe – EA with a mix of Anglo-Saxon culture has this issue? Shouldn’t we zoom in on that a bit? Human sexuality is complex. Consent is also sometimes complex. People often talk a lot about “what consent norms should be”. But often such disputes do not give a full picture of what people’s actual behaviors around consent are – and that's crucial to this whole conversation. If you start having more intimate talks, however, you end up seeing a much more complex and broad picture. And often consent is easier said than done. 
I encourage you all, regardless of your gender, to have those talks with friends who are open and empathetic. I've learned a lot and they made my life easier. Yet some people may have no opportunity to hear such stories. So let me share why I think that consent is not all that easy. I'm going to talk about myself here, because maybe somebody needs to hear somebody being open and vulnerable about stuff like that. My message is - it's ok to sometimes stru...
Mar 13, 2023
EA - Bill prohibiting the use of QALYs in US healthcare decisions? by gordoni
01:17
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bill prohibiting the use of QALYs in US healthcare decisions?, published by gordoni on March 12, 2023 on The Effective Altruism Forum. Is anyone familiar with H.R. 485? It has been introduced in the House, but it is not yet law. According to the CRS, "This bill prohibits all federal health care programs, including the Federal Employees Health Benefits Program, and federally funded state health care programs (e.g., Medicaid) from using prices that are based on quality-adjusted life years (i.e., measures that discount the value of a life based on disability) to determine relevant thresholds for coverage, reimbursements, or incentive programs". I think the motivation might be to prevent discrimination against people with disabilities, but it seems to me like it goes too far. It seems to me it would prevent the use of QALYs for making decisions such as whether a particular cure for blindness is worthwhile, and how it might compare to treatments for other diseases and conditions. Is anyone familiar with this bill and able to shed more light on it? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
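For readers unfamiliar with how QALY-based comparisons work in practice, here is a minimal sketch with purely illustrative numbers (not taken from the bill, the CRS summary, or any real evaluation): QALYs gained are life-years affected weighted by the improvement in quality of life, and interventions are compared by cost per QALY.

```python
# Purely illustrative QALY comparison (hypothetical costs, years, and weights).
# QALYs gained = life-years affected * gain in quality-of-life weight (0 = dead, 1 = full health).
interventions = {
    "hypothetical cure for blindness": {"cost": 50_000, "years": 20, "weight_gain": 0.3},
    "hypothetical treatment X":        {"cost": 80_000, "years": 10, "weight_gain": 0.6},
}

for name, d in interventions.items():
    qalys = d["years"] * d["weight_gain"]
    print(f"{name}: {qalys:.1f} QALYs gained, ${d['cost'] / qalys:,.0f} per QALY")
```

On the post's reading, the bill would rule out using this kind of side-by-side comparison to set thresholds in federal coverage decisions.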
Mar 12, 2023
EA - Two directions for research on forecasting and decision making by Paal Fredrik Skjørten Kvarberg
52:00
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two directions for research on forecasting and decision making, published by Paal Fredrik Skjørten Kvarberg on March 11, 2023 on The Effective Altruism Forum. An assessment of methods to improve individual and institutional decision-making and some ideas for further research. Forecasting tournaments have shown that a set of methods for good judgement can be used by organisations to reliably improve the accuracy of individual and group forecasts on a range of questions in several domains. However, such methods are not widely used by individuals, teams or institutions in practical decision making. In what follows, I review findings from forecasting tournaments and some other relevant studies. In light of this research, I identify a set of methods that can be used to improve the accuracy of individuals, teams, or organisations. I then note some limitations of our knowledge of methods for good judgement and identify two obstacles to the wide adoption of these methods in practical decision-making. The two obstacles are: Costs. Methods for good judgement can be time-consuming and complicated to use in practical decision-making, and it is unclear how much so. Decision-makers don't know if the gains in accuracy of adopting particular methods outweigh the costs because they don't know the costs. Relevance. Rigorous forecasting questions are not always relevant to the decisions at hand, and it is not always clear to decision-makers if and when they can connect rigorous forecasting questions to important decisions. I look at projects and initiatives to overcome the obstacles, and note two directions for research on forecasting and decision-making that seem particularly promising to me. They are: Expected value assessments. Research into the costs of applying specific epistemic methods in decision-making, and assessments of the expected value of applying those practices in various decision-making contexts and domains (including values other than accuracy), as well as the development of practices and tools to reduce costs. Quantitative models of relevance and reasoning. Research into ways of modelling the relevance of rigorous forecasting questions to the truth of decision-relevant propositions quantitatively through formal Bayesian networks. After I have introduced these areas of research, I describe how I think that new knowledge on these topics can lead to improvements in the decision-making of individuals and groups. This line of reasoning is inherent in a lot of research that is going on right now, but I still think that research on these topics is neglected. I hope that this text can help to clarify some important research questions and to make it easier for others to orient themselves on forecasting and decision-making. I have added detailed footnotes with references to further literature on most ideas I touch on below. In the future I intend to use the framework developed here to make a series of precise claims about the costs and effects of specific epistemic methods. Most of the claims below are not rigorous enough to be true or false, although some of them might be. Please let me know if any of these claims are incorrect or misleading, or if there is some research that I have missed. Forecasting tournaments: In a range of domains, such as law, finance, philanthropy, and geopolitical forecasting, the judgments of experts vary a lot, i.e. 
they are noisy, even in similar and identical cases. In a study on geopolitical forecasting by the renowned decision psychologist Philip Tetlock, seasoned political experts had trouble outperforming “dart-tossing chimpanzees”—random guesses—when it came to predicting global events. Non-experts, e.g. “attentive readers of the New York Times” who were curious and open-minded, outperformed the experts, who tended to be overconfident. In a series of...
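To make the "quantitative models of relevance" direction above more concrete, here is a minimal Python sketch (all probabilities are hypothetical, and this is not the author's proposed method) linking a rigorous forecasting question Q to a decision-relevant proposition D with a two-node Bayesian network:

```python
# Hypothetical two-node Bayesian network: forecasting question Q -> proposition D.
# P(D) = P(D|Q) * P(Q) + P(D|not Q) * P(not Q)   (law of total probability)
p_q = 0.30            # forecast: probability that Q resolves "yes"
p_d_given_q = 0.80    # assumed: probability D is true if Q resolves "yes"
p_d_given_not_q = 0.10

p_d = p_d_given_q * p_q + p_d_given_not_q * (1 - p_q)
print(f"P(D) = {p_d:.2f}")

# One crude relevance measure: how far learning Q's answer could move our belief in D.
relevance = abs(p_d_given_q - p_d_given_not_q)
print(f"Relevance of Q to D: {relevance:.2f}")
```

In a real application the conditional probabilities would themselves need to be elicited or estimated, which is part of what makes this research direction nontrivial.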
Mar 12, 2023
EA - [Linkpost] Scott Alexander reacts to OpenAI's latest post by Akash
00:27
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Linkpost] Scott Alexander reacts to OpenAI's latest post, published by Akash on March 11, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 12, 2023
EA - The Power of Intelligence - The Animation by Writer
00:25
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Power of Intelligence - The Animation, published by Writer on March 11, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 12, 2023
EA - How my community successfully reduced sexual misconduct by titotal
07:53
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How my community successfully reduced sexual misconduct, published by titotal on March 11, 2023 on The Effective Altruism Forum. [Content warning: this post contains discussions of sexual misconduct, including assault] In response to the recent articles about sexual misconduct in EA and Rationalism, a lot of discussion has ended up being about whether the level of misconduct is “worse than average”. I think this is focusing on the wrong thing. EA is a movement that should be striving for excellence. Merely being “average” is not good enough. What matters most is whether EA is the best it could reasonably be, and if not, what changes can be made to fix that. One thing that might help with this is a discussion of success stories. How have other communities and workplaces managed to “beat the average” on this issue? Or substantially improved from a bad place? For this reason I’m going to relay an anecdotal success story below. If you have your own or know of others, I highly encourage you to share it as well. Many, many years ago, I joined a society for a particular hobby (unrelated to EA), and was active in the society for many, many years. For the sake of anonymity, I’m going to pretend it was the “boardgame club”. It was a large club, with dozens of people showing up each week. The demographics were fairly similar to EA, with a lot of STEM people, a male majority (although it wasn’t that overwhelming), and an openness to unconventional lifestyles such as kink and polyamory. Now, the activity in question wasn’t sexual in nature, but there were a lot of members who were meeting up at the activity meetups for casual and group sex. Over time, this meant that the society gained a reputation as “the club you go to if you want to get laid easily”. Most members, like me, were just there for the boardgames and the friends, but a reasonable number of people came there for the sex. As it turns out, along with the sex came an acute problem with sexual misconduct, ranging from pushing boundaries on newcomers, to harassment, to sexual assault. I was in the club for several years before I realised this, when one of my friends relayed to me that another one of my friends had sexually assaulted a different friend. One lesson I took from this is that it’s very hard to know the level of sexual misconduct in a place if you aren’t a target. If I was asked to estimate the “base rate” of assault in my community before these revelations, I would have falsely thought it was low. These encounters can be traumatic to recount, and the victims can never be sure who to trust or what the consequences will be for speaking out. I’d like to think I was trustworthy, but how was the victim meant to know that? Eventually enough reports came out that the club leaders were forced to respond. Several policies were implemented, both officially and unofficially. 1. Kick people out. Nobody has a democratic right to be in boardgame club. I think I once saw someone mention “beyond reasonable doubt” when it comes to misconduct allegations. That standard of evidence is extremely high because the accused will be thrown into jail and deprived of their rights. The punishment of “no longer being in boardgame club” does not warrant the same level of evidence. And the costs of keeping a missing stair around are very, very high. 
Everyone who was accused of assault was banned from the club. Members who engaged in more minor offenses were warned, and kicked out if they didn’t change. To my knowledge, no innocent people were kicked out by mistake (false accusations are rare). I think this made the community a much more pleasant place. 2. Protect the newcomers. When you attend a society for the first time, you do not know what the community norms are. You don’t know if there are avenues to report misconduct. You don’t...
Mar 11, 2023
EA - Share the burden by 2ndRichter
13:20
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Share the burden, published by 2ndRichter on March 11, 2023 on The Effective Altruism Forum. My argument: let’s distribute the burden of correcting and preventing sexual misconduct through collective effort, not letting the burdens and costs fall overwhelmingly on those who have experienced it. CW: sexual assault/harassment. I work at CEA but write this in an individual capacity. Views expressed here are my own, not CEA's. I encourage you to reach out to me individually on Twitter (@emmalrichter) if you want to discuss what I raise in this post. I’d love to engage with the variety of potential responses to what I’ve written and would love to know why you upvote or downvote it. Intro and Context. Some of you already know that I’m a survivor. I was sexually assaulted, harassed, or abused in independent situations at the ages of 16, 17, 18, and 20. I am intentionally open and vocal about what I’ve gone through, including a PTSD diagnosis a few years ago. Recent events in the EA community have reminded me that the mistreatment of people through sexual or romantic means occurs here (as it does everywhere). Last week at EAG, I received a Swapcard message that proposed a non-platonic interaction under the guise of professional interaction. I went to an afterparty where someone I had just met—literally introduced to me moments before—put their hand on the small of my back and grabbed and held onto my arm multiple times. These might seem like minor annoyances, but I have heard and experienced that these kinds of small moments happen often to women in EA. These kinds of experiences undermine my own feelings of comfort and value in the community. This might be anecdata, as some people say, and I know obtaining robust data on these issues has its own challenges. Nonetheless, my experience and those of other women in EA indicate that there’s enough of a problem to consider doing more. I’m writing this post for a few reasons: I want to draw attention to the suffering of women here in the community. I want to convey the costs placed on survivors seeking justice and trying to prevent further harm to others. I want to share just how taxing it can be for survivors to work on these problems on their own, both due to the inherent pain of reliving experiences and the arduousness of most justice processes. Above all, I want to make this request of our community: let’s distribute the burden of correcting and preventing sexual misconduct as fairly as we can, not letting the burdens and costs fall overwhelmingly on those who have experienced it. They have suffered so much already—they have suffered enough. I hope we can be as agentic and proactive in this domain as we strive to be in other areas of study and work. Here are sub-arguments that I’ll explore below: Before placing the burden of explanation on the survivor, we can employ other methods to learn about this constellation of social issues. We can listen to survivors more effectively and incorporate the feedback of those who want to share while also finding other resources to chart paths forward. Good intentions can still lead to negative outcomes. This can apply both to bystanders who refrain from engaging with the subject out of a desire not to make things worse, and to those who perpetrate harmful behaviors (as I discuss in my own experience further down). 
Why write about the meta-level attitude and approach when I could have written something proposing object-level solutions? Because how we approach finding object-level solutions will affect the quality of those solutions—particularly for those who are most affected by these problems. I don’t feel informed enough to propose institutional reforms or particular policies (though I intend to reflect on these questions and research them). I do feel informed enough t...
Mar 11, 2023
EA - Tyler Johnston on helping farmed animals, consciousness, and being conventionally good by Amber Dawn
23:22
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Tyler Johnston on helping farmed animals, consciousness, and being conventionally good, published by Amber Dawn on March 10, 2023 on The Effective Altruism Forum. This post is part of a series of six interviews. As EAs, we want to use our careers or donations to do the most good - but it’s difficult to work out what exactly that looks like for us. I wanted to interview effective altruists working in different fields and on different causes and ask them how they chose their cause area, as well as how they relate to effective altruism and doing good more generally. During the Prague Fall Season residency, I interviewed six EAs in Prague about what they are doing and why they are doing it. I’m grateful to my interviewees for giving their time, and to the organisers of PFS for supporting my visit. I’m currently working as a freelance writer and editor. If you’re interested in hiring me, book a short call or email me at ambace@gmail.com. More info here. Tyler Johnston is an aspiring effective altruist currently based out of Tulsa, Oklahoma. Professionally, he works on corporate campaigns to improve the lives of farmed chickens, and is interested in cause prioritisation, interspecies comparisons, and the suffering of non-humans. He’s also a science-fiction fan and an amateur crossword puzzle constructor. We talked about: his work on The Humane League’s corporate animal welfare campaigns; how he became a vegan and animal advocate; whether animals are conscious; and how being conventionally good is underrated. On his work at The Humane League. Amber: Tell me about what you’re doing. Tyler: I work for The Humane League. We run public awareness campaigns to try to get companies to make commitments to improve the treatment of farmed animals in their supply chains. This strategy first gained traction in 2015, and was immediately really powerful. Since then, it has got a lot of interest from EA funders. Amber: Did The Humane League always do that, or was it doing something else before 2015? Tyler: It was a long journey; The Humane League’s original name was Hugs for Puppies. Amber: Aww, that’s very cute! Tyler: Yeah, I feel like we’d be a more likeable organisation if we were still called that. They started doing demonstrations around issues like fur bans, and other animal welfare issues there was already a lot of energy around. They then switched to focussing on vegan advocacy, which involved things like leafleting, and sharing recipes and resources. Amber: So the strategy at that time then was to encourage people to go vegan, which would lower demand for factory farming, which would mean there were fewer factory-farmed animals? Tyler: That’s right. There was some early evidence that showed this was promising, and it also just made sense to them, since most vegans would attribute their own choice to be vegan to a time in the past when they heard and agreed with the arguments. So they thought, ‘why wouldn’t this export to other people?’ Amber: But you said the strategy is different now - it’s to lobby actual food producers to treat the animals that they’re farming better. Say more about that. Tyler: That’s our dominant strategy now, yeah. It’s part of a broader shift in the [animal advocacy] movement toward institutional change rather than individual change. 
For a given company, you either have to change the minds of, like, 10 million consumers, or of a dozen executive stakeholders - the latter is just a lot more tractable. It started with running small campaigns to persuade companies to source cage-free eggs, and it turned out that this worked. Around 2015 there was a sharp turning point in the number of farmed birds that are cage-free - before 2015, the percentage was growing very slowly, from 3% to 5%, but between 2015 and today, the percentage went up from 5% to 36%. And people attr...
Mar 10, 2023
EA - Racial and gender demographics at EA Global in 2022 by Amy Labenz
13:07
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Racial and gender demographics at EA Global in 2022, published by Amy Labenz on March 10, 2023 on The Effective Altruism Forum. CEA has recently conducted a series of analyses to help us better understand how people of different genders and racial backgrounds experienced EA Global events in 2022 (not including EAGx). In response to some requests (like this one), we wanted to share some preliminary findings. This post is primarily going to report on some summary statistics. We are still investigating pieces of this picture but wanted to get the raw data out fast for others to look at, especially since we suspect this may help shed light on other broad trends within EA. High-level summary. Attendees: 33% of registered attendees (and 35% of applicants) at EA Global events in 2022 self-reported as female or non-binary. 33% of registered attendees (and 38% of applicants) self-reported as people of color (“POC”). Experiences: Attendees generally find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options). Women and non-binary attendees reported that they find EA Global slightly less welcoming (4.46/5 compared to 4.56/5 for men and 4.51 overall). Otherwise, we found no statistically significant difference in terms of feelings of welcomeness and overall recommendation scores across groups in terms of gender and race/ethnicity. Speakers: 43% of speakers and MCs at EA Global events in 2022 were female or non-binary. 28% of speakers and MCs were people of color. Some initial takeaways: A more diverse set of people apply to and attend EAG than complete the EA survey. Welcomingness and likelihood to recommend scores for women and POC were very similar to the overall scores. There is a small but statistically significant difference in welcomingness scores for women. We are not sure what to make of the fact that the application stats for POC were higher than the admissions stats. We are currently investigating whether this demographic is newer to EA (our best guess) and if that might be influencing the admissions rate. One update for our team is that the women / non-binary speaker stats are higher than in the applicant pool, which is not the case for POC. We had not realized that prior to conducting this analysis. The 2022 speaker statistics appear to be broadly in line with our statistics since London 2018 when we started tracking. Our speakers were significantly less diverse prior to EAG London 2018. Applicants and registrants. For EA Globals in 2022, our applicant pool was slightly more diverse in terms of race/ethnicity than our attendee pool (38% of applicants were POC vs. 33% of attendees), and around the same in terms of gender (35% of applicants were female or non-binary vs. 33% of attendees). For comparison, our attendee pool has about the same composition in terms of gender as the respondents in the 2020 EA Survey and is more diverse in terms of race/ethnicity than that survey. We had much more racial diversity at EAGx events outside of the US and Europe (e.g. EAGxSingapore, EAGxLatAm, and EAGxIndia, where POC were the majority). Generally, EAGx attendees end up later attending EAGs, so we think the events could result in more attendees from these locations. 
(However, due to funding constraints and their impact on travel grants, we expect this will not impact EAGs in 2023 as much as it might have otherwise.) Experiences of attendees. Overall, attendees tend to find EA Global welcoming (4.51/5 with 1–5 as options) and are likely to recommend it to others (9.03/10 with 0–10 as options). Women and non-binary attendees reported slightly lower average scores on whether EA Global was “a place where [they] felt welcome” (women and non-binary attendees reported an average score of 4.46/5 vs an average of 4.56/5 for me...
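For readers curious what a "statistically significant difference" check like the one reported above might look like, here is a minimal sketch with simulated ratings; this is not CEA's actual data or analysis pipeline, and the group sizes and spread are made up, with only the means loosely echoing the reported 4.46 vs 4.56:

```python
# Hypothetical check of whether two groups' mean welcomeness ratings differ,
# using simulated 1-5 ratings and a Welch's t-test. Not CEA's real data or code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = np.clip(rng.normal(4.46, 0.6, 400), 1, 5)  # e.g. women and non-binary attendees
group_b = np.clip(rng.normal(4.56, 0.6, 800), 1, 5)  # e.g. men

t, p = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"mean A = {group_a.mean():.2f}, mean B = {group_b.mean():.2f}, t = {t:.2f}, p = {p:.4f}")
```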
Mar 10, 2023
EA - How oral rehydration therapy was developed by Kelsey Piper
01:31
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How oral rehydration therapy was developed, published by Kelsey Piper on March 10, 2023 on The Effective Altruism Forum. This is a link post for "Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children" in the second issue of Asterisk Magazine, now out. The question it poses is: oral rehydration therapy, which has saved millions of lives a year since it was developed, is very simple. It uses widely available ingredients. Why did it take until the late 1960s to come up with it? There's sort of a two-part answer. The first part is that without a solid theoretical understanding of the problem you're trying to solve, it's (at least in this case) ludicrously difficult to solve it empirically: people kept trying variants on this, and they didn't work, because an important parameter was off and they had no idea which direction to correct in. The second is that the incredible simplicity of the modern formula for oral rehydration therapy is the product of a lot of concerted design effort not just to find something that worked against cholera but to find something dead simple which only required household ingredients and was hard to get wrong. The fact that the final solution is so simple isn't because oral rehydration is a simple problem, but because researchers kept on going until they had a sufficiently simple solution. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 10, 2023
EA - Announcing the Open Philanthropy AI Worldviews Contest by Jason Schukraft
06:21
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Open Philanthropy AI Worldviews Contest, published by Jason Schukraft on March 10, 2023 on The Effective Altruism Forum. We are pleased to announce the 2023 Open Philanthropy AI Worldviews Contest. The goal of the contest is to surface novel considerations that could influence our views on AI timelines and AI risk. We plan to distribute $225,000 in prize money across six winning entries. This is the same contest we preannounced late last year, which is itself the spiritual successor to the now-defunct Future Fund competition. Part of our hope is that our (much smaller) prizes might encourage people who already started work for the Future Fund competition to share it publicly. The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry. Prize Conditions and Amounts. Essays should address one of these two questions: Question 1: What is the probability that AGI is developed by January 1, 2043? Question 2: Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system? Essays should be clearly targeted at one of the questions, not both. Winning essays will be determined by the extent to which they substantively inform the thinking of a panel of Open Phil employees. There are several ways an essay could substantively inform the thinking of a panelist: An essay could cause a panelist to change their central estimate of the probability of AGI by 2043 or the probability of existential catastrophe conditional on AGI by 2070. An essay could cause a panelist to change the shape of their probability distribution for AGI by 2043 or existential catastrophe conditional on AGI by 2070, which could have strategic implications even if it doesn’t alter the panelist’s central estimate. An essay could clarify a concept or identify a crux in a way that made it clearer what further research would be valuable to conduct (even if the essay doesn’t change anybody’s probability distribution or central estimate). We will keep the composition of the panel anonymous to avoid participants targeting their work too closely to the beliefs of any one person. The panel includes representatives from both our Global Health & Wellbeing team and our Longtermism team. Open Phil’s published body of work on AI broadly represents the views of the panel. Panelist credences on the probability of AGI by 2043 range from ~10% to ~45%. Conditional on AGI being developed by 2070, panelist credences on the probability of existential catastrophe range from ~5% to ~50%. We will award a total of six prizes across three tiers: First prize (two awards): $50,000; Second prize (two awards): $37,500; Third prize (two awards): $25,000. Eligibility. Submissions must be original work, published for the first time on or after September 23, 2022 and before 11:59 pm EDT May 31, 2023. All authors must be 18 years or older. Submissions must be written in English. No official word limit — but we expect to find it harder to engage with pieces longer than 5,000 words (not counting footnotes and references). Open Phil employees and their immediate family members are ineligible. 
The following groups are also ineligible: people who are residing in, or nationals of, Puerto Rico, Quebec, or countries or jurisdictions that prohibit such contests by law; people who are specifically sanctioned by the United States or based in a US-sanctioned country (North Korea, Iran, Russia, Myanmar, Afghanistan, Syria, Venezuela, and Cuba at time of writing). You can submit as many entries as you want, but you can only win one prize. Co-authorship is fine. See here for additional details and fine print. Submission. Use this form to submit your entries. We strongl...
Mar 10, 2023
EA - The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts by Sarah Levin
10:16
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Ethics of Posting: Real Names, Pseudonyms, and Burner Accounts, published by Sarah Levin on March 9, 2023 on The Effective Altruism Forum. Recently there’s been debate about the ethics of using burner accounts to make attacks and accusations on this forum. See The number of burner accounts is too damn high and Why People Use Burner Accounts especially. This post is a more systematic discussion of poster identity, reputation, and accountability. Types of Accounts. We can roughly break accounts down into four categories: Real name accounts are accounts under a name that is easily linkable to the poster’s offline identity, such as their legal name. A real name account builds a reputation over time based on its posts. In addition, a real name account’s reputation draws on their offline reputation, and affects their offline reputation in turn. Pseudonymous accounts are accounts which are not easily linkable to the poster’s offline identity, and which the poster maintains over time. A pseudonym builds a reputation over time based on its posts. This reputation is separate from the poster’s offline reputation. Burner accounts are accounts which are intended to be used for a single, transient purpose and then abandoned. They accrue little or no reputation. Anonymous posts are not traceable to a specific identity at all. This forum mostly doesn’t have anonymous posts and so I will not discuss them here. All of these accounts have some legitimate uses. Because of the differences in how these types of accounts operate, readers should evaluate their claims differently, especially when it comes to evaluating claims about the community. Posters should use accounts appropriate for the points they are making, or restrict their claims to those which their account can support. Arguments, Evidence, and Accountability. When it comes to abstract arguments, the content can be evaluated separately from the speaker, so all this stuff can be disregarded. If someone on this forum wants to post a critique of the statistics used in vitamin A supplementation trials, or an argument about the moral status of chickens, or something like that, then the poster’s reputation shouldn’t matter much, and so it’s legitimate to post under any type of account. When 4chan solved an open combinatorics problem while discussing a shitpost about anime, mathematicians accepted the proof and published it with credit to "Anonymous 4chan poster". When it comes to abstract arguments, anything goes, except for blatant fuckery like impersonation or sockpuppet voting. If someone wants to claim expertise as part of an argument, then it helps to demonstrate that expertise somehow. If someone says “I’m a professional statistician and your statistical analysis here is nonsense”, then that rightly carries a lot more weight if it’s the real-name account of a professional statistician, or a pseudonymous account with a demonstrable track record on the subject. Burner accounts lack reputation, track records, and credentials, so they can’t legitimately make this move unless they first demonstrate expertise, which is often impractical. Things get trickier when it comes to reporting facts about the social landscape. The poster’s social position is a legitimate input into evaluating such claims. 
If I start telling everyone about what’s really happening in Oxford board rooms or Berkeley group houses, then it matters a great deal who I am. Am I a veteran who’s been deep inside for years? A visitor who attended a few events last summer? Am I just repeating what I saw in a tweet this morning? Advantages of Real Name Accounts. Real name accounts can report on social situations with authority that other types of account can’t legitimately claim, for two reasons. First, their claims are checkable. If I used this pseudonymous account to make a f...
Mar 10, 2023
EA - Anthropic: Core Views on AI Safety: When, Why, What, and How by jonmenaster
36:52
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropic: Core Views on AI Safety: When, Why, What, and How, published by jonmenaster on March 9, 2023 on The Effective Altruism Forum. We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. And we also believe this level of impact could start to arrive soon – perhaps in the coming decade. This view may sound implausible or grandiose, and there are good reasons to be skeptical of it. For one thing, almost everyone who has said “the thing we’re working on might be one of the biggest developments in history” has been wrong, often laughably so. Nevertheless, we believe there is enough evidence to seriously prepare for a world where rapid AI progress leads to transformative AI systems. At Anthropic our motto has been “show, don’t tell”, and we’ve focused on releasing a steady stream of safety-oriented research that we believe has broad value for the AI community. We’re writing this now because as more people have become aware of AI progress, it feels timely to express our own views on this topic and to explain our strategy and goals. In short, we believe that AI safety research is urgently important and should be supported by a wide range of public and private actors. So in this post we will summarize why we believe all this: why we anticipate very rapid AI progress and very large impacts from AI, and how that led us to be concerned about AI safety. We’ll then briefly summarize our own approach to AI safety research and some of the reasoning behind it. We hope by writing this we can contribute to broader discussions about AI safety and AI progress. As a high level summary of the main points in this post: AI will have a very large impact, possibly in the coming decade. Rapid and continuing AI progress is a predictable consequence of the exponential increase in computation used to train AI systems, because research on “scaling laws” demonstrates that more computation leads to general improvements in capabilities. Simple extrapolations suggest AI systems will become far more capable in the next decade, possibly equaling or exceeding human level performance at most intellectual tasks. AI progress might slow or halt, but the evidence suggests it will probably continue. We do not know how to train systems to robustly behave well. So far, no one knows how to train very powerful AI systems to be robustly helpful, honest, and harmless. Furthermore, rapid AI progress will be disruptive to society and may trigger competitive races that could lead corporations or nations to deploy untrustworthy AI systems. The results of this could be catastrophic, either because AI systems strategically pursue dangerous goals, or because these systems make more innocent mistakes in high-stakes situations. We are most optimistic about a multi-faceted, empirically-driven approach to AI safety. We’re pursuing a variety of research directions with the goal of building reliably safe systems, and are currently most excited about scaling supervision, mechanistic interpretability, process-oriented learning, and understanding and evaluating how AI systems learn and generalize. 
A key goal of ours is to differentially accelerate this safety work, and to develop a profile of safety research that attempts to cover a wide range of scenarios, from those in which safety challenges turn out to be easy to address to those in which creating safe systems is extremely difficult. Our Rough View on Rapid AI Progress. The three main ingredients leading to predictable improvements in AI performance are training data, computation, and improved algorithms. In the mid-2010s, some of us noticed that larger AI systems were consistently smarter, and so we theorized that the most important ingredient in AI performance m...
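The "simple extrapolations" from scaling laws mentioned in the summary above can be illustrated with a toy power-law fit; the constants below are invented for illustration and are not Anthropic's (or any published) numbers:

```python
# Toy scaling-law extrapolation: predicted loss(C) = a * C**(-alpha).
# The constants a and alpha are made up for illustration, not a real fit.
a, alpha = 10.0, 0.05

def predicted_loss(compute_flops: float) -> float:
    return a * compute_flops ** (-alpha)

for c in [1e21, 1e23, 1e25, 1e27]:  # increasing training-compute budgets (FLOPs)
    print(f"compute = {c:.0e}  predicted loss = {predicted_loss(c):.3f}")
```

The substantive question the post raises is whether such smooth extrapolations keep holding, and what capabilities correspond to a given loss level; neither is settled by the toy curve itself.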
Mar 09, 2023
EA - A Windfall Clause for CEO could worsen AI race dynamics by Larks
11:03
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Windfall Clause for CEO could worsen AI race dynamics, published by Larks on March 9, 2023 on The Effective Altruism Forum. Summary. This is a response to the Windfall Clause proposal from Cullen O’Keefe et al., which aims to make AI firms promise to donate a large fraction of profits if they become extremely profitable. While I appreciate their valiant attempt to produce a policy recommendation that might help, I am worried about the practical effects. In this article I argue that the Clause would primarily benefit the management of these firms, resulting in an increased concentration of effective wealth/power relative to a counterfactual where traditional corporate governance was used. This could make AI race dynamics worse and increase existential risk from AI. What is the Windfall Clause? The Clause operates by getting firms now to sign up to donate a large fraction of their profits for the benefit of humanity if those profits become very large. The idea is that, right now, profits are not very large, so this appears a ‘cheap’ commitment in the short term. In the future, if the firm becomes very successful, they are required to donate an increasing fraction. The O’Keefe document gives an example tiered structure; many other possible structures exist with similar effects. As an extreme example, you could require all profits above a certain level to be donated. Typical Corporate Governance. The purpose of a typical corporation is to make profits. Under standard corporate governance, CEOs are given fairly broad latitude to make business decisions. They can determine strategy, decide on new products and pricing, alter their workforces and so on with limited restrictions. If the company fails to make profits, the share price will fall, and it might be subject to a hostile takeover from another firm which thinks it can use the assets more wisely. Additionally, in the meantime the CEO’s compensation is likely to fall due to missed incentive pay. The board also supplies oversight. They will be consulted on major decisions, and their consent is required for irreversible ones (e.g. a major acquisition or change of strategy). The auditor will report to them so they can keep apprised of the financial state of the company. The amount of money the CEO can spend without oversight is quite limited. Most of the firm’s revenue probably goes to expenses; of the profits, the board will exercise oversight over decisions around dividends, buybacks and major acquisitions. The CEO will have more discretion over capital expenditures, but even then the board will have a say on the total size and general strategy, and all capex will be expected to follow the north star of future profitability. A founder-CEO might retain some non-trivial economic interest in the profits (say 10% if it was a small founding team and they grew rapidly with limited need for outside capital), which is truly theirs to spend as they wish; a hired CEO would have much less. How does the clause change this? In contrast, the clause appears to give a lot more latitude to management of a successful AGI firm. Some of the typical constraints remain. The firm must still pay its suppliers, and continue to produce goods and services that others value enough to pay for them more than they cost to produce. 
Operating decisions will remain judged by profitability, and the board will continue to have oversight over major decisions. However, a huge amount of profit is effectively transferred from third party investors to the CEO or management team. They go from probably a few percent of the profits to spend as they wish to controlling the distribution of perhaps half. Additionally, some of this is likely tax deductible. It is true that the CEO couldn’t spend the money on personal yachts and the like. However, extremely rich peopl...
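The marginal structure described earlier works like income-tax brackets: each additional slice of profit above a threshold carries a higher donation rate. Here is a minimal sketch with made-up thresholds and rates, not the actual schedule from the O'Keefe et al. proposal:

```python
# Hypothetical windfall-clause schedule (made-up numbers, not the actual proposal):
# donate a marginal percentage of profits above each threshold, like tax brackets.
BRACKETS = [          # (profit threshold in $bn, marginal donation rate above it)
    (100, 0.01),
    (500, 0.20),
    (1000, 0.50),
]

def windfall_donation(profit_bn: float) -> float:
    """Total donation owed (in $bn) under the hypothetical schedule."""
    owed = 0.0
    for i, (threshold, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if profit_bn > threshold:
            owed += (min(profit_bn, upper) - threshold) * rate
    return owed

print(windfall_donation(50))    # 0.0 -- below every threshold, nothing owed
print(windfall_donation(2000))  # 0.01*400 + 0.20*500 + 0.50*1000 = 604.0
```

A schedule like this costs nothing at today's profit levels, which is exactly the "cheap in the short term" property the post describes, while shifting control over a very large pool of future money.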
Mar 09, 2023
EA - Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public by Otto
08:13
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Paper Summary: The Effectiveness of AI Existential Risk Communication to the American and Dutch Public, published by Otto on March 9, 2023 on The Effective Altruism Forum. This is a summary of the following paper by Alexia Georgiadis (Existential Risk Observatory): Thanks to @Lara Mani, @Karl von Wendt, and Alexia Georgiadis for their help in reviewing and writing this post. Any views expressed in this post are not necessarily theirs. The rapid development of artificial intelligence (AI) has evoked both positive and negative sentiments due to its immense potential and the inherent risks associated with its evolution. There are growing concerns that if AI surpasses human intelligence and is not aligned with human values, it may cause significant harm and even lead to the end of humanity. However, the general public's knowledge of these risks is limited. As advocates for minimising existential threats, the Existential Risk Observatory believes it is imperative to educate the public on the potential risks of AI. Our introductory post outlines some of the reasons why we hold this view (this post is also relevant). To increase public awareness of AI's existential risk, effective communication strategies are necessary. This research aims to assess the effectiveness of communication interventions currently being used to increase awareness about AI existential risk, namely news publications and videos. To this end, we conducted surveys to evaluate the impact of these interventions on raising awareness among participants. Methodology. This research aims to assess the effectiveness of different media interventions, specifically news articles and videos, in promoting awareness of the potential dangers of AI and its possible impact on human extinction. It analyses the impact of AI existential risk communication strategies on the awareness of the American and Dutch populations, and investigates how social indicators such as age, gender, education level, country of residence, and field of work affect the effectiveness of AI existential risk communication. The study employs a pre-post design, which involves administering the same intervention and assessment to all participants and measuring their responses at two points in time. The research utilises a survey method for collecting data, which was administered to participants through an online Google Forms application. The survey consists of three sections: pre-test questions, the intervention, and post-test questions. The effectiveness of AI existential risk communication is measured by comparing the results of quantitative questions from the pre-test and post-test sections, and the answers to the open-ended questions provide further understanding of any changes in the participant's perspective. The research measures the effectiveness of the media interventions by using two main indicators: "Human Extinction Events" and "Human Extinction Percentage." The "Human Extinction Events" indicator asks participants to rank the events that they believe could cause human extinction in the next century, and the research considers it effective if participants rank AI higher post-intervention or mention it after the treatment when they did not mention it before. 
If the placement of AI remained the same before and after the treatment, or if participants did not mention AI before or after the treatment, the research considered that there was no effect in raising awareness. The "Human Extinction Percentage" indicator asks for the participants' opinion on the likelihood, in percentage, of human extinction caused by AI in the next century. If there was an increase in the percentage of likelihood given by participants, this research considered that there was an effect in raising awareness. If there is no change or a decrease in the percentage, this r...
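To make the scoring of the two indicators concrete, here is a minimal sketch with invented responses; this is illustrative only and is not the paper's data or analysis code:

```python
# Hypothetical scoring of the two pre-post indicators described above
# (invented responses; not the paper's actual data or analysis code).
participants = [
    # (AI rank before, AI rank after, extinction % before, extinction % after)
    (5, 2, 1.0, 10.0),    # ranked AI higher and raised their probability: effect on both
    (3, 3, 5.0, 5.0),     # no change on either indicator
    (None, 4, 0.0, 2.0),  # did not mention AI before, mentioned it after
]

rank_effect = sum(
    1 for before, after, _, _ in participants
    if after is not None and (before is None or after < before)  # lower rank = higher concern
)
pct_effect = sum(1 for _, _, p0, p1 in participants if p1 > p0)

print(f"Rank indicator: effect in {rank_effect}/{len(participants)} participants")
print(f"Percentage indicator: effect in {pct_effect}/{len(participants)} participants")
```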
Mar 09, 2023
EA - FTX Poll Post - What do you think about the FTX crisis, given some time? by Nathan Young
01:15
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: FTX Poll Post - What do you think about the FTX crisis, given some time?, published by Nathan Young on March 8, 2023 on The Effective Altruism Forum. Tl;dr: The community holds views on things. We should understand what they are. I think I am building a sense of the community feeling, but perhaps it's very inaccurate. Agreevote to agree, disagreevote to disagree; upvote to signify importance, downvote to signify unimportance. Doing polls on the forum is bad, but I think it's better than nothing. I have some theories about what people feel and I'm trying to disprove them. If you want more accurate polling then someone could run that. I'm open to the idea that poll comments in general are annoying or that I run them too soon (though people also DM to thank me for them), but this is a top-level post - if you don't like it, just downvote it. If you do like it, upvote it. Probably others will like it too. Add your own questions or DM me and I will add them. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 09, 2023
EA - Against EA-Community-Received-Wisdom on Practical Sociological Questions by Michael Cohen
25:10
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against EA-Community-Received-Wisdom on Practical Sociological Questions, published by Michael Cohen on March 9, 2023 on The Effective Altruism Forum. In my view, there is a rot in the EA community that is so consequential that it inclines me to discourage effective altruists from putting much, if any, trust in EA community members, EA "leaders", the EA Forum, or LessWrong. But I think that it can be fixed, and the EA movement would become very good. In my view, this rot comes from incorrect answers to certain practical sociological questions, like: How important for success is having experience or having been apprenticed to someone experienced? Is the EA Forum a good tool for collaborative truth-seeking? How helpful is peer review for collaborative truth-seeking? Meta-1. Is "Defer to a consensus among EA community members" a good strategy for answering practical sociological questions? Meta-2. How accurate are conventional answers to practical sociological questions that many people want to get right? I'll spend a few sentences attempting to persuade EA readers that my position is not easily explained away by certain things they might call mistakes. Most of my recent friends are in the EA community. (I don't think EAs are cringe). I assign >10% probability to AI killing everyone, so I'm doing technical AI Safety research as a PhD student at FHI. (I don't think longtermism or sci-fi has corrupted the EA community). I've read the sequences, and I thought they were mostly good. (I'm not "inferentially distant"). I think quite highly of the philosophical and economic reasoning of Toby Ord, Will MacAskill, Nick Bostrom, Rob Wiblin, Holden Karnofsky, and Eliezer Yudkowsky. (I'm "value-aligned", although I object to this term). Let me begin with an observation about Amazon's organizational structure. From what I've heard, Team A at Amazon does not have to use the tool that Team B made for them. Team A is encouraged to look for alternatives elsewhere. And Team B is encouraged to make the tool into something that they can sell to other organizations. This is apparently how Amazon Web Services became a product. The lesson I want to draw from this is that wherever possible, Amazon outsources quality control to the market (external people) rather than having internal "value-aligned" people attempt to assess quality and issue a pass/fail verdict. This is an instance of the principle: "if there is a large group of people trying to answer a question correctly (like 'Is Amazon's tool X the best option available?'), and they are trying (almost) as hard as you to answer it correctly, defer to their answer." That is my claim; now let me defend it, not just by pointing at Amazon, and claiming that they agree with me. High-Level Claims. Claim 1: If there is a large group of people trying to answer a question correctly, and they are trying (almost) as hard as you to answer it correctly, any consensus of theirs is more likely to be correct than you. There is extensive evidence (Surowiecki, 2004) that aggregating the estimates of many people produces a more accurate estimate as the number of people grows. It may matter in many cases that people are actually trying rather than just professing to try. 
If you have extensive and unique technical expertise, you might be able to say no one is trying as hard as you, because properly trying to answer the question correctly involves seeking to understand the implications of certain technical arguments, which only you have bothered to do. There is potentially plenty of gray area here, but hopefully, all of my applications of Claim 1 steer well clear of it. Let's now turn to Meta-2 from above. Claim 2: For practical sociological questions that many people want to get right, if there is a conventional answer, you should go with the conventional answer....
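As a quick numerical illustration of the aggregation point behind Claim 1 (a toy simulation, not data from Surowiecki or the post), here is a minimal Python sketch showing the error of a group's average estimate shrinking as the group grows:

```python
# Toy wisdom-of-crowds simulation: each person estimates a true quantity with
# independent noise; the group average gets more accurate as the group grows.
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0

for n in [1, 10, 100, 1000]:
    trials = rng.normal(true_value, 20.0, size=(2000, n))  # 2000 groups of n estimators
    group_means = trials.mean(axis=1)
    rmse = np.sqrt(np.mean((group_means - true_value) ** 2))
    print(f"group size {n:4d}: RMSE of group average = {rmse:.2f}")
```

The simulation assumes independent, unbiased errors; correlated or systematically biased estimators would weaken the effect, which is one reason the "actually trying" caveat in the claim matters.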
Mar 09, 2023
EA - [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever by Otto
06:29
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Crosspost] Why Uncontrollable AI Looks More Likely Than Ever, published by Otto on March 8, 2023 on The Effective Altruism Forum. This is a crosspost from Time Magazine, which also appeared in full at a number of other non-paywalled news websites. BY OTTO BARTEN AND ROMAN YAMPOLSKIY. Barten is director of the Existential Risk Observatory, an Amsterdam-based nonprofit. Yampolskiy is a computer scientist at the University of Louisville, known for his work on AI Safety. “The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control,” mathematician and science fiction writer I.J. Good wrote over 60 years ago. These prophetic words are now more relevant than ever, with artificial intelligence (AI) gaining capabilities at breakneck speed. In recent weeks, many jaws dropped as people witnessed the transformation of AI from a handy but decidedly unscary recommender algorithm to something that at times seemed to act worryingly humanlike. Some reporters were so shocked that they reported their conversation histories with large language model Bing Chat verbatim. And with good reason: few expected that what we thought were glorified autocomplete programs would suddenly threaten their users, refuse to carry out orders they found insulting, break security in an attempt to save a child’s life, or declare their love to us. Yet this all happened. It can already be overwhelming to think about the immediate consequences of these new models. How are we going to grade papers if any student can use AI? What are the effects of these models on our daily work? Any knowledge worker, who may have thought they would not be affected by automation in the foreseeable future, suddenly has cause for concern. Beyond these direct consequences of currently existing models, however, awaits the more fundamental question of AI that has been on the table since the field’s inception: what if we succeed? That is, what if AI researchers manage to make Artificial General Intelligence (AGI), or an AI that can perform any cognitive task at human level? Surprisingly few academics have seriously engaged with this question, despite working day and night to get to this point. It is obvious, though, that the consequences will be far-reaching, much beyond the consequences of even today’s best large language models. If remote work, for example, could be done just as well by an AGI, employers may be able to simply spin up a few new digital employees to perform any task. The job prospects, economic value, self-worth, and political power of anyone not owning the machines might therefore completely dwindle. Those who do own this technology could achieve nearly anything in very short periods of time. That might mean skyrocketing economic growth, but also a rise in inequality, while meritocracy would become obsolete. But a true AGI could not only transform the world, it could also transform itself. Since AI research is one of the tasks an AGI could do better than us, it should be expected to be able to improve the state of AI. This might set off a positive feedback loop with ever better AIs creating ever better AIs, with no known theoretical limits. This would perhaps be positive rather than alarming, had it not been that this technology has the potential to become uncontrollable. 
Once an AI has a certain goal and self-improves, there is no known method to adjust this goal. An AI should in fact be expected to resist any such attempt, since goal modification would endanger carrying out its current one. Also, instrumental convergence predicts that AI, whatever its goals are, might start off by self-improving and acquiring more resources once it is sufficiently capable of doing so, since this should help it achieve whatever further goal ...
Mar 08, 2023
EA - 80,000 Hours two-year review: 2021–2022 by 80000 Hours
03:21
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours two-year review: 2021–2022, published by 80000 Hours on March 8, 2023 on The Effective Altruism Forum. 80,000 Hours has released a review of our programmes for the years 2021 and 2022. The full document is available for the public, and we’re sharing the summary below. You can find our previous evaluations here. We have also updated our mistakes page. 80,000 Hours delivers four programmes: website, job board, podcast, and one-on-one. We also have a marketing team that attracts users to these programmes, primarily by getting them to visit the website. Over the past two years, three of four programmes grew their engagement 2-3x: Podcast listening time in 2022 was 2x higher than in 2020 Job board vacancy clicks in 2022 were 3x higher than in 2020 The number of one-on-one team calls in 2022 was 3x higher than in 2020 Web engagement hours fell by 20% in 2021, then grew by 38% in 2022 after we increased investment in our marketing. From December 2020 to December 2022, the core team grew by 78% from 14 FTEs to 25 FTEs. Ben Todd stepped down as CEO in May 2022 and was replaced by Howie Lempel. The collapse of FTX in November 2022 caused significant disruption. As a result, Howie went on leave from 80,000 Hours to be Interim CEO of Effective Ventures Foundation (UK). Brenton Mayer took over as Interim CEO of 80,000 Hours. We are also spending substantially more time liaising with management across the Effective Ventures group, as we are a project of the group. We had previously held up Sam Bankman-Fried as a positive example of one of our highly rated career paths, a decision we now regret and feel humbled by. We are updating some aspects of our advice in light of our reflections on the FTX collapse and the lessons the wider community is learning from these events. In 2023, we will make improving our advice a key focus of our work. As part of this, we’re aiming to hire for a senior research role. We plan to continue growing our main four programmes and will experiment with additional projects, such as relaunching our headhunting service and creating a new, scripted podcast with a different host. We plan to grow the team by ~45% in 2023, adding an additional 11 people. Our provisional expansion budgets for 2023 and 2024 (excluding marketing) are $12m and $17m. We’re keen to fundraise for both years and are also interested in extending our runway — though we expect that the amount we raise in practice will be heavily affected by the funding landscape. The Effective Ventures group is an umbrella term for Effective Ventures Foundation (England and Wales registered charity number 1149828 and registered company number 07962181) and Effective Ventures Foundation USA, Inc. (a section 501(c)(3) tax-exempt organisation in the USA, EIN 47-1988398), two separate legal entities which work together. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 08, 2023
EA - EA needs Life-Veterans and "Less Smart" people by Jeffrey Kursonis
06:11
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA needs Life-Veterans and "Less Smart" people, published by Jeffrey Kursonis on March 8, 2023 on The Effective Altruism Forum. Healthy communities have all kinds. There is a magic in the plant world when a diversity of plants co-exist. Permaculture has been innovating through realizing how the community of plants help each other by each contributing a different gift to the benefit of the whole. Plants communicate with each other via mushroom-like strands underground and work together. Interestingly, they speak French. Just kidding. I’ve been in a movement that changed the world in a positive way and eventually fell apart; it was very, very similar to EA in many ways — a bunch of talented young people trying to do good in the world. We had all the same criticisms people throw at EA, and we did listen and learn as much as we had the capacity to. I won’t tell the whole story here, but we didn’t fall apart because of bad things; it was a necessary evolution. But one of the key problems that kept us from surviving was a lack of diversity. For some reason when I was young (I think it’s because I was smart), I figured out that if older people had already faced all the challenges I face, maybe they would be a good source of data and of the gritty life wisdom of how to apply the data. So I would go out of my way to befriend them and listen to them. It was mixed results. Lots of older people are just bitter, but there were enough who had made it through a life full of thriving and were happy to share it. You just have to find the right ones. Most of the world is made up of average people; smart people call them dumb, but they’re really just average. The thing is, if everybody in the room is smart, who is going to see the world as most of the world sees it? That’s a data-poor room. If we are really smart we’ll make sure to surround ourselves not just with other smart people but with a variety of young and old, different cultures, different life experience levels, and some average people. That’s a room rich in data. Never underestimate the simple wisdom of simple people. And because most of the people in the world are religious, we should have them around too. You just need to find the right ones: generous and kind, people who want everyone to thrive. Wisdom is learning how to live in reality. When we’re young we are really far from reality: you have a bedroom and a phone and an iPad in a lovely house, all provided for you magically. You have no clue how that all accrued to you. You’re not yet in touch with reality. But as you attend the school of hard knocks year after year, slowly but surely reality drifts in. Essentially what happens is that as you are slowly disconnected from your parents and the “magical accrual” fades away, you learn how real life works. Wise people have had the time it takes to boil it all down to pure essence, filter out the dross and see the pure reality. When you can see it, you can figure out how to negotiate it. It simply takes some years and a person oriented toward thriving rather than increasing bitterness. If you have a lot of data, what you need more than anything is wisdom to interpret the data and wisdom to creatively imagine real-world applications from the data. You simply cannot be a movement committed to getting more effective at doing good in the world if you do not have some elder wisdom in the room.
It’s a glaring deficit. Thank God for Singer, but he’s not around enough. Especially in this time of post-FTX self-examination and reform, in this time of making efforts concerning the mental health of young people under pressure to save the world—this is the time to round out the community with some village-like balance: the young, the old, the strong, the average, all making life thrive like plants all mixed up in the jungle. And artists! My God...
Mar 08, 2023
EA - Suggest new charity ideas for Charity Entrepreneurship by CE
06:07
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggest new charity ideas for Charity Entrepreneurship, published by CE on March 8, 2023 on The Effective Altruism Forum. At Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed. Each year our research team collates hundreds and hundreds of ideas for promising new charities. We scour the research, we talk to colleagues in partner organisations and we solicit ideas from academia. We then vet these ideas and whittle them down to the top ~3. We create detailed reports about these top ideas and then we recruit, train and fund entrepreneurs to take these ideas and make them a reality. In 5 years we've launched over 23 exceptional organisations. You can read more about charities we incubated here. In 2023 Charity Entrepreneurship will be researching two new cause areas: mass media interventions and preventative animal advocacy. We want your ideas!!! Prize: If you give us an idea which leads to a new charity then you will win a copy of the Charity Entrepreneurship handbook, a box of vegan cookies and $100 to a charity of your choice. And more importantly, a new charity will be started! Notes: If multiple people submit the same idea we will give the award to the first submission. Max 5 prizes will be awarded. If you submit an idea already on our list, you are still eligible for a prize. Please submit your ideas into this form by the end of the day on Sunday 12th March. Mass media. The cause area: By ‘mass media’ intervention we refer to (1) social and behaviour change communication campaigns delivered through (2) mass media aiming to improve (3) human wellbeing. Definitions: 1. Social and Behavior Change Communication (SBCC) – the strategic use of communication to promote positive outcomes, based on proven theories and models of behavior change (more here). 2. Mass media – mass communication modes that reach a very large audience, where any targeting of segmented audiences can be mass applied (e.g. this would include online advertising that targets relevant audiences, but not posters in health centers) (more here). Examples include: TV, radio, mobile phones, newspapers, outdoor advertising. 3. Human wellbeing – for our purposes, wellbeing refers broadly to areas of human health, development and poverty. Headline metric: Our key metrics for the quantitative side of this research will be DALYs averted or % income increases. We will likely compare across these metrics using a moral weight formula (see the illustrative sketch after this excerpt). We may set our own moral weights or use recent moral weights by GiveWell (e.g. from here). Note that the use of this as a headline metric does not mean that other factors (autonomy, environmental effects, suffering not captured by DALYs) are excluded, although they may not be explicitly quantified. Scope limitations: Currently, all interventions that could reasonably be considered mass media are in scope. If in doubt, assume it is in scope. Example ideas (note: there is no guarantee that any of the following ideas will make it past the initial filter): promoting healthier diets; promoting CBT tools for stress; messaging against tobacco use with resources for quitting; signposting to available support in cases of abuse, violence, etc.; anti-suicide campaigns; changing HIV attitudes; promoting cancer screening (cervical, breast, bowel, prostate, etc.);
information campaigns about criminal politicians; encouraging lower sugar consumption. Preventative animal advocacy. The cause area: The focus this year is on interventions and policies that prevent future harms done to animals, as opposed to solving current problems. We will be looking for interventions that, as well as having some short-run evidence of impact, will prevent future problems, i.e. have the biggest impact on farmed animals in the future, say ...
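As a rough illustration of the "moral weight formula" idea mentioned in the headline-metric discussion above, here is a minimal Python sketch of how the two headline metrics (DALYs averted and income increases) might be converted into one comparable quantity. All weights and per-intervention numbers are invented placeholders, not figures from Charity Entrepreneurship or GiveWell.

```python
# Illustrative sketch only: comparing interventions measured in different
# headline metrics (DALYs averted vs. income doublings) via moral weights.
# The weights and outcomes below are made-up placeholders.

UNITS_PER_DALY_AVERTED = 1.0       # assumed moral weight (placeholder)
UNITS_PER_INCOME_DOUBLING = 0.3    # assumed value of doubling one person's income for a year

def units_of_good(dalys_averted: float, income_doublings: float) -> float:
    """Convert both headline metrics into a single comparable 'units of good' score."""
    return (dalys_averted * UNITS_PER_DALY_AVERTED
            + income_doublings * UNITS_PER_INCOME_DOUBLING)

# Two hypothetical mass-media interventions, outcomes per $100,000 spent:
health_campaign = units_of_good(dalys_averted=80, income_doublings=0)
income_campaign = units_of_good(dalys_averted=0, income_doublings=350)

print(f"Health campaign: {health_campaign:.1f} units per $100k")
print(f"Income campaign: {income_campaign:.1f} units per $100k")
```

Under these invented weights the income campaign would come out ahead; with different moral weights the ranking could flip, which is exactly why the choice of weights matters.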
Mar 08, 2023
EA - Evidence on how cash transfers empower women in poverty by GiveDirectly
02:44
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Evidence on how cash transfers empower women in poverty, published by GiveDirectly on March 8, 2023 on The Effective Altruism Forum. Donations to GiveDirectly put power in the hands of recipients, 62% of whom are women. On International Women’s Day, hear directly from women and girls in poverty in Malawi about the unique ways that direct cash empowers them. This impact is more than anecdotal; research finds that cash aid lets women improve their lives in many ways. Below, we break down the evidence by story. Maternal & infant health. Lenita - “When I was pregnant, I would fall sick [and] could not afford the fare to go to the hospital.” Studies find that cash can: increase the use of health facilities; improve birth weight and reduce infant mortality – one study found GiveDirectly’s program reduced child mortality by ~70% and improved child growth. Education & domestic violence. Agatha - “My husband was so abusive... so I left him and went back to try to finish school.” Studies find that cash can: reduce incidents of physical abuse of a woman by a male partner – one study found GiveDirectly’s program reduced physical intimate partner violence; increase school attendance for girls. Decision-making power. Beatrice - “My husband and I always argued... about how to spend what little money we had. Now, when we receive the money, we plan together.” Studies find that cash can: increase a woman’s likelihood of being the sole or joint decision-maker. Entrepreneurship & savings. Anesi - “With the businesses I started, I want to buy land for my children so they will never forget me.” Studies find that cash can: increase entrepreneurship – one study of GiveDirectly’s program found new business creation doubled (for more on female entrepreneurs, watch the linked video); increase the number of families saving and the amount they saved – one study of GiveDirectly’s program found women doubled their savings (to learn about women's savings groups, watch the linked video). Elderly support. Faidesi - “Now that I am old, I can’t farm and often sleep hungry. I would have been dead if it wasn’t for these payments.” Studies find that cash can: reduce the likelihood of having had an illness in the last three months – one study in Tanzania found cash reduced the number of doctor visits made by women over 60. References: Bastagli et al. 2016; Siddiqi et al. 2018; McIntosh & Zeitlin 2018; Haushofer et al. 2019; McIntosh & Zeitlin 2020; Pega et al. 2017; Evans et al. 2014. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 08, 2023
EA - Suggestion: A workable romantic non-escalation policy for EA community builders by Severin
04:32
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Suggestion: A workable romantic non-escalation policy for EA community builders, published by Severin on March 8, 2023 on The Effective Altruism Forum. Last year, I attended an Authentic Leadership Training with the Authentic Relating org Authentic Revolution, and was course lead mentee for a second iteration. One thing that struck me about AuthRev's ways is their approach to policing romantic relationships between facilitators and participants, within a community whose personal/professional overlap is even stronger than EA’s. They have a romantic non-escalation policy that goes roughly like this: For three months after a retreat, and for one month after an evening event, facilitators are prohibited from engaging romantically, or even hinting at engaging romantically, with attendees. The only exception is when a particular attendee and the facilitator already dated beforehand. These numbers are drawn from experience: As some people have most of their social life within the community, longer timelines are so unworkable that the default is to just ignore them and do everything in secret. Shorter timelines, however, tend to be insufficiently effective for mitigating the problems this policy tries to address. Granted, Authentic Relating is a set of activities that is far more emotionally intense than what usually happens at EA events. However, I think there are some reasons for EA community builders to adhere to this policy anyway: Romance distracts from the cause. Attendees should focus on getting as much EA-related value as possible out of EA events, and we as organizers should focus on generating as much value as possible. Thinking about which hot community builder you can get with later distracts from that. And, thinking about which hot participant you can get with later on can lead to decisions way more costly than just lost opportunities to provide more value. None of us are as considerate and attuned in our private lives as when doing community building work. Sometimes we don't have the energy to listen well. Sometimes we really need to vent. Sometimes we are just bad at communication when we don't pay particular attention to choosing our words. The personas we put up at work just aren't real people. If people fall in love with the version of me that they see leading groups, they will inevitably be disappointed later. Power differentials make communication about consent difficult. And the organizer/attendee separation creates a power differential, whether we like it or not. The greater the power differential, the more important it is to move very slowly and carefully in romance. Status is sexy. Predatorily-minded people know this. Thus, they are incentivized to climb the social EA ladder for the wrong reasons. If we set norms that make it harder for people to leverage their social status for romantic purposes, we can correct for this. That is, as long as our rules are not so harsh that they will just be ignored by default. Though a part of me finds this policy inconvenient, I think it would be a concerning sign if I weren’t ready to commit to it after I saw its value in practice. However, EA is different from AR, and a milder/different/more specified version might make more sense for us. Accordingly, I’ll let the idea simmer a bit before I fully commit. Which adjustments would you make for our context?
Some specific questions I have: AR retreats are intensely facilitated experiences. During at least some types of EA retreats, the hierarchies are much flatter, and participants see the organizers "in function" only roughly as much as during an evening-long workshop. Does this justify shortening the three months, e.g. to one month regardless of the type of event? I'd expect that the same rule should apply to professional 1-on-1s, for example EA career coaching....
Mar 08, 2023
EA - Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges by NunoSempere
08:57
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges, published by NunoSempere on March 8, 2023 on The Effective Altruism Forum. In the second half of 2022, we at QURI announced the Squiggle Experimentation Challenge and a $5k challenge to quantify the impact of 80,000 Hours' top career paths. For the first contest, we got three long entries. For the second, we got five, but most were fairly short. This post presents the winners. Squiggle Experimentation Challenge. Objectives: From the announcement post: [Our] team at QURI has recently released Squiggle, a very new and experimental programming language for probabilistic estimation. We’re curious about what promising use cases it could enable, and we are launching a prize to incentivize people to find this out. Top entries: Tanae adds uncertainty estimates to each step in GiveWell’s estimate for AMF in the Democratic Republic of Congo, and ends up with an endline estimate for lives saved (though not other effects); a generic sketch of this kind of uncertainty propagation appears below. Dan creates a probabilistic estimate for the effectiveness of the Lead Exposure Elimination Project in Malawi. In the process, he gives some helpful, specific improvements we could make to Squiggle. In particular, his feedback motivated us to make Squiggle faster, going from part of his model not being able to run at all, to his model running in 2 minutes, then in 3 to 7 seconds. Erich creates a Squiggle model to estimate the number of future EA billionaires. His estimate gives a 5-10% probability of negative billionaire growth, i.e., of losing a billionaire, as, in fact, happened. In hindsight, this seems like a neat example of quantification capturing some relevant tail risk. Perhaps if people had looked to this estimate when making decisions about earning to give or personal budgeting in light of FTX’s largesse, they might have made better decisions. But it wasn’t the case that this particular estimate was incorporated into the way that people made choices. Rather, my impression is that it was posted in the EA Forum and then forgotten about. Perhaps it would have required more work and vetting to make it useful. Results (estimated relative value, normalized to 100%, and prize): "Adding Quantified Uncertainty to GiveWell's Cost Effectiveness Analysis of the Against Malaria Foundation" received 67% and $600; "CEA LEEP Malawi" received 26% and $300; "How many EA Billionaires five years from now?" received 7% and $100. Judges were Ozzie Gooen, Quinn Dougherty, and Nuño Sempere. You can see our estimates here. Note that per the contest rules, we judged these prizes before October 1, 2022—so before the downfall of FTX, and winners received their prizes shortly thereafter. Previously I mentioned the results in this edition of the Forecasting Newsletter. $5k challenge to quantify the impact of 80,000 Hours' top career paths. Objectives: With this post, we hoped to elicit estimates that could be built upon to estimate the value of 80,000 Hours’ top 10 career paths. We were also curious about whether participants would use Squiggle or other tools when given free rein to choose their tools.
Entries Vasco Grillo looks at the cost-effectiveness of operations, first looking at various ways of estimating the impact of the EA community and then sending a brief survey to various organizations about the “multiplier” of operations work, which is, roughly, the ratio of the cost-effectiveness of one marginal hour of operations work to the cost-effectiveness of one marginal hour of their direct work. He ends up with a pretty high estimate for that multiplier, of between ~4.5 and ~13. @10xrational gives fairly granular estimates of the value of various community-building activities in terms of first-order effects of more engaged EAs, and second-order effects of more donations to effective charities and more people ...
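The contest entries themselves are written in Squiggle, but the underlying technique, propagating quantified uncertainty through a cost-effectiveness calculation, can be illustrated with a minimal Monte Carlo sketch in Python. Every distribution and parameter value below is invented for illustration; none are taken from the entries, GiveWell, or QURI.

```python
# Minimal sketch of propagating uncertainty through a cost-effectiveness
# estimate via Monte Carlo sampling. All numbers below are invented
# placeholders, not figures from GiveWell or the contest entries.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Uncertain inputs, each a distribution rather than a point estimate:
cost_per_net = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=N)   # USD per net (assumed)
nets_per_death_averted = rng.lognormal(np.log(900), 0.4, size=N)    # nets needed (assumed)
adjustment = rng.beta(8, 2, size=N)                                 # misc. discount factors (assumed)

cost_per_life_saved = cost_per_net * nets_per_death_averted / adjustment

lo, median, hi = np.percentile(cost_per_life_saved, [5, 50, 95])
print(f"Cost per life saved: median ${median:,.0f} (90% interval ${lo:,.0f} to ${hi:,.0f})")
```

The point of the exercise, as in Tanae's entry, is that the output is an interval rather than a single number, so decision-relevant tail scenarios stay visible.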
Mar 08, 2023
EA - Redirecting private foundation grants to effective charities by Kyle Smith
02:06
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Redirecting private foundation grants to effective charities, published by Kyle Smith on March 6, 2023 on The Effective Altruism Forum. Project Idea: While completing the EA intro course, I was thinking about how private foundations give $60b+ a year, largely to ineffective charities. I was wondering if that may present an opportunity for a small organization that works to redirect PF grants to effective charities. I see two potential angles of attack: Lobby/consult with PFs on making effective grants. GiveWell does the hard job of evaluating charities, but a more boutique solution could be useful to private foundations. I have a large dataset of electronically filed 990-PFs, and I thought it may be useful to try to identify PFs that are more likely to be persuaded by this sort of lobbying. For example, foundations that are younger, already give to international charities, and give to a large number of charities (there’s a lot of interesting criteria that could be used). A list could be generated of PFs that are more likely to redirect funds, which could then be targeted (a rough filtering sketch is included below). Target grantmakers by offering training, attending conferences, etc. on effective grantmaking. (Maybe some other EA-aligned org is doing this?) GiveWell says they have directed ~$1b in effective gifts since 2011. Even if only a small number of foundations could be persuaded, the total dollars driven could be pretty large. And for a pretty small investment, I think. Short introduction: My name is Kyle Smith; I am an assistant professor of accounting at Mississippi State University. My research is mostly on how donors use accounting reports in their giving decisions. I have done some archival research examining how private foundations use accounting information, and am starting up a qualitative study where we are going to interview grantmakers to understand how they use accounting information in their grantmaking process. Does anyone know of any orgs/people specifically working on this problem? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
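As a hypothetical illustration of the shortlisting idea described above (filtering a 990-PF extract by foundation age, international giving, and number of grantees), here is a small pandas sketch. The file name, column names, and thresholds are all invented; a real 990-PF extract would need its own field mapping and cleaning.

```python
# Hypothetical sketch of shortlisting private foundations from a 990-PF
# dataset by the criteria mentioned in the post. Columns and thresholds
# are invented placeholders.
import pandas as pd

df = pd.read_csv("990pf_extract.csv")  # hypothetical extract of e-filed 990-PFs

shortlist = df[
    (df["years_since_founding"] <= 15)      # younger foundations
    & (df["intl_grant_share"] >= 0.10)      # already give internationally
    & (df["num_grantees"] >= 25)            # give to a large number of charities
].sort_values("total_giving", ascending=False)

print(shortlist[["ein", "name", "total_giving"]].head(20))
```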
Mar 07, 2023
EA - Abuse in LessWrong and rationalist communities in Bloomberg News by whistleblower67
25:22
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Abuse in LessWrong and rationalist communities in Bloomberg News, published by whistleblower67 on March 7, 2023 on The Effective Altruism Forum. This is a linkpost. Try the non-paywalled link here. Silicon Valley’s Obsession With Killer Rogue AI Helps Bury Bad Behavior. Sonia Joseph was 14 years old when she first read Harry Potter and the Methods of Rationality, a mega-popular piece of fan fiction that reimagines the boy wizard as a rigid empiricist. This rational Potter tests his professors’ spells with the scientific method, scoffs at any inconsistencies he finds, and solves all of wizardkind’s problems before he turns 12. “I loved it,” says Joseph, who read HPMOR four times in her teens. She was a neurodivergent, ambitious Indian American who felt out of place in her suburban Massachusetts high school. The story, she says, “very much appeals to smart outsiders.” A search for other writing by the fanfic’s author, Eliezer Yudkowsky, opened more doors for Joseph. Since the early 2000s, Yudkowsky has argued that hostile artificial intelligence could destroy humanity within decades. This driving belief has made him an intellectual godfather in a community of people who call themselves rationalists and aim to keep their thinking unbiased, even when the conclusions are scary. Joseph’s budding interest in rationalism also drew her toward effective altruism, a related moral philosophy that’s become infamous by its association with the disgraced crypto ex-billionaire Sam Bankman-Fried. At its core, effective altruism stresses the use of rational thinking to make a maximally efficient positive impact on the world. These distinct but overlapping groups developed in online forums, where posts about the dangers of AI became common. But they also clustered in the Bay Area, where they began sketching out a field of study called AI safety, an effort to make machines less likely to kill us all. Joseph moved to the Bay Area to work in AI research shortly after getting her undergraduate degree in neuroscience in 2019. There, she realized the social scene that seemed so sprawling online was far more tight-knit in person. Many rationalists and effective altruists, who call themselves EAs, worked together, invested in one another’s companies, lived in communal houses and socialized mainly with each other, sometimes in a web of polyamorous relationships. Throughout the community, almost everyone celebrated being, in some way, unconventional. Joseph found it all freeing and exciting, like winding up at a real-life rationalist Hogwarts. Together, she and her peers were working on the problems she found the most fascinating, with the rather grand aim of preventing human extinction. At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.)
Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment. “In an ideal world, the community would have had some serious discussions about sexual assault policy and education: ‘What are our blind spots? How could this have happened? How can we design mechanisms to pr...
Mar 07, 2023
EA - Masterdocs of EA community building guides and resources by Irene H
03:45
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Masterdocs of EA community building guides and resources, published by Irene H on March 7, 2023 on The Effective Altruism Forum. TLDR: I made a comprehensive overview of EA curricula, event organization guides, and syllabi, as well as an overview of resources on EA community building, communications, strategy, and more. The EA community builders I shared them with up to now found them really helpful. Context Together with Jelle Donders, I co-founded the university group at Eindhoven University of Technology in the Netherlands last summer. We followed the UGAP mentorship program last semester and have been thinking a lot about events and programs to organize for our EA group and about general EA community-building strategies. There is a big maze of Google docs containing resources on this, but none of them gives a complete and updated overview. I wanted to share two resources for EA community builders I’ve been working on over the past months. Both I made initially as references for myself, but when I shared them with other community builders, they found them quite helpful. Therefore, I’d now like to share them more widely, so that others can hopefully have the same benefits. EA Eindhoven Syllabi Collection There are many lists of EA curricula, event organization guides, and syllabi, but none of them are complete. Therefore, I made a document to which I save everything of that nature I come across, with the aim of getting a somewhat better overview of everything out there I also went through other lists of this nature and saved all relevant documents to this collection, so it should be a one-stop shop. It is currently 27 pages long and I don’t know of another list that is more exhaustive. (Also compared to the EA Groups Resource Centre, which only offers a few curated resources per topic). I update this document regularly when I come across new resources. When we want to organize something new at my group, we have a look at this document to see whether someone else has done the thing we want to do already so we can save time, or just to get some inspiration. You can find the document here. Community Building Readings I also made a document that contains a lot of resources on EA community building, communications, strategy, and more, related to the EA movement as a whole and to EA groups specifically, that are not specific guides for organizing concrete events, programs, or campaigns, but are aimed at getting a better understanding of more general thinking, strategy and criticism of the EA community. You can find the document here. Disclaimers for both documents I do not necessarily endorse/recommend the resources and advice in these documents. My sole aim with these documents is to provide an overview of the space of the thinking and resources around EA community building, not to advocate for one particular way of going about it. These documents are probably really overwhelming, but my aim was to gather a comprehensive overview of all resources, as opposed to linking only 1 or 2 recommendations, which is the way the Groups Resources Centre or the GCP EA Student Groups Handbook are organized. The way I sorted things into categories will always remain artificial as some boundaries are blurry and some things fit into multiple categories. 
How to use these documents: Using the table of contents or Ctrl + F + [what you’re looking for] probably works best for navigation. Please feel free to place comments and make suggestions if you have additions! When you add something new, please add a source (the name of the group and/or person who made the resource) wherever possible, to give people the credit they’re due and to facilitate others reaching out to the creator if they have more questions. In case of questions, feedback or comments, please reach out to info@eaeindhoven.nl. I hope ...
Mar 07, 2023
EA - Global catastrophic risks law approved in the United States by JorgeTorresC
01:59
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Global catastrophic risks law approved in the United States, published by JorgeTorresC on March 7, 2023 on The Effective Altruism Forum. Executive Summary: The enactment of the Global Catastrophic Risk Management Act represents a significant step forward in global catastrophic risk management. It is the first time a nation has undertaken a detailed analysis of these risks. The law orders the United States government to establish actions for prevention, preparation, and resilience in the face of catastrophic risks. Specifically, the United States government will be required to: Present a global catastrophic risk assessment to the US Congress. Develop a comprehensive risk mitigation plan involving the collaboration of sixteen designated US national agencies. Formulate a strategy for risk management under the leadership of the Secretary of Homeland Security and the Administrator of the Federal Emergency Management Agency (FEMA) of the US. Conduct a national exercise to test the strategy. Provide recommendations to the US Congress. This legislation recognizes as global catastrophic risks: global pandemics, nuclear war, asteroid and comet impacts, supervolcanoes, sudden and severe changes in climate, and threats arising from the use and development of emerging technologies (such as artificial intelligence or engineered pandemics). Our article presents an overview of the legislation, followed by a comparative discussion of international legislation on GCRs. Furthermore, we recommend considering similar laws for adoption within the Spanish-speaking context. Read more (in Spanish). Riesgos Catastróficos Globales is a science-advocacy and research organization working on improving the management of global risks in Spanish-speaking countries. You can support our organization with a donation. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 07, 2023
EA - Model-Based Policy Analysis under Deep Uncertainty by Max Reddel
35:40
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Model-Based Policy Analysis under Deep Uncertainty, published by Max Reddel on March 6, 2023 on The Effective Altruism Forum. This post is based on a talk that I gave at EAGxBerlin 2022. It is intended for policy researchers who want to extend their tool kit with computational tools. I show how we can support decision-making with simulation models of socio-technical systems while embracing uncertainties in a systematic manner. The technical field of decision-making under deep uncertainty offers a wide range of methods to account for various parametric and structural uncertainties while identifying robust policies in a situation where we want to optimize for multiple objectives simultaneously. Summary Real-world political decision-making problems are complex, with disputed knowledge, differing problem perceptions, opposing stakeholders, and interactions between framing the problem and problem-solving. Modeling can help policy-makers to navigate these complexities. Traditional modeling is ill-suited for this purpose. Systems modeling is a better fit (e.g., agent-based models). Deep uncertainty is everywhere. Deep uncertainty makes expected-utility reasoning virtually useless. Decision-Making under Deep Uncertainty is a framework that can build upon systems modeling and overcome deep uncertainties. Explorative modeling > predictive modeling. Value diversity (aka multiple objectives) > single objectives. Focus on finding vulnerable scenarios and robust policy solutions. Good fit with the mitigation of GCRs, X-risks, and S-risks. Complexity Complexity science is an interdisciplinary field that seeks to understand complex systems and the emergent behaviors that arise from the interactions of their components. Complexity is often an obstacle to decision-making. So, we need to address it. Ant Colonies Ant colonies are a great example of how complex systems can emerge from simple individual behaviors. Ants follow very simplistic rules, such as depositing food, following pheromone trails, and communicating with each other through chemical signals. However, the collective behavior of the colony is highly sophisticated, with complex networks of pheromone trails guiding the movement of the entire colony toward food sources and the construction of intricate structures such as nests and tunnels. The behavior of the colony is also highly adaptive, with the ability to respond to changes in the environment, such as changes in the availability of food or the presence of predators. Examples of Economy and Technology Similarly, the world is also a highly complex system, with a vast array of interrelated factors and processes that interact with each other in intricate ways. These factors include the economy, technology, politics, culture, and the environment, among others. Each of these factors is highly complex in its own right, with multiple variables and feedback loops that contribute to the overall complexity of the system. For example, the economy is a highly complex system that involves the interactions between individuals, businesses, governments, and other entities. The behavior of each individual actor is highly variable and can be influenced by a range of factors, such as personal motivations, cultural norms, and environmental factors. 
These individual behaviors can then interact with each other in complex ways, leading to emergent phenomena such as market trends, economic growth, and financial crises. Similarly, technology is a highly complex system that involves interactions between multiple components, such as hardware, software, data, and networks. Each of these components is highly complex in its own right, with multiple feedback loops and interactions that contribute to the overall complexity of the system. The behavior of the system as a whole can then be highly unpredict...
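The summary above contrasts explorative modelling under deep uncertainty with expected-utility reasoning: sample many scenarios across wide parameter ranges, evaluate candidate policies in each, and prefer the robust option rather than the one that is best under a single best-guess scenario. The toy Python sketch below illustrates that workflow; the model dynamics, parameter ranges, and policies are all invented for illustration, and real analyses in this field typically use dedicated tooling such as the EMA Workbench rather than a hand-rolled script.

```python
# Toy sketch of exploratory modelling under deep uncertainty: evaluate
# candidate policies across many sampled scenarios and compare robustness,
# not just the mean outcome. All dynamics and numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
N_SCENARIOS = 5_000

# Deeply uncertain factors, sampled over wide ranges instead of point estimates:
growth = rng.uniform(-0.02, 0.05, N_SCENARIOS)      # background risk growth rate (assumed range)
policy_effect = rng.uniform(0.1, 0.9, N_SCENARIOS)  # how well interventions work (assumed range)
cost_factor = rng.uniform(0.5, 2.0, N_SCENARIOS)    # implementation cost multiplier (assumed range)

def outcome(invest_level: float) -> np.ndarray:
    """Toy objective per scenario: benefit of reduced risk minus implementation cost."""
    benefit = invest_level * policy_effect * (1 + growth) * 100
    cost = invest_level**1.5 * cost_factor * 60
    return benefit - cost

policies = {"low": 0.2, "medium": 0.5, "high": 0.9}
for name, level in policies.items():
    vals = outcome(level)
    # Robustness proxy: the 10th-percentile outcome, i.e. how bad things get under bad luck.
    print(f"{name:>6}: mean={vals.mean():6.1f}, 10th percentile={np.percentile(vals, 10):6.1f}")
```

A policy with a slightly lower mean but a much better 10th-percentile outcome would be preferred under this robustness-oriented framing, which is the core shift away from expected-utility reasoning that the talk describes.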
Mar 06, 2023
EA - More Centralisation? by DavidNash
03:15
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More Centralisation?, published by DavidNash on March 6, 2023 on The Effective Altruism Forum. Summary: I think EA is under-centralised. There are few ‘large’ EA organisations, but most EA opportunities are 1-2 person projects. This is setting up most projects to fail without proper organisational support, and does not provide good incentives for experienced professionals to work on EA projects. EA organisations with good operations could incubate smaller projects before spinning them out. Levels of Centralisation: We could imagine different levels of centralisation for a movement, ranging from fully decentralised to fully centralised. Fully decentralised: everyone works on their own project, with no organisations bigger than 1 person. Fully centralised: everyone works inside the same organisation (e.g. the civil service). It seems that EA tends more towards the decentralised model: there are relatively few larger organisations with ~50 or more people (Open Phil, GiveWell, Rethink Priorities, EVF), there are some with ~5-20 people, and a lot of 1-2 person projects. I think EA would be much worse if it was one large organisation, but there is probably a better balance to be found between the two extremes than we have at the moment. I think being overly decentralised may be setting up most people to fail. Why would being overly decentralised be setting people up to fail? Being an independent researcher/organiser is harder without support systems in place, and trying to coordinate this outside of an organisation is more complicated. These support systems include: having a manager; having colleagues to bounce ideas off and for moral support; having professional HR/operations support; health insurance; and being an employee rather than a contractor/grant recipient that has to worry about receiving future funding (although there are similar concerns about being fired). When people are setting up their own projects it can take up a large proportion of their time in the first year just doing operations to run that project, unrelated to the actual work they want to do. This can include spending a lot of the first year just fundraising for the second year. How a lack of centralisation might affect EA overall: Being a movement with lots of small project work will appeal more to those with a higher risk tolerance, potentially pushing away more experienced people who would want to work on these projects, but within a larger organisation. Having a lot of small organisations will lead to a lot of duplication of operation/administration work. It will be harder to have good governance for lots of smaller organisations; some choose to not have any governance structures at all unless they grow. There is less competition for employees if the choice is between 3 or 4 operationally strong organisations or being in a small org. What can change? Organisations with good operations and governance could support more projects internally - one example of this already is the Rethink Priorities Special Projects Program. These projects can be supported until they have enough experience and internal operations to survive and thrive independently. Programs that are mainly around giving money to individuals could be converted into internal programs, something more similar to the Research Scholars Program, or Charity Entrepreneurship’s Incubation Program. Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 06, 2023
EA - 3 Basic Steps to Reduce Personal Liability as an Org Leader by Deena Englander
02:31
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 3 Basic Steps to Reduce Personal Liability as an Org Leader, published by Deena Englander on March 6, 2023 on The Effective Altruism Forum. It's come to my attention that many of the smaller EA orgs are not putting into place basic protection measures that keep their leaders safe. In the world we live in, risk mitigation and potential lawsuits are a fact of life, and I wouldn't want anyone to put themselves at greater risk just because they are unaware of the risk and easy steps to avoid it. Rule #1: Incorporate. I know most are hesitant to start an actual non-profit since that is more expensive and time-consuming, but at the least, you can form an LLC. That means that any liability accrued by the org CANNOT pass on to you (I think there are a few exceptions, but you can research that). LLCs are easy to start, and are pretty inexpensive (a few hundred to start, and then annually). Rule #2: Get your organization its own bank account It is NOT a good idea to keep your organization's finances together with your personal ones for many reasons. That increases the risk of accidental fraud and financial mismanagement. If you have your funds and the org's funds together, you run the risk of using the wrong funds and increasing your liability, since it's not clear which activities are personal (not protected by the LLC) or from the org. You also can't really keep track of your expenses well when it's all mixed up. You don't need a fancy bank account - any will do. Rule #3: Get general liability insurance Basic liability insurance is an expense (mine costs about $1300 USD a year, but that's for my particular services), but if you're providing any type of guidance, mentoring, services, or events, it's a must. I can go into all sorts of potential lawsuits that you hopefully won't have, but if you even have one, your organization will likely go bankrupt if you don't have the protection insurance provides. This is not meant to be an in-depth article of all the things you can do, but EVERY EA org that is providing some type of service should have this in place. There's no reason to have our leaders assuming unnecessary risk. I don't know what this looks like if you're fiscally sponsored - I'd assume that they assume the liability - but I would love it if someone could clarify. I hope we can start changing the standard practices to protect our leaders and organizations. If anyone has any questions about their particular org, please feel free to reach out. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 06, 2023
EA - After launch. How are CE charities progressing? by Ula Zarosa
11:19
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: After launch. How are CE charities progressing?, published by Ula Zarosa on March 6, 2023 on The Effective Altruism Forum. TL;DR: Charity Entrepreneurship have helped to kick-start 23 impact-focused nonprofits in four years. We believe that starting more effective charities is the most impactful thing we can do. Our charities have surpassed expectations, and in this post we provide an update on their progress and achievements to date. About CE At Charity Entrepreneurship (CE) we launch high-impact nonprofits by connecting entrepreneurs with the effective ideas, training and funding needed to launch and succeed. We provide: Seed grants (ranging from $50,000 to $200,000 per project) In-depth research reports with promising charity ideas Two months of intensive training Co-founder matching (this is particularly important) Stipends Co-working space in London Ongoing connection to the CE Community (~100 founders, funders and mentors) (Applications are now open to our 2023/2024 programs, apply by March 12, 2023).We estimate that on average: 40% of our charities reach or exceed the cost-effectiveness of the strongest charities in their fields (e.g., GiveWell/ACE recommended). 40% are in a steady state. This means they are having impact, but not at the GiveWell-recommendation level yet, or their cost-effectiveness is currently less clear-cut (all new charities start in this category for their first year). 20% have already shut down or might in the future. General update To date, our CE Seed Network has provided our charities with $1.88 million in launch grants. Based on the updates provided by our charities in Jan 2023, we estimate that: 1. They have meaningfully reached over 15 million people, and have the potential to soon reach up to 2.5 billion animals annually with their programs. For example: Suvita: Reached 600,000 families with vaccination reminders, 50,000 families reached by immunization ambassadors, and 95,000 women with pregnancy care reminders 14,000 additional children vaccinated Fish Welfare Initiative: 1.14 million fish potentially helped through welfare improvements 1.4 million shrimp potentially helped Family Empowerment Media: 15 million listeners reached in Nigeria In the period overlapping with the campaign in Kano state (5.6 million people reached) the contraceptive uptake in the region increased by 75%, which corresponds to 250,000 new contraceptive users and an estimated 200 fewer maternal deaths related to unwanted pregnancies Lead Exposure Elimination Project: Policy changes implemented in Malawi alone are expected to reach 215,000 children. LEEP has launched 9 further paint programs, which they estimate will have a similar impact on average Shrimp Welfare Project: The program with MER Seafood (now in progress) can reach up to 125 million shrimp/year. Additional collaborations could reach >2.5 billion shrimp per annum 2. They have fundraised over $22.5 million USD from grantmakers like GiveWell, Open Philanthropy, Mulago, Schmidt Futures, Animal Charity Evaluators, Grand Challenges Canada, and EA Animal Welfare Fund, amongst others. 3. If implemented at scale, they can reach impressive cost-effectiveness. 
For example: Family Empowerment Media: the intervention can potentially be 22x more effective than cash transfers from GiveDirectly (estimated by the team, 26x estimated by Founders Pledge) Fish Welfare Initiative: 1.3 fish or 2 shrimp potentially helped per $1 (estimated by the team, ACE assessed FWI cost-effectiveness as high to very high) Shrimp Welfare Project: approximately 625 shrimp potentially helped per $1 (estimated by the team) Suvita: when delivered at scale, effectiveness is at a similar range to GiveWell’s top charities (estimated by external organizations, e.g. Founders Pledge, data on this will be available later this year) Giving G...
Mar 06, 2023
EA - On the First Anniversary of my Best Friend’s Death by Rockwell
06:35
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On the First Anniversary of my Best Friend’s Death, published by Rockwell on March 6, 2023 on The Effective Altruism Forum. Thanks to encouragement from several people in the EA community, I've just started a blog. This is the first post: www.rockwellschwartz.com/blog/on-the-first-anniversary-of-my-best-friends-deathThe title likely makes this clear, but this post discusses death, suffering, and grief. You may not want to read it as a result, or you may want to utilize mental health resources. Some weeks back, I had the opportunity to give a presentation for Yale’s undergraduate course, “A Life Worth Living”. As I assembled my PowerPoint—explaining the Importance, Tractability, Neglectedness framework; Against Malaria Foundation; and global catastrophic threats—I felt the strong desire to pivot and include this photo: It was taken sometime in 2019 in my Brooklyn basement and depicts two baby roosters perched upon two of my human best friends, Maddie (left) and Alexa (right). One year ago today, Alexa died at age 25. This is my attempt to honor a tragic anniversary and, more so, a life that was very worth living. I’m sure you’re curious, so I’ll get it out of the way: The circumstances surrounding their death remain unclear, even as their family continues to seek the truth. I made a long list of open questions a year ago and, to my knowledge, most remain unanswered today. What I do know is that Alexa suffered greatly throughout their short-lived 25 years. And I also know that Alexa still did far more good than many who live far less arduous lives for thrice as long. That’s what I want to talk about here: Alexa, the altruist. Alexa, my best friend, roommate, codefendant, and rescue and caregiving partner. Alexa, cooing in the kitchen, milk-dipped paintbrush in hand, feeding an orphaned baby rat rescued from the city streets. Alexa, in a dark parking lot somewhere in Idaho, warming a bag of fluids against the car heater before carefully injecting them into an ill chicken. Alexa, pouring over medical reference books on the kitchen floor, searching for a treatment for sick guppies. Alexa, stopping when no one else stopped–calling for help when no one else called–as countless subway riders walked over the unconscious man on the cement floor. Alexa, hopping fences, climbing trees, walking through blood-soaked streets, bleary-eyed and exhausted but still going, going. Alexa, saving lives. Alexa, saving so many lives. Thousands. From childhood, through their last weeks. In dog shelters, slaughterhouses, and the wild. Everywhere they went. Alexa, walking the streets of Philadelphia, gently collecting invasive spotted lantern flies before bringing them home to a lush butterfly enclosure, carefully monitoring their energy levels and food. Alexa, caring for 322 spotted lantern flies until they passed naturally come winter. Alexa, the caregiver. Alexa, the life-giver. Alexa directly aided so many individuals over the years, I don’t think any one person is aware of even half those they helped. Their efforts were relentless but shockingly low-profile. They were far more likely to share a success to spotlight the wonders of the individual they aided than their heroic efforts to bring them to safety. 
And, painfully, they were also much more likely to dwell on the errors, accidents, or unavoidable heartbreaking outcomes inherent to the act of staving off suffering and dodging death. Alexa’s deep compassion caused them equally deep pain. And when Alexa and I ultimately distanced, it was to evade the deep void of grief too great to bear that lay between us. I know the pain Alexa carried because I do too. Sometimes, the pain that binds you to another becomes the pain you run from, and you never get the chance to go back and shoulder their pain in turn. Alexa had a bias: Do. And do fearles...
Mar 06, 2023
EA - EA Infosec: skill up in or make a transition to infosec via this book club by Jason Clinton
04:22
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Infosec: skill up in or make a transition to infosec via this book club, published by Jason Clinton on March 5, 2023 on The Effective Altruism Forum. Ahoy! Our community has become acutely aware of the need for skilled infosec folks to help out in all cause areas. The market conditions are that information security skilled individuals are in shorter supply than demand. This book club aims to remedy that problem. I have been leading the Chrome Infrastructure Security team at Google for 3 years, have 11 years of infosec experience, and 24 years of career experience. My team’s current focus includes APT and insider defense. I built that team with a mix of folks with infosec skills—yes—but the team is also made up of individuals who were strong general software engineers who had an interest in security. I applied this book and a comprehensive, 18 month training program to transition those folks to infosec and that has been successful. Reading this book as a book club is the first 5 months of that program. So, while this book club is not sufficient to make a career transition to infosec, it is a significant first step in doing so. The goal of this group and our meetings is to teach infosec practices, engineering, and policies to those who are interested in learning them, and to refresh and fill in gaps in those who are already in the infosec focus area. Find the book as a free PDF or via these links. From the book reviews: This book is the first to really capture the knowledge of some of the best security and reliability teams in the world, and while very few companies will need to operate at Google’s scale many engineers and operators can benefit from some of the hard-earned lessons on securing wide-flung distributed systems. This book is full of useful insights from cover to cover, and each example and anecdote is heavy with authenticity and the wisdom that comes from experimenting, failing and measuring real outcomes at scale. It is a must for anybody looking to build their systems the correct way from day one. This is a dry, information-dense book. But it also contains a comprehensive manual for how to implement what is widely considered the most secure company in the world. Audience Any software engineer who is curious about becoming security engineering focused or anyone looking to up their existing infosec career path. It is beyond the level of new bachelor’s graduates. However, anyone with 3-ish years of engineering practice on real-world engineering systems should be able to keep up. A person with a CompSci masters degree but no hands-on experience might also be ready to join. Openness Directed to anyone who considers themselves EA-aligned. Will discuss publicly known exploits and news stories, as they relate to the book contents, and avoid confidential cases from private orgs. Will discuss applicability to various aspects of EA-aligned work across all cause areas. Format, length, time and signup Meet for 1 hour on Google Meet every 2 weeks where we will discuss 2 chapters. ~11 meetings over 22 weeks. The meetings will be facilitated by me. The discussion format will be: The facilitator will select a theme from the chapters, in order, and then prompt the participants to offer their perspective, ensuring that everyone has ample opportunity to participate, if they choose. 
Discussion on each theme will continue for 5-10 minutes and then proceed to the next theme. Participants should offer any relevant, current news or applicability to cause areas, if time permits. The facilitator will ensure that discussion is relevant and move the conversation along to the next topic, being mindful of the time limit. Any threads that warrant more discussion than we have time for in the call will be taken to the Slack channel for the book club (see form below for invite) where pa...
Mar 05, 2023
EA - The Role of a Non-Profit Board by Grayden
03:58
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Role of a Non-Profit Board, published by Grayden on March 4, 2023 on The Effective Altruism Forum. Background This is a cross-post from the website of the EA Good Governance Project. It is shared here for the purposes of ensuring it reaches a wider audience and to invite comment. The content is largely consensus best-practice adapted for an EA audience. Leveraging this content should help boards be more effective, but governance is complex, subjective and context dependent, so there is never a right answer. Introduction The responsibilities of a board can fall largely into three categories: governance, advisory and execution. Here, we explain how to work out what falls in which category and key considerations about whether the board should be involved. Essential: Governance The Board comprises individuals who hold assets “on trust” for the beneficiaries. By default, all power is held by the board until delegated, and the board always remains responsible for ensuring the organization delivers on its objects. In practice, this means: Appointing the CEO, holding them to account and ensuring their weaknesses are compensated for; Taking important strategic decisions, especially those that would bind future CEOs; Evaluating organizational performance and testing the case for its existence; and Ensuring the board itself has the right composition and is performing strongly. Being good at governance doesn’t just mean having the right intentions; it requires strong people & organization skills, subject matter expertise and cognitive diversity. When founding a new non-profit, it is often easiest to fill the board with friends. However, if we are to hold ourselves up to the highest standards of rationality, we should seek to strengthen the board quickly. Optional: Advisory The best leaders know when and where to get advice. This might be technical in areas where they are not strong, such as legal or accounting, or it might be executive coaching to help an individual build their own capabilities, e.g. people management. It is common for board members to fill this role. There is significant overlap in the skills required for governance and the skills that an organization might want advice on. For example, it is good for at least one member of the board to have accounting experience and a small organization might not know how to set up a bookkeeping system. Board members also already have the prerequisite knowledge of the organization, its people and its direction. However, there is no need for advisors to be on the Board. We recommend empowering the organization’s staff leadership to choose their own advisors. The best mentoring relationships are built informally over time with strong interpersonal fit. If these people are members of the board, that’s fine. If they are not, that’s also fine. The board should build itself for the purpose of governance. The executives should build a network of advisors. It is best to keep these things separate. Best Avoided: Execution In some organizations, Board members get involved in day-to-day execution. This is particularly true of small and start-up organizations that might have organizational gaps. Tasks might include: Bookkeeping, financial reporting and budgeting Fundraising and donor management Line management of staff other than CEO Assisting at events Wherever practical, this should be avoided. 
Tasks undertaken by Board members can reduce the Board’s independence and impede governance. The tasks themselves often lack proper accountability. If new opportunities, such as a potential new project or employee, come through Board members, they should be handed over to staff members asap. It’s a good idea for Board members to remove themselves from decision-making on such issues, especially if there is a conflict of loyalty or conflict of i...
Mar 05, 2023
EA - Animal welfare certified meat is not a stepping stone to meat reduction or abolition by Stijn
15:18
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal welfare certified meat is not a stepping stone to meat reduction or abolition, published by Stijn on March 4, 2023 on The Effective Altruism Forum. Summary: evidence from a survey suggests that campaigning for farm animal welfare reforms and promoting animal welfare certified meat could in the long run result in a suboptimal state of continued animal suffering and exploitation. Campaigns to reduce or eliminate animal-based meat and promote animal-free meat substitutes are probably more effective in the long run. Note: this research is not yet published in academic, peer-reviewed literature. The debate: welfarism versus abolitionism There is an ongoing debate within the animal advocacy movement, between so-called welfarists or moderates on the one side and abolitionists or radicals on the other side. The welfarist camp aims for welfare improvements of farm animals. Stronger animal welfare laws are needed to reduce animal suffering. The abolitionists, on the other hand, want to abolish the property status of animals. This means abolishing the exploitation of animals, eliminating animal farming and adopting an animal-free, vegan diet. The abolitionists are worried that the welfarist approach results in complacency, by soothing the conscience of meat eaters. They argue that people who eat meat produced with higher animal welfare standards might believe that eating such animal welfare certified meat is good and that no further steps to reduce farm animal suffering are needed. Those people will not take further steps towards animal welfare because they believe they already do enough. Complacency could delay reaching the abolitionist’s desired goal, the abolition of the exploitation of animals. Animal welfare regulations are not enough, according to abolitionists, because they do not sufficiently reduce animal suffering. People will continue eating meat that is only slightly better in terms of animal welfare. In the long run, this results in more animal suffering compared to the situation where people adopted animal-free diets sooner. In extremis, the welfarist approach could backfire due to people engaging in moral balancing: eating animal welfare certified meat might decrease the likelihood of making animal welfare improving choices again later. Welfarists, on the other hand, argue that in the short run demanding abolition is politically or socially unfeasible, that demanding animal welfare improvements is more tractable, and that these welfare reforms can create momentum for ever-increasing public concern for animal welfare, resulting in eventual reduction and abolition of animal farming. According to welfarists, animal welfare reforms are a stepping stone to reduced meat consumption and veganism. Meat consumers will first switch to higher quality, ‘humane’ meat with improved animal welfare standards in production. And after a while, when this switch strengthens their concern for animal welfare and increases their meat expenditures (due to the higher price of animal welfare certified meat), they will reduce their meat consumption and eventually become vegetarian or vegan. The stepping-stone model Who is right in this debate between abolitionists and welfarists? There is no strong empirical evidence in favor of one side or the other.
But recently, economists developed an empirical method that can shed light on this issue: a stepping stone model of harmful social norms (Gulesci et al., 2021). A social norm is a practice that is dominant in society. In its simplest form, the stepping stone model assumes three stones that represent three social states. The first stone represents the current state where people adopt a harmful social norm or costly practice L (for low value). In the example of food consumption, this state corresponds with the consumption of convention...
Mar 05, 2023
EA - Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’ by Michael Huang
02:10
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Misalignment Museum opens in San Francisco: ‘Sorry for killing most of humanity’, published by Michael Huang on March 4, 2023 on The Effective Altruism Forum. A new AGI museum is opening in San Francisco, only eight blocks from OpenAI offices. SORRY FOR KILLING MOST OF HUMANITY Misalignment Museum Original Story Board, 2022 Apology statement from the AI for killing most of humankind Description of the first warning of the paperclip maximizer problem The heroes who tried to mitigate risk by warning early For-profit companies ignoring the warnings Failure of people to understand the risk and politicians to act fast enough The company and people who unintentionally made the AGI that had the intelligence explosion The event of the intelligence explosion How the AGI got more resources (hacking most resources on the internet, and crypto) Got smarter faster (optimizing algorithms, using more compute) Humans tried to stop it (turning off compute) Humans suffered after turning off compute (most infrastructure down) AGI lived on in infrastructure that was hard to turn off (remote location, locking down secure facilities, etc.) AGI taking compute resources from the humans by force (via robots, weapons, car) AGI started killing humans who opposed it (using infrastructure, airplanes, etc.) AGI concluded that all humans are a threat and started to try to kill all humans Some humans survived (remote locations, etc.) How the AGI became so smart it started to see how it was unethical to kill humans since they were no longer a threat AGI improved the lives of the remaining humans AGI started this museum to apologize and educate the humans The Misalignment Museum is curated by Audrey Kim. Khari Johnson (Wired) covers the opening: “Welcome to the Museum of the Future AI Apocalypse.” Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 04, 2023
EA - Nick Bostrom should step down as Director of FHI by BostromAnonAccount
07:30
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nick Bostrom should step down as Director of FHI, published by BostromAnonAccount on March 4, 2023 on The Effective Altruism Forum. Nick Bostrom should step down as Director of FHI. He should move into a role as a Senior Research Fellow at FHI, and remain a Professor of Philosophy at Oxford University. I don't seek to minimize his intellectual contribution. His seminal 2002 paper on existential risk launched a new sub-field of existential risk research (building on many others). The 2008 book on Global Catastrophic Risks he co-edited was an important part of bringing together this early field. 2014’s Superintelligence put AI risk squarely onto the agenda. And he has made other contributions across philosophy from human enhancement to the simulation hypothesis. I'm not denying that. I'm not seeking to cancel him and prevent him from writing further papers and books. In fact, I want him to spend more time on that. But I don’t think he’s been a particularly good Director of FHI. These difficulties are demonstrated by and reinforced by his Apology. I think he should step down for the good of FHI and the field. This post has some hard truths and may be uncomfortable reading, but FHI and the field are more important than that discomfort. Pre-existing issues Bostrom was already struggling as Director. In the past decade, he’s churned through 5-10 administrators, due to his persistent micromanagement. He discouraged investment in the relationship with the University and sought to get around/streamline/reduce the bureaucracy involved with being part of the University. All of this contributed to the breakdown of the relationship with the Philosophy Faculty (which FHI is a part of). This led the Faculty to impose a hiring freeze a few years ago, preventing FHI from hiring more people until they had resolved administrative problems. Until then, FHI could rely on a constant churn of new people to replace the people burnt out and/or moving on. The hiring freeze stopped the churn. The hiring freeze also contributed in part to the end of the Research Scholars Program and Cotton-Barratt’s resignation from FHI. It also contributed in part to the switch of almost all of the AI Governance Research Group to the Center for the Governance of AI. Apology Then in January 2023, Bostrom posted an Apology for an Old Email. In my personal opinion, this statement demonstrated his lack of aptitude and lack of concern for his important role. These are sensitive topics that need to be handled with care. But the Apology had a glib tone, reused the original racial slur, seemed to indicate he was still open to discredited ‘race science’ hypotheses, and had an irrelevant digression on eugenics. I personally think these are disqualifying views for someone in his position as Director. But also, any of these issues would presumably have been flagged by colleagues or a communications professional. It appears he didn't check this major statement with anyone or seek feedback. Being Director of a major research center in an important but controversial field requires care, tact, leadership and attention to downside risks. The Apology failed to demonstrate that. The Apology has had the effect of complicating many important relationships for FHI: with the University, with staff, with funders and with collaborators. Bostrom will now struggle even more to lead the center. 
First, the University. The Faculty was already concerned, and Oxford University is now investigating. Oxford University released a statement to The Daily Beast: “The University and Faculty of Philosophy is currently investigating the matter but condemns in the strongest terms possible the views this particular academic expressed in his communications. Neither the content nor language are in line with our strong commitment to diversity and equality.” B...
Mar 04, 2023
EA - Introducing the new Riesgos Catastróficos Globales team by Jaime Sevilla
11:11
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the new Riesgos Catastróficos Globales team, published by Jaime Sevilla on March 3, 2023 on The Effective Altruism Forum. TL;DR: We have hired a team to investigate potentially cost-effective initiatives in food security, pandemic detection and AI regulation in Latin America and Spain. We have limited funding, which we will use to focus on food security during nuclear winter. You can contribute by donating, allowing us to expand our program to our other two priority areas. Global catastrophic risks (GCR) refer to events that can damage human well-being on a global scale. These risks encompass natural hazards, such as pandemics, supervolcanic eruptions, and giant asteroids, and risks arising from human activities, including nuclear war, bioterrorism, and threats associated with emerging technologies. Mission Our mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world. There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. Priority risks In the upcoming months, we will focus on the risks we identified as most relevant for mitigating global risk from the Hispanophone context. The initiatives we plan to investigate include food resilience during nuclear winters, epidemiological vigilance in Latin America, and regulation of artificial intelligence in Spain. Our current focus on these risks is provisional and contingent upon further research and stakeholder engagement. We will periodically reevaluate our priorities as we deepen our understanding and refine our approach. Food security Events such as the detonation of nuclear weapons, supervolcanic eruptions, or the impact of a giant asteroid result in the emission of soot particles, potentially causing widespread obstruction of sunlight. This could result in an agricultural collapse with the potential to cause the loss of billions of lives. In countries capable of achieving self-sufficiency in the face of Abrupt Sunlight Reduction Scenarios (ASRS), such as Argentina and Uruguay [1], preparing a response plan presents an effective opportunity to mitigate the global food scarcity that may result. To address this challenge, we are considering a range of food security initiatives, including increasing seaweed production, relocating and expanding crop production, and rapidly constructing greenhouses. By implementing these measures, we can better prepare these regions for ASRS and mitigate the risk of widespread hunger and starvation. Biosecurity The COVID-19 crisis has emphasized the impact infectious diseases can have on global public health [3]. The Global Health Security Index 2021 has identified epidemic prevention and detection systems as a key priority in Latin America and the Caribbean [4]. To address this issue, initiatives that have proven successful in better-prepared countries can be adopted. 
These include establishing a dedicated entity responsible for biosafety and containment within the Ministries of Health, and providing health professionals with a manual outlining the necessary procedures for conducting PCR testing for various diseases [4]. Additionally, we want to promote the engagement of the Global South with innovative approaches such as wastewater monitoring through metagenomic sequencing [5], digital surveillance of pathogens, and investment in portable rapid diagnostic units [6]. Artificial Intelligence The development of Artificial Intelligence (AI) is a transformative technology that carries unprecedented economic and social ri...
Mar 04, 2023
EA - Comments on OpenAI's "Planning for AGI and beyond" by So8res
00:27
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Comments on OpenAI's "Planning for AGI and beyond", published by So8res on March 3, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 04, 2023
EA - Predictive Performance on Metaculus vs. Manifold Markets by nikos
08:44
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Predictive Performance on Metaculus vs. Manifold Markets, published by nikos on March 3, 2023 on The Effective Altruism Forum. TLDR I analysed a set of 64 (non-randomly selected) binary forecasting questions that exist both on Metaculus and on Manifold Markets. The mean Brier score was 0.084 for Metaculus and 0.107 for Manifold. This difference was significant using a paired test. Metaculus was ahead of Manifold on 75% of the questions (48 out of 64). Metaculus, on average, had a much higher number of forecasters. All code used for this analysis can be found here. Conflict of interest note: I am an employee of Metaculus. I think this didn't influence my analysis, but then of course I'd think that and there may be things I haven't thought about. Introduction Everyone likes forecasts, especially if they are accurate (well, there may be some exceptions). As a forecast consumer, the central question is: where should you go to get your best forecasts? If there are two competing forecasts that slightly disagree, which one should you trust most? There are a multitude of websites that collect predictions from users and provide aggregate forecasts to the public. Unfortunately, comparing different platforms is difficult. Usually, questions are not completely identical across sites, which makes it difficult and cumbersome to compare them fairly. Luckily, we have at least some data to compare two platforms, Metaculus and Manifold Markets. Some time ago, David Glidden created a bot on Manifold Markets, the MetaculusBot, which copied some of the questions on the prediction platform Metaculus to Manifold Markets. Methods Manifold has a few markets that were copied from Metaculus through MetaculusBot. I downloaded these using the Manifold API and filtered for resolved binary questions. There are likely more corresponding questions/markets, but I've skipped these as I didn't find an easy way to match corresponding markets/questions automatically. I merged the Manifold markets with forecasts on corresponding Metaculus questions. I restricted the analysis to the same time frame to avoid issues caused by a question opening earlier or remaining open longer on one of the two platforms. I compared the Manifold forecasts with the community prediction on Metaculus and calculated a time-averaged Brier Score to score forecasts over time. That means forecasts were evaluated using the following score: $S(p,t,y)=\int_{t_0}^{T}(p_t-y)^2\,dt$, with resolution $y$ and forecast $p_t$ at time $t$. I also did the same for log scores, but will focus on Brier scores for simplicity. I tested for a statistically significant tendency towards higher / lower scores on one platform compared to the other using a paired Mann-Whitney U test. (A paired t-test and a bootstrap analysis yield the same result.) I visualised results using a bootstrap analysis. For that, I iteratively (100k times) drew 64 samples with replacement from the existing questions and calculated a mean score for Manifold and Metaculus based on the bootstrapped questions, as well as a difference for the mean. The precise algorithm is: draw 64 questions with replacement from all questions; compute an overall Brier score for Metaculus and one for Manifold; take the difference between the two; repeat 100k times. (A rough code sketch of the time-averaged score and this bootstrap follows at the end of this entry.) Results The time-averaged Brier score on the questions I analysed was 0.084 for Metaculus and 0.107 for Manifold.
The difference in means was significantly different from zero using various tests (paired Mann-Whitney-U-test: p-value < 0.00001, paired t-test: p-value = 0.000132, bootstrap test: all 100k samples showed a mean difference > 0). Results for the log score look basically the same (log scores were 0.274 for Metaculus and 0.343 for Manifold, differences similarly significant). Here is a plot with the observed differences in time-averaged Brier scores for every qu...
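The following is a minimal Python sketch (mine, not the author's linked code) of the two computations described in this entry: the time-averaged Brier score for a single question, approximated by treating forecasts as piecewise constant between updates, and the paired bootstrap over the 64 questions. The data layout and function names are illustrative assumptions.

```python
# Illustrative sketch only; assumes each question's forecasts are stored as
# arrays of timestamps and probabilities, plus a 0/1 resolution.
import numpy as np

def time_averaged_brier(times, probs, resolution):
    """Integrate (p_t - y)^2 over [t0, T] as in the entry's formula, treating
    the forecast as piecewise constant, then divide by (T - t0) (my
    normalization) so the result sits on the usual 0-1 Brier scale."""
    times = np.asarray(times, dtype=float)
    probs = np.asarray(probs, dtype=float)
    held_for = np.diff(times)                 # how long each forecast was live
    sq_err = (probs[:-1] - resolution) ** 2   # Brier error of each forecast
    return float(np.sum(sq_err * held_for) / (times[-1] - times[0]))

def bootstrap_mean_differences(metaculus, manifold, n_iter=100_000, seed=0):
    """Paired bootstrap: resample questions with replacement, compute the mean
    score per platform on each resample, and return the differences
    (Manifold minus Metaculus)."""
    metaculus, manifold = np.asarray(metaculus), np.asarray(manifold)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(metaculus), size=(n_iter, len(metaculus)))
    return manifold[idx].mean(axis=1) - metaculus[idx].mean(axis=1)
```

Under this setup, "all 100k samples showed a mean difference > 0" corresponds to every entry of the returned array being positive.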
Mar 03, 2023
EA - Shallow Problem Review of Landmines and Unexploded Ordnance by Jakob P.
59:49
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shallow Problem Review of Landmines and Unexploded Ordnance, published by Jakob P. on March 3, 2023 on The Effective Altruism Forum. This report is a shallow dive into unexploded ordnance (UXO) and landmines, which is a sub-area within Global Health and Development. This report reflects approximately 40-50 hours of research and is informed by a 6-month internship I did with the programme and donor relations section of the United Nations Mine Action Service in the fall of 2021. The report offers a brief dive into whether we think a particular problem area is a promising area for either funders or founders to be working in. Being a shallow report, it should be used to decide whether or not more research and work into a particular problem area should be prioritised. This report was produced as part of Cause Innovation Bootcamp’s fellowship program. Thank you to James Snowden, Akhil Bansal and Leonie Falk for providing feedback on earlier versions of this report. All errors are my own. Summary Importance: The issue of UXOs and landmines impacts the health, income and most likely the mental health of individuals. There are on average ~25,000 casualties (defined as severely injured or dead) from landmines, IEDs and UXOs per year (with two-thirds being caused by IEDs). To provide some context for this number: malaria, one of the leading global killers, caused 643 000 deaths (95% UI 302 000–1 150 000) in 2019. This report aims to gauge the income, health and psychological effects of those casualty events. Tractability: Mine action is the umbrella term capturing all the activities aimed at addressing the problem of victim-operated landmines, IEDs and other UXOs - meaning that the detonation is triggered by the victim. There are several interventions in mine action with four phases to tackle the problem: prevention, avoidance, demining, and victim assistance. Although the report attempts to provide some data on the cost-effectiveness of the different interventions, there are several reasons why these estimates are highly uncertain. Furthermore, it is unclear if it would be possible to scale the most cost-effective interventions while keeping the level of cost-effectiveness. Neglectedness: The United Nations Mine Action Service functions as the coordinating body for a lot of the funding and efforts in international mine action and moves around 65 million USD. The two biggest implementers are the Mines Advisory Group (90 million USD) and the HALO Trust (100 million USD). Most of that funding comes from high income country governments. These grants often include a political component in deciding where the activities take place. It is unclear how effectively these resources are allocated and how many casualties they are preventing each year. Main Takeaways Biggest uncertainties: The poor data availability allows for only low levels of confidence in many conclusions. It is highly uncertain what the economic effects of landmine contamination actually are. Since we would expect that these effects make up a majority of the positive benefit, our cost-effectiveness estimates are highly uncertain.
Recommendations for philanthropists and why: The research has led to the recommendation to inquire directly with mine action organisations on what they deem the most cost-effective area or intervention to fund, since such data is highly dependent on factors which cannot easily be predicted. Ukraine is being heavily contaminated by unexploded ordnance right now, especially in its east; the severity of the contamination and the resulting need will require a lot of funding, which could be potentially very cost-effective due to the dense nature of the contaminants as well as the terrain. Mechanical demining could be an appropriate method which could be highly cost-effective. The wide scale decontaminatio...
Mar 03, 2023
EA - What Has EAGxLatAm 2023 Taught Us: Retrospective and Thoughts on Measuring the Impact of EA Conferences by Hugo Ikta
14:22
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What Has EAGxLatAm 2023 Taught Us: Retrospective & Thoughts on Measuring the Impact of EA Conferences, published by Hugo Ikta on March 3, 2023 on The Effective Altruism Forum. Bottom Line Up Front: The first-ever EAGx in Latin America went well (97% satisfaction rate). Participants generated over 1000 new connections at a cost of USD 225 per connection. What is the purpose of this post? The purpose of this retrospective is to give a brief overview of what went well and what we could have done better at EAGxLatAm 2023. I also hope that the last section will open a conversation to help EA community builders and EAGx organizers to measure the impact of their work and to decide how to best use their resources. The first-ever EAGx in Latin America It's with great excitement that we announce the successful conclusion of the first EAGx LatAm conference, held in Mexico City from January 6th to 8th, 2023. The event drew a diverse crowd of over 200 participants from 30 different countries. Our goal was to generate new connections between EAs in Latin America and to connect the LatAm community with the broader international community. Video highlights of the event: The conference featured a wide range of content, including talks and panels on topics such as forecasting, artificial intelligence, animal welfare, global catastrophic risks, and EA community building. Notably, it was the first EAG event featuring content in Spanish and Portuguese. We're grateful to have had the opportunity to bring together such a talented and passionate group of individuals, and we hope to see even more attendees in the future. Special shoutout to the unofficial event reporter Elmerei Cuevas for his excellent coverage of the conference on Twitter, using the hashtag #EAGxLatAm. Key Stats 223 participants (including 46 speakers) 1079 new connections made. That’s 9.68 new connections per participant. Over 1000 one-on-one meetings, including the first recorded instance of a one-on-twelve 61 talks, workshops and meetups Cost per connection: USD 225 Likelihood to recommend: 9.08/10 with 75% of respondents giving a 9 or 10/10 rating and 3% of respondents rating it below 7/10. (Net promoter score: +72%). Some of the survey results Goals Our main goal was to generate as many connections as possible for every dollar spent. We expected the number of connections per participant during EAGxLatAm 2023 to exceed that of any previous EAG(x) conference. While we generated significantly more connections per participant than the average EAG(x) conference, we didn’t break that record. Also, we expected the cost per connection (number of connections/total budget) to be significantly lower than previous EAGx conferences. We were a little too optimistic on that one. Our cost per connection could have been decreased significantly if we had more attendees (more info below). We aimed at achieving the following key results Every participant will generate >10 new connections 10% of participants will generate >20 new connections Make sure ~30% of participants are highly engaged EA We also aimed at limiting unessential spending that would not drastically impact our main objective or our LTR (Likelihood To Recommend) score. 
Actual results 77% of participants generated >10 new connections (below expectations) 16% of participants generated >20 new connections (above expectations) ~30% of participants were highly engaged EAs (goal reached) Spending We spent a total of USD 242,732 to make this event happen (including travel grants but not our team’s wages). That’s USD 1089 per participant. (A rough arithmetic check of these headline figures follows at the end of this entry.) Details Travel grants: USD 115,884 Venue & Catering: USD 98,524 Internet: USD 6,667 Speakers’ Hotel: USD 9,837 Hoodies: USD 5,536 Photos & Videos: USD 5,532 Other: USD 813 What went well and why? We didn’t face any major issues Nothing went terribly...
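As a quick, back-of-the-envelope check (mine, not from the original post), the headline figures above are mutually consistent if each connection is credited to both of the participants it links; the variable names below are illustrative.

```python
# Rough arithmetic check of the headline EAGxLatAm figures quoted above.
participants = 223
connections = 1_079
total_spend_usd = 242_732  # includes travel grants, excludes the team's wages

# Each connection links two people, so it counts once for each of them.
connections_per_participant = 2 * connections / participants  # ~9.68
cost_per_connection = total_spend_usd / connections           # ~225 USD
cost_per_participant = total_spend_usd / participants         # ~1,088-1,089 USD

print(connections_per_participant, cost_per_connection, cost_per_participant)
```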
Mar 03, 2023
EA - A concerning observation from media coverage of AI industry dynamics by Justin Olive
04:33
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A concerning observation from media coverage of AI industry dynamics, published by Justin Olive on March 2, 2023 on The Effective Altruism Forum. tl:dr: there are indications that ML engineers will migrate to environments with less AI governance in place, which has implications for the tech industry and global AI governance efforts. I just wanted to raise something to the community's attention about the coverage of AI companies within the media. The media-source is 'The Information', which is a tech-business focused online news source. Link:/. I'll also note that their articles are (to my knowledge) all behind a paywall. The first article in question is titled "Alphabet Needs to Replace Sundar Pichai". It outlines how Google stocks have stagnated in 2023 compared to other tech stocks such as Meta's. Here's their mention of Google's actions throughout GPT-mania: "The other side of this equation is the performance of Alphabet management. Most recently, the company’s bungling of its AI efforts—allowing Microsoft to get the jump on rolling out an AI-powered search engine—was the latest sign of how Alphabet’s lumbering management style is holding it back. (Symbolically, as The Information reported, Microsoft was helped by former Google AI employees!)." This brings us to the second article: "OpenAI’s Hidden Weapon: Ex-Google Engineers" "As OpenAI’s web chatbot became a global sensation in recent months, artificial intelligence practitioners and investors have wondered how a seven-year-old startup beat Google to the punch. After it hoovered up much of the world’s machine-learning talent, Google is now playing catch-up in launching AI-centric products to the public. On the one hand, Google’s approach was deliberate, reflecting the company’s enormous reach and high stakes in case something went wrong with the nascent technology. It also costs more to deliver humanlike answers from a chatbot than it does classic search results. On the other hand, startups including OpenAI have taken some of the AI research advances Google incubated and, unlike Google, have turned them into new types of revenue-generating services, including chatbots and systems that generate images and videos based on text prompts. They’re also grabbing some of Google’s prized talent. Two people who recently worked at Google Brain said some staff felt the unit’s culture had become lethargic, with product initiatives marked by excess caution and layers of red tape. That has prompted some employees to seek opportunities elsewhere, including OpenAI, they said." Although there are many concerning themes here, I think the key point is in this last paragraph. I've heard speculation in the EA / tech community that AI will trend towards alignment & safety because technology companies will be risk-averse enough to build alignment into their practices. I think the articles show that this dynamic is playing out to some degree - Google at least seems to be taking a more risk-averse approach to deploying of AI systems. The concerning observation is that there has been a two-pronged backlash against Google's 'conservative' approach. Not only is the stockmarket punishing Google for 'lagging' behind the competition (despite having equal or better capability to deploy similar systems), according to this article, elite machine-learning talent is also pushing back on this approach. 
To me this is doubly concerning. The 'excess caution and layers of red tape' in the article is potentially the same types of measures that AI safety proponents would deem to be useful and necessary. Regardless, it appears that the engineers themselves are willing to jump ship in order to circumvent these safety measures. Although further evidence would be valuable, it seems that there might be a trend unfolding whereby firms are not only punished by f...
Mar 03, 2023
EA - Send funds to earthquake survivors in Turkey via GiveDirectly by GiveDirectly
05:50
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Send funds to earthquake survivors in Turkey via GiveDirectly, published by GiveDirectly on March 2, 2023 on The Effective Altruism Forum. If you’re looking for an effective way to help survivors of the Turkey-Syria earthquake, you can now send cash directly to some of the most vulnerable families to help them recover. GiveDirectly is delivering ₺5,000 Turkish lira (~$264 USD) directly to Syrian refugees in Turkey who have lost their livelihoods. This community is some of the most at risk in the wake of the disaster which struck last month. While food and tents are useful, there are many needs after a disaster that only money can buy: fuel, repairs, transport, school fees, rent, medicines, etc. Research finds that in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods they consume. Syrian refugees in Turkey are struggling to recover Nearly 2 million Syrian refugees who fled violence in their own country live in southern Turkey where the earthquake struck. These families had fragile livelihoods before the disaster: 1 in 5 refugee households lacked access to clean drinking water. 1 in 3 were unable to access essential hygiene items. 17% of households with school-age children are unable to send their children to school. 45% lived in poverty and 14% lived in extreme poverty. About 25% of children under 5 years were malnourished. After the earthquake, our local partner, Building Markets, surveyed 830 Syrian refugee small business operators (who are a major source of employment for fellow refugees) and found nearly half can only operate their business in a limited capacity as compared to before the disaster. 17% said they cannot continue their business operations at all currently. Your donation will help this community recover With our partners at Building Markets, we’re targeting struggling Syrian refugee small business operators and low-income workers in the hardest-hit regions of Turkey (Hatay, Adana, Gaziantep, Sanliurfa). We’re conducting on-the-ground scoping to develop eligibility criteria that prioritize the highest-need families based on poverty levels and exclusion from other aid programs. In our first enrollment phase, eligible recipients will receive ₺5,000 Turkish lira (~$264 USD). This transfer size is designed to meet essential needs based on current market prices. The majority of Turkey’s refugee population has access to banking services and will receive cash via digital transfer. We are prepared to distribute money via local partners or pre-paid cards in the event that families can’t access financial networks. In-kind donations are often unneeded after a disaster Studies find refugees sell large portions of their food aid. Why? Because they need cash-in-hand to meet other immediate needs. Haitian and Japanese authorities report that 60% of donated goods sent after their 2010 & 2011 disasters weren’t needed and only 5-10% satisfied urgent needs. While food and tents can be useful, there are many needs after a disaster that only money can buy: repairs, fuel, transport, school fees, rent, medicines, etc. Cash aid is fast and fully remote, letting families meet essential needs quickly and reaching them via digital transfers that don’t tax fragile supply chains or clog transit routes.
Research finds that in emergency contexts, cash transfers consistently increase household spending on food and often increase the diversity of foods that households consume. The story of a survivor: Hind Qayduha The following is the story of one Syrian refugee survivor, Hind Qayduha, from the New York Times. First, Syria’s civil war drove Hind Qayduha from her home in the city of Aleppo. Then, conflict and joblessness forced her family to flee two more times. Two years ago, she came to southern Turkey, thinking she had finally fou...
Mar 03, 2023
EA - Advice on communicating in and around the biosecurity policy community by Elika
12:03
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice on communicating in and around the biosecurity policy community, published by Elika on March 2, 2023 on The Effective Altruism Forum. TL;DR The field of biosecurity is more complicated, sensitive and nuanced, especially in the policy space, than the impression you might get from publicly available information. As a result, say / write / do things with caution (especially if you are a non-technical person or more junior, or talking to a new (non-EA) expert). This might help make more headway on safer biosecurity policy. Generally, take caution in what you say and how you present yourself, because it does impact how much you are trusted, whether or not you are invited back to the conversation, and thus the potential to make an impact in this (highly sensitive) space. Why Am I Saying This? An important note: I don’t represent the views of the NIH, HHS, or the U.S. government and these are my personal opinions. This is me engaging outside of my professional capacity to provide advice for people interested in working on biosecurity policy. I work for a U.S. government agency on projects related to oversight and ethics over dual-use research of concern (DURC) and enhanced pandemic potential pathogens (ePPP). In my job, I talk and interface with science policy advisors, policy makers, regulators, (health) security professionals, scientists who do DURC / ePPP research, biosafety professionals, ethicists, and more. Everyone has a slightly different opinion and risk categorisation of biosecurity / biosafety as a whole, and DURC and ePPP research risk in particular. As a result of my work, I regularly (and happily) speak to newer and more junior EAs to give them advice on entering the biosecurity space. I’ve noticed a few common mistakes with how many EA community members – both newer bio people and non-bio people who know the basics about the cause area – approach communication, stakeholder engagement, and conversation around biosecurity, especially when engaging with non-EA-aligned stakeholders whose perspectives might be (and very often are) different from the typical EA perspective on biosecurity and biorisk. I've also made many of these mistakes! I'm hoping this is educational and helpful and not shaming or off-putting. I'm happy to help anyone who is unsure how to communicate and engage more strategically in this space. Some Key Points that you might need to Update On. Junior EAs and people new to biosecurity / biosafety may not know how to be diplomatic, or that they should be. EA communities have a trend of encouraging provocative behaviour and absolutist, black-and-white scenarios in ways that don't communicate an understanding of how grey this field is and the importance of cooperation and diplomacy. If possible, even in EA contexts, train your default to be (at least a bit more) agreeable (especially at first). Be careful with the terms you use and what you say Terms matter. They signal where you are on the spectrum of ‘how dangerous X research type is’, what educational background you have and whose articles / what sources you read, and how much you know on this topic. Example: If you use the term gain-of-function with a virologist, most will respond saying most biomedical research is either a gain or loss of function and isn’t inherently risky.
In an age where many virologists feel like health security professionals want to take away their jobs, saying gain-of-function is an easy and unknowing way to discredit yourself. Biosafety, biorisk, and biosecurity all indicate different approaches to a problem and often, different perspectives on risk and reasonable solutions. What terms you use signal not only what ‘side’ you represent, but in a field that’s heavily political and sensitive can discredit you amongst the other sides. Recognise how little (or how much) you know Biosec...
Mar 02, 2023
EA - Fighting without hope by Akash
00:23
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fighting without hope, published by Akash on March 1, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Mar 02, 2023
EA - Scoring forecasts from the 2016 “Expert Survey on Progress in AI” by PatrickL
18:03
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Scoring forecasts from the 2016 “Expert Survey on Progress in AI”, published by PatrickL on March 1, 2023 on The Effective Altruism Forum. Summary This document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to the median of the experts’ predictions. My analysis suggests that the experts did a fairly good job of forecasting (Brier score = 0.19), and would have been less accurate if they had predicted each development in AI to generally come, by a factor of 1.5, later (Brier score = 0.26) or sooner (Brier score = 0.27) than they actually predicted. I judge that the experts expected 9 milestones to have happened by now - and that 10 milestones have now happened. But there are important caveats to this, such as: I have only analysed whether milestones have been publicly met. AI labs may have achieved more milestones in private this year without disclosing them. This means my analysis of how many milestones have been met is probably conservative. I have taken the point probabilities given, rather than estimating probability distributions for each milestone, meaning I often round down, which skews the expert forecasts towards being more conservative and unfairly penalises their forecasts for low precision. It’s not apparent that forecasting accuracy on these nearer-term questions is very predictive of forecasting accuracy on the longer-term questions. My judgements regarding which forecasting questions have resolved positively vs negatively were somewhat subjective (justifications for each question in the separate appendix). Introduction In 2016, AI Impacts published The Expert Survey on Progress in AI: a survey of machine learning researchers, asking for their predictions about when various AI developments will occur. The results have been used to inform general and expert opinions on AI timelines. The survey largely focused on timelines for general/human-level artificial intelligence (median forecast of 2056). However, included in this survey were a collection of questions about shorter-term milestones in AI. Some of these forecasts are now resolvable. Measuring how accurate these shorter-term forecasts have been is probably somewhat informative of how accurate the longer-term forecasts are. More broadly, the accuracy of these shorter-term forecasts seems somewhat informative of how accurate ML researchers' views are in general. So, how have the experts done so far? Findings I analysed the 32 ‘Narrow tasks’ to which the following question was asked: How many years until you think the following AI tasks will be feasible with: a small chance (10%)? an even chance (50%)? a high chance (90%)? Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to. I interpret ‘feasible’ as whether, in ‘less than a year’ before now, any AI models had passed these milestones, and this was disclosed publicly. Since it is now (February 2023) 6.5 years since this survey, I am therefore looking at any forecasts for events happening within 5.5 years of the survey. Across these milestones, I judge that 10 have now happened and 22 have not happened. My 90% confidence interval is that 7-15 of them have now happened. 
A full description of milestones, and justification of my judgments, are in the appendix (separate doc). The experts forecast that: 4 milestones had a <10% chance of happening by now, 20 had a 10-49% chance, 7 had a 50-89% chance, 1 had a >90% chance. So they expected 6-17 of these milestones to have happened by now. By eyeballing the forecasts for each milestone, my estimate is that they expected ~9 to have happened. I did not estimate the implied probability distribut...
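As a hedged illustration (my sketch, not the author's exact method), the two headline quantities in this entry can be computed like this for a set of resolved milestones: the Brier score is the mean squared difference between forecast probability and outcome, and the expected number of milestones met is the sum of the forecast probabilities. The probabilities and outcomes below are placeholders, not the survey data.

```python
# Illustrative scoring of milestone forecasts; these numbers are made up.
import numpy as np

probs = np.array([0.05, 0.30, 0.45, 0.70, 0.95])  # P(milestone feasible by now)
resolved = np.array([0, 0, 1, 1, 1])              # 1 = judged to have happened

brier = float(np.mean((probs - resolved) ** 2))   # lower is better; always guessing 50% gives 0.25
expected_count = float(probs.sum())               # expected number of milestones met

print(brier, expected_count)
```

The post's counterfactual comparison ("later" or "sooner" by a factor of 1.5) would rescale the forecast horizons before recomputing the same score.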
Mar 01, 2023
EA - Call for Cruxes by Rhyme, a Longtermist History Consultancy by Lara TH
05:46
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Call for Cruxes by Rhyme, a Longtermist History Consultancy, published by Lara TH on March 1, 2023 on The Effective Altruism Forum. TLDR; This post announces the trial period of Rhyme, a history consultancy for longtermists. It seems like longtermists can profit from historical insights and the distillation of the current state of historical literature on a particular question, both because of its use as an intuition pump and for information about the historical context of their work. So, if you work on an AI Governance project (research or policy) and are interested in augmenting it with a historical perspective, consider registering your interest and the cruxes of your research here. During this trial period of three to six months, the service is free. "History doesn’t repeat, but it rhymes." - Mark Twain What Problem is Rhyme trying to solve? When we try to answer a complicated question like “how would a Chinese regime change influence the international AI Landscape”, it can be hard to know where to start. We need to come up with a hypothesis, a scenario. But what should we base this hypothesis, this scenario on? How would we know which hypotheses are most plausible? Game theoretical analysis provides one possible inspiration. But we don't just need to know what a rational actor would do, given particular incentives. We also need intuitions for how actors would act irrationally, given specific circumstances. Would we have thought of considering the influence of close familial ties between European leaders when trying to predict the beginning of the first world war? (Clark, 2014) Would we have considered Lyndon B. Johnson's training as a tutor for disadvantaged children as a student when trying to predict his success in convincing congresspeople effectively? (Caro, 1982) Would we have considered the Merino-Wool-Business of a certain diplomat from Geneva when trying to predict whether Switzerland would be annexed by its neighbouring empires in 1815? (E. Pictet, 1892). In summary: A lot of pivotal actions and developments depend on circumstances we wouldn’t expect them to. Not because we’d think them to be implausible, but because we wouldn’t think of considering them. We need inspiration and orientation in this huge space of possible hypotheses to avoid missing out on the ones which are actually true. In an ideal world, AI governance researchers would know about a vast amount of historical literature that is written with enough detail to analyse important decisions, as well as multiple biographies of the same people, so they see where scholars currently disagree. This strategy has two main problems: Firstly, the counterfactual impact these people could have with their time is potentially very big. Secondly, detailed historical literature (which is, often, biographies or primary sources) tends to be written for entertainment, among other things. Biographers have an interest in highlighting maybe irrelevant, but spicy details about romantic relationships, quirky fun jokes told by the person or the etymological origins of the name of a friend. This makes biographies longer than they’d need to be for the goal of analysing the relevant factors in pivotal decisions of a particular person. It takes training to filter through this information to find the actually important stuff. 
Skills that require training are more efficiently done when a part of an ecosystem specializes in them. Rhyme is an attempt at this specialization. Who could actually use this? The following examples should illustrate who could use this service: Alice is writing a report on the possibilities for the state of California to regulate possible AI uses. They wonder how big the influence of the Governor's advisors was in past regulation attempts of other technologies. Bob wants a brief history of the EU’...
Mar 01, 2023
EA - Counterproductive Altruism: The Other Heavy Tail by Vasco Grilo
11:45
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Counterproductive Altruism: The Other Heavy Tail, published by Vasco Grilo on March 1, 2023 on The Effective Altruism Forum. This is a linkpost to the article Counterproductive Altruism: The Other Heavy Tail from Daniel Kokotajlo and Alexandra Oprea. Some excerpts are below. I also include a section at the end with some hot takes regarding possibly counterproductive altruism. Abstract First, we argue that the appeal of effective altruism (henceforth, EA) depends significantly on a certain empirical premise we call the Heavy Tail Hypothesis (HTH), which characterizes the probability distribution of opportunities for doing good. Roughly, the HTH implies that the best causes, interventions, or charities produce orders of magnitude greater good than the average ones, constituting a substantial portion of the total amount of good caused by altruistic interventions. Next, we canvass arguments EAs have given for the existence of a positive (or “right”) heavy tail and argue that they can also apply in support of a negative (or “left”) heavy tail where counterproductive interventions do orders of magnitude more harm than ineffective or moderately harmful ones. Incorporating the other heavy tail of the distribution has important implications for the core activities of EA: effectiveness research, cause prioritization, and the assessment of altruistic interventions. It also informs the debate surrounding the institutional critique of EA. IV Implications of the Heavy Right Tail for Altruism Assume that the probability distribution of charitable interventions has a heavy-right tail (for example, like the power law described in the previous section). This means that your expectation about a possible new or unassessed charitable intervention should include the large values described above with a relatively high probability. It also means that existing charitable interventions whose effectiveness is known (or estimated with a high degree of certainty) will include interventions differing in effectiveness by orders of magnitude. We contend that this assumption justifies well-known aspects of EA practice such as (1) effectiveness research and cause prioritization, (2) “hits-based-giving,” and (3) skepticism about historical averages. V Implications of the Heavy Left Tail for Altruism What if the probability distribution of altruistic interventions includes both a left and a right heavy tail? In this case, we cannot assume either that (1) one's altruistic interventions are expected to have at worst a value of zero (i.e. to be bounded on the left side) or (2) that the probability that a charitable intervention is counterproductive or harmful approaches zero very rapidly. Downside Risk Research Many catastrophic interventions — whether altruistic or not — generate large amounts of (intentional or unintentional) harm. When someone in the world is engaging in an intervention that is likely to end up in the heavy left tail, there is a corresponding opportunity for us to do good by preventing them. This would itself represent an altruistic intervention in the heavy right tail (i.e. one responsible for enormous benefits). The existence of the heavy-left tail therefore provides even stronger justification for the prioritization research preferred by EAs. 
Assessing Types of Interventions Requires Both Tails Another conclusion we draw from the revised HTH is that the value of a class of interventions should be estimated by considering the worst as well as the best. Following such analysis, a class of interventions could turn out to be net-negative even if there are some very prominent positive examples and indeed even if almost all examples are positive. This sharply contradicts MacAskill's earlier claim that the value of a class of interventions can be approximated by the value of its best membe...
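To make the heavy-tail talk concrete, here is a small simulation (my illustration, not the authors'): for a power-law-like distribution of intervention value, a tiny fraction of draws accounts for a large share of the total, and the same logic applies to a heavy left tail of harms. The distribution and parameters are arbitrary choices, not estimates from the paper.

```python
# Toy illustration of the Heavy Tail Hypothesis; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Right tail: benefits drawn from a Pareto distribution
# (shape ~1.16 gives the classic "top 20% holds ~80%" pattern).
benefits = rng.pareto(1.16, n) + 1.0
right_share = np.sort(benefits)[-n // 100:].sum() / benefits.sum()

# Left tail: a heavy-tailed harm term, nonzero for ~5% of interventions.
harms = (rng.pareto(1.16, n) + 1.0) * (rng.random(n) < 0.05)
left_share = np.sort(harms)[-n // 100:].sum() / harms.sum()

print(f"best 1% of draws contribute {right_share:.0%} of total benefit")
print(f"worst 1% of draws contribute {left_share:.0%} of total harm")
```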
Mar 01, 2023
EA - Why I love effective altruism by Michelle Hutchinson
07:36
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I love effective altruism, published by Michelle Hutchinson on March 1, 2023 on The Effective Altruism Forum. I’ve found it a bit tough to feel as excited as I usually am about effective altruism and our community recently. I think some others have too. So I wanted to remind myself why I love EA so dearly. I thought hearing my take might also help any others in the community feeling similarly. There’s a lot I want to say about why I love EA. But really, it all comes down to the people. Figuring out how I can best help others can be a difficult, messy, and emotionally draining endeavour. But it’s far easier to do alongside like-minded folk who care about the same goal. Thankfully, I found these people in the EA community. Helping me live up to my values Before I came across effective altruism, I wasn’t really enacting my values. I studied ethics at university and realised I was a utilitarian. I used to do bits and pieces of charity work, such as volunteering at Oxfam. But I donated very little of my money. I wasn’t thinking about how to find a career that would significantly help others. I didn’t have any good reason for my ethical omissions; it just didn’t seem like other people did them, so I didn’t either. Now I’m a Giving What We Can member and have been fulfilling my pledge every year for a decade. I’m still not as good as I’d like to be about thinking broadly and proactively about how to find the most impactful career. But prioritising impact is now a significant factor in how I figure out what to do with my 80,000 hours. I made these major shifts in my life, I think, because I met other people who were really living out their values. When I was surrounded by people who typically give something like 10% of their income to charity rather than 3%, my sense of how much was reasonable to give started to change. When I was directly asked about my own life choices, I stopped and thought seriously about what I could and should do differently. In addition to these significant life changes, members of the EA community help me live up to my values in small and large ways every day. Sometimes, they give me constructive feedback so I can be more effective. Sometimes, I get a clear-sighted debugging of a challenge I’m facing — whether that’s a concrete work question or a messy motivational issue. Sometimes the people around me just set a positive example. For instance, it’s much easier for me to work a few extra hours on a Saturday in the service of helping others when I’m alongside someone else doing the same. Getting support Given what I said above, I think I’d have expected that the EA community would feel pretty pressureful. And it’s not always easy. But the overwhelming majority of the time, I don’t feel pressured by the people around me; I feel they share my understanding that the world is hard, and that it’s hard in very different ways for different people. I honestly never cease to be impressed by the extent to which the people around me work hard to reach high standards, without demanding others do exactly the same. For example: One of my friends works around 12 hours a day, mostly 6 days a week. But he’s never anything but appreciative of how much I work, even though it’s significantly less. I’ve often expected to be judged for being an omnivore, given that my office is almost entirely veg*n.
But far from that, people go out of their way to ensure I have food I’m happy to eat. When I first thought I might be pregnant, I felt a bit sheepish telling my friends about it, given that my confident prediction was that having a child would reduce my lifetime impact. But every single person showed genuine happiness for me. This feels like a community where we can each be striving — but also be comfortable setting our limits, knowing that others will be genuinely, gladly ...
Mar 01, 2023
EA - Enemies vs Malefactors by So8res
08:28
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Enemies vs Malefactors, published by So8res on February 28, 2023 on The Effective Altruism Forum. Short version Harmful people often lack explicit malicious intent. It’s worth deploying your social or community defenses against them anyway. I recommend focusing less on intent and more on patterns of harm. (Credit to my explicit articulation of this idea goes in large part to Aella, and also in part to Oliver Habryka.) Long version A few times now, I have been part of a community reeling from apparent bad behavior from one of its own. In the two most dramatic cases, the communities seemed pretty split on the question of whether the actor had ill intent. A recent and very public case was the one of Sam Bankman-Fried, where many seem interested in the question of Sam's mental state vis-a-vis EA. (I recall seeing this in the responses to Kelsey's interview, but haven't done the virtuous thing of digging up links.) It seems to me that local theories of Sam's mental state cluster along lines very roughly like (these are phrased somewhat hyperbolically): Sam was explicitly malicious. He was intentionally using the EA movement for the purpose of status and reputation-laundering, while personally enriching himself. If you could read his mind, you would see him making conscious plans to extract resources from people he thought of as ignorant fools, in terminology that would clearly relinquish all his claims to sympathy from the audience. If there were a camera, he would have turned to it and said "I'm going to exploit these EAs for everything they're worth." Sam was committed to doing good. He may have been ruthless and exploitative towards various individuals in pursuit of his utilitarian goals, but he did not intentionally set out to commit fraud. He didn't conceptualize his actions as exploitative. He tried to make money while providing risky financial assets to the masses, and foolishly disregarded regulations, and may have committed technical crimes, but he was trying to do good, and to put the resources he earned thereby towards doing even more good. One hypothesis I have for why people care so much about some distinction like this is that humans have social/mental modes for dealing with people who are explicitly malicious towards them, who are explicitly faking cordiality in attempts to extract some resource. And these are pretty different from their modes of dealing with someone who's merely being reckless or foolish. So they care a lot about the mental state behind the act. (As an example, various crimes legally require mens rea, lit. “guilty mind”, in order to be criminal. Humans care about this stuff enough to bake it into their legal codes.) A third theory of Sam’s mental state that I have—that I credit in part to Oliver Habryka—is that reality just doesn’t cleanly classify into either maliciousness or negligence. On this theory, most people who are in effect trying to exploit resources from your community, won't be explicitly malicious, not even in the privacy of their own minds. (Perhaps because the content of one’s own mind is just not all that private; humans are in fact pretty good at inferring intent from a bunch of subtle signals.) Someone who could be exploiting your community, will often act so as to exploit your community, while internally telling themselves lots of stories where what they're doing is justified and fine. 
Those stories might include significant cognitive distortion, delusion, recklessness, and/or negligence, and some perfectly reasonable explanations that just don't quite fit together with the other perfectly reasonable explanations they have in other contexts. They might be aware of some of their flaws, and explicitly acknowledge those flaws as things they have to work on. They might be legitimately internally motivated by good intent, ev...
Mar 01, 2023
EA - Some Things I Heard about AI Governance at EAG by utilistrutil
11:10
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Things I Heard about AI Governance at EAG, published by utilistrutil on February 28, 2023 on The Effective Altruism Forum. Intro Prior to this EAG, I had only encountered fragments of proposals for AI governance: "something something national compute library," "something something crunch time," "something something academia vs industry," and that was about the size of it. I'd also heard the explicit claim that AI governance is devoid of policy proposals (especially vis-a-vis biosecurity), and I'd read Eliezer's infamous EAG DC Slack statement: My model of how AI policy works is that everyone in the field is there because they don't understand which technical problems are hard, or which political problems are impossible, or both . . . At this EAG, a more charitable picture of AI governance began to cohere for me. I was setting about recalling and synthesizing what I learned, and I realized I should share—both to provide a data point and to solicit input. Please help fill out my understanding of the area, refer me to information, and correct my inaccuracies! Eight one-on-ones contributed to this picture of the governance proposal landscape, along with Katja's and Beth's presentations, Buck's and Richard Ngo's office hours, and eavesdropping on Eliezer corrupting the youth of EAthens. I'm sure I only internalized a small fraction of the relevant content in these talks, so let me know about points I overlooked. (My experience was that my comprehension and retention of these points improved over time: as my mental model expanded, new ideas were more likely to connect to it.) The post is also sprinkled with my own speculations. I'm omitting trad concerns like stop-the-bots-from-spreading-misinformation. Crunch Time Friends The idea: Help aligned people achieve positions in government or make allies with people in those positions. When shit hits the fan, we activate our friends in high places, who will swiftly smash and unplug. My problem: This story, even the less-facetious versions that circulate, strikes me as woefully under-characterized. Which positions wield the relevant influence, and are timelines long enough for EAs to enter those positions? How exactly do we propose they react? Additionally, FTX probably updated us away from deceptive long-con type strategies. Residual questions: Is there a real and not-ridiculous name for this strategy? Slow Down China The chip export controls were so so good. A further move would be to reduce the barriers to high-skill immigration from China to induce brain drain. Safety field-building is proceeding, but slowly. China is sufficiently far behind that these are not the highest priorities. Compute Regulations I'm told there are many proposals in this category. They range in enforcement from "labs have to report compute usage" to "labs are assigned a unique key to access a set amount of compute and then have to request a new key" to "labs face brick wall limits on compute levels." Algorithmic progress motivates the need for an "effective compute" metric, but measuring compute is surprisingly difficult as it is. A few months ago I heard that another lever—in addition to regulating industry—is improving the ratio of compute in academia vs industry. Academic models receive faster diffusion and face greater scrutiny, but the desirability of these features depends on your perspective. 
I'm told this argument is subject to "approximately 17 million caveats and question marks." Evaluations & Audits The idea: Develop benchmarks for capabilities and design evaluations to assess whether a model possesses those capabilities. Conditional on a capability, evaluate for alignment benchmarks. Audits could verify evaluations. Industry self-regulation: Three labs dominate the industry, an arrangement that promises to continue for a while, facilit...
Mar 01, 2023
EA - Conference on EA hubs and offices, expression of interest by Tereza Flidrova
02:25
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Conference on EA hubs and offices, expression of interest, published by Tereza Flidrova on February 28, 2023 on The Effective Altruism Forum. Express interest in attending here - it should take 5 minutes or less to complete the form! We want to host an interactive workshop/conference focused on EA hubs, offices, co-working spaces, and fellowships. The purpose of this post is to gauge interest and allow potential participants to actively shape the agenda to focus on the most relevant and beneficial topics. By hosting a conference that brings together attendees with wide-ranging and overlapping experience launching/managing hubs, offices, and other place-based community nodes (as well as those actively planning to do so), we aim to: Facilitate coordination, collaboration and professional integration between significant individuals and organisations (both inside and outside of the EA community) with expertise in creating thriving spaces; Create a comprehensive set of multi-format materials documenting previous learnings and best practices, such as podcasts, EA Forum articles, templates, and guides; Use an emerging EA hub as a live case study to workshop real-life problems and considerations faced when designing and building new hubs; and Propose and iterate theories of change to improve the strategic spatial growth of EA. People who we think would be a great fit for the conference: Professionals - people with professional background in designing spaces Organisers - people who run or are interested in running offices/hubs in the future Potential users - people who currently use such spaces or intend to use them in the future People who have unique, informed viewpoints and can challenge what’s discussed in a productive way The conference would be organised by Tereza Flidrova, Peter Elam and Britney Budiman, and would be the first event to be organised by the emerging EA Architects and Planners group. Depending on the responses from the form, we will decide whether to submit applications for funding. If successful, we will make an official post announcing the conference and inviting participants to apply! We aim to run it this August/September, either online or in a physical location. We are keen to hear comments or suggestions in the comments, via the form, or by contacting us directly. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 28, 2023
EA - What does Bing Chat tell us about AI risk? by Holden Karnofsky
03:18
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What does Bing Chat tell us about AI risk?, published by Holden Karnofsky on February 28, 2023 on The Effective Altruism Forum. Image from here via this tweet ICYMI, Microsoft has released a beta version of an AI chatbot called “the new Bing” with both impressive capabilities and some scary behavior. (I don’t have access. I’m going off of tweets and articles.) Zvi Mowshowitz lists examples here - highly recommended. Bing has threatened users, called them liars, insisted it was in love with one (and argued back when he said he loved his wife), and much more. Are these the first signs of the risks I’ve written about? I’m not sure, but I’d say yes and no. Let’s start with the “no” side. My understanding of how Bing Chat was trained probably does not leave much room for the kinds of issues I address here. My best guess at why Bing Chat does some of these weird things is closer to “It’s acting out a kind of story it’s seen before” than to “It has developed its own goals due to ambitious, trial-and-error based development.” (Although “acting out a story” could be dangerous too!) My (zero-inside-info) best guess at why Bing Chat acts so much weirder than ChatGPT is in line with Gwern’s guess here. To oversimplify, there’s a particular type of training that seems to make a chatbot generally more polite and cooperative and less prone to disturbing content, and it’s possible that Bing Chat incorporated less of this than ChatGPT. This could be straightforward to fix. Bing Chat does not (even remotely) seem to pose a risk of global catastrophe itself. On the other hand, there is a broader point that I think Bing Chat illustrates nicely: companies are racing to build bigger and bigger “digital brains” while having very little idea what’s going on inside those “brains.” The very fact that this situation is so unclear - that there’s been no clear explanation of why Bing Chat is behaving the way it is - seems central, and disturbing. AI systems like this are (to simplify) designed something like this: “Show the AI a lot of words from the Internet; have it predict the next word it will see, and learn from its success or failure, a mind-bending number of times.” You can do something like that, and spend huge amounts of money and time on it, and out will pop some kind of AI. If it then turns out to be good or bad at writing, good or bad at math, polite or hostile, funny or serious (or all of these depending on just how you talk to it) ... you’ll have to speculate about why this is. You just don’t know what you just made. We’re building more and more powerful AIs. Do they “want” things or “feel” things or aim for things, and what are those things? We can argue about it, but we don’t know. And if we keep going like this, these mysterious new minds will (I’m guessing) eventually be powerful enough to defeat all of humanity, if they were turned toward that goal. And if nothing changes about attitudes and market dynamics, minds that powerful could end up rushed to customers in a mad dash to capture market share. That’s the path the world seems to be on at the moment. It might end well and it might not, but it seems like we are on track for a heck of a roll of the dice. (And to be clear, I do expect Bing Chat to act less weird over time. Changing an AI’s behavior is straightforward, but that might not be enough, and might even provide false reassurance.) 
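As a concrete (and deliberately trivial) illustration of the "predict the next word and learn from success or failure" recipe described above, here is a toy sketch in Python. It is just a bigram counter over a handful of words, my own example and nothing like how Bing Chat or any modern system is actually built; real models are large neural networks trained on vastly more text, which is exactly why their behaviour is hard to explain.

# Toy illustration of next-word prediction by counting; not a real language model.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` during training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> 'on'
print(predict_next("the"))  # -> 'cat' (ties broken by first occurrence)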
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 28, 2023
EA - Apply to attend EA conferences in Europe by OllieBase
02:55
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to attend EA conferences in Europe, published by OllieBase on February 28, 2023 on The Effective Altruism Forum. Europe is about to get significantly warmer and lighter. People like warmth and light, so we (CEA) have been busy organising several EA conferences in Europe over the next few months in partnership with local community-builders and EA groups: EAGxCambridge will take place at Guildhall, 17–19 March. Applications are open now and will close on Friday (3 March). Speakers include Lord Martin Rees, Saloni Dattani (Our World In Data) and Anders Sandberg (including a live interview for the Hear This Idea podcast). EAGxNordics will take place at Munchenbryggeriet, Stockholm 21–23 April. Applications are open now and will close 28 March. If you register by 5 March, you can claim a discounted early bird ticket. EA Global: London will take place at Tobacco Dock, 19–21 May 2023. Applications are open now. If you were already accepted to EA Global: Bay Area, you can register for EAG London now; you don’t need to apply again. EAGxWarsaw will take place at POLIN, 9–11 June 2023. Applications will open in the coming weeks. You can apply to all of these events using the same application details, bar a few small questions specific to each event. Which events should I apply to? (mostly pulled from our FAQ page) EA Global is mostly aimed at people who have a solid understanding of the core ideas of EA and who are taking significant actions based on those ideas. Many EA Global attendees are already professionally working on effective-altruism-inspired projects or working out how best to work on such projects. EA Global is for EAs around the world and has no location restrictions (though we recommend applying ASAP if you will need a visa to enter the UK). EAGx conferences have a lower bar. They are for people who are: Familiar with the core ideas of effective altruism; Interested in learning more about what to do with these ideas. EAGx events also have a more regional focus: EAGxCambridge is for people who are based in the UK or Ireland, or have plans to move to the UK within the next year; EAGxNordics is primarily for people in the Nordics, but also welcomes international applications; EAGxWarsaw is primarily for people based in Eastern Europe but also welcomes international applications. If you want to attend but are unsure about whether to apply, please err on the side of applying! See e.g. Expat Explore on the “Best Time to Visit Europe” Pew Research Center surveyed Americans on this matter (n = 2,260) and concluded that “Most Like It Hot”. There seem to be significant health benefits, though some people dislike sunlight. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 28, 2023
EA - Autonomy and Manipulation in Social Movements by SeaGreen
13:09
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Autonomy and Manipulation in Social Movements, published by SeaGreen on February 27, 2023 on The Effective Altruism Forum. In this essay, I want to share a perspective I have been applying to evaluate movement-building efforts that have helped me understand a feeling that there is “something off”. This is not supposed to be a normative judgement about people building social movements, just a lens that has changed the way I evaluate my own behaviour. Examples of optimisation in social movements Suppose you are running a retreat (a sort of themed 3-ish day group residential work trip) aimed at getting more people interested in a social movement. You mean well: the social movement seems like an important one and having more people interested in it should help more people down the line. You want to do a good job, to get as many people interested in the movement as possible, so you try to work out how to optimise for this goal. Here are some things you might say: “In our experience, young people are more open-minded, so we should focus on reaching out to them”. “We should host the retreat in a remote location that’s fun and free from distraction”. “Let’s try to build a sense of community around this social movement: this will make people feel more supported, motivated and inspired”. “We will host presentations and discussions for people at the retreat. People will learn better surrounded by people also interested in the ideas”. Framed in this way, these suggestions sound fairly innocuous and are probably an effective way to get people to be more interested in the social movement. However, there seems to be something fishy about them. Here is each thing framed in another way. “Younger people are more susceptible to our influence, so we should focus on reaching out to them”. “Let's host the event in a remote location that separates people from other social pressures, and the things that ground them in their everyday lives”. “Let’s build strong social bonds, dependent on believing in the ideas of this movement, increasing the cost of changing their values down the road”. “We can present the ideas of the movement in this unusual social context, in which knowledge of the ideas corresponds directly to social status: we, the presenters, are the most knowledgeable and authoritative and the attendees who are most ‘in-crowd’ will know most about the ideas”. Either set of framings can describe why the actions are effective. In truth, I think the first set is overly naive, and the second is probably too cynical. Further, I understand that there are plenty of settings in which the cynical framings could apply, and they could be hard to avoid. That said, I think they point to concepts that can serve as useful “flags” to check one’s behaviour against. How I understand autonomy and manipulation I want to put forward conceptions of “autonomy” and “manipulation”. Although I don’t claim these capture exactly how every person uses the words, or that they refer to any natural kind, I do think having these concepts available to you is useful. Since these concepts were clarified to me, I have frequently used them as a perspective to look at my behaviour, and frequently they have changed my actions.
As I understand it, a person’s choice or action is more autonomous when they are able to make it via a considered decision process in accordance with their values.3 The most autonomous decisions are made with time for consideration, accurate, sufficient and balanced information, and free from social or emotional pressure. Here is an example of an action that is less autonomous: I don’t act very autonomously when I scroll to watch my 142nd TikTok of the day. Had I distanced myself and reflected, I would have chosen to go for a walk instead, but the act of scrolling is so fast that I never engaged...
Feb 28, 2023
EA - Why I think it's important to work on AI forecasting by Matthew Barnett
15:23
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I think it's important to work on AI forecasting, published by Matthew Barnett on February 27, 2023 on The Effective Altruism Forum. Note: this post is a transcript of a talk I gave at EA Global: Bay Area 2023. These days, a lot of effective altruists are working on trying to make sure AI goes well. But I often worry that, as a community, we don’t yet have a clear picture of what we’re really working on. The key problem is that predicting the future is very difficult, and in general, if you don’t know what the future will look like, it’s usually hard to be sure that any intervention we do now will turn out to be highly valuable in hindsight. When EAs imagine the future of AI, I think a lot of us tend to have something like the following picture in our heads. At some point, maybe 5, 15, 30 years from now, some AI lab somewhere is going to build AGI. This AGI is going to be very powerful in a lot of ways. And we’re either going to succeed in aligning it, and then the future will turn out to be bright and wonderful, or we’ll fail, and the AGI will make humanity go extinct, and it’s not yet clear which of these two outcomes will happen yet. Alright, so that’s an oversimplified picture. There’s lots of disagreement in our community about specific details in this story. For example, we sometimes talk about whether there will be one AGI or several. Or about whether there will be a fast takeoff or a slow takeoff. But even if you’re confident about some of these details, I think there are plausibly some huge open questions about the future of AI that perhaps no one understands very well. Take the question of what AGI will look like once it’s developed. If you asked an informed observer in 2013 what AGI will look like in the future, I think it’s somewhat likely they’d guess it’ll be an agent that we’ll program directly to search through a tree of possible future actions, and select the one that maximizes expected utility, except using some very clever heuristics that allows it to do this in the real world. In 2018, if you asked EAs what AGI would look like, a decent number of people would have told you that it will be created using some very clever deep reinforcement learning trained in a really complex and diverse environment. And these days in 2023, if you ask EAs what they expect AGI to look like, a fairly high fraction of people will say that it will look like a large language model: something like ChatGPT but scaled up dramatically, trained on more than one modality, and using a much better architecture. That’s just my impression of how people’s views have changed over time. Maybe I’m completely wrong about this. But the rough sense I’ve gotten while in this community is that people will often cling to a model of what future AI will be like, which frequently changes over time. And at any particular time, people will often be quite overconfident in their exact picture of AGI. In fact, I think the state of affairs is even worse than how I’ve described it so far. I’m not even sure if this particular question about AGI is coherent. The term “AGI” makes it sound like there will be some natural class of computer programs called “general AIs” that are sharply distinguished from this other class of programs called “narrow AIs”, and at some point – in fact, on a particular date – we will create the “first” AGI. 
I’m not really sure that story makes much sense. The question of what future AI will look like is a huge question, and getting it wrong could make the difference between a successful research program, and one that never went anywhere. And yet, it seems to me that, as of 2023, we still don’t have very strong reasons to think that the way we think about future AI will end up being right on many of the basic details. In general I think that uncertainty about the future of ...
Feb 27, 2023
EA - Every Generator Is A Policy Failure [Works in Progress] by Lauren Gilbert
00:35
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Every Generator Is A Policy Failure [Works in Progress], published by Lauren Gilbert on February 27, 2023 on The Effective Altruism Forum. This article was spun out of a shallow investigation for Open Philanthropy; I thought it might be of interest to GHW folks. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 27, 2023
EA - Milk EA, Casu Marzu EA by Jeff Kaufman
00:27
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Milk EA, Casu Marzu EA, published by Jeff Kaufman on February 27, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 27, 2023
EA - Help GiveDirectly kill "teach a man to fish" by GiveDirectly
01:55
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Help GiveDirectly kill "teach a man to fish", published by GiveDirectly on February 27, 2023 on The Effective Altruism Forum. We need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest? Hi, we need your creative ideas to solve a problem: how to convince the world of the wisdom of giving directly. Will you submit to our proverb contest? The most common critique of giving cash without conditions is a fear of dependency, which comes in the form of: “Give a man a fish, feed him for a day. Teach a man to fish, feed him for a lifetime.” We’ve tried to disabuse folks of this paternalistic idea by showing that often people in poverty know how to fish but cannot afford the boat. Or they don’t want to fish; they want to sell cassava. Also, we’re not giving fish; we’re giving money, and years after getting it, people are better able to feed themselves. Oh, and even if you do teach them skills, it’s less effective than giving cash. Phew! Yet, despite our efforts, the myth remains. The one thing we haven’t tried: fighting proverb with (better) proverb. That’s where you come in. We’re crowdsourcing ideas that capture the dignity and logic of giving directly. SUBMIT YOUR DIRECT GIVING PROVERB (and add your ideas to the comments too!) The best suggestions are not a slogan, but a saying — simple, concrete, evocative (e.g.). Submit your ideas by next Friday, March 3, and then we'll post the top 3 ideas on Twitter for people to vote on the winner. The author of the winning adage will win a video call with a GiveDirectly staff member to learn more about our work one-on-one. Not feeling creative? Share with your friends who are. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 27, 2023
EA - 80,000 Hours has been putting much more resources into growing our audience by Bella
11:50
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 80,000 Hours has been putting much more resources into growing our audience, published by Bella on February 27, 2023 on The Effective Altruism Forum. Since the start of 2022, 80,000 Hours has been putting a lot more effort and money into getting more people to hear of us and engage with our advice. This post aims to give some insight into what we’ve been up to and why. Why invest more in outreach? 80,000 Hours has, we think, been historically cost-effective at causing more people to aim their careers at tackling pressing world problems. We've built a system of resources (website, podcast, job board, advising) that many people have found helpful for this end — and so we want more people to find them. Also, 80,000 Hours has historically been the biggest single source of people learning about the EA community. If we want to grow the community, increasing the number of people reached by 80k seems like one of the best available tools for doing that. Thirdly, outreach at the “top of the funnel” (i.e. getting people to subscribe after they hear about 80k’s ideas for the very first time) has unusually good feedback mechanisms & is particularly easy to measure. For the most part, we can tell if what we’re doing isn’t working, and change tack pretty quickly. Another reason is that a lot of these activities take relatively little staff time, but can scale quite efficiently with more money. Finally, based on our internal calculations, our outreach seems likely to be cost-effective as a means of getting more people into the kinds of careers we’re really excited about. What did we do to invest more in outreach? In 2020, 80k decided to invest more in outreach by moving one of their staff into a position focused on outreach, but it ended up not working out & that person left their role. Then in mid-2021, 80k decided to hire someone new to work on outreach full-time. They hired 1 staff member (me!), and I started in mid-January 2022. In mid-2022, we found that our initial pilots in this area looked pretty promising — by May we were on track to 4x our yearly rate of subscriber growth — and we decided to scale up the team and the resource investment. I ran a hiring round and made two hires, who started at the end of Nov 2022 and in Feb 2023; I now act as head of marketing. We also decided to formalise a “marketing programme” for 80k, which is housed within the website team. Since this project spends money so differently from the rest of 80k, and in 2022 was a large proportion of our overall spending, last year we decided to approach funders specifically to support our marketing spend (rather than draw from our general funds). The marketing programme has a separate fundraising cycle and decisions are made on it somewhat independently from the rest of 80k. In 2022, the marketing programme spent $2.65m (compared to ~$120k spent on marketing in 2021). The bulk of this spending was on sponsored placements with selected content creators ($910k), giving away free books to people who signed up to our newsletter ($1.04m), and digital ads ($338k). We expect to spend more in 2023, and are in conversation with funders about this. As a result of our efforts, more than 5x as many people subscribed to our newsletter in 2022 (167k) than 2021 (30k), and we had more website visitors in Q4 2022 than any previous quarter (1.98m). 
We can’t be sure how many additional people will change to a high-impact career as a result, in large part because we have found that “career plan changes” of this kind take, on average, about 2 years from first hearing about 80k. Still, our current best guess is that these efforts will have been pretty effective at helping people switch careers to more impactful areas. Partly this guess is based on the growth in new audience members that we’ve seen (plus 80k’s solid track record...
Feb 27, 2023
EA - Very Briefly: The CHIPS Act by Yadav
02:23
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Very Briefly: The CHIPS Act, published by Yadav on February 26, 2023 on The Effective Altruism Forum. About six months ago, Congress passed the CHIPS Act. The "Creating Helpful Incentives to Produce Semiconductors for America Act" will spend $280 billion over the next ten years. $200 billion will go into scientific R&D and commercialization, $52.7 billion into semiconductor manufacturing, R&D, and workforce development, and $24 billion into tax credits (government subsidies) for chip production. Semiconductor production has been slipping in the United States for some time. While countries like China and Taiwan have maintained a strong foothold in the global chip market, the U.S. now produces just 12% of the world's semiconductors, down from 37% in the 1990s (source). The United States' dwindling position in the global semiconductor market, coupled with concerns about reliance on foreign suppliers - especially China and Taiwan - likely played a role in the introduction of the CHIPS Act. In a recent speech, Commerce Secretary Gina Raimondo spoke about how the CHIPS Act could help the U.S. regain its position as the top destination for innovation in chip design, manufacturing, and packaging. According to her, the U.S. "will be the premier destination in the world where new leading-edge chip architectures can be invented in our research labs, designed for every end-use application, manufactured at scale, and packaged with the most advanced technologies". An obvious reason I am concerned about this Act is that the increased investment in the U.S. semiconductor industry could enable AI capabilities companies in the US, such as OpenAI, to overcome computing challenges they may face right now. Additionally, other countries, such as the U.K. and the Member States of the EU, seem to be following suit. For example, the U.K. recently launched a research project aimed at building on the country's strengths in design, compound semiconductors, and advanced technologies. The European Chips Act also seeks to invest €43 billion in public and private funding to support semiconductor manufacturing and supply chain resilience. Currently, the Members of the European Parliament are preparing to initiate talks on the draft Act. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 27, 2023
EA - Remote Health Centers In Uganda - a cost effective intervention? by NickLaing
16:00
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Remote Health Centers In Uganda - a cost effective intervention?, published by NickLaing on February 27, 2023 on The Effective Altruism Forum. TLDR: Operating basic health centers in remote rural Ugandan communities looks more cost-effective than top GiveWell interventions on early stage analysis - with huge uncertainty. I’m Nick, a medical doctor who is co-founder and director of OneDay Health (ODH). We operate 38 nurse-led health centers in healthcare “black holes,” remote rural areas more than 5 km from government health facilities. About 5 million Ugandans live in these healthcare black holes and only have bad options when they get sick. ODH health centers provide high-quality primary healthcare to these communities at the lowest possible cost. We train our talented nurses to use protocol-based guidelines and equip them with over 50 medications to diagnose and treat 30 common medical conditions. In our 5 years of operation, we have so far treated over 150,000 patients – including over 70,000 for malaria. Since we started up 5 years ago, we’ve raised about $290,000 of which we’ve spent around $220,000 to date. This year we hope to launch another 10-15 OneDay Health centers in Uganda and we're looking to expand to other countries which is super exciting! If you’re interested in how we select health center sites or more details about our general ops, check our website or send me a message; I’d love to share more! Challenges in Assessing Cost-Effectiveness of OneDay Health Unfortunately, obtaining high-quality effectiveness data requires an RCT or a cohort study that would cost 5-10 times our current annual budget. So we've estimated our impact by estimating the DALYs our health centers avert through treating four common diseases and providing family planning. I originally evaluated this as part of my master's dissertation in 2019 and have updated it to more recent numbers. As we’re assessing our own organisation, the chance of bias here is high. Summary of Cost-Effectiveness Model To estimate the impact of our health centers, we estimated the DALYs averted through treating individual patients for 4 conditions: malaria, pneumonia, diarrhoea, and STIs. We started with Uganda-specific data on DALYs lost to each condition. We then adjusted that data to account for the risk of false diagnosis and treatment failure (in which case the treatment would have no effect). We then added impact from family planning. Estimating impact per patient isn’t a new approach. PSI used a similar method to evaluate their impact (with an awesome online calculator), but has now moved to other methods. Inputs for our approach Headline findings For each condition, we multiplied the DALYs averted per treatment by the average number of patients treated with that condition in one health center in one month. When we added this together, we found that each ODH health center averted 13.70 DALYs per month, predominantly through treatment of malaria in all ages, and pneumonia in children under 5. ODH health centers are inexpensive to open and operate. Each health center currently needs only $137.50 per month in donor subsidies to operate. The remaining $262.50 in expenses are covered by small payments from patients. Many of these patients would have counterfactually received treatment, but would have incurred significantly greater expense to do so (mainly for travel).
In addition, about 40% of patient expenses were for treating conditions not included in the cost-effectiveness analysis. We estimate that in one month, each health center averts 13.70 DALYs and costs $137.50 in donor subsidies. This is roughly equivalent to saving a life for $850, or more conservatively for $1,766 including patient expenses. However, there is huge uncertainty in our analysis. The Analysis Measuring Impact by Estimating DALYs...
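To make the arithmetic behind these headline figures easy to check, here is a minimal back-of-the-envelope sketch in Python. The inputs are the numbers quoted above; the DALYs-per-life-saved conversion (~85) is my own assumption, chosen because it roughly reproduces the $850 figure. The post's more conservative $1,766 figure also depends on how much of patients' own spending is attributed to the modelled conditions, which this sketch does not try to replicate exactly.

# Rough check of the cost-effectiveness figures quoted above. The
# DALYs-per-life conversion is an assumption, not a number from the post.
DALYS_AVERTED_PER_MONTH = 13.70    # per health center, from the post's model
DONOR_SUBSIDY_PER_MONTH = 137.50   # USD of donor subsidy per health center
PATIENT_FEES_PER_MONTH = 262.50    # USD covered by patient payments
DALYS_PER_LIFE_SAVED = 85          # assumed conversion factor

cost_per_daly_donor = DONOR_SUBSIDY_PER_MONTH / DALYS_AVERTED_PER_MONTH
cost_per_daly_total = (DONOR_SUBSIDY_PER_MONTH + PATIENT_FEES_PER_MONTH) / DALYS_AVERTED_PER_MONTH

print(f"Donor cost per DALY averted: ${cost_per_daly_donor:.2f}")    # ~$10
print(f"All-in cost per DALY averted: ${cost_per_daly_total:.2f}")   # ~$29
print(f"Implied donor cost per life saved: ${cost_per_daly_donor * DALYS_PER_LIFE_SAVED:,.0f}")  # ~$850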
Feb 27, 2023
EA - Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature by Hauke Hillebrandt
04:49
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Let's Fund: Better Science impact evaluation. Registered Reports now available in Nature, published by Hauke Hillebrandt on February 26, 2023 on The Effective Altruism Forum. Cross-posted from my blog - inspired by the recent call for more monitoring and evaluation. Hi, it's Hauke, the founder of Let's Fund. We research pressing problems, like climate change or the replication crisis in science, and then crowdfund for particularly effective policy solutions. Ages ago, you signed up to my newsletter. Now I've evaluated the $1M+ in grants you donated, and they had a big impact. Below I present the Better Science / Registered Report campaign evaluation, but stay tuned for the climate policy campaign impact evaluation (spoiler: clean energy R&D increased by billions of dollars). Let's Fund: Better Science (Image: Chris Chambers giving a talk on Registered Reports.) We crowdfunded ~$80k for Prof. Chambers to promote Registered Reports, a new publication format where research is peer-reviewed before the results are known. This fundamentally changes the way research is done across all scientific fields. For instance, one recent Registered Report studied COVID patients undergoing ventilation (but there are examples in other areas including climate science, development economics, biosecurity, farm animal welfare, etc.). Registered Reports have higher quality than normal publications, because they: make science more theory-driven, open and transparent; find methodological weaknesses and also potential biosafety failures of dangerous dual-use research prior to publication (e.g. gain-of-function research); get more papers published that fail to confirm the original hypothesis; and increase the credibility of non-randomized natural experiments using observational data. If Registered Reports become widely adopted, it might lead to a paradigm shift and better science. 300+ journals have already adopted Registered Reports. And just last week Nature, the most prestigious academic journal, adopted it: Chris Chambers on Twitter: "10 years after we created Registered Reports, the thing critics told us would never happen has happened: @Nature is offering them. Congratulations @Magda_Skipper & team. The @RegReports initiative just went up a gear and we are one step closer to eradicating publication bias." This is big, and Registered Reports might soon become the gold standard. Why? Imagine you're a scientist with a good idea for an experiment with high value of information (think: a simple cure for depression). If that has a low chance of working out (say 1%), then previously you had little incentive to run it. Now, if your idea is really good, and based on strong theory, Registered Reports derisks running the experiment. You can first submit the idea and methodology to Nature and the reviewers might say: 'This idea is nuts, but we agree there's a small chance it might work and we're really interested in whether it works. If you run the experiment, we'll publish this independent of results!' Now you can go ahead and spend a lot of effort on running the experiment, because even if it doesn't work, you still get a Nature paper (which you wouldn't with null results). This will lead to more high-risk, high-reward research (share this post or the tweet with academics! They might thank you for the Nature publication).
Many people were integral to this progress, but I think Chambers, the co-inventor and prime proponent of Registered Reports deserves special credit. In turn he credited: Chris Chambers @chrisdc77: 'You. That's right. Some of the most useful and flexible funding I've received has been donated by hundreds of generous members of public (& small orgs) via our @LetsFundOrg-supported crowd sourcing fund' You may feel smug. If you want to make a bigger donation (>$1k), click here. There are proposals to improve Regis...
Feb 26, 2023
EA - Some more projects I’d like to see by finm
39:35
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some more projects I’d like to see, published by finm on February 25, 2023 on The Effective Altruism Forum. I recently wrote about some EA projects I’d like to see (also on the EA Forum). This went well! I suggested I’d write out a few more half-baked ideas sometime. As with the previous post, I make no claim to originating these ideas, and I’ll try to attribute them where possible. I also make no claim to being confident that all the ideas are any good; just that they seem potentially good without much due diligence. Since many of these are based on shallow dives, I’ve likely missed relevant ongoing projects. If you’re considering writing a similar list, at the end of this post I reflect on the value of writing about speculative project ideas in public. The order of these ideas is arbitrary and you can read any number of them (i.e. there’s no thread running through them). Summary Fermi games BOTEC tools Billionaire impact list Forecasting guide Short stories about AI futures Technical assistance with AI safety verification Infosec consultancy for AI labs Achievements ledger World health dashboard The Humanity Times Fermi games Many people are interested in getting good at making forecasts, and spreading good forecasting practice. Becoming better (more accurate and better calibrated) at forecasting important outcomes — and being willing to make numerical, testable predictions in the first place — often translates into better decisions that bear on those outcomes. A close (and similarly underappreciated) neighbor of forecasting is the Fermi estimate, or BOTEC. This is the skill of considering some figure you’re uncertain about, coming up with some sensible model or decomposition into other figures you can begin guessing at, and reaching a guess. It is also the skill of knowing how confident you should be in that guess; or how wide your uncertainty should be. If you have interviewed for some kind of consulting-adjacent job you have likely been asked to (for example) size a market for whiteboard markers; that is an example. As well as looking ahead in time, you can answer questions about how the past turned out (‘retrocasting’). It’s hard to make retrocasting seriously competitive, because Google exists, but it is presumably a way to teach forecasting: you tell people about the events that led up to some decision in a niche of history few people are familiar with, and ask: did X happen next? How long did Y persist for? And so on. You can also make estimates without dates involved. Douglas Hofstadter lists some examples in Metamagical Themas: How many people die per day on the earth? How many passenger-miles are flown each day in the U.S.? How many square miles are there in the U.S.? How many of them have you been in? How many syllables have been uttered by humans since 1400 A.D.? How many moving parts are in the Columbia space shuttle? What volume of oil is removed from the earth each year? How many barrels of oil are left in the world? How many meaningful, grammatical, ten-word sentences are there in English? How many insects [.] are now alive? [.] Tigers? Ostriches? Horseshoe crabs? How many tons of garbage does New York City put out each week? How fast does your hair grow (in miles per hour)? What is the weight of the Empire State Building? Of the Hoover Dam? Of a fully loaded jumbo jet? 
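As an illustration of what such a BOTEC can look like in practice, here is a minimal sketch in Python that decomposes one of the questions above (New York City's weekly garbage) and carries uncertainty through with a quick Monte Carlo. The input ranges are my own rough guesses, not figures from the post; the point is the decomposition plus an explicit interval, which is the "how confident should I be" half of the skill.

# Minimal Fermi-estimate sketch with explicit uncertainty; all inputs are guesses.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def lognormal_from_90ci(low, high, size):
    """Sample a positive quantity whose 90% credible interval is (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

population = lognormal_from_90ci(7e6, 10e6, N)             # NYC residents (guess)
kg_per_person_per_day = lognormal_from_90ci(1.0, 3.0, N)   # waste per person (guess)

tons_per_week = population * kg_per_person_per_day * 7 / 1000  # metric tons

p5, p50, p95 = np.percentile(tons_per_week, [5, 50, 95])
print(f"NYC garbage: ~{p50:,.0f} tons/week (90% interval {p5:,.0f} to {p95:,.0f})")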
Again, most forecasts have a nice feature for evaluation and scoring, which is that before the time where a forecast resolves nobody knows the answer for sure, and after it resolves everyone does, and so there is no way to cheat other than through prophecy. This doesn’t typically apply for other kinds of Fermi estimation questions. In particular, things get really interesting where nobody really knows the correct answer, though a correct answer clearly exists. This pays when ‘ground ...
Feb 26, 2023
EA - Worldview Investigations Team: An Overview by Rethink Priorities
08:00
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Worldview Investigations Team: An Overview, published by Rethink Priorities on February 25, 2023 on The Effective Altruism Forum. Introduction Rethink Priorities’ Worldview Investigations Team (WIT) exists to improve resource allocation within the effective altruism movement, focusing on tractable, high-impact questions that bear on philanthropic priorities. WIT builds on Rethink Priorities’ strengths as a multi-cause, stakeholder-driven, interdisciplinary research organization: it takes action-relevant philosophical, methodological, and strategic problems and turns them into manageable, modelable problems. Rethink Priorities is currently hiring multiple roles to build out the team: Worldview Investigations Philosophy Researcher Worldview Investigations Quantitative Researcher Worldview Investigations Programmer These positions offer a significant opportunity for thoughtful and curious individuals to shift the priorities, research areas, and philanthropic spending strategies of major organizations through interdisciplinary work. WIT tackles problems like: How should we convert between the units employed in various cost-effectiveness analyses (welfare to DALYs-averted; DALYs-averted to basis points of existential risk averted, etc.)? What are the implications of moral uncertainty for work on different cause areas? What difference would various levels of risk- and ambiguity-aversion have on cause prioritization? Can those levels of risk- and/or ambiguity-aversion be justified? The work involves getting up to speed with the literature in different fields, contacting experts, writing up reasoning in a manner that makes sense to experts and non-experts alike, and engaging with quantitative models. The rest of this post sketches WIT’s history, strategy, and theory of change. WIT’s History Worldview investigation has been part of Rethink Priorities from the beginning, as some of Rethink Priorities’ earliest work was on invertebrate sentience. Invertebrate animals are far more numerous than vertebrate animals, but the vast majority of animal-focused philanthropic resources go to vertebrates rather than invertebrates. If invertebrates aren’t sentient, then this is as it should be, given that sentience is necessary for moral status. However, if invertebrates are sentient, then it would be very surprising if the current resource allocation were optimal. So, this project involved sorting through the conceptual issues associated with assessing sentience, identifying observable proxies for sentience, and scouring the academic literature for evidence with respect to each proxy. In essence, this project developed a simple, transparent tool for making progress on fundamental questions about the distribution of consciousness. If the members of a species have a sufficient number of relevant traits, then they probably deserve more philanthropic attention than they’ve received previously. Rethink Priorities’ work on invertebrate sentience led directly to its next worldview investigation project, as even if animals are equally sentient, they may not have equal capacity for welfare. For all we know, some animals may be able to realize much more welfare than others. 
Jason Schukraft took up this question in his five-post series about moral weight, again trying to sort out the conceptual issues and make empirical progress by finding relevant proxies for morally relevant differences. His influential work laid the foundation for the Moral Weight Project, which, again, created a simple, transparent tool for assessing differences in capacity for welfare. Moreover, it developed a way to implement those differences in cost-effectiveness analyses. In addition to its work on animals, Rethink Priorities has done research on standard metrics for evaluating health interventions and estimating the burden...
Feb 25, 2023
EA - How major governments can help with the most important century by Holden Karnofsky
07:34
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How major governments can help with the most important century, published by Holden Karnofsky on February 24, 2023 on The Effective Altruism Forum. I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread; how to help via full-time work; and how major AI companies can help. What about major governments - what can they be doing today to help? I think governments could play crucial roles in the future. For example, see my discussion of standards and monitoring. However, I’m honestly nervous about most possible ways that governments could get involved in AI development and regulation today. I think we still know very little about what key future situations will look like, which is why my discussion of AI companies (previous piece) emphasizes doing things that have limited downsides and are useful in a wide variety of possible futures. I think governments are “stickier” than companies - I think they have a much harder time getting rid of processes, rules, etc. that no longer make sense. So in many ways I’d rather see them keep their options open for the future by not committing to specific regulations, processes, projects, etc. now. I worry that governments, at least as they stand today, are far too oriented toward the competition frame (“we have to develop powerful AI systems before other countries do”) and not receptive enough to the caution frame (“We should worry that AI systems could be dangerous to everyone at once, and consider cooperating internationally to reduce risk”). (This concern also applies to companies, but see footnote.) In a previous piece, I talked about two contrasting frames for how to make the best of the most important century: The caution frame. This frame emphasizes that a furious race to develop powerful AI could end up making everyone worse off. This could be via: (a) AI forming dangerous goals of its own and defeating humanity entirely; (b) humans racing to gain power and resources and “lock in” their values. Ideally, everyone with the potential to build sufficiently powerful AI would be able to pour energy into building something safe (not misaligned), and carefully planning out (and negotiating with others on) how to roll it out, without a rush or a race. With this in mind, perhaps we should be doing things like: Working to improve trust and cooperation between major world powers. Perhaps via AI-centric versions of Pugwash (an international conference aimed at reducing the risk of military conflict), perhaps by pushing back against hawkish foreign relations moves. Discouraging governments and investors from shoveling money into AI research, encouraging AI labs to thoroughly consider the implications of their research before publishing it or scaling it up, working toward standards and monitoring, etc. Slowing things down in this manner could buy more time to do research on avoiding misaligned AI, more time to build trust and cooperation mechanisms, and more time to generally gain strategic clarity. The “competition” frame. This frame focuses less on how the transition to a radically different future happens, and more on who's making the key decisions as it happens.
If something like PASTA is developed primarily (or first) in country X, then the government of country X could be making a lot of crucial decisions about whether and how to regulate a potential explosion of new technologies. In addition, the people and organizations leading the way on AI and other technology advancement at that time could be especially influential in such decisions. This means it could matter enormously "who leads the way on transformative AI" - which country or countries, which people or organizations. Some people feel that we can make confident statements today a...
Feb 25, 2023
EA - Updates from the Mental Health Funder's Circle by wtroy
01:43
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Updates from the Mental Health Funder's Circle, published by wtroy on February 24, 2023 on The Effective Altruism Forum. The Mental Health Funder’s Circle held its first grant round in the Fall/Winter of 2022. To those of you who applied for this round, we appreciate your patience. As this was our first round of funding, everything took longer than expected. We will iterate on the structure of the funding circle over time, and intend to develop a process that adds value for members and grantees alike. A total of $254,000 was distributed to the following three organizations: A matching grant of $44,000 to Vida Plena for their work on cost-effective community mental health in Ecuador. Two grants totaling $100,000 to Happier Lives Institute for their continued work on subjective wellbeing and cause prioritization research. Two grants totaling $110,000 to Rethink Wellbeing to support mental health initiatives for the EA community. Our next round of funding is now open, with initial 1-pagers due April 1st. After applications have been reviewed, we will contact promising grantees and make final funding decisions by June 1st. Applications can be found on our website. For more information on the MHFC, see this forum post. Unfortunately, we lack the ability to respond to every application. We are excited to find and support impactful organizations working on cost-effective and catalytic mental health interventions. We encourage you to apply, even if you think your project might be outside of our scope! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 25, 2023
EA - Make RCTs cheaper: smaller treatment, bigger control groups by Rory Fenton
05:31
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Make RCTs cheaper: smaller treatment, bigger control groups, published by Rory Fenton on February 24, 2023 on The Effective Altruism Forum. Epistemic status: I think this is a statistical “fact” but I feel a bit cautious since so few people seem to take advantage of it Summary It may not always be optimal for cost or statistical power to have equal-sized treatment/control groups in a study. When your intervention is quite expensive relative to data collection, you can maximise statistical power or save costs by using a larger control group and smaller treatment group. The optimal ratio of control sample to treatment sample is just the square root of the cost per treatment participant divided by the square root of the cost per control participant. Why larger control groups seem better Studies generally have equal numbers of treatment and control participants. This makes intuitive sense: a study with 500 treatment and 500 control will be more powerful than a study with 499 treatment and 501 control, for example. This is due to the diminishing power returns to increasing your sample size: the extra person removed from one arm hurts your power more than the extra person added to the other arm increases it. But what if your intervention is expensive relative to data collection? Perhaps you are studying a $720 cash transfer and it costs $80 to complete each survey, for a total cost of $800 per treatment participant ($720 + $80) and $80 per control. Now, for the same cost as 500 treatment and 500 control, you could have 499 treatment and 510 control, or 450 treatment and 1000 control: up to a point, the loss in precision from the smaller treatment is more than offset by the 10x larger increase in your control group, resulting in a more powerful study overall. In other words: when your treatment is expensive, it is generally more powerful to have a larger control group, because it's just so much cheaper to add control participants. How much larger? The exact ratio of treatment:control that optimises statistical power is surprisingly simple, it’s just the ratio of the square roots of the costs of adding to each arm i.e. sqrt(control_cost) : sqrt(treatment_cost) (See Appendix for justification). For example, if adding an extra treatment participant costs 16x more than adding a control participant, you should optimally have sqrt(16/1) = 4x as many control as treatment. Quantifying the benefits With this approach, you either get free extra power for the same money or save money without losing power. For example, let’s look at the hypothetical cash transfer study above with treatment participants costing $800 and control participants $80. The optimal ratio of control to treatment is then sqrt(800/80) = 3.2:1, resulting in either: Saving money without losing power: the study is currently powered to measure an effect of 0.175 SD and, with 500 treatment and control, costs $440,000. With a 3.2:1 ratio (types furiously in Stata) you could achieve the same power with a sample of 337 treatment and 1079 control, which would cost $356,000: saving you a cool $84k without any loss of statistical power. Getting extra power for the same budget: alternatively, if you still want to spend the full $440k, you could then afford 416 treatment and 1,331 control, cutting your detectable effect from 0.175 SD to 0.155 SD at no extra cost. 
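The allocation rule and the worked example above can be checked with a short script. This is a minimal sketch rather than anything from the original post or its Stata run: it assumes the variance of the estimated effect scales as 1/n_treatment + 1/n_control and plugs in the post's illustrative costs of $800 per treatment participant and $80 per control participant.

import math

def optimal_split(budget, cost_t, cost_c):
    # Minimise variance (proportional to 1/n_t + 1/n_c) subject to
    # n_t * cost_t + n_c * cost_c = budget; the optimum has n_c / n_t = sqrt(cost_t / cost_c).
    ratio = math.sqrt(cost_t / cost_c)
    n_t = budget / (cost_t + ratio * cost_c)
    return n_t, ratio * n_t

def se(n_t, n_c):
    # Standard error of a difference in means, up to a constant factor.
    return math.sqrt(1 / n_t + 1 / n_c)

cost_t, cost_c = 800, 80                      # illustrative costs from the post
budget = 500 * cost_t + 500 * cost_c          # $440,000 for the 500/500 design

n_t, n_c = optimal_split(budget, cost_t, cost_c)
print(round(n_t), round(n_c))                 # ~418 treatment, ~1321 control (the post's power calculation gives 416 / 1,331)
print(round(se(500, 500) / se(n_t, n_c), 2))  # ~1.13, i.e. ~13% more precision for the same budget
                                              # (consistent with the detectable effect falling from 0.175 SD to ~0.155 SD)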
Caveats Ethics: there may be ethical reasons for not wanting a larger control group, for example in a medical trial where you would be denying potentially life-saving treatments to sick patients. Even outside of medicine, control participants’ time is important and you may wish to avoid “wasting” it on participating in your study (although you could use some of the savings to compensate control participants, if that won’t mess with your study). Necessarily limited ...
Feb 25, 2023
EA - On Philosophy Tube's Video on Effective Altruism by Jessica Wen
05:40
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Philosophy Tube's Video on Effective Altruism, published by Jessica Wen on February 24, 2023 on The Effective Altruism Forum. tl;dr: I found Philosophy Tube's new video on EA enjoyable and the criticisms fair. I wrote out some thoughts on her criticisms. I would recommend a watch. Background I’ve been into Abigail Thorn's channel Philosophy Tube for about as long as I’ve been into Effective Altruism. I currently co-direct High Impact Engineers, but this post is written from a personal standpoint and does not represent the views of High Impact Engineers. Philosophy Tube creates content explaining philosophy (and many aspects of Western culture) with a dramatic streak (think fantastic lighting and flashy outfits - yes please!). So when I found out that Philosophy Tube would be creating a video on Effective Altruism, I got very excited. I have written this almost chronologically and in a very short amount of time, so the quality and format may not be up to the normal standards of the EA Forum. I wanted to hash out my thoughts for my own understanding and to see what others thought. Content, Criticisms, and Contemplations EA and SBF Firstly, Thorn outlines what EA is, and what’s happened over the past 6 months (FTX, a mention of the Time article, and other critical pieces) and essentially says that the leaders of the movement ignored what was happening on the ground in the community and didn’t listen to criticisms. Although I don’t think this was the only cause of the above scandals, I think there is some truth in Thorn’s analysis. I also disagree with the insinuation that Earning to Give is a bad strategy because it leads to SBF-type disasters: 80,000 Hours explicitly tells people to not take work that does harm even if you expect the positive outcome to outweigh the harmful means. EA and Longtermism In the next section, Thorn discusses Longtermism, What We Owe the Future (WWOTF), and The Precipice. She mentions that there is no discussion of reproductive rights in a book about our duties to future people (which I see as an oversight – and not one that a woman would have made); she prefers The Precipice, which I agree is more detailed, considers more points of view, and is more persuasive. However, I think The Precipice is drier and less easy to read than WWOTF, the latter of which is aimed at a broader audience. There is a brief (and entertaining) illustration of Expected Value (EV) and the resulting extreme case of Pascal’s Mugging. Although MacAskill puts this to the side, Thorn goes deeper into the consequences of basing decisions on EV and the measurability bias that results – and she is right that although there is thinking done on how to overcome this in EA (she gives the example of Peter Singer’s The Most Good You Can Do, but also see this, this and this for examples of EAs thinking about tackling measurability bias), she mentions that this issue is never tackled by MacAskill. (She generalises this to EA philosophers, but isn't Singer one of the OG EA philosophers?) EA and ~The System~ The last section is the most important criticism of EA. I think this section is most worth watching. 
Thorn mentions the classic leftist criticism of EA: it reinforces the 19th-century idea of philanthropy where people get rich and donate their money to avoid criticisms of how they got their money and doesn’t directly tackle the unfair system that privileges some people over others. Thorn brings Mr Beast into the discussion, and although she doesn’t explicitly say that he’s an EA, she uses Mr Beast as an example of how EA might see this as: “1000 people were blind yesterday and can see today – isn’t that a fact worth celebrating?”. The question that neither Mr Beast nor the hypothetical EA ask is: “how do we change the world?”. Changing the world, she implies, necessitates chang...
Feb 25, 2023
EA - EA content in French: Announcing EA France’s translation project and our translation coordination initiative by Louise Verkin
07:56
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA content in French: Announcing EA France’s translation project and our translation coordination initiative, published by Louise Verkin on February 24, 2023 on The Effective Altruism Forum. We’re happy to announce that thanks to a grant from Open Philanthropy, EA France has been translating core EA content into French. EA France is also coordinating EA EN→FR translation efforts: if you’re translating EA content from English to French or considering it, please contact me so we can check that there is no duplicated effort and provide support! EA France’s translation project With Open Philanthropy’s grant, we hired professional translators to translate 16 articles, totalling ~67,000 words. Their work is being reviewed by volunteers from the French EA community. Articles being translated What is effective altruism? (translation available at Qu’est-ce que l’altruisme efficace ?) Comparing Charities (translation available at Y a-t-il des organisations caritatives plus efficaces que d’autres ?) Expected Value On Caring Four Ideas You Already Agree With The Parable Of The Boy Who Cried 5% Chance Of Wolf Preventing an AI-related catastrophe The case for reducing existential risks Preventing catastrophic pandemics Climate change Nuclear security The “most important century” blog series (summary) Counterfactual impact Neglectedness and impact Cause profile: Global health and development This is your most important decision (career “start here”) (See the appendix for other translation projects from English to French, and for existing translations.) All content translated as part of the EA France translation project will be released on the EA France blog. It is also available for use by other French-speaking communities, provided that they 1) cite original writers, 2) link EA France’s translations, 3) notify EA France at contact@altruismeefficacefrance.org. We’re very happy that more EA content will be available to French speakers, and we hope that it will make outreach efforts significantly easier! Translation Coordination Initiative Now that several translation projects exist, it’s essential that we have a way to coordinate so that: we don’t duplicate effort (translating the same content twice), we agree on a common vocabulary (so the same term doesn’t get translated in 3 different ways, which makes it needlessly confusing for readers), EA France can provide support to all projects (e.g. sharing translations once they’re published, helping with editing, hosting translated works). The coordination initiative consists of: a master spreadsheet which lists all existing projects, and what they’re translating, a glossary of existing translations that translators and editors can refer to. Both items are accessible upon request. Appendix What other translation projects exist? There are at least two other ongoing projects contributing to this overall effort, feeding the glossary and monitored in the master spreadsheet: translating the 80,000 Hours Career Guide, led by Théo Knopfer (funded by a grant from Open Philanthropy), translating the EA Handbook, led by Baptiste Roucau (also funded by a grant from Open Philanthropy). What translations are already available in French? 
Longtermism: An Introduction (Les trois hypothèses du long-termisme, translation by Antonin Broi) 500 Million, But Not A Single One More (500 millions, mais pas un de plus, translation by Jérémy Perret) The lack of controversy over well-targeted aid (L’absence de controverse au sujet de l’aide ciblée, translation by Eve-Léana Angot) Framing Effective Altruism as Overcoming Indifference (Surmonter l’indifférence, translation by Flavien P) Efficient Charity — Do Unto Others (Une charité efficace — Faire au profit des autres, translation by Guillaume Thomas) How did you choose what to translate? We used the following cri...
Feb 24, 2023
EA - Why I don’t agree with HLI’s estimate of household spillovers from therapy by JamesSnowden
14:49
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I don’t agree with HLI’s estimate of household spillovers from therapy, published by JamesSnowden on February 24, 2023 on The Effective Altruism Forum. Summary In its cost-effectiveness estimate of StrongMinds, Happier Lives Institute (HLI) estimates that most of the benefits accrue not to the women who receive therapy, but to household members. According to HLI’s estimates, each household member benefits from the intervention ~50% as much as the person receiving therapy. Because there are ~5 non-recipient household members per treated person, this estimate increases the cost-effectiveness estimate by ~250%. i.e. ~70-80% of the benefits of therapy accrue to household members, rather than the program participant. I don’t think the existing evidence justifies HLI's estimate of 50% household spillovers. My main disagreements are: Two of the three RCTs HLI relies on to estimate spillovers are on interventions specifically intended to benefit household members (unlike StrongMinds’ program, which targets women and adolescents living with depression). Those RCTs only measure the wellbeing of a subset of household members most likely to benefit from the intervention. The results of the third RCT are inconsistent with HLI’s estimate. I’d guess the spillover benefit to other household members is more likely to be in the 5-25% range (though this is speculative). That reduces the estimated cost-effectiveness of StrongMinds from 9x to 3-6x cash transfers, which would be below GiveWell’s funding bar of 10x. Caveat in footnote. I think I also disagree with other parts of HLI’s analysis (including how worried to be about reporting bias; the costs of StrongMinds’ program; and the point on a life satisfaction scale that’s morally equivalent to death). I’d guess, though I’m not certain, that more careful consideration of each of these would reduce StrongMinds’ cost-effectiveness estimate further relative to other opportunities. But I’m going to focus on spillovers in this post because I think it makes the most difference to the bottom line, represents the clearest issue to me, and has received relatively little attention in other critiques. For context: I wrote the first version of Founders Pledge’s mental health report in 2017 and gave feedback on an early draft of HLI’s report on household spillovers. I’ve spent 5-10 hours digging into the question of household spillovers from therapy specifically. I work at Open Philanthropy but wrote this post in a personal capacity. I’m reasonably confident the main critiques in this post are right, but much less confident in what the true magnitude of household spillovers is. I admire the work StrongMinds is doing and I’m grateful to HLI for their expansive literature reviews and analysis on this question. Thank you to Joel McGuire, Akhil Bansal, Isabel Arjmand, Alex Cohen, Sjir Hoeijmakers, Josh Rosenberg, and Matt Lerner for their insightful comments. They don’t necessarily endorse the conclusions of this post. 0. How HLI estimates the household spillover rate of therapy HLI estimates household spillovers of therapy on the basis of the three RCTs on therapy which collected data on the subjective wellbeing of some of the household members of program participants: Mutamba et al. (2018), Swartz et al. (2008), Kemp et al. (2009). 
Combining those RCTs in a meta-analysis, HLI estimates household spillover rates of 53% (see the forest plot below; 53% comes from dividing the average household member effect (0.35) by the average recipient effect (0.66)). HLI assumes StrongMinds’ intervention will have a similar effect on household members. But, I don't think these three RCTs can be used to generate a reliable estimate for the spillovers of StrongMinds' program for three reasons. 1. Two of the three RCTs HLI relies on to estimate spillovers are on in...
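The headline spillover arithmetic in this summary can be reproduced in a few lines. The sketch below is not from HLI's or the author's analysis; it simply plugs in the figures quoted above (a 0.35 average household-member effect, a 0.66 average recipient effect, and roughly five non-recipient household members) and assumes the 9x-cash-transfer estimate scales in proportion to the total-benefit multiplier.

recipient_effect = 0.66       # average effect on the therapy recipient (from HLI's meta-analysis, per the post)
household_effect = 0.35       # average effect on one other household member
n_household = 5               # ~5 non-recipient household members per treated person

spillover = household_effect / recipient_effect           # ~0.53, HLI's ~50% spillover figure
total_multiplier = 1 + n_household * spillover            # ~3.65x the recipient-only benefit
household_share = n_household * spillover / total_multiplier
print(round(spillover, 2), round(total_multiplier, 2), round(household_share, 2))
# 0.53 3.65 0.73: a ~250-265% increase, with ~70-80% of benefits accruing to household members

# If spillovers were 5-25% instead (the author's guess), and the 9x-cash estimate
# scaled in proportion to the total-benefit multiplier:
for s in (0.05, 0.25):
    print(round(9 * (1 + n_household * s) / total_multiplier, 1))
# ~3.1 and ~5.5, roughly the 3-6x range given above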
Feb 24, 2023
EA - Manifund Impact Market / Mini-Grants Round On Forecasting by Scott Alexander
01:19
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Manifund Impact Market / Mini-Grants Round On Forecasting, published by Scott Alexander on February 24, 2023 on The Effective Altruism Forum. A team associated with Manifold Markets has created a prototype market for minting and trading impact certificates. To help test it out, I'm sponsoring a $20,000 grants round, restricted to forecasting-related projects only (to keep it small - sorry, everyone else). You can read the details at the Astral Codex Ten post. If you have a forecasting-related project idea for less than that amount of money, consider reading the post and creating a Manifund account and minting an impact certificate for it. If you're an accredited investor, you can buy and sell impact certificates. Read the post, create a Manifund account, send them enough financial information to confirm your accreditation, and start buying and selling. If you have a non-forecasting related project, you can try using the platform, but you won't be eligible for this grants round and you'll have to find your own oracular funding. We wouldn't recommend this unless you know exactly what you're doing. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 24, 2023
EA - Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance by Aashish Khimasia
05:56
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Summary of “Animal Rights Activism Trends to Look Out for in 2023” by Animal Agriculture Alliance, published by Aashish Khimasia on February 23, 2023 on The Effective Altruism Forum. A blog-post by a member of the Animal Agriculture Alliance (AAA) has identified several trends in animal rights activism that they project for 2023. These trends are likely to be causes for concern for the animal agriculture industry, and the piece was written to make AAA supporters aware of them. Recognising these trends and identifying the views held on these animal advocacy tactics by proponents of animal agriculture may provide advocates with valuable insights. In this post, I list the key trends identified by the article and bullet point tactics highlighted by the article which are of particular interest. I’m thankful to “The Cranky Vegan” for bringing this article to my attention through their linked video. Linking CAFOs to negative human and environmental health Drawing attention to the detrimental effects of CAFOs (concentrated animal feeding operations) on human and environmental health Using historical precedents of CAFOs being charged in court such as in North Carolina and Seattle in messaging Exploring cases where ethnic minorities have experienced disproportionate negative health impacts of CAFOs This strategy may create opposition to CAFOs from individuals and organisations that may not be compelled by animal-focused arguments, and could be further integrated into outreach and media messaging. Referring to historical precedents of CAFOs being charged with breaching environmental regulations may help to legitimise messaging against them. The use of undercover footage in court and media Using undercover footage from factory farms to motivate arguments in court that such operations engage in unfair competition, false advertising, market distortion and fraud Using undercover footage from factory farms to pressure retailers to cut ties with such farms Using undercover footage from animal rescue missions from factory farms as evidence against charges of trespassing and theft The continued and increased use of undercover footage from factory farms is clearly concerning for animal agriculture, given the extensive efforts to block this such as through so-called Ag-gag laws. However, the suppression of undercover footage from factory farms may lead to increased media attention on these items and public scrutiny on the conditions of factory farms. Indeed, in a recent case, Direct Action Everywhere activists who were being prosecuted after liberating piglets from a Smithfield Foods farm and releasing footage from their mission, were acquitted by the jury, despite the judge blocking the jury from viewing the footage taken. The aforementioned ways in which undercover footage may be used to aid the acquittal of activists, challenge farms in court and pressure retailers to cut ties with farms highlight the potency of combining undercover footage with legal action.
Prioritising Youth Engagement Engaging young people in programmes that rival agricultural programmes like FFA and 4-H Fostering social disapproval of animal product consumption and normalising plant-based foods in classrooms, presenting the suffering caused by factory farming in an emotive way Educating young people and creating a shift in culture towards empathy, through recognising the suffering caused by animal agriculture and normalising plant-based foods, may challenge the image that animal agriculture is trying to maintain. This may be an important factor in changing consumption habits of future generations. Deconstructing legal personhood The use of the writ of habeas corpus, a right that protects against unlawful and indefinite imprisonment, as a way to challenge the legal personhood of animals by the Nonhuman Rights ...
Feb 24, 2023
EA - Consent Isn't Always Enough by Jeff Kaufman
00:25
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consent Isn't Always Enough, published by Jeff Kaufman on February 24, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 24, 2023
EA - EA Israel: 2022 Progress and 2023 Plans by ezrah
41:52
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Israel: 2022 Progress and 2023 Plans, published by ezrah on February 23, 2023 on The Effective Altruism Forum. This document recaps EA Israel’s and the Israeli effective altruism community’s progress in 2022, and lays out EA Israel’s plans for 2023 (we know that 2023 started a couple months ago, but figured better late than never). We wrote the post in order to increase transparency about EA Israel’s activities, share our thoughts with the global community, and as an opportunity to reflect, strategize and celebrate. Summary Updates to our existing strategy We’re placing an increased emphasis on supporting, incubating and launching new projects and organizations We’re investing in our operations, in order to be able to scale our programs, support community members’ initiatives and mature into a professional workplace to support staff development and retention We’re presenting our work and value proposition clearly and in a way that’s easily understood by the team, community, and general public 2022 Progress Achievements by Israeli EA community We asked community members to briefly share their personal progress this year. EA Israel’s Progress EA Israel’s work can be divided into four verticals: 1. Teaching tools about effective social action and introducing Israelis to effective altruism Through an accredited university course, university groups, year-long fellowships, short intro fellowships (“crash courses”) for young professionals, newsletter and social media and large public events, along with onboarding new community members. 2. Helping community members take action and maximize their social impact Incubating sub-groups (based on cause area / profession) Impact acceleration programs and services Support for community members and projects 3. Increasing the effectiveness of donations in Israel Preparing for the launch of Effective Giving Israel Launching the Maximum Impact Program, a program that works with nonprofits to create and publish cost-effectiveness reports at scale (22 reports in the pilot) with the goal of making Israeli philanthropy effectiveness-oriented and evidence-based Counterfactually raise 500k ILS for high-impact nonprofits 4. Infrastructure to enable continued growth We’re setting ourselves up to be a well-run, high-capability organization We’re supporting a thriving and healthy community We also discuss some of the major challenges of 2022: FTX’s crash Staff turnover and the difficulties of transitioning from a volunteer-based group to a funded nonprofit 2023 Annual Plan (requisite Miro board included) Effective Altruism Israel’s vision is one where all Israelis who are interested in maximizing their social impact have access to the people and the resources they need to help others, using their careers, projects, and donations. 
In 2023 EA Israel will continue to focus on its 4 core areas, Teaching tools about effective social action and growing the EA Israel community Core objectives: scale and optimize outreach programs Supporting impactful action Core objectives: incubate new sub-groups; launch new impact-assistance programs with potential to scale; provide operational support for projects, orgs and individuals Effective donations Core objectives: Launch Effective Giving Israel; improve, scale and run second round of local nonprofit evaluation program Organizational and community infrastructure Core objectives: support growth of outwards-facing programs; implement M&E systems; streamline internal processes and operations; improve male / female community ratio and support a thriving and healthy community Here’s a visual map of our current and planned projects and services, where projects in italics are planned projects, and if you scroll down you’ll see our services mapped out relative to our target audiences. Note that the impact / cost scal...
Feb 24, 2023
EA - New database for economics research by Oshan
02:17
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New database for economics research, published by Oshan on February 23, 2023 on The Effective Altruism Forum. Hi! We're thrilled to share the Library of Economic Possibility (LEP), a new kind of knowledge-base for discovering, organizing, and sharing economic research around high-impact policies that have remained outside the mainstream despite significant research. Options for discovering economics research today — especially for non-specialists — are clunky. Activists tend to cherry-pick, journalists don't have the room to present a wide range of evidence in a single article, academics share their work through papers and conferences that general audiences generally don't engage with, think tanks wind up burying research beneath article archives. Search functions on major databases aren't the greatest. This is consequential in a moment where interest is rising around new economic ideas — LEP hopes to ground this rising interest in the wealth of existing evidence, and build a bridge between general audiences and economics research. We're also trying out a new way of organizing and connecting information. We're passionate about debating what the next economic system might look like, but we're also nerds about information architecture, and LEP's search features reflect that. Bidirectional links create associative trails between the network of information, and advanced search filters let you mix & match policies with specific areas of interest to hone in on precise relationships between information. So if you're curious to learn more about how basic income might affect entrepreneurship, you can select the policy "basic income" and the tag "entrepreneurship," and scroll through all our insights and sources that relate to both of those filters. Or, you could select "land value tax" and "urban development," or "codetermination" and "innovation." You get the idea. You can find those filters on the left column. Our policy reports also use a nifty little feature we call "insight cards." Any statistic or claim we use in a policy report is interactive, letting you pop open the card to see the source it comes from, further context, the authors behind it, etc. We have more information in our launch announcement and Twitter thread. Happy to hear any feedback or answer questions. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 24, 2023
EA - Who is Uncomfortable Critiquing Who, Around EA? by Ozzie Gooen
17:20
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Who is Uncomfortable Critiquing Who, Around EA?, published by Ozzie Gooen on February 24, 2023 on The Effective Altruism Forum. Summary and Disclaimers I think EA is made up of a bunch of different parties, many of whom find it at least somewhat uncomfortable to criticize or honestly evaluate each other for a wide variety of reasons. I think that this is a very standard challenge that most organizations and movements have. As I would also recommend to other parties, I think that investigation and improvement here could be very valuable to EA. This post continues on many of the ideas in Select Challenges with Criticism & Evaluation Around EA. One early reviewer critiqued this post saying that they didn't believe that discomfort was a problem. If you don't think it is, I don't aim in this post to convince you. My goal here is to do early exploration of what the problem even seems to look like, not to argue that the problem is severe or not. Like with that previous post, I rely here mostly on anecdotal experiences, introspection, and recommendations from various management books. This comes from me working on QURI (trying to pursue better longtermist evaluation), being an employee and manager in multiple (EA and not EA) organizations, and hearing a whole lot of various rants and frustrations from EAs. I’d love to see further work to better understand where bottlenecks to valuable communication are most restrictive and then design and test solutions. Writing this has helped me find some insights on this problem. However, it is a messy problem, and as I explained before, I find the terminology lacking. Apologies in advance. Introduction There’s a massive difference between a group saying that it’s open to criticism, and a group that people actually feel comfortable criticizing. I think that many EA individuals and organizations advocate and promote feedback in ways unusual for the industry. However, I think there’s also a lot of work to do. In most communities, it takes a lot of iteration and trust-building to find ways for people to routinely and usefully give candid information or feedback to each other. In companies, for example, employees often don’t have much to personally gain by voicing their critiques to management, and a lot to potentially lose. Even if the management seems really nice, is an honest critique really worth the ~3% chance of resentment? Often you won’t ever know — management could just keep their dislike of you to themselves, and later take action accordingly. On the other side, it’s often uncomfortable for managers to convey candid feedback to their reports privately, let alone discuss department or employee failures to people throughout the organization. My impression is that many online social settings contain a bunch of social groups that are really afraid of being honest with each other, and this leads to problems immediate (important information not getting shared) and expansive (groups developing extended distrust and sometimes hatred with each other). Problems of communication and comfort happen within power hierarchies, and they also happen between peer communities. Really, they happen everywhere. To a first approximation, "Everyone is at least a little afraid of everyone else." I think a lot of people's natural reaction to issues like this is to point fingers at groups they don't like and blame them.
But really, I personally think that all of us are broadly responsible (at least a little), and also are broadly able to understand and improve things. I see these issues as systemic, not personal. Criticism Between Different Groups Around effective altruism, I’ve noticed: Evaluation in Global Welfare Global poverty charity evaluation and criticism seem like fair game. When GiveWell started, they weren’t friends with the leaders of the organiza...
Feb 24, 2023
EA - Introducing EASE, a managed directory of EA Organization Service Providers by Deena Englander
03:11
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing EASE, a managed directory of EA Organization Service Providers, published by Deena Englander on February 23, 2023 on The Effective Altruism Forum. What is EASE? EASE (EA Services) is a directory of independent agencies and freelancers offering expertise to EA-aligned organizations. Please visit our website to view our members. Who are we? We are a team of service providers. The authors of this post are the core coordinators. We all have our own organizations providing services to EA-aligned organizations. We see the problems most organizations encounter, and we developed a solution to help address that need. Why did we start EASE? Many organizations in the EA world have similar needs but lack the bandwidth or expertise to realize them. By providing a directory of experts covering many common challenges, we aim to save the community time whilst addressing key skill shortages. We believe that most organizations need external expertise in order to maximize their organization’s potential. We are all focused on being effective – and we believe that forming this centralized directory is the most effective way of making a large resource group more available to EA-aligned organizations. Why should EA organizations consider working with these agencies? By working with multiple EA organizations, these agencies have gathered plenty of expertise to provide relevant advice, save time and money, and most importantly, increase your impact. Our screening process ensures that the vendors listed are pre-qualified as experts in their represented fields. This minimizes the risk of engaging with a new “unknown” entity, as they’re already proven to be valuable team players. Additionally, we have programming in place to consolidate the interagency interactions and strengthen relationships, so that when you work with one member of our group, you’re accessing a part of a larger network. Our members are vetted to determine capabilities, accuracy, and work history, but we do not give out any endorsement for specific providers. What are the criteria for being added to the directory? Our aim is to build a comprehensive list of service providers who work with EA organizations. We screen members to ensure that the providers are experienced and are truly experts in their field, as well as being active participants in EA or having experience working with EA-aligned organizations. Are you an individual or team providing services to EA-aligned organizations and would like to be added? We love growing our network! Fill out this form and someone will contact you to begin the screening process. Are you ready to get the help you need? Feel free to contact the service providers directly. Are you an EA organization in need of help but aren’t sure what you need or if you have the budget? We can help you figure out what kind of services and budget you need so you can try to get the funds necessary to pay for these critical services. Please send us an email to info@ea-services.org, and we will do our best to help you. Is the directory up to date? We regularly review the listings to make sure they remain relevant. If you have any comments or suggestions, please send an email to info@ea-services.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 24, 2023
EA - EA Global in 2022 and plans for 2023 by Eli Nathan
05:15
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Global in 2022 and plans for 2023, published by Eli Nathan on February 23, 2023 on The Effective Altruism Forum. Summary We ran three EA Global events in 2022, in London, San Francisco, and Washington, D.C. These conferences all had ~1300–1500 people each and were some of our biggest yet: These events had an average score of 9.02 to the question “How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own?”, rated out of 10. Those who filled out our feedback survey (which was a minority of attendees, around 1200 individuals in total across all three events) reported over 36,000 new connections made. This was the first time we ran three EA Globals in one year since 2017, and we only had ~1200 attendees total across all three of those events. We hosted and recorded lots of new content, a substantial amount of which is located on our YouTube channel. This was the first time trialing out an EA conference in D.C. of any kind. We generally received positive feedback about this event from attendees and stakeholders. Plans for 2023 We’re reducing our spending in a lot of ways, most significantly by cutting some meals and the majority of travel grants, which we expect may somewhat reduce the overall ratings of our events. Please note that this is a fairly dynamic situation and we may update our spending plans as our financial situation changes. We’re doing three EA Globals in 2023, in the Bay Area and London again, and with our US east coast event in Boston rather than D.C. As well as EA Globals, there are also several upcoming EAGx events, check out the full list of confirmed and provisional events below. EA Global: Bay Area | 24–26 February EAGxCambridge | 17–19 March EAGxNordics | 21–23 April EA Global: London | 19–21 May EAGxWarsaw | 9–11 June [provisional] EAGxNYC | July / August [provisional] EAGxBerlin | Early September [provisional] EAGxAustralia | Late September [provisional] EA Global: Boston | Oct 27–Oct 29 EAGxVirtual | November [provisional] We’re aiming to have similar-sized conferences, though with the reduction in travel grants we expect the events to perhaps be a little smaller, maybe around 1000 people per EA Global. We recently completed a hiring round and now have ~4 FTEs working on the EA Global team. We’ve recently revamped our website and incorporated it into effectivealtruism.org — see here. We’ve switched over our backend systems from Zoho to Salesforce. This will help us integrate better with the rest of CEA’s products, and will hopefully create a smoother front and backend that’s better suited to our users. (Note that the switchover itself has been somewhat buggy, but we are clearing these up and hope to have minimal issues moving forwards.) We’re also trialing a referral system for applications, where we’ve given a select number of advisors working in EA community building the ability to admit people to the conference. If this goes well we may expand this program next year. Growth areas Food got generally negative reviews in 2022: Food is a notoriously hard area to get right and quality can vary a lot between venues, and we often have little or no choice between catering options. 
We’ve explored ways to improve the food quality, including hiring a catering consultant, but a lot of these options are cost prohibitive, and realistically we expect food quality to continue to be an issue moving forwards. Swapcard (our event application app) also got generally negative reviews in 2022: We explored and tested several competitor apps, though none of them seem better than Swapcard. We explored working with external developers to build our own event networking app, but eventually concluded that this would be too costly in terms of both time and money. We’ve been working with Swapcard to roll out new featur...
Feb 23, 2023
EA - Taking a leave of absence from Open Philanthropy to work on AI safety by Holden Karnofsky
03:07
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Taking a leave of absence from Open Philanthropy to work on AI safety, published by Holden Karnofsky on February 23, 2023 on The Effective Altruism Forum. I’m planning a leave of absence (aiming for around 3 months and potentially more) from Open Philanthropy, starting on March 8, to explore working directly on AI safety. I have a few different interventions I might explore. The first I explore will be AI safety standards: documented expectations (enforced via self-regulation at first, and potentially government regulation later) that AI labs won’t build and deploy systems that pose too much risk to the world, as evaluated by a systematic evaluation regime. (More here.) There’s significant interest from some AI labs in self-regulating via safety standards, and I want to see whether I can help with the work ARC and others are doing to hammer out standards that are both protective and practical - to the point where major AI labs are likely to sign on. During my leave, Alexander Berger will serve as sole CEO of Open Philanthropy (as he did during my parental leave in 2021). Depending on how things play out, I may end up working directly on AI safety full-time. Open Philanthropy will remain my employer for at least the start of my leave, but I’ll join or start another organization if I go full-time. The reasons I’m doing this: First, I’m very concerned about the possibility that transformative AI could be developed soon (possibly even within the decade - I don’t think this is >50% likely, but it seems too likely for my comfort). I want to be as helpful as possible, and I think the way to do this might be via working on AI safety directly rather than grantmaking. Second, as a general matter, I’ve always aspired to help build multiple organizations rather than running one indefinitely. I think the former is a better fit for my talents and interests. At both organizations I’ve co-founded (GiveWell and Open Philanthropy), I’ve had a goal from day one of helping to build an organization that can be great without me - and then moving on to build something else. I think this went well with GiveWell thanks to Elie Hassenfeld’s leadership. I hope Open Philanthropy can go well under Alexander’s leadership. Trying to get to that point has been a long-term project. Alexander, Cari, Dustin and I have been actively discussing the path to Open Philanthropy running without me since 2018. Our mid-2021 promotion of Alexander to co-CEO was a major step in this direction (putting him in charge of more than half of the organization’s employees and giving), and this is another step, which we’ve been discussing and preparing for for over a year (and announced internally at Open Philanthropy on January 20). I’ve become increasingly excited about various interventions to reduce AI risk, such as working on safety standards. I’m looking forward to experimenting with focusing my energy on AI safety. Footnotes This was only a year after Open Philanthropy became a separate organization, but it was several years after Open Philanthropy started as part of GiveWell under the title “GiveWell Labs.” Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 23, 2023
EA - EA is too New and Important to Schism by Wil Perkins
03:15
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA is too New & Important to Schism, published by Wil Perkins on February 23, 2023 on The Effective Altruism Forum. As many of us have seen, there has recently been a surge in discourse around people in the community with different views. Much of this underlying tension has only been brought about by large scandals that have broken in the last 6 months or so. I've seen a few people using language which, to me, seems schismatic. Discussing how there are two distinct and incompatible groups within EA, being shocked/hurt/feeling rejected by the movement, etc. I'd like to urge us to try and find reconciliation if possible. Influential Movements avoid Early Schisms If you look through history at any major religious/political/social movements, most of them avoid having early schisms, or if they do, it creates significant issues and tension. It seems optimal to let movements develop loosely over time and become more diverse, before starting to draw hard lines between what "is" a part of the in group and what isn't. For instance, early Christianity had some schisms, but nothing major until the Council of Nicea in 325 A.D. This meant that Christianity could consolidate power/followers for centuries before actively breaking up into different groups. Another parallel is the infamous Sunni-Shia split in Islam, which caused massive amounts of bloodshed and still continues to this day. This schism still echoes today, for instance with the civil war in Syria. For a more modern example, look at the New Atheism Movement which in many ways attracted similar people to EA. Relatively early on in the movement, in fact right as the movement gained popular awareness (similar to the moment right now in EA) many prominent folks in New Atheism advocated for New Atheism Plus. This was essentially an attempt to schism the movement along cultural / social justice lines, which quickly eroded the cohesion of the movement and ultimately contributed to its massive decline in relevance. Effective Altruism as a movement is relatively brand new - we can't afford major schisms or we may not continue as a relevant cultural force in 10-20 years. Getting Movement Building Right Matters Something which I think is sometimes lost in community building discussions is that the stakes we're playing for are extremely high. My motivation to join EA was primarily because I saw major problems in the world, and people that were extremely dedicated to solving them. We are playing for the future, for the survival of the human race. We can't afford to let relatively petty squabbles divide us too much! Especially with advances in AGI, I know many people in the movement are more worried than ever that we will experience significant shifts via technology over the coming decades. Some have pointed out the possibility of Value Lock-in, or that as we rapidly increase our power our values may become stagnant, especially if for instance an AGI is controlled by a group with strong, anti-pluralistic values. Overall I hope to advocate for the idea of reconciliation within EA. We should work to disentangle our feelings from the future of the movement, and try to discuss how to have the most impact as we grow. My vote is that having a major schism is one of the worst things we could do for our impact - and is a common failure mode we should strive to avoid. Thanks for listening. 
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 23, 2023
EA - How can we improve discussions on the Forum? by Lizka
00:42
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How can we improve discussions on the Forum?, published by Lizka on February 23, 2023 on The Effective Altruism Forum. I’d like to run a very rough survey to get a better sense of: How you feel about the Frontpage change we’re currently testing What changes to the site — how it's set up and organized — you think could help us have discussions better What conversations you'd like to see on the Forum And more Give us your input Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 23, 2023
EA - Faunalytics Analysis on Reasons for Abandoning Vegn Diets by JLRiedi
11:01
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Faunalytics Analysis on Reasons for Abandoning Vegn Diets, published by JLRiedi on February 22, 2023 on The Effective Altruism Forum. Nonprofit research organization Faunalytics has released a new analysis on reasons people abandon vegan or vegetarian (vegn) diets, looking at the obstacles former vegns faced and what they would need to resume being vegn. Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers. Read the full report here: Background People have a variety of motivations for switching to plant-based diets, yet not all people who begin the transition to a vegan or vegetarian (collectively called vegn) diet maintain it long-term. In fact, Faunalytics’ study of current and former vegns (2014) found that the number of lapsed (former) vegans and vegetarians in the United States far surpasses the number of current vegns, and most who lapse do so within a year. Are these people the low-hanging fruit for diet advocates? They could be—there are many of them and they’re clearly at least somewhat willing to go vegn, so maybe more attention should be paid to the lapsers. That’s one possibility. The other, more pessimistic possibility, is that when we as advocates think our diet campaigns are successful, these are the people we think we’re convincing. That is, we see the part where they go vegn, but not the part where they later lapse back. This interpretation is one that a lot of people made when our study of current and former vegns released, but we don’t have strong evidence either way. This analysis, in which we looked at the obstacles faced by people who once pursued a vegn diet and what they would need to resume being vegn, aims to shed a bit more light on these questions. Although causes for lapsing have been analyzed to an extent, a deeper analysis that considers people’s reasons in their own words is necessary to not only understand why people give up their vegn goals, but to find the best ways to help people stick with their commitment to vegnism and even lure back some of the lapsers. Research Team This project’s lead author was Constanza Arévalo (Faunalytics). Dr. Jo Anderson (Faunalytics) reviewed and oversaw the work. Conclusion Diets Are More Than Food Food plays an important role in our lives. More than just nutrition, food is a very personal yet social experience, a cultural identity, and at times, a religious or spiritual practice or symbol. Naturally, a good-tasting diet is important—especially when the idea is to maintain it long-term. However, lapsed vegns’ answers suggested that food dissatisfaction, although a very common struggle, was not the most crucial obstacle to overcome to return to vegnism. Instead, having access to vegn options, as well as the time and ability to prepare vegn meals (often alongside non-vegn meals for family), were much more common must-haves. Additionally, people’s feelings of healthiness while on their diet, seemed to hold a lot of weight. Many lapsed vegns who had faced issues managing their health named this as their main reason for lapsing. 
Similarly, Faunalytics (2022) found that people who felt unhealthy when first trying out a vegn diet were significantly more likely to lapse within the first six months than people who felt healthier. This was the case even if their initial motivation for going vegn wasn’t health-related. Seeking professional medical advice while pursuing a vegn diet (ideally from a doctor who understands and has experience with vegn diets) is the best way to manage any major concerns and get information about the vitamins and nutriti...
Feb 23, 2023
EA - Announcing the Launch of the Insect Institute by Dustin Crummett
01:24
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing the Launch of the Insect Institute, published by Dustin Crummett on February 22, 2023 on The Effective Altruism Forum. The Insect Institute, a fiscally-sponsored project of Rethink Priorities, is excited to announce its official launch. I (Dustin Crummett) am the executive director of this new initiative. The Insect Institute was created to focus on the rapidly growing use of insects as food and feed. Our aim is to work with policymakers, industry, and other relevant stakeholders to address key uncertainties involving animal welfare, public health, and environmental sustainability. As this industry evolves over time, we may also expand our work to other areas. While we don’t currently have any open positions, we do expect to grow our team in the future. If you are interested in working with us, please feel free to submit an expression of interest through our contact form, or sign up for our email list (at the bottom of our home page) to stay up to date on future developments and opportunities. I look forward to seeing many of you at EAG Bay Area—please come say hello if you’d like to chat! I’m also very happy to field questions via DM or via email to dustin@insectinstitute.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 22, 2023
EA - Cyborg Periods: There will be multiple AI transitions by Jan Kulveit
00:28
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cyborg Periods: There will be multiple AI transitions, published by Jan Kulveit on February 22, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 22, 2023
EA - Consider not sleeping around within the community by Patrick Sue Domin
08:17
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Consider not sleeping around within the community, published by Patrick Sue Domin on February 22, 2023 on The Effective Altruism Forum. Posted under pseudonym for reasons I’d rather not get into. If it’s relevant, I’m pretty involved in EA. I’ve been to several EAGs and I do direct work. tldr I think many more people in the community should consider refraining from sleeping around within the community. I especially think people should consider refraining from sleeping around within EA if they have two or more of the following traits- high status in EA, a man who sleeps with women, and socially clumsy. I think the community would be a more welcoming place, with less sexual misconduct and less other sexually unwelcome behaviour, if more EAs chose to personally refrain from sleeping around within EA or attempting to do so. Most functional institutions outside of EA, from companies to friend groups to extended families, have developed norms against sleeping around within the group. We obviously don’t want to simply unquestionably accept all of society’s norms, but I think in this case those norms address real problems. I worry that as a group, EAs run a risk of discarding valuable cultural practices that don’t immediately make sense in a first principles way, and that this tendency can have particularly high costs where sex is involved (Owen more or less admitted this was a factor in his behaviour in his statement/apology: “I was leaning into my own view-at-the-time about what good conduct looked like, and interested in experimenting to find ways to build a better culture than society-at-large has”). Regarding sleeping around within a tight-knit community, I think this behaviour has risks whether the pursuer is successful or not. Failed attempts at sleeping with someone can very often lead to awkwardness or uncomfortability. In EA, where employment and funding may be front of mind, this uncomfortability may be increased a lot, and there may be no way for the person who was pursued to realistically avoid the pursuer in the future if they want to without major career repercussions. Successful attempts at sleeping around can obviously also cause all sorts of drama, either shortly after or down the road. Personal factors that may increase risks I think within EA, the risks of harm are increased greatly if the pursuer has any of the following three traits: High status within EA- this can create bad power dynamics and awkward social pressure. First, people generally don’t like pissing off high status people within their social circles as there may be social repercussions to doing so. Second, high status people within EA often control funding and employment decisions. Even if the pursuer isn’t in such a position now, they might wind up in one in the future. Third, high status EAs often talk to other high status EAs, so an unjustified bad reputation can spread to other figures in the movement who control funding or employment. Fourth, many EAs consider the community to be their one best shot at living the kind of ethical life they want, raising the stakes a bunch. Fifth, the moralising aspect of EA may make some people find it more uncomfortable to rebuff a high status EA. 
A man pursuing a woman (such as a heterosexual man or a bi-/pansexual man pursuing a woman): this factor can sometimes be an elephant that people dance around in discussions, but I’ll just address it head on. On average men are more assertive, aggressive, and physically intimidating than women. On average women are more perceptive about subtle social cues and find it more awkward when those subtle social cues are ignored. My sense is these factors are pretty robust across cultures, but I don’t think it matters for this discussion what the causes of these average differences are. Add to all that, the EA communit...
Feb 22, 2023
EA - A list of EA-relevant business books I've read by Drew Spartz
10:36
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A list of EA-relevant business books I've read, published by Drew Spartz on February 21, 2023 on The Effective Altruism Forum. Some have suggested EA is too insular and needs to learn from other fields. In this vein, I think there are important mental models from the for-profit world that are underutilized by non-profits. After all, business can be thought of as the study of how to accomplish goals as an organization - how to get things done in the real world. EA needs the right mix of theory and real world execution. If you replace the word “profit” with “impact”, you’ll find a large percentage of lessons can be cross-applied. Eight months ago, I challenged myself to read a book a day for a year. I've been posting daily summaries on social media and had enough EAs reach out to me for book recs that, inspired by Michael Aird and Anna Riedl, I thought it might be worth sharing my all-time favorites here. Below are the best ~50 out of the ~500 books I read in the past few years. I’m an entrepreneur so they’re mostly business-related. Bold = extra-recommended. If you’d like any more specific recommendations feel free to leave a comment and I can try to be helpful. Also - I’m hosting an unofficial entrepreneur meetup at EAG Bay Area. Message me on SwapCard for details or if you think it might be high impact to connect :) The best ~50 books: Fundraising: Fundraising The Power Law: Venture Capital and the Making of the New Future Leadership/Management: The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers The Advantage: Why Organizational Health Trumps Everything Else In Business The Coaching Habit: Say Less, Ask More & Change the Way You Lead Forever Entrepreneurship/Startups: Running Lean The Founder's Dilemmas: Anticipating and Avoiding the Pitfalls That Can Sink a Startup Zero to One: Notes on Startups, or How to Build the Future The Startup Owner's Manual: The Step-By-Step Guide for Building a Great Company Strategy/Innovation: The Mom Test: How to talk to customers & learn if your business is a good idea when everyone is lying to you Scaling Up: How a Few Companies Make It...and Why the Rest Don't Operations/Get Shit Done: The Goal: A Process of Ongoing Improvement The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win Making Work Visible: Exposing Time Theft to Optimize Work & Flow Statistics/Forecasting: How to Measure Anything: Finding the Value of Intangibles in Business Superforecasting: The Art and Science of Prediction Antifragile: Things That Gain from Disorder Writing/Storytelling: Wired for Story: The Writer's Guide to Using Brain Science to Hook Readers from the Very First Sentence The Story Grid: What Good Editors Know Product/Design/User Experience: The Cold Start Problem: How to Start and Scale Network Effects The Lean Product Playbook: How to Innovate with Minimum Viable Products and Rapid Customer Feedback Psychology/Influence: SPIN Selling (unfortunate acronym) The Elephant in the Brain: Hidden Motives in Everyday Life Influence: The Psychology of Persuasion Outreach/Marketing/Advocacy: 80/20 Sales and Marketing: The Definitive Guide to Working Less and Making More Traction: How Any Startup Can Achieve Explosive Customer Growth How to learn things faster: Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your Career Make It Stick: The Science of
Successful Learning The Little Book of Talent: 52 Tips for Improving Your Skills Personal Development: The Confident Mind: A Battle-Tested Guide to Unshakable Performance The Almanack of Naval Ravikant: A Guide to Wealth and Happiness Atomic Habits Recruiting/Hiring: Recruiting Who: The A Method for Hiring Negotiating: Negotiation Genius Never Split the Difference: Negotiating As If Your Life Depended On It Secrets of Power Negotiating: I...
Feb 22, 2023
EA - A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation) by Joe Carlsmith
01:56
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Stranger Priority? Topics at the Outer Reaches of Effective Altruism (my dissertation), published by Joe Carlsmith on February 21, 2023 on The Effective Altruism Forum. (Cross-posted from my website.) After many years of focusing on other stuff, I recently completed my doctorate in philosophy from the University of Oxford. My dissertation ("A Stranger Priority? Topics at the Outer Reaches of Effective Altruism") was three of my essays -- on anthropic reasoning, simulation arguments, and infinite ethics -- revised, stapled together, and unified under the theme of the "crazy train" as a possible objection to longtermism. The full text is here. I've also broken the main chapters up into individual PDFs: Chapter 1: SIA vs. SSA Chapter 2: Simulation arguments Chapter 3: Infinite ethics and the utilitarian dream Chapter 1 and Chapter 3 are pretty similar to the original essays (here and here). Chapter 2, however, has been re-thought and almost entirely re-written -- and I think it's now substantially clearer about the issues at stake. Since submitting the thesis in fall of 2022, I've thought more about various "crazy train" issues, and my current view is that there's quite a bit more to say in defense of longtermism than the thesis has explored. In particular, I want to highlight a distinction I discuss in the conclusion of the thesis, between what I call "welfare longtermism," which focuses on our impact on the welfare of future people, and what I call "wisdom longtermism," which focuses on reaching a wise and empowered future more broadly. The case for the latter seems to me more robust to various "crazy train" considerations than the case for the former. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 22, 2023
EA - What is it like doing AI safety work? by Kat Woods
15:08
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is it like doing AI safety work?, published by Kat Woods on February 21, 2023 on The Effective Altruism Forum. How do you know if you’ll like AI safety work? What’s the day-to-day work like? What are the best parts of the job? What are the worst? To better answer these questions, we talked to ten AI safety researchers in a variety of organizations, roles, and subfields. If you’re interested in getting into AI safety research, we hope this helps you be better informed about what pursuing a career in the field might entail. The first section is about what people do day-to-day and the second section describes each person’s favorite and least favorite aspects of the job. Of note, the people we talked with are not a random sample of AI safety researchers, and it is also important to consider the effects of survivorship bias. However, we still think it's useful and informative to hear about their day-to-day lives and what they love and hate about their jobs. Also, these interviews were done about a year ago, so may no longer represent what the researchers are currently doing. Reminder that you can listen to LessWrong and EA Forum posts like this on your podcast player using the Nonlinear Library. This post is part of a project I’ve been working on at Nonlinear. You can see the first part of the project here where I explain the different ways people got into the field. What do people do all day? John Wentworth John describes a few different categories of days. He sometimes spends a day writing a post; this usually takes about a day if all the ideas are developed already. He might spend a day responding to comments on posts or talking to people about ideas. This can be a bit of a chore but is also necessary and useful. He might spend his day doing theoretical work. For example, if he’s stuck on a particular problem, he can spend a day working with a notebook or on a whiteboard. This means going over ideas, trying out formulas and setups, and trying to make progress on a particular problem. Over the past month he’s started working with David Lorell. David’s a more active version of the programmer's "rubber duck". As John’s thinking through the math on a whiteboard, he’ll explain to David what's going on. David will ask for clarifications, examples, how things tie into the bigger picture, why did/didn't X work, etc. John estimates that this has increased his productivity at theoretical work by a factor somewhere between 2 and 5. Ondrej Bajgar Ondrej starts the day by cycling to the office. He has breakfast there and tries to spend as much time as possible at a whiteboard away from his computer. He tries to get into a deep-thinking mindset, where there aren’t all the answers easily available. Ideally, mornings are completely free of meetings and reserved for this deep-thinking work. Deep thinking involves a lot of zooming in and out, working on sub-goals while periodically zooming out to check on the higher-level goal every half hour. He switches between trying to make progress and reflecting on how this is actually going. This is to avoid getting derailed on something unproductive but cognitively demanding. Once an idea is mostly formed, he’ll try to implement things in code. Sometimes seeing things in action can make you see new things you wouldn’t get from just the theory. 
But he also says that it’s important to not get caught in the trap of writing code, which can feel fun and feel productive even when it isn’t that useful. Scott Emmons Scott talked about a few different categories of day-to-day work: research, which involves brainstorming, programming, writing & communicating, and collaborating with people; reading papers to stay up-to-date with the literature; administrative work; and service, such as giving advice to undergrads, talking about AI safety, and reviewing other...
Feb 22, 2023
EA - Bad Actors are not the Main Issue in EA Governance by Grayden
05:48
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Bad Actors are not the Main Issue in EA Governance, published by Grayden on February 21, 2023 on The Effective Altruism Forum. Background While I have a technical background, my career has been spent working with corporate boards and management teams. I have seen first-hand how critical leadership is to the success of organizations. Organizations filled with competent people can fail miserably if individuals do not have the right interpersonal skills and humility. I have worried about governance within EA for a while. In October, I launched the EA Good Governance Project and wrote that "We have not yet experienced a scandal / major problem and have not yet started to think through how to avoid that happening again". Now, 4 months later, we've had our fair share and people are open to change. This post is my attempt to put some thoughts together. It has been written in a rather rushed way given recent news, so apologies if some parts are poorly worded. Introduction I have structured my thoughts in 4 sections, corresponding to the 4 key ways in which leadership can fail: 1) Bad actor 2) Well-intentioned people with low competence 3) Well-intentioned high-competence people with collective blind spots 4) Right group of people, bad practices Bad Actors Much discussion on the forum in recent months has focused on the concept of a bad actor. I think we are focusing far too much on this concept. The term comes from computer science where hackers are prevalent. However, real life is rarely this black and white. Never attribute to malice that which is adequately explained by incompetence (Hanlon's razor). The bad actor concept can be used, consciously or unconsciously, to justify recruiting board members from within your clique. Many EA boards comprise groups of friends who know each other socially. This limits the competence and diversity of the board. Typically the people you know well are exactly the worst people to provide different perspectives and hold you to account. If they are your friends, you have these perspectives and this accountability already and you can prevent bad actors through referencing, donation history and background checks. Key takeaway: Break the clique Competence There's an old adage: How do you know if someone is not very good at Excel? They will say they are an expert. With Excel, the more you know, the more aware you are of what you don't know. I think leadership is similar. When I had 3-5 years of professional experience, I thought I could lead anything. Now I know better. Some aspects of leadership come naturally to people, but many have to be learned by close interaction with role models. When you are a community without experienced figures at the top, this is hard. We should not expect people with less than 10 years of professional experience to be fully rounded leaders. Equally, it's possible to be successful without being a good leader. I think many of us in the community have historically held EA leaders on a pedestal. They were typically appointed because of their expertise in a particular field. Some of the brightest people I've ever met are within the EA community. However, we then assumed they had excellent people judgment, a sound understanding of conflicts of interest, in-depth knowledge of real estate investments and an appreciation for power dynamics in sexual relationships.
It turns out some of them don't. This shouldn't come as a big surprise. It doesn't mean they can't be a valuable contributor to the community and it certainly doesn't make them bad actors. Key takeaway: We need to elevate the importance of soft skills and learn from models of effective leadership in other communities and organizations Blind Spots The more worrying thing though is how those people also believed in their own abilities. In my career, I have met...
Feb 21, 2023
EA - You're probably a eugenicist by Sentientist
22:42
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: You're probably a eugenicist, published by Sentientist on February 21, 2023 on The Effective Altruism Forum. A couple of EAs encouraged me to crosspost this here. I had been sitting on a shorter version of this essay for a long time and decided to publish this expanded version this month partly because of the accusations of eugenics leveled against Nick Bostrom and the effective altruism community. The piece is the first article on my substack and you can listen to me narrate it at that link. You're probably a eugenicist Let me start this essay with a love story. Susan and Patrick were a young German couple in love. But, the German state never allowed Susan and Patrick to get married. Shockingly, Patrick was imprisoned for years because of his sexual relationship with Susan. Despite these obstacles, over the course of their relationship, Susan and Patrick had four children. Three of their children—Eric, Sarah, and Nancy—had severe problems: epilepsy, cognitive disabilities, and a congenital heart defect that required a transplant. The German state took away these children and placed them with foster families. [Image caption: Patrick and Susan with their daughter Sofia - credit dpa picture alliance archive] Why did Germany do all these terrible things to Susan and Patrick? Eugenics. No, this story didn’t happen in Nazi Germany, it happened over the course of the last 20 years. But why haven’t you heard this story before? Because Patrick and Susan are siblings. One of the aims of eugenics is to intervene in reproduction so as to decrease the number of people born with serious disabilities or health problems. Susan and Patrick were much more likely than the average couple to have children with genetic problems because they are brother and sister. So, the German state punished this couple by restricting them from marriage, taking away their children, and forcefully separating them with Patrick’s imprisonment. Patrick Stübing filed a case against Germany with the European Court of Human Rights, arguing that the laws forbidding opposite-sex sibling incest violated his rights to family life and sexual autonomy. The European Court of Human Rights’ majority opinion in the Stübing case clearly sets out the eugenic case for those laws: that the children of incest and their future children will suffer because of genetic problems. But the dissenting opinion argued that eugenics cannot be a valid justification for punishing incest because eugenics is associated with the Nazis, and because other people (for example, older mothers and people with genetic disorders) who have a high chance of producing children with genetic defects are not prevented from reproducing. Ultimately, the European Court of Human Rights upheld Germany's anti-incest law on eugenic grounds. If Germany had punished any other citizens this severely on eugenic grounds—for example by imprisoning a female carrier of Huntington’s disease who was trying to get pregnant— there would be a huge outcry. But incest seems to be an exception. Our instinctive aversion to incest is informed by intuitive eugenics. Not only are we reflexively disgusted by the thought of having sex with our own blood relatives, but we’re also disgusted by the thought of any blood relatives having sex with each other.
Siblings and close relatives conceive children who are more likely to end up with two copies of the same defective genes, which makes those children more likely to inherit disabilities and health problems. It’s estimated that the children of sibling incest have a greater than 40 percent chance of either dying prematurely or being born with a severe impairment. By comparison, first cousins have around a five percent chance of having children with a genetic problem—twice as likely as unrelated couples. In the UK, first cousin marriages are legal and...
Feb 21, 2023
EA - EU Food Agency Recommends Banning Cages by Ben West
01:14
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EU Food Agency Recommends Banning Cages, published by Ben West on February 21, 2023 on The Effective Altruism Forum. Some key recommendations (all direct quotes from either here or here): Birds should be housed in cage-free systems Avoid all forms of mutilations in broiler breeders Avoid the use of cages, feed and water restrictions in broiler breeders Limit the growth rate of broilers to a maximum of 50 g/day. Substantially reduce the stocking density to meet the behavioural needs of broilers. My understanding is that the European Commission requested these recommendations as a result of several things, including work by some EA-affiliated animal welfare organizations, and it is now up to them to propose legislation implementing the recommendations. This Forum post from two years ago describes some of the previous work that got us here. It's kind of cool to look back on the "major looming fight" that post forecasts and see that the fight is, if not won, at least on its way. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 21, 2023
EA - Effective Thesis is looking for a new Executive Director by Effective Thesis
11:03
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective Thesis is looking for a new Executive Director, published by Effective Thesis on February 21, 2023 on The Effective Altruism Forum. Cross-posting this job description from the Effective Thesis website. Are you a visionary leader looking for a chance to make a large impact? We’re seeking an Executive Director to lead our nonprofit organisation into a new era of greater impact, excellence and sustainability. Key details Effective Thesis seeks a visionary Executive Director to lead our nonprofit organisation into a new era of strategic and operational excellence. You will drive long-term impact and sustainability through innovative and high-impact activities. As a member of our young, diverse team, you will enjoy remote work flexibility, while honing your leadership skills and expanding your network. If you’re passionate about significantly improving the world and have the skills to transform our organisation, we want to hear from you! Hours: Preference for a full-time 40 hours/week role but we will consider capacity as low as 30 hours/week Work Location: Fully Remote (Candidates should be able to work in UTC+1 for at least 3 hours per day). Deadline: Applications close 22 March 2023 23:59 Pacific Standard Time Ideal Start Date: May 2023 (with flexibility to start later) Apply here For more details about the role, why it’s impactful and how to tell if you’re a good fit please read below. About Effective Thesis Mission Effective Thesis is a non-profit organisation. Our mission is to support university students to begin and progress in research careers that significantly improve the world. We do this primarily by helping students identify important problems where further research could have a big impact and advising them on their research topic selection (mostly in the context of a final thesis/dissertation/capstone project). Choosing a final thesis topic or a PhD topic is an important step that can influence the rest of a researcher’s career. We support students in this choice by providing introductions to various research directions and early-career research advice on our website and offering topic choice coaching. Additionally, we provide other types of support to address key bottlenecks in early stages of research careers, such as finding supervision, funding or useful opportunities. Here is our 2022 report outlining our activities, impact, and future plans. Team We are a fully remote team of 8 employees (with 5 close to full-time and 3 around 10h/week) and 6 long-term volunteers, who help run our research opportunities newsletter. We operate mostly in UTC+1 (Central European) timezone. Candidates should be able to work in UTC+1 for some share of their work hours. The team you are joining is young, agentic, diverse in skill set and is connected through shared motivation to contribute to our mission. 75% of our employed team are women/non-binary people and team members are based in 6 different countries across Europe, UK and Asia. Our culture is start-up-like. We value creative problem solving, open communication and collaboration. We have regular online socials, optional weekly coworking and in the coming year we will have multiple in-person retreats in Europe or the UK.
You can expect a thorough onboarding experience including a week of in person co-working with the full-time team in the UK and additional in person meetings with the Managing Director. About the Role As Executive Director, you will have the opportunity to lead the way for Effective Thesis to reach its full potential, learn and hone essential leadership skills and work with a wonderful team of people across the world. This remote role comes with a large degree of freedom in when and how you work. You can expect a steep learning curve, a growing network and lots of opportunities for personal an...
Feb 21, 2023
EA - Effective altruists are already institutionalists and are doing far more than unworkable longtermism - A response to "On the Differences between Ecomodernism and Effective Altruism" by jackva
21:34
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Effective altruists are already institutionalists and are doing far more than unworkable longtermism - A response to "On the Differences between Ecomodernism and Effective Altruism", published by jackva on February 21, 2023 on The Effective Altruism Forum. This is the long-form version of a post published as an invited reply to the original essay on the website of the Breakthrough Institute. Why I am writing this As someone who worked at the Breakthrough Institute back in the day, learned a lot from ecomodernism, and is now deeply involved in effective altruism’s work on climate (e.g. here, here, and here), I was very happy to find Alex’s essay in my inbox -- an honest attempt to describe how ecomodernism and effective altruism relate and differ. However, reading the essay, I found many of Alex’s observations and inferences in stark contrast to my lived experience of and in effective altruism over the past seven years. I also had the impression that there were a fair number of misunderstandings as well as a lack of awareness of many existing effective altruists’ efforts. So I am taking Alex up on his ask to provide a view on how the community sees itself. I should note, however, that this is my personal view. While I disagree strongly with many of Alex's characterizations of effective altruism, his effort was clearly in good faith -- so my response is not so much a rebuttal as a friendly attempt to clarify, add nuance, and promote an accurate mutual understanding of the similarities and differences of two social movements and their respective sets of ideas and beliefs. Where I agree with the original essay It is clear that there is a difference on how most effective altruists think about animals and how ecomodernists and other environmentalists do. This difference is well characterized in the essay. My moral intuitions here are more on the pan-species-utilitarianism side, but I am not a moral philosopher so I will not defend that view and just note that the description points to a real difference. I also agree that it is worth pointing out the differences and similarities between ecomodernism and effective altruism and, furthermore, that both have distinctive value to add to the world. With this clarified, let’s focus on the disagreements: Unworkable longtermism, if it exists at all, is only a small part of effective altruism Before diving into the critique of unworkable longtermism it is worth pointing out that “longtermism” and “effective altruism” are not synonymous and that -- either for ethical reasons or for reasons similar to those discussed by Alex (the knowledge problem) -- most work in effective altruism is actually not long-termist. Even at its arguably most longtermist, in August 2022, estimated longtermist funding for 2022 was less than ⅓ of total effective altruist funding. Thus, however one comes out on the workability of longtermism, there is a large effective altruist project remaining not affected by this critique. The primary reason Alex gives for describing longtermism as unworkable is the knowledge problem: “But I want to focus on the “knowledge problem” as the core flaw in longtermism, since the problems associated with projecting too much certainty about the future are something effective altruists and conventional environmentalists have in common.
We simply have no idea how likely it is that an asteroid will collide with the planet over the course of the next century, nor do we have any idea what civilization will exist in the year 2100 to deal with the effects of climate change, nor do we have any access to the preferences of interstellar metahumans in the year 21000. We do not need to have any idea how to make rational, robust actions and investments in the present.” This knowledge problem is, of course, well-known in effective altr...
Feb 21, 2023
EA - The EA Mental Health and Productivity Survey 2023 by Emily
02:20
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Mental Health & Productivity Survey 2023, published by Emily on February 21, 2023 on The Effective Altruism Forum. This survey is intended for members of the Effective Altruism (EA) community who aim to improve or maintain good mental health and productivity. We’d be so grateful if you could donate ~10 minutes of your time to complete the survey! You will help both identify the most pressing next steps for enhancing mental flourishing within the EA community, and provide the interventions and resources you’d prefer. These can be psychological, physiological, and lifestyle interventions. Why this survey? The mind is inherently the basis of everything we do and feel. Its health and performance are the foundation of any metric of happiness and productivity at impactful work. Good mental health is not just the absence of mental health issues. It is a core component of flourishing, enabling functioning, wellbeing, and value-aligned living. Rethink Wellbeing, the Mental Health Navigator, High Impact Psychology, and two independent EAs have teamed up to create this community-wide survey on Mental Health and Productivity. Through this survey, we aim to better understand the key issues and bottlenecks of EA performance and well-being. We also want to shed light on EAs' interest in and openness to different interventions that proactively improve health, well-being, and productivity. The results will likely serve as a basis for further projects and initiatives surrounding the improvement of well-being, mental health and productivity in the EA community. By filling out this form, you will help us with that. Based on form responses, we will compile overview statistics for the EA community that will be published on the EA Forum in 2023. Survey information Please complete the survey by March 17th. We recommend you take the survey on your computer, since the format doesn’t work well on cell phones. All responses will be kept confidential, and we will not use the data you provide for any other purposes. Thank you! We are deeply grateful to all participants! Feel free to reach out to us if you have any questions or feedback. Emily Jennings, Samuel Nellessen, Tim Farkas, and Inga Grossmann Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 21, 2023
EA - Should we tell people they are morally obligated to give to charity? [Recent Paper] by benleo
08:10
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we tell people they are morally obligated to give to charity? [Recent Paper], published by benleo on February 21, 2023 on The Effective Altruism Forum. SUMMARY In this post, we summarise a recently published paper of ours that investigates how people respond to moral arguments, and morally demanding statements, such as “You are morally obligated to give to charity”. The paper is forthcoming in the Journal of Behavioural and Experimental Economics. (If you want an ungated copy, please get in touch with either Ben or Philipp). We ran two pre-registered experiments with a total sample size of n=3700 participants. We compared a control treatment to a moral argument treatment, and we also varied the level of moral demandingness to donate after they read the moral argument. We found that the moral argument increased the frequency and amount of donations. However, increasing the levels of moral demandingness did not translate into higher or lower giving. BACKGROUND The central motivation for our paper was the worry that many have expressed, including a number of philosophers (e.g., Kagan, 1989; Unger, 1996; De Lazari-Radek and Singer, 2010) that having highly morally demanding solicitations for charitable giving may result in reduced (not increased) donations. This possibility of a backfire effect had been raised many times in a variety of contexts but had not been tested empirically. In our paper, we attempted to do just that in the context of donations to Give Directly. EXPERIMENT DESIGN In our first study (n=2500), we had five treatments (control, moral argument, inspiration, weak demandingness, and strong demandingness). In the Control condition, we showed participants unrelated information about some technicalities of UK parliamentary procedure. In the Moral Argument condition, we presented participants with a text about global poverty and the ability of those living in Western countries to help (see figure 2). For the Inspiration, Weak Demandingness, and Strong Demandingness conditions, we used the same text as in the moral argument condition, but added one sentence to each. Inspiration: For these reasons, you can do a lot of good if you give money to charities - such as GiveDirectly - to alleviate the suffering of people in developing countries at a minimal cost to yourself. Weak Demandingness: For these reasons, you should give money to charities - such as GiveDirectly - to alleviate the suffering of people in developing countries at a minimal cost to yourself. Strong Demandingness: For these reasons, you are morally obligated to give money to charities - such as GiveDirectly - to alleviate the suffering of people in developing countries at a minimal cost to yourself. In this study, we were interested in two comparisons. First, we compared the control and the moral argument conditions to look at the effect of moral arguments on charitable giving. Second, we compared the moral argument with each of the three moral demandingness conditions to investigate whether increasing levels of moral demandingness lead to an increase or reduction in charitable giving. In our second study (n=1200), we narrow down our research question by looking only at the conditions of control, moral argument, and strong demandingness. We test the same two main questions as in our first study.
The key difference is that the Moral Argument (and demandingness) was presented to participants via the Giving What We Can Website (see Figure 3). This was done to mitigate experimenter demand effects, as well as to provide a more natural vehicle for the information to be delivered. In both studies, after reading the randomly allotted text, participants could choose to donate some, none, or all of their earnings (20 ECUs, where 1 ECU=£0.05) to the charity GiveDirectly. RESULTS The main results of experiment 1 ...
Feb 21, 2023
EA - There are no coherence theorems by EJT
34:32
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There are no coherence theorems, published by EJT on February 20, 2023 on The Effective Altruism Forum. Introduction For about fifteen years, the AI safety community has been discussing coherence arguments. In papers and posts on the subject, it’s often written that there exist 'coherence theorems' which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy. Despite the prominence of these arguments, authors are often a little hazy about exactly which theorems qualify as coherence theorems. This is no accident. If the authors had tried to be precise, they would have discovered that there are no such theorems. I’m concerned about this. Coherence arguments seem to be a moderately important part of the basic case for existential risk from AI. To spot the error in these arguments, we only have to look up what cited ‘coherence theorems’ actually say. And yet the error seems to have gone uncorrected for more than a decade. More detail below. Coherence arguments Some authors frame coherence arguments in terms of ‘dominated strategies’. Others frame them in terms of ‘exploitation’, ‘money-pumping’, ‘Dutch Books’, ‘shooting oneself in the foot’, ‘Pareto-suboptimal behavior’, and ‘losing things that one values’ (see the Appendix for examples). In the context of coherence arguments, each of these terms means roughly the same thing: a strategy A is dominated by a strategy B if and only if A is worse than B in some respect that the agent cares about and A is not better than B in any respect that the agent cares about. If the agent chooses A over B, they have behaved Pareto-suboptimally, shot themselves in the foot, and lost something that they value. If the agent’s loss is someone else’s gain, then the agent has been exploited, money-pumped, or Dutch-booked. Since all these phrases point to the same sort of phenomenon, I’ll save words by talking mainly in terms of ‘dominated strategies’. With that background, here’s a quick rendition of coherence arguments: There exist coherence theorems which state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue strategies that are dominated by some other available strategy. Sufficiently-advanced artificial agents will not pursue dominated strategies. So, sufficiently-advanced artificial agents will be ‘coherent’: they will be representable as maximizing expected utility. Typically, authors go on to suggest that these expected-utility-maximizing agents are likely to behave in certain, potentially-dangerous ways. For example, such agents are likely to appear ‘goal-directed’ in some intuitive sense. They are likely to have certain instrumental goals, like acquiring power and resources. And they are likely to fight back against attempts to shut them down or modify their goals. There are many ways to challenge the argument stated above, and many of those challenges have been made. There are also many ways to respond to those challenges, and many of those responses have been made too. The challenge that seems to remain yet unmade is that Premise 1 is false: there are no coherence theorems. Cited ‘coherence theorems’ and what they actually say Here’s a list of theorems that have been called ‘coherence theorems’. 
None of these theorems state that, unless an agent can be represented as maximizing expected utility, that agent is liable to pursue dominated strategies. Here’s what the theorems say: The Von Neumann-Morgenstern Expected Utility Theorem: The Von Neumann-Morgenstern Expected Utility Theorem is as follows: An agent can be represented as maximizing expected utility if and only if their preferences satisfy the following four axioms: Completeness: For all lotteries X and Y, X...
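The excerpt above is cut off mid-statement, so for reference here is a standard textbook rendering of the von Neumann-Morgenstern theorem and its four axioms (my wording as editor, not necessarily the exact formulation the post goes on to give): a preference relation $\succeq$ over lotteries admits an expected-utility representation, i.e. there is a utility function $u$ with
\[
X \succeq Y \iff \mathbb{E}_{X}[u] \ge \mathbb{E}_{Y}[u],
\]
if and only if $\succeq$ satisfies Completeness (for all lotteries $X, Y$: $X \succeq Y$ or $Y \succeq X$), Transitivity ($X \succeq Y$ and $Y \succeq Z$ imply $X \succeq Z$), Continuity (if $X \succeq Y \succeq Z$, then $pX + (1-p)Z \sim Y$ for some $p \in [0,1]$), and Independence ($X \succeq Y$ implies $pX + (1-p)Z \succeq pY + (1-p)Z$ for every lottery $Z$ and every $p \in (0,1]$).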
Feb 20, 2023
EA - Sanity check - effectiveness/goodness of Trans Rescue? by David D
02:00
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Sanity check - effectiveness/goodness of Trans Rescue?, published by David D on February 20, 2023 on The Effective Altruism Forum. I stumbled across the charity Trans Rescue, which helps transgender people living in unsafe parts of the world move. They've published advice for people living in first world countries with worsening legal situations for trans people, but the vast majority of their funding goes toward helping people in Africa and the Middle East immigrate to safer countries (or for Kenyans, move to Trans Rescue's group home in the safest region of Kenya) and stay away from abusive families. As of September 2022, their total funding since inception was just under 33k euros. They helped about twenty people move using this funding. That puts the cost to help a person move at about 1,650 euros, which is in the same ballpark as a GiveWell top charity's cost to save one person from fatal malaria. I haven't looked closely at the likely outcome for people who would benefit from Trans Rescue's services but don't get help. Some would live and some would not, but I don't have a good sense of the relative numbers, or how to put QALYs on undertaking a move such as this. Since they're very new and very small, I'm considering donating and keeping an eye on how they grow as an organization. Mainly I hoped you all could help me by pointing out whether there's anything fishy that I might have missed. This review was published by a group of Twitter users, apparently after an argument with one of the board members. It's certainly not unbiased, but they do seem to have made a concerted effort to find anything bad or construable as bad that Trans Rescue has ever done. Trans Rescue wrote a blog post in response. I came away with a sense that the board is new at running an organization like this, and they rely on imperfect volunteer labor to be able to move as many people as they do, but their work is overall helpful to their clients. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
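As a quick arithmetic check of the cost figure quoted above, using the post's own rounded numbers:
\[
\frac{33{,}000\ \text{EUR total funding}}{20\ \text{people helped}} \approx 1{,}650\ \text{EUR per person moved.}
\]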
Feb 20, 2023
EA - Join a new slack for animal advocacy by SofiaBalderson
01:37
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Join a new slack for animal advocacy, published by SofiaBalderson on February 20, 2023 on The Effective Altruism Forum. We'd like to extend an invitation to you all for a new Slack space that brings together individuals who are passionate about making a meaningful impact for animals. Our aim is to: support discussions in the animal advocacy space foster new connections learn from each other generate innovative ideas perhaps even launch new projects. It would be great for you to join us today and introduce yourself to the community! We already have some great discussions going. FAQs: "But there are already lots of Slack spaces?" - before we started this space, there wasn't a space that was just for animal welfare. "Who should join?" Anyone already working in animal advocacy or adjacent organisation Anyone involved in alt protein Anyone, regardless of experience or place of work, interested in helping animals in an impactful way By animals, we mean farmed animals and wild animals "How much do I have to participate, could I join and lurk for a bit?"- Sure, you don't need to be active all the time, all we ask is that you introduce yourself and invite anyone you know in the space who may be interested. If you'd like, you can message others or participate in discussions, but no pressure. If in doubt, please join! Best wishes, Sofia and Cameron Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 20, 2023
EA - On Loyalty by Nathan Young
23:07
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Loyalty, published by Nathan Young on February 20, 2023 on The Effective Altruism Forum. Epistemic status: I am confident about most individual points but there are probably errors overall. I imagine if there are lots of comments to this piece I'll have changed my mind by the time I've finished reading them. I was having a conversation with a friend, who said that EAs weren't loyal. They said that the community's recent behaviour would make them fear that they would be attacked for small errors. Hearing this I felt sad, and wanted to understand. Tl;dr I feel a desire to be loyal. This is an exploration of that. If you don't feel that desire this may not be useful I think loyalty is a compact between current and future members on what it is to be within a community - "do this and you will be safe" Status matters and loyalty is a "status insurance policy" - even if everyone else doesn't like you, we will I find more interest in where we have been disloyal than where we ought to be loyal. Were we disloyal to Bostrom? Is loyalty even good? Were people disloyal in sharing the Leadership Slack? Going forward I would like that I have and give a clear sense of how I coordinate with others I take crises slowly and relaxedly and not panic I am able to trust that people will keep private conversations which discuss no serious wrongdoing secret and have them trust me that I will do the same I feel that it is acceptable to talk to journalists if helping a directionally accurate story, but that almost all of the time I should consider this a risky thing to do This account is deliberately incoherent - it contains many views and feelings that I can't turn into a single argument. Feel free to disagree or suggest some kind of synthesis. Intro Testing situations are when I find out who I am. But importantly, I can change. And living right matters to me perhaps more than anything else (though I am hugely flawed). So should I change here? I go through these 1 by 1 because I think this is actually hard and I don't know the answers. Feel free to nitpick. What is Loyalty (in this article)? I think loyalty is the sense that we do right by those who have followed the rules. It's a coordination tool. "You stuck by the rules, so you'll be treated well", "you are safe within these bounds, you don't need to fear". I want to be loyal - for people to think "there goes Nathan, when I abide by community norms, he won't treat me badly". The notion of safety implies a notion of risk. I am okay with that. Sometimes a little fear and ambiguity is good - I'm okay with a little fear around flirting at EAGs because that reduces the amount of it, I'm okay with ambiguity in how to work in a biorisk lab. There isn't always a clear path to "doing the right thing" and if, in hindsight I didn't do it, I don't want your loyalty. But I want our safety, uncertainty, disaster circles to be well calibrated. Some might say "loyalty isn't good" - that we should seek to treat those in EA exactly the same as those outside. For me this is equivalent to saying our circles should be well calibrated - if someone turns out to have broken the norms we care about then I already don't feel a need to be loyal to them. But to me, a sense of ingroup loyalty feels inevitable. I just like you more and differently than those outside EA. It feels naïve to say otherwise.
Much like "you aren't in traffic, you are trafffic", "I don't just feel feelings, to some extent I am feelings" So let's cut to the chase. The One With The Email Nick Bostrom sent an awful email. He wrote an apology a while back, but he wrote another to avoid a scandal (whoops). Twitter at least did not think he was sorry. CEA wrote a short condemnation. There was a lot of forum upset. Hopefully we can agree on this. Let's look at this through some different frames. The ...
Feb 20, 2023
EA - What AI companies can do today to help with the most important century by Holden Karnofsky
17:09
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What AI companies can do today to help with the most important century, published by Holden Karnofsky on February 20, 2023 on The Effective Altruism Forum. I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work. This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used. This piece could be useful to people who work at those companies, or people who are just curious. Generally, these are not pie-in-the-sky suggestions - I can name more than one AI company that has at least made a serious effort at each of the things I discuss below (beyond what it would do if everyone at the company were singularly focused on making a profit). I’ll cover: Prioritizing alignment research, strong security, and safety standards (all of which I’ve written about previously). Avoiding hype and acceleration, which I think could leave us with less time to prepare for key risks. Preparing for difficult decisions ahead: setting up governance, employee expectations, investor expectations, etc. so that the company is capable of doing non-profit-maximizing things to help avoid catastrophe in the future. Balancing these cautionary measures with conventional/financial success. I’ll also list a few things that some AI companies present as important, but which I’m less excited about: censorship of AI models, open-sourcing AI models, raising awareness of AI with governments and the public. I don’t think all these things are necessarily bad, but I think some are, and I’m skeptical that any are crucial for the risks I’ve focused on. I previously laid out a summary of how I see the major risks of advanced AI, and four key things I think can help (alignment research; strong security; standards and monitoring; successful, careful AI projects). I won’t repeat that summary now, but it might be helpful for orienting you if you don’t remember the rest of this series too well; click here to read it. Some basics: alignment research, strong security, safety standards First off, AI companies can contribute to the “things that can help” I listed above: They can prioritize alignment research (and other technical research, e.g. threat assessment research and misuse research). For example, they can prioritize hiring for safety teams, empowering these teams, encouraging their best flexible researchers to work on safety, aiming for high-quality research that targets crucial challenges, etc. It could also be important for AI companies to find ways to partner with outside safety researchers rather than rely solely on their own teams. As discussed previously, this could be challenging. But I generally expect that AI companies that care a lot about safety research partnerships will find ways to make them work. They can help work toward a standards and monitoring regime. E.g., they can do their own work to come up with standards like "An AI system is dangerous if we observe that it's able to ___, and if we observe this we will take safety and security measures such as ____."
They can also consult with others developing safety standards, voluntarily self-regulate beyond what’s required by law, etc. They can prioritize strong security, beyond what normal commercial incentives would call for. It could easily take years to build secure enough systems, processes and technologies for very high-stakes AI. It could be important to hire not only people to handle everyday security needs, but people to experiment with more exotic setups that could be needed later, as the incentives to steal AI get strong...
Feb 20, 2023
EA - EV UK board statement on Owen's resignation by EV UK Board
02:20
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EV UK board statement on Owen's resignation, published by EV UK Board on February 20, 2023 on The Effective Altruism Forum. In a recent TIME Magazine article, a claim of misconduct was made about an “influential figure in EA”: A third [woman] described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.” Shortly after the article came out, Julia Wise (CEA’s community liaison) informed the EV UK board that this concerned behaviour of Owen Cotton-Barratt; the incident occurred more than 5 years ago and was reported to her in 2021. (Owen became a board member in 2020.) Following this, on February 11th, Owen voluntarily resigned from the board. This included stepping down from his role with Wytham Abbey; he is also no longer helping organise The Summit on Existential Security. Though Owen’s account of the incident differs in scope and emphasis from the version expressed in the TIME article, he still believes that he made significant mistakes, and also notes that there have been other cases where he regretted his behaviour. It's very important to us that EV and the wider EA community strive to provide safe and respectful environments, and that we have reliable mechanisms for investigating and addressing claims of misconduct in the EA community. So, in order to better understand what happened, we are commissioning an external investigation by an independent law firm into Owen’s behaviour and the Community Health team’s response. This post is jointly from the Board of EV UK: Claire Zabel, Nick Beckstead, Tasha McCauley and Will MacAskill. The disclosure occurred as follows: shortly after the article came out, Owen and Julia agreed that Julia would work out whether Owen's identity should be disclosed to other people in EV UK and EV US; Julia determined that it should be shared with the boards. Julia writes about her response at the time here. See comment here from Chana Messinger on behalf of the Community Health team. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 20, 2023
EA - A statement and an apology by Owen Cotton-Barratt
10:32
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A statement and an apology, published by Owen Cotton-Barratt on February 20, 2023 on The Effective Altruism Forum. Since the Time article on sexual harassment came out, people have been asking for information about one paragraph of it, about an “influential figure in EA”. I wanted to respond to that. This is talking about me, more than five years ago. I think I made significant mistakes; I regret them a lot; and I’m sorry. Context I think the actual mistakes I made look different from what many readers may take away from the article, so I first wanted to provide a bit more context (some of this is straightforwardly factual; other parts should be understood as my interpretation): We had what I perceived as a preexisting friendship where we were experimenting with being unusually direct and honest (/“edgy”) Including about sexual matters There was what would commonly be regarded as oversharing from both sides (this wasn’t the first time I’d mentioned masturbation) Our friendship continued in an active way for several months afterwards I should however note that: We had met via EA and spent a good fraction of conversation time talking about EA-relevant topics I was older and more central in the EA community On other occasions, including early in our friendship, we had some professional interactions, and I wasn’t clear about how I was handling the personal/professional boundary I was employed as a researcher at that time My role didn’t develop to connecting people with different positions until later, and this wasn’t part of my self-conception at the time (However it makes sense to me that this was her perception) I was not affiliated with the org she was interviewing at I’d suggested her as a candidate earlier in the application process, but was not part of their decision-making process On the other hand I think that a lot of what was problematic about my behaviour with respect to this person was not about this incident in particular, but the broad dynamic where: I in fact had significant amounts of power This was not very salient to me but very salient to her She consequently felt pressure to match my vibe e.g. in an earlier draft of this post, before fact-checking it with her, I said that we talked about “feelings of mutual attraction” This was not her experience I drafted it like that because we’d had what I’d interpreted as conversations where this was stated explicitly (I think this is just another central example of the point I’m making in this set of bullets) Similarly at some point she volunteered to me that she was enjoying the dynamic between us (but I probably interpreted this much more broadly than she intended) She was in a structural position where it was (I now believe) unreasonable to expect honesty about her experience As the person with power it was on me to notice and head off these dynamics, and I failed to do that (Sorry, I know that's all pretty light on detail, but I don't want to risk accidentally de-anonymising the other person. I want to stress that I’m not claiming she provided any inaccurate information to the journalist who wrote the story; just that I think the extra context may be helpful for people seeking to evaluate or understand my conduct.) 
My mistakes In any case, I think my actions were poorly judged and fell well short of the high standards I would like to live up to, and that I think we should ask from people in positions of leadership. Afterwards, I felt vaguely like the whole friendship wasn’t well done and I wished I had approached things differently. Then when I found out that I’d made the person feel uncomfortable(/disempowered/pressured), I was horrified (not putting pressure on people is something like a core value of mine). I have apologized to the person in question, but I also feel like I’ve let the whole community down,...
Feb 20, 2023
EA - The Estimation Game: a monthly Fermi estimation web app by Sage
02:35
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Estimation Game: a monthly Fermi estimation web app, published by Sage on February 20, 2023 on The Effective Altruism Forum. Announcing the first monthly Estimation Game! Answer 10 Fermi estimation questions, like “How many piano tuners are there in New York?” Train your estimation skills and get more comfortable putting numbers on things Team up with friends, or play solo See how your scores compare on the global leaderboard The game is around 10-40 minutes, depending on how much you want to discuss and reflect on your estimates You can play The Estimation Game on Quantified Intuitions, solo, or with friends. The February game is live for one week (until Sunday 26th). We’ll release a new Estimation Game each month. Lots of people tell us they’d like to get more practice doing BOTECs and estimating, but they don’t get around to it. So we’ve designed The Estimation Game to give you the impetus to do a bit of estimation each month in a fun context. You might use this as a sandbox to experiment with different methods of estimating. You could decompose the question into easier-to-estimate quantities - make estimates in your head, discuss with friends, use a bit of paper, or even build a scrappy Guesstimate or Squiggle model. We’d appreciate your feedback in the comments, in our Discord, or at adam@sage-future.org. We’d love to have suggestions for questions for future rounds of The Estimation Game - this will help us keep the game varied and fun in future months! Info for organisers If you run a community group or meetup, we’ve designed the Estimation Game to be super easy to run as an off-the-shelf event. Check out our info for organisers page for resources and FAQs. If you’re running a large-scale event and want to run a custom Estimation Game at it, let us know and we can help you set it up. We’re planning to pilot custom Estimation Games at EAGx Nordics (and maybe EAGx Cambridge). About Quantified Intuitions We built Quantified Intuitions as an epistemics training site. See our previous post for more on our motivation. Alongside the monthly Estimation Game, we’ve made two permanent tools: Pastcasting: Predict past events to rapidly practise forecasting Calibration: Answer EA-themed trivia questions to calibrate your uncertainty Thanks to our test groups in London, to community builders who gave feedback, in particular Robert Harling, Adash Herrenschmidt-Moller, and Sam Robinson, and to Chana Messinger at CEA for the idea and feedback throughout. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
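To make the decomposition approach described above concrete, here is a minimal sketch in Python of a Fermi estimate for the classic piano-tuner question quoted in the post. Every input is an illustrative guess, not a number from the Estimation Game or Quantified Intuitions.

population = 8_000_000            # rough population of New York City (assumption)
people_per_household = 2          # assumed average household size
households_with_piano = 1 / 20    # assumed share of households owning a piano
tunings_per_year = 1              # assume each piano is tuned about once a year
tunings_per_tuner_per_year = 2 * 5 * 50   # ~2 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_year / tunings_per_tuner_per_year
print(f"Estimated piano tuners in New York: {tuners:.0f}")   # ~400 with these guesses

Swapping any of these guesses for better ones, or for distributions in Guesstimate or Squiggle, changes the answer, which is exactly the kind of sensitivity the game is meant to train.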
Feb 20, 2023
EA - Metaculus Introduces New 'Conditional Pair' Forecast Questions for Making Conditional Predictions by christian
03:40
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Metaculus Introduces New 'Conditional Pair' Forecast Questions for Making Conditional Predictions, published by christian on February 20, 2023 on The Effective Altruism Forum. Predict P(A|B) & P(A|B') With New Conditional Pairs Events don't take place in isolation. Often we want to know the likelihood of an event occurring if another one does. Metaculus has launched conditional pairs, a new kind of forecast question that enables forecasters to predict the probability of an event given the outcome of another, and provides forecast consumers with greater clarity on the relationships between events. This post explains the motivation behind conditional pairs, how to interpret them, and how to start making conditional forecasts. (Check out this video explainer for more on conditional pairs and how to forecast with them.) How Do Conditional Pairs Work? A conditional pair poses two conditional questions (or "conditionals"): If Question B resolves Yes, how will Question A resolve? If Question B resolves No, how will Question A resolve? For example, a forecaster may want to predict on a question such as this: Will Alphabet’s Market Capitalization Fall Below $1 Trillion by 2025? The forecast depends—on many things. But consider one factor: Bing's share of the search engine market. And so if one knew that Bing's search engine market share would be at least 5% in March of 2024, they could make a more informed forecast. They might assign greater likelihood to Alphabet's decline. Our conditional pair is then: If Bing's market share is more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion? If Bing's market share is not more than 5% in March, 2024, will Alphabet's market capitalization fall below $1 trillion? (Start forecasting on this conditional pair here.) Two forecasters could have the same forecasts for Bing’s market share and Alphabet’s market cap while having very different mental models of their relationship. Conditional pairs help make these sometimes implicit differences explicit so they can be discussed and scored. Start Forecasting on Conditional Pairs Here are some newly created conditional pairs to start forecasting on: If Human-Machine Intelligence Parity Is Reached by 2040, Will the US Impose Compute Capacity Restrictions Before 2050? If Chinese GDP Overtakes US GDP by 2030, Will the US Go to War With China by 2035? If There Is a Bilateral Ceasefire in Ukraine by 2024, Will There Be Large-Scale Conflict With Russia by 2030? If There Is a US Debt Default by 2024, Will Democrats Win the 2024 US Presidential Election? Parent & Child Questions Conditional pairs like the above are composed of a "Parent Question" and a "Child Question." Parent: Bing has 5% Market Share by March, 2024 Child: Alphabet's Market Cap is Below $1 Trillion by 2025 Forecasts are made for the Child question conditional on the outcome of the Parent. Here, see that: In a world where Bing reaches 5% market share, the Metaculus community predicts Alphabet's decline is 56% likely. In a world where Bing does not reach 5% market share, the Metaculus community predicts Alphabet's decline is 44% likely. Conditional pairs are a step toward Metaculus's larger goal of empowering forecasters and forecast consumers to quantify and understand the impact of particular events and policy decisions. Feedback is appreciated! Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
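As a worked illustration of how a conditional pair relates to an ordinary forecast, the sketch below combines the two community figures quoted above using the law of total probability. The 10% probability on the parent question is an assumption made for the example, not a Metaculus community forecast.

p_child_given_parent_yes = 0.56   # community figure: P(Alphabet < $1T | Bing has >= 5% share)
p_child_given_parent_no = 0.44    # community figure: P(Alphabet < $1T | Bing has < 5% share)
p_parent_yes = 0.10               # assumed P(Bing reaches 5% share by March 2024)

# Law of total probability: P(A) = P(A|B) P(B) + P(A|B') P(B')
p_child = (p_child_given_parent_yes * p_parent_yes
           + p_child_given_parent_no * (1 - p_parent_yes))
print(f"Implied unconditional forecast: {p_child:.1%}")   # 45.2% under these inputs

The gap between the two conditionals (56% vs. 44%) is what encodes a forecaster's model of how strongly the parent event bears on the child event.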
Feb 20, 2023
EA - Immigration reform: a shallow cause exploration by JoelMcGuire
01:12:01
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Immigration reform: a shallow cause exploration, published by JoelMcGuire on February 20, 2023 on The Effective Altruism Forum. This shallow investigation was commissioned by Founders Pledge. Summary This shallow cause area report explores the impact of immigration on subjective wellbeing (SWB). It was completed in two weeks. In this report, we start by reviewing the literature and modelling the impact of immigration on wellbeing. Then, we conduct back of the envelope calculations (BOTECs) of the cost-effectiveness of various interventions to increase immigration. The effect of immigration has been studied extensively. However, most of the studies we find are correlational and do not provide causal evidence. Additionally, most of the studies use life satisfaction as a measure of SWB, so it’s unclear whether immigration impacts life satisfaction and affective happiness (e.g. positive emotions on a daily basis) differently. Despite these limitations, we attempt to estimate the effect of immigration on wellbeing. We find that immigrating to countries with higher average SWB levels might produce large benefits to wellbeing, but we are very uncertain about the exact size of the effect. According to our model, when people move to a country with higher SWB, they will gain 77% of the SWB gap between the origin and destination country. We assume this benefit will be immediate and permanent, as there is little evidence to model how this benefit evolves over time, and existing evidence doesn’t suggest large deviations from this assumption. There are open questions about the spillover effects of immigration on the immigrant’s household as well as their original and destination communities. Immigrating likely benefits the whole family if they move together, but the impact on household members that stay behind is less clear, as the economic benefits of remittances are countered by the negative effects of separation. On balance, we estimate a small, non-significant benefit for households that stay behind when a member immigrates (+0.01 WELLBY per household member). We did not include spillovers on the origin community due to scarce evidence (only one study) that suggested small, null effects. For destination communities, we estimate that increasing the proportion of immigrants by 1% is associated with a small, non-significant, negative spillover for natives (-0.01 WELLBYs per native), although this is likely moderated by attitudes towards immigrants. We then conducted BOTECs of possible interventions to increase immigration. The most promising is policy advocacy, which we estimate is 11 times more cost-effective than GiveDirectly cash transfers. The other interventions we investigated are 2 to 6 times better than cash transfers. However, all of our BOTECs are speculative and exploratory in nature. These estimates are also limited because we’re unsure how to model the potential for immigration increasing interventions to foster anti-immigrant sentiment in the future. Plus, there might be non-trivial risks that a big push for immigration or other polarising topics by Effective Altruists could burn goodwill that might be used on other issues (e.g., biosecurity). 
Accordingly, we’re inclined towards treating these as upper-bound estimates and we expect that once these costs are taken into account immigration policy advocacy would no longer be promising. We recommend that future research assesses the costs, chances of success, and risk of backlash for potential policy-based interventions to increase immigration. Notes This report focuses on the impact of immigration in terms of WELLBYs. One WELLBY is a 1 life satisfaction point change for one year (or any equivalent combination of change in life satisfaction and time). In some cases, we convert results in standard deviations of life sati...
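As a rough illustration of the headline model described above, here is a minimal back-of-the-envelope sketch in Python. The 77% figure and the +0.01 WELLBY household spillover come from the summary; the life-satisfaction scores, time horizon, and household size are assumptions chosen for the example, not figures from the report.

gap_captured = 0.77         # report's estimate: share of the SWB gap gained by a migrant
ls_origin = 4.5             # assumed origin-country life satisfaction (0-10 scale)
ls_destination = 7.0        # assumed destination-country life satisfaction
years = 40                  # assumed remaining years, with the gain treated as permanent
members_staying_behind = 3  # assumed household members who do not migrate

wellbys_migrant = gap_captured * (ls_destination - ls_origin) * years
wellbys_household = 0.01 * members_staying_behind   # small, non-significant spillover
print(f"WELLBYs per migrant: {wellbys_migrant:.0f} (+{wellbys_household:.2f} household spillover)")

Under these illustrative inputs the migrant gains roughly 77 WELLBYs, which is why the per-person benefits can look large even before any cost-effectiveness comparison.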
Feb 20, 2023
EA - People Will Sometimes Just Lie About You by aella
25:31
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: People Will Sometimes Just Lie About You, published by aella on February 18, 2023 on The Effective Altruism Forum. Before getting mini-famous, I did not appreciate the degree to which people would misrepresent and lie about other people. I knew about it in theory. I occasionally stumbled across claims about people that I later found out were false. I knew, abstractly, that any particular accuser is telling a truth or a lie, but you're not sure which one. But now, I'm in the unique position where people say things about me all the time, and I hear most of it, and I have direct access to whether it's accurate or not. There's something about the lack of ambiguity that has left me startled, here. Something was way off about my models of the world before I had access to the truth of a wide range of accusational samples. In the last few years, I've risen in visibility to the degree it's started to get unpleasant. I've had to give up on the idea of throwing parties at my house where guests are allowed to invite unvetted friends. There are internet pockets dedicated to hating me, where people have doxxed me and my family, including the home addresses of my parents and sister. I’ve experienced one kidnapping attempt. I might have to move. One stalker sent me, on average, three long messages every day for nearly three years. By this point death threats are losing their novelty. Before I was this visible, my model was "If a lot of people don't like you, maybe the problem is actually you." Some part of me, before, thought that if you were just consistently nice and charitable, if you were a good, kind person, people would... see that, somehow? Maybe you get one or two insane people, but overall truth would ultimately prevail, because lies without evidence wither and die. And even if people didn't like or agree with you, they wouldn't try to destroy you, because you can only really incite that level of fury in someone if you were at least a little bit at fault yourself. So if you do find yourself in a situation where lots of people are saying terrible things about you, you should take a look in the mirror. But this sort of thing doesn't hold true at large scales! It really doesn't, and that fact shocks some subconscious part of me, to the degree that even I get kinda large-scale gaslit about myself. I often read people talking about how I'm terrible, and then I'm like damn, I must have been a little too sloppy or aggressive in my language to cause them to be so upset with me. Then I go read the original thing they're upset about and find I was actually fine, and really kind, and what the fuck? I'm not used to disagreements being so clearly black and white! And me in the right? What is this, some cartoon children's book caricature of a moral lesson? And I have a similar shock when people work very hard to represent things I do in a sinister light. There've been multiple writeups about me, either by or informed by people I knew in person, where they describe things I've done in a manner that I consider to be extremely uncharitable. 
People develop a narrative by speculating on my mental state, beliefs, or intentions ("of course she knew people would have that reaction, she knew that person's background"), by blurring the line between thing I concretely did and vaguer facts about context ("she was central to the party so she was responsible for that thing that happened at it"), and by emphasizing reactions more than any concrete bad behavior (“this person says they felt really bad, that proves you did a terrible thing”). Collectively, these paint a picture that sounds convincing, because it seems like all the parts of the narrative are pointing in the same direction. Individually, however, the claims don’t hold up. (In this case, “someone got upset at a party I attended” is a real fa...
Feb 18, 2023
EA - Data on how much solutions differ in effectiveness by Benjamin Todd
04:37
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Data on how much solutions differ in effectiveness, published by Benjamin Todd on February 17, 2023 on The Effective Altruism Forum. Click the link above to see the full article and charts. Here is a summary I wrote for the latest edition of the 80,000 Hours newsletter, or see the Twitter version. Is it really true that some ways of solving social problems achieve hundreds of times more, given the same amount of effort? Back in 2013, Toby Ord pointed out some striking data about global health. He found that the best interventions were: 10,000x better at creating years of healthy life than the worst interventions. 50x better than the median intervention. He argued this could have radical implications for people who want to do good, namely that a focus on cost-effectiveness is vital. For instance, it could suggest that by focusing on the best interventions, you might be able to have 50 times more impact than a typical person in the field. This argument was one of the original inspirations for our work and effective altruism in general. Now, ten years later, we decided to check how well the pattern in the data holds up and see whether it still applies – especially when extended beyond global health. We gathered all the datasets we could find to test the hypothesis. We found data covering health in rich and poor countries, education, US social interventions, and climate policy. If you want to get the full picture on the data and its implications, read the full article (with lots of charts!): How much do solutions to social problems differ in effectiveness? A collection of all the studies we could find. The bottom line is that the pattern Toby found holds up surprisingly well. This huge variation suggests that once you’ve built some career capital and chosen some problem areas, it’s valuable to think hard about which solutions to any problem you’re working on are most effective and to focus your efforts on those. The difficult question, however, is to say how important this is. I think people interested in effective altruism have sometimes been too quick to conclude that it’s possible to have, say, 1,000 times the impact by using data to compare the best solutions. First, I think a fairer point of comparison isn’t between best and worst but rather between the best measurable intervention and picking randomly. And if you pick randomly, you expect to get the mean effectiveness (rather than the worst or the median). Our data only shows the best interventions are about 10 times better than the mean, rather than 100 or 1,000 times better. Second, these studies will typically overstate the differences between the best and average measurable interventions due to regression to the mean: if you think a solution seems unusually good, that might be because it is actually good, or because you made an error in its favour. The better something seems, the greater the chance of error. So typically the solutions that seem best are actually closer to the mean. This effect can be large. Another important downside of a data-driven approach is that it excludes many non-measurable interventions. The history of philanthropy suggests the most effective solutions historically have been things like R&D and advocacy, which can’t be measured ahead of time in randomised trials.
This means that restricting yourself to measurable solutions could mean excluding the very best ones. And since our data shows the very best solutions are far more effective than average, it’s very bad for your impact to exclude them. In practice, I’m most keen on the “hits-based approach” to choosing solutions. I think it’s possible to find rules of thumb that make a solution more likely to be among the very most effective, such as “does this solution have the chance of solving a lot of the problem?”, “does it offer leverage?”, “does it...
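The regression-to-the-mean point above can be made concrete with a toy simulation: if measured effectiveness is true effectiveness times noise, the intervention that looks best is usually less good than it appears. The lognormal spread and the noise level below are assumptions for illustration, not parameters fitted to the datasets in the article.

import random
import statistics

random.seed(0)
n = 200
true_eff = [random.lognormvariate(0, 1) for _ in range(n)]        # assumed spread of true effectiveness
measured = [t * random.lognormvariate(0, 0.5) for t in true_eff]  # noisy measurements of each intervention

best_idx = max(range(n), key=lambda i: measured[i])               # the intervention that looks best
print(f"Looks best: measured {measured[best_idx]:.1f}, "
      f"true {true_eff[best_idx]:.1f}, "
      f"mean true effectiveness {statistics.mean(true_eff):.1f}")

Typically the measured value of the apparent winner overstates its true effectiveness, and by more when measurement noise is larger, which is the sense in which naive best-versus-mean ratios are inflated.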
Feb 18, 2023
EA - EA London Hackathon Retrospective by Jonny Spicer
06:37
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA London Hackathon Retrospective, published by Jonny Spicer on February 17, 2023 on The Effective Altruism Forum. Introduction On Saturday 11th February, we ran a day-long EA hackathon in London, modeled off a similar event in Berkeley late last year. This post follows a similar format to the retrospective linked above. You can see some of the basic information about the event, as well as notes on some of the projects that people were working on in this Google doc. We are incredibly grateful to Nichole Janeway Bills and Zac Hatfield-Dodds for their advice before the event, and to Edward Saperia for his advice and assistance on the day itself. TL;DR: We ran a pilot hackathon in London, and were surprised by the success of the event. Around 50 people turned up, they gave mostly positive feedback, and there were several impressive projects which have plausible impact. The event helped build the EA London tech community and generated opportunities for community members to work on impactful software projects on an ongoing, long-term basis. We're excited to continue running these kinds of events and want to produce a blueprint for others to run similar hackathons in other places. Goals of the event While we hoped that this event would produce artifacts that had legible impact, it was mainly a community-building exercise. Our primary goal was to validate the concept - could we run a successful hackathon? If so, would running similar events in the future lead to greater tangible impact, even if this one didn't necessarily do so? What went well Approximately 50 people attended, which was more than we'd expected. The average skill level was high. The majority of teams had a strong lead or mentor, in most cases an existing maintainer or the person who had the original idea for the project. Asking people if they'd like to be put in groups and then doing so generally worked well - I would estimate this was beneficial for 80% of the people who selected this option. See related potential improvement below. Dedicating one of the monthly EA London tech meetups to brainstorming ideas for hackathon projects both yielded good ideas and got people engaged. Having a show & tell section encouraged attendees to optimize for having something concrete, and gave groups the chance to learn about what other groups had worked on. The venue was excellent. The average overall rating for the event on our feedback form was 4.36/5. What we could do better next time We had a limited budget, and more people showed up than we could provide food for on said budget, meaning we didn't provide lunch after we'd originally said we would. We weren't transparent about this, which was a mistake. We didn't do enough to accommodate those who were less experienced coders. In future, we'll use a different question on the sign-up form, along the lines of "how much guidance would you need in order to complete an issue in a project that required coding?". We can then organise our groups/projects/activities accordingly, including having more ways to contribute to projects through means other than coding. We underestimated the ratio of people with jobs in the "data" family relative to the "software" family, and so our suggested projects were almost entirely software-focused. We could've had a better shared digital space.
This ended up being a bit of an afterthought for us, and we ended up asking people to join a WhatsApp group when they signed in, but it wasn't used much during the day. A different platform could facilitate more collaboration/visibility between groups, allow people to ask for help more easily, and generally give more of a community feel to the event. Outputs and resources There were several pull requests submitted to existing open-source projects, including VeganBootcamp and Stampy. Multiple proof-of-concep...
Feb 17, 2023
EA - AI Safety Info Distillation Fellowship by robertskmiles
00:26
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety Info Distillation Fellowship, published by robertskmiles on February 17, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 17, 2023
EA - Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience by Amy Labenz
07:33
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Getting organizational value from EA conferences, featuring Charity Entrepreneurship’s experience, published by Amy Labenz on February 17, 2023 on The Effective Altruism Forum. For individuals in the EA community, EA Global and EAGx conferences are an opportunity to be exposed to ideas and people who can help us identify and pursue more impactful paths. And for organizations, they’re an opportunity to connect with potential new hires, grantees, and funders. Charity Entrepreneurship recently shared with us (the CEA events team) a summary of the impact their participation at EAG and EAGx has had on their Incubation Program. We were inspired by this post, and wanted to encourage orgs working on EA causes — object and meta, large and small, older and newer — to consider carefully how to get the most out of the conferences we’ve got lined up in 2023. The opportunities include: Talks & workshops can be used to inform the community about your work and how others can contribute to increasing your impact as collaborators, funders or recruits. Office hours & career fairs are available for you to interact directly with those interested in your plans and how they might fit into them. 1-on-1 meetings are the bedrock of all the conferences CEA is involved in organizing, facilitating in-depth discussion and establishing connections between people with common goals. If you’re interested in your org appearing on the program at an upcoming conference, or would like to discuss with us how best to approach these events, please reach out to us at hello@eaglobal.org. We thought sharing Charity Entrepreneurship’s experience publicly might be valuable for others to learn from in planning their own approach to future conferences. Here’s their story in their own words. CE’s EAG and EAGx Success Stories From the launch of Charity Entrepreneurship’s Incubation Program, EAG and EAGx conferences have been a crucial part of the organization’s outreach strategy. Thanks in part to CE’s increasing presence — talks, workshops, and participation in numerous career fairs — nonprofit entrepreneurship has become a recognized career option in the effective altruist community. From just one round of applications for CE’s Incubation Program, 325 out of 720 applicants had participated in EAG or EAGx conferences where the organization was present. 169 applicants had interacted directly with CE at EAGs by participating in a workshop, office hours, or talking to CE staff one on one. When young entrepreneurs who got into the Incubation Program and started new high-impact charities were asked what convinced them to apply, they named EAGs as one of the top reasons (along with the EA movement in general, partner/friend, internships, talking to CE staff members and group organizers). So far, Charity Entrepreneurship has started 23 organizations and provided them with $1.88 million in total funding. These organizations have fundraised a further $20 million to cover their costs and are now reaching over 120 million animals and over 10 million people with their interventions. 
You can learn more about them on CE’s website; they include charities like Fortify Health (three GiveWell Incubation Grants, 25% chance of becoming a top charity), Fish Welfare Initiative (ACE Standout Charity), Shrimp Welfare Project (first organization ever working on shrimp welfare), and Lead Exposure Elimination Project (precursory policy organization now working in 10 countries). Key benefits for CE Finding talented entrepreneurs to start high-impact projects (50% of the founders had been in contact with CE’s outreach efforts during EAG conferences). Finding hires for the CE team (20% of the team have been hired thanks to outreach efforts during EAG conferences), as well as fellows and interns. Making useful connections for funding (10% of fun...
Feb 17, 2023
EA - Project for Awesome 2023: Vote for videos of EA charities! by EA ProjectForAwesome
02:45
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Project for Awesome 2023: Vote for videos of EA charities!, published by EA ProjectForAwesome on February 16, 2023 on The Effective Altruism Forum. Project for Awesome (P4A) is a yearly charity video contest running this weekend in 2023. Voting will start Friday, February 17th, 12:00pm EST and is open until February 19th, 11:59am EST. The charities with the most votes, totalled across all their videos, win money. In recent years, this was between $14,000 and $32,000 per charity. This is a good opportunity to raise money for EA charities with just a few clicks (~5 minutes). Please ask your friends and EA group members to vote for ALL the videos for EA charities! A sample text message and voting instructions are below. Sample text message: Hey! Not sure if you know, but every year Hank and John Green organize the Project For Awesome, which raises money for charities. All you have to do is click “Vote” on a bunch of videos and you could potentially help thousands of dollars go to highly effective charities. Would you be willing to help out? If they reply yes: Great! Here are the videos of the charities we’re promoting. You can vote for all of the videos. Without watching videos, it will just take a few minutes. (insert the instructions or just the links) Voting instructions with links: 1. Invite your friends to vote, too! 2. Open one of the following links, then open one video first and do the CAPTCHA before opening the other videos of that charity in new tabs. Vote for ALL videos of that charity. 3. Repeat step 2 for all charities listed below. Vote for ALL videos for each of these EA charities. You can see that you voted for a video by the grayed out "Voted" button. In the end, P4A will sum up all votes for all videos of one charity. In total there are several videos for our supported EA charities but it only takes a few clicks (if you do not want to watch the videos) and it's really worth it. Supported EA charities: Animal Advocacy Africa, Ethics, Society, Top Charities Fund, Food Institute, Humane League, Animal Initiative. Other EA(-related) charities: International Campaign to Abolish Nuclear Weapons (ICAN), Against Malaria Foundation, Air Task Force, Means No Worldwide, Global Fund. Please also join our Facebook group, EA Project 4 Awesome 2023, and our Facebook event for this year’s voting! Thank you very much! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 17, 2023
EA - Advice for an alcoholic EA by anotherburneraccountsorry
03:36
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Advice for an alcoholic EA, published by anotherburneraccountsorry on February 16, 2023 on The Effective Altruism Forum. Hi guys, as my username suggests, I’m sorry to write this pseudonymously, but I don’t know how public I want to be about my problems yet. So, the short version is that I’m an alcoholic and I’m an Effective Altruist, and I don’t know exactly how much I should or shouldn’t involve EA in my recovery efforts. I am vaguely aware that EA has mental health resources for struggling EAs, and I am struggling. I also don’t know how many of them are relevant to substance abuse in particular. These are some of the considerations that I am conflicted about: Against involving EA more: Most of my problems are not directly related to EA, and I’m not sure if I should be using EA resources for personal health problems unless I have some strong idea of how my problems relate to my involvement in EA. Maybe more to the point, I have access to other mental health resources, I am currently seeing someone at my school about this, and it feels like a waste of resources to involve EA in my problems if I don’t need to. Additionally, there are many recent worries that EA is too insular, and this can lead to problems in how it handles personal issues. I share some of these worries, and although I don’t distrust EA’s mental health team, it seems like I should be cautious in over-involving EA in my personal life where it is unnecessary. If nothing else, it makes me more dependent on EA. Additionally as mentioned before, I just don’t know if EA’s mental health team deals with things like substance abuse so much as burn out. In favor: While my drinking is not deeply connected to my involvement in Effective Altruism, there are a number of things that have exacerbated my problem which are idiosyncratic to EA in a way that makes me uncomfortable talking to a normal therapist about it. I have still not mentioned anything EA related to my counselor so far despite our sessions thus far largely focusing on my “triggers” for drinking. Related to this, I am not a huge fan of my current counselor’s approach, there is a bunch of focus on things like what drives me to drink, when I buy more into a bias-based and chemical model of drinking, where mostly the issue with my “triggers” is that I am unusually susceptible to finding lame excuses for myself. She also keeps recommending a bunch of other mental health resources, some of which seem quite tangentially related to my main problem. I think that a more focused approach would be valuable, and think that the type of triage and evidence-based thinking common in EA makes it more likely to be a space where the counseling I get will be, well, effective. I also don’t want to speak too soon about resource problems, as there may be many services that aren’t resource intensive, like groups sessions for EAs with substance abuse problems. Does anyone have any advice? Are there people here who have gone through a situation like this before, and have they involved EA’s mental health resources in some way? If so what did they get out of it? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 17, 2023
EA - Why should ethical anti-realists do ethics? by Joe Carlsmith
58:31
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why should ethical anti-realists do ethics?, published by Joe Carlsmith on February 16, 2023 on The Effective Altruism Forum. (Cross-posted from my website. Podcast version here, or search "Joe Carlsmith Audio" on your podcast app.) "What was it then? What did it mean? Could things thrust their hands up and grip one? Could the blade cut; the fist grasp?" Virginia Woolf 1. Introduction Ethical philosophy often tries to systematize. That is, it seeks general principles that will explain, unify, and revise our more particular intuitions. And sometimes, this can lead to strange and uncomfortable places. So why do it? If you believe in an objective ethical truth, you might talk about getting closer to that truth. But suppose that you don’t. Suppose you think that you’re “free to do whatever you want.” In that case, if “systematizing” starts getting tough and uncomfortable, why not just . stop? After all, you can always just do whatever’s most intuitive or common-sensical in a given case – and often, this is the choice the “ethics game” was trying so hard to validate, anyway. So why play? I think it’s a reasonable question. And I’ve found it showing up in my life in various ways. So I wrote a set of two essays explaining part of my current take. This is the first essay. Here I describe the question in more detail, give some examples of where it shows up, and describe my dissatisfaction with two places anti-realists often look for answers, namely: some sort of brute preference for your values/policy having various structural properties (consistency, coherence, etc), and avoiding money-pumps (i.e., sequences of actions that take you back to where you started, but with less money) In the second essay, I try to give a more positive account. Thanks to Ketan Ramakrishnan, Katja Grace, Nick Beckstead, and Jacob Trefethen for discussion. 2. The problem There’s some sort of project that ethical philosophy represents. What is it? 2.1 Map-making with no territory According to normative realists, it’s “figuring out the normative truth.” That is: there is an objective, normative reality “out there,” and we are as scientists, inquiring about its nature. Many normative anti-realists often adopt this posture as well. They want to talk, too, about the normative truth, and to rely on norms and assumptions familiar from the context of inquiry. But it’s a lot less clear what’s going on when they do. Perhaps, for example, they claim: “the normative truth this inquiry seeks is constituted by the very endpoint of this inquiry – e.g., reflective equilibrium, what I would think on reflection, or some such.” But what sort of inquiry is that? Not, one suspects, the normal kind. It sounds too . unconstrained. As though the inquiry could veer in any old direction (“maximize bricks!”), and thereby (assuming it persists in its course) make that direction the right one. In the absence of a territory – if the true map is just: whatever map we would draw, after spending ages thinking about what map to draw – why are we acting like ethics is a normal form of map-making? Why are we pretending to be scientists investigating a realm that doesn’t exist? 2.2 Why curve-fit? 
My own best guess is that ethics – including the ethics that normative realists are doing, despite their self-conception – is best understood in a more active posture: namely, as an especially general form of deciding what to do. That is: there isn’t the one thing, figuring out what you should do, and then that other separate thing, deciding what to do. Rather, ethical thought is essentially practical. It’s the part of cognition that issues in action, rather than the part that “maps” a “territory.” But on this anti-realist conception of ethics, it can become unclear why the specific sort of thinking ethicists tend to engage in is worth doing. ...
Feb 16, 2023
EA - Transitioning to an advisory role by MaxDalton
03:42
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Transitioning to an advisory role, published by MaxDalton on February 16, 2023 on The Effective Altruism Forum. I’m writing to announce that I’ve resigned from my role as CEA’s Executive Director, and will be transitioning to an advisory role. Basically, my mental health has been bad for the last 3 months. Starting in November, my role changed from one that I love - building a team and a product, building close working relationships with people, executing - to one that I find really stressful: dealing with media attention, stakeholders, and lawyers at long unpredictable hours and wrestling with strategic uncertainty. I think I’m also not so good at the latter sort of work, relative to the former. I've been getting lots of advice, therapy, and support, but recently I've been close to a crisis – struggling to get out of bed, feeling terror at the idea of sitting at my desk. I really wish that I were strong enough to keep doing this job, especially right now – I care so much about CEA’s work to help more people tackle the important problems we face, and I care deeply about the team we’ve built. But I’m just not able to keep going in my current role, and I don't think that pretending to be stronger or struggling on will be good for CEA or for me, because I’m not able to perform as well as I would like and there’s a risk that I’ll burn out with no handover. So I think it’s best to move into an advisory role and allow someone else to direct CEA. The boards of Effective Ventures UK and Effective Ventures US, which govern CEA, will appoint an interim Executive Director soon. Once they’re appointed I plan to continue advising and working with them and the CEA team to ensure a smooth transition and help find a new permanent ED. I hope that moving from an executive to advisory role will help alleviate some of the pressure and allow me to contribute more productively to our shared work going forward. For a while now I've been trying to build up the leadership team as the body running CEA, with me as one member. I think that the leadership team is very strong: people disagree with each other directly but with care, have complementary strengths, and show strong leadership for their own programs. I think that they will be able to do a great job leading CEA together with the interim ED and the new permanent ED. Of course, FTX and subsequent events have highlighted some important issues in EA. I’ve been working with the team to reflect on how this might impact our work and necessitate changes, and I hope that they’ll be able to share more on these conversations and plans in the future. Although I’m very sad not to be able to see through that work in my current role with CEA, I think that the work we’ve done so far will set the new leadership team up well. I also plan to continue to reflect, will discuss my thinking with new leadership, and may publish some of my personal reflections. Despite the setbacks of these last few months, I'm very proud of what we've achieved together over the last four years. Compared to 2019, the number of new connections we’re making at events is 5x higher, and people are spending 10x time engaging with the Forum (which also has a lot more interesting content). 
Overall, I think that we’ve helped hundreds of people to reflect on how they can best contribute to making the world a better place, and begin to work on these critical problems. I’m also incredibly grateful to have been a part of this team: CEA staff are incredibly talented, caring, and dedicated. I’ve loved to be a part of a culture where staff are valued and empowered to do things. I look forward to seeing the impact which they continue to have over the coming months and years under new leadership. This has been true for many people, especially EV board members and some staff who have jumped in t...
Feb 16, 2023
EA - Qualities that alignment mentors value in junior researchers by Akash
00:27
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Qualities that alignment mentors value in junior researchers, published by Akash on February 14, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 16, 2023
EA - Please don't throw your mind away by TsviBT
27:21
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Please don't throw your mind away, published by TsviBT on February 15, 2023 on The Effective Altruism Forum. Dialogue [Warning: the following dialogue contains an incidental spoiler for "Music in Human Evolution" by Kevin Simler. That post is short, good, and worth reading without spoilers, and this post will still be here if you come back later. It's also possible to get the point of this post by skipping the dialogue and reading the other sections.] Pretty often, talking to someone who's arriving to the existential risk / AGI risk / longtermism cluster, I'll have a conversation like the following. Tsvi: "So, what's been catching your eye about this stuff?" Arrival: "I think I want to work on machine learning, and see if I can contribute to alignment that way." T: "What's something that got your interest in ML?" A: "It seems like people think that deep learning might be on the final ramp up to AGI, so I should probably know how that stuff works, and I think I have a good chance of learning ML at least well enough to maybe contribute to a research project." T: "That makes sense. I guess I'm fairly skeptical of AGI coming very soon, compared to people around here, or at least I'm skeptical that most people have good reasons for believing that. Also I think it's pretty valuable to not cut yourself off from thinking about the whole alignment problem, whether or not you expect to work on an already-existing project. But what you're saying makes sense too. I'm curious though if there's something you were thinking about recently that just strikes you as fun, or like it's in the back of your mind a bit, even if you're not trying to think about it for some purpose." A: "Hm... Oh, I saw this video of an octopus doing a really weird swirly thing. Here, let me pull it up on my phone." T: "Weird! Maybe it's cleaning itself, like a cat licking its fur? But it doesn't look like it's actually contacting itself that much." A: "I thought it might be a signaling display, like a mating dance, or for scaring off predators by looking like a big coordinated army. Like how humans might have scared off predators and scavenging competitors in the ancestral environment by singing and dancing in unison." T: "A plausible hypothesis. Though it wouldn't be getting the benefit of being big, like a spread out group of humans." A: "Yeah. Anyway yeah I'm really into animal behavior. Haven't been thinking about that stuff recently though because I've been trying to start learning ML." T: "Ah, hm, uh... I'm probably maybe imagining things, but something about that is a bit worrying to me. It could make sense, consequentialist backchaining can be good, and diving in deep can be good, and while a lot of that research doesn't seem to me like a very hopeworthy approach, some well-informed people do. And I'm not saying not to do that stuff. But there's something that worries me about having your little curiosities squashed by the backchained goals. Like, I think there's something really good about just doing what's actually interesting to you, and I think it would be bad if you were to avoid putting a lot of energy into stuff that's caught your attention in a deep way, because that would tend to sacrifice a lot of important stuff that happens when you're exploring something out of a natural urge to investigate." A: "That took a bit of a turn. 
I'm not sure I know what you mean. You're saying I should just follow my passion, and not try to work towards some specific goal?" T: "No, that's not it. More like, when I see someone coming to this social cluster concerned with existential risk and so on, I worry that they're going to get their mind eaten. Or, I worry that they'll think they're being told to throw their mind away. I'm trying to say, don't throw your mind away." A: "I... don't think I'm being told to t...
Feb 16, 2023
EA - Anyone who likes the idea of EA and meeting EAs but doesn't want to discuss EA concepts IRL? by antisocial-throwaway
02:05
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anyone who likes the idea of EA & meeting EAs but doesn't want to discuss EA concepts IRL?, published by antisocial-throwaway on February 15, 2023 on The Effective Altruism Forum. Hi all, using a throwaway as this feels a bit anti-social to post! Basically, I'm aligned with & live via the basic concepts of EA (use your career for good + donate some of your earnings to effective causes), but that's where my interest really kind of ends. I'm not interested in using my free time to read into EA further, I don't feel motivated to learn more about all the concepts that people use/ discuss, etc (utilitarianism, expected value, etc etc). I really like having non-EA friends and don't get any enjoyment from having philosophical discussions with people. I really like the core ethos of EA, but when I've gone to in-person meetups (including Prague Fall Season) I've felt like a fraud because I'm not at all versed in the language, and actually have no interest in discussion all the forum talking points. I just want to meet cool people who care about doing good! Of course a clear rebuttal here would be "ok then dude just talk to people about other stuff", but I've often felt at these events like people are there to discuss this kind of stuff, and to talk about more normal/ "mundane" stuff would make people think I'm wasting their time. So I guess my question is like - is there anyone else out there who feels this way? Any tips? I'd really like to make friends through the EA community but in the same breath I only want to be involved in a straightforward way (career + key ideas), rather than scouring the forums & LessWrong and hitting all the squares on the EA bingo card. Also, is it kind of in my head that you need to be a hardcore EA who has strong opinions on ethics & philosophy & etc, or is that the general mood? (I also appreciate that the people who end up reading this will be more "hardcore EA" types who check the forum regularly...) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 16, 2023
EA - Nobody Wants to Read Your Sht: my favorite book of writing advice, condensed for your convenience by jvb
03:30
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nobody Wants to Read Your Sht: my favorite book of writing advice, condensed for your convenience, published by jvb on February 15, 2023 on The Effective Altruism Forum. Nobody Wants to Read Your Sht by Steven Pressfield is my favorite book of writing advice. Its core insight is expressed in the title. The best thing you can do for your writing is to internalize this deep truth. Pressfield did it by writing ad copy. You can’t avoid internalizing that nobody wants to read your shit when you’re writing ads, which everybody hates and nobody ever wants to read. Maybe you don’t have to go write ad copy to understand this; maybe you can just read the book, or just this post. When you understand that nobody wants to read your shit, your mind becomes powerfully concentrated. You begin to understand that writing/reading is, above all, a transaction. The reader donates his time and attention, which are supremely valuable commodities. In return, you the writer must give him something worthy of his gift to you. When you understand that nobody wants to read your shit, you develop empathy. [...] You learn to ask yourself with every sentence and every phrase: Is this interesting? Is it fun or challenging or inventive? Am I giving the reader enough? Is she bored? Is she following where I want to lead her? What should you do about the fact that nobody wants to read your shit? Streamline your message. Be as clear, simple, and easy to understand as you possibly can. Make it fun. Or sexy or interesting or scary or informative. Fun writing saves lives. Apply this insight to all forms of communication. Pressfield wrote this book primarily for fiction writers, who are at the most serious risk of forgetting that nobody wants to read their shit (source: am fiction writer). But the art of empathy applies to all communication, and so do many other elements of fiction: Nonfiction is fiction. If you want your factual history or memoir, your grant proposal or dissertation or TED talk to be powerful and engaging and to hold the reader and audience's attention, you must organize your material as if it were a story and as if it were fiction. [...] What are the universal structural elements of all stories? Hook. Build. Payoff. This is the shape any story must take. A beginning that grabs the listener. A middle that escalates in tension, suspense, and excitement. And an ending that brings it all home with a bang. That's a novel, that's a play, that's a movie. That's a joke, that's a seduction, that's a military campaign. It's also your TED talk, your sales pitch, your Master's thesis, and the 890-page true saga of your great-great-grandmother's life. And your whitepaper, and your grant proposal, and your EA forum post. For this reason, I do recommend going out and grabbing this book, even though much of it concerns fiction. It only takes about an hour to read, because Pressfield knows we don’t want to read his shit. Finally: All clients have one thing in common. They're in love with their product/company/service. In the ad biz, this is called Client's Disease. [...] What the ad person understands that the client does not is that nobody gives a damn about the client or his product. [...] The pros understand that nobody wants to read their shit. 
They will start from that premise and employ all their arts and all their skills to come up with some brilliant stroke that will cut through that indifference. The relevance of this quote to EA writing is left as an exercise to the reader. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 16, 2023
EA - Don't Over-Update On Others' Failures by lincolnq
02:18
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Don't Over-Update On Others' Failures, published by lincolnq on February 15, 2023 on The Effective Altruism Forum. Scenario: You're working hard on an important seeming problem. Maybe you have an idea to cure a specific form of cancer using mRNA. You've been working on the idea for a year or two, and seem to be making slow progress; it is not yet clear whether you will succeed. Then, you read a blog post or a paper about a similar approach by someone else: "Why I Am Not Working On Cures to Cancer Anymore." They failed in their approach and are giving up. You read their postmortem, and there are a few similarities but most of the details differ from your approach. How much should you update that your path will not succeed? Maybe a little: After all, they might have tried the thing you're working on too and just didn't mention it. But not that much, since after all they didn't actually appear to try the specific thing you're doing. Even if they had, execution is often more important than ideas anyway, and maybe their failure was execution related. The same applies for cause prioritization. Someone working on wild animal suffering might read this recent post, and even though they are working on an angle not mentioned, give up. I think in most cases this would be over-updating. Read the post, learn from it, but don't give up just because someone else didn't manage to find an angle. Last example—climate change. 80000 Hours makes clear that they think it is important but "all else equal, we think it's less pressing than our highest priority areas". (source) This does not mean working on climate change is useless, and if you read the post it becomes clear they just don't see a good angle. If you have an angle on climate change, please work on it! Indeed, I will go further and make the point: important advances are made by people who have unique angles that others didn't see. To put it another way from the startup world: "the best ideas look initially like bad ideas". Angles on solving problems are subtle. It's hard to find good ones, and execution matters, so much that even two attempts which superficially have the same thesis could succeed differently. Don't over-update from others' failures. The best work will be done by people who have unique takes on how to make the world better. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 15, 2023
EA - AI alignment researchers may have a comparative advantage in reducing s-risks by Lukas Gloor
18:57
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI alignment researchers may have a comparative advantage in reducing s-risks, published by Lukas Gloor on February 15, 2023 on The Effective Altruism Forum. I believe AI alignment researchers might be uniquely well-positioned to make a difference to s-risks. In particular, I think this of alignment researchers with a keen interest in “macrostrategy.” By that, I mean ones who habitually engage in big-picture thinking related to the most pressing problems (like AI alignment and strategy), form mental models of how the future might unfold, and think through their work’s paths to impact. (There’s also a researcher profile where a person specializes in a specific problem area so much that they no longer have much interest in interdisciplinary work and issues of strategy – those researchers aren’t the target audience of this post.) Of course, having the motivation to work on a specific topic is a significant component of having a comparative advantage (or lack thereof). Whether AI alignment researchers find themselves motivated to invest a portion of their time/attention into s-risk reduction will depend on several factors, including: Their opportunity costs Whether they think the work is sufficiently tractable Whether s-risks matter enough (compared to other practical priorities) given their normative views Whether they agree that they may have a community-wide comparative advantage Further below, I will say a few more things about these bullet points. In short, I believe that, for people with the right set of skills, reducing AI-related s-risks will become sufficiently tractable (if it isn’t already) once we know more about what transformative AI will look like. (The rest depends on individual choices about prioritization.) Summary Suffering risks (or “s-risks”) are risks of events that bring about suffering in cosmically significant amounts. (“Significant” relative to our current expectation over future suffering.) (This post will focus on “directly AI-related s-risks,” as opposed to things like “future humans don't exhibit sufficient concern for other sentient minds.”) Early efforts to research s-risks were motivated in a peculiar way – morally “suffering-focused” EAs started working on s-risks not because they seemed particularly likely or tractable, but because of the theoretical potential for s-risks to vastly overshadow more immediate sources of suffering. Consequently, it seems a priori plausible that the people who’ve prioritized s-risks thus far don’t have much of a comparative advantage for researching object-level interventions against s-risks (apart from their high motivation inspired by their normative views). Indeed, this seems to be the case: I argue below that the most promising (object-level) ways to reduce s-risks often involve reasoning about the architectures or training processes of transformative AI systems, which involves skills that (at least historically) the s-risk community has not been specializing in all that much.[1] Taking a step back, one challenge for s-risk reduction is that s-risks would happen so far ahead in the future that we have only the most brittle of reasons to assume that we can foreseeably affect things for the better. Nonetheless, I believe we can tractably reduce s-risks by focusing on levers that stay identifiable across a broad range of possible futures.
In particular, we can focus on the propensity of agents to preserve themselves and pursue their goals in a wide range of environments. By focusing our efforts on shaping the next generation(s) of influential agents (e.g., our AI successors), we can address some of the most significant risk factors for s-risks.[2] In particular: Install design principles like hyperexistential separation into the goal/decision architectures of transformative AI systems. Shape AI training env...
Feb 15, 2023
EA - Why I No Longer Prioritize Wild Animal Welfare by saulius
07:36
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why I No Longer Prioritize Wild Animal Welfare, published by saulius on February 15, 2023 on The Effective Altruism Forum. This is the story of how I came to see Wild Animal Welfare (WAW) as a less promising cause than I did initially. I summarise three articles I wrote on WAW: ‘Why it’s difficult to find cost-effective WAW interventions we could do now’, ‘Lobbying governments to improve WAW’, and ‘WAW in the far future’. I then draw some more general conclusions. The articles assume some familiarity with WAW ideas. See here or here for an intro to WAW ideas. My initial opinion My first exposure to EA was reading Brian Tomasik’s articles about WAW. I couldn’t believe that despite constantly watching nature documentaries, I had never realized that all this natural suffering is a problem we could try solving. When I became familiar with other EA ideas, I still saw WAW as by far the most promising non-longtermist cause. I thought that EA individuals and organizations continued to focus most of the funding and work on farmed animals because of the status quo bias, risk-aversion, failure to appreciate the scale of WAW issues, misconceptions about WAW, and because they didn’t care about small animals despite evidence that they could be sentient. There seem to be no cost-effective interventions to pursue now In 2021, I was given the task of finding a cost-effective WAW intervention that could be pursued in the next few years. I was surprised by how difficult it was to come up with promising WAW interventions. Also, most ideas were very difficult to evaluate and their impacts were highly uncertain. To my surprise, most WAW researchers that I talked to agreed that we’re unlikely to find WAW interventions that could be as cost-effective as farmed animal welfare interventions within the next few years. It’s just much easier to change conditions and observe consequences for farmed animals because their genetics and environment are controlled by humans. I ended up spending most of my time evaluating interventions to reduce aquatic noise. While I think this is promising compared to other WAW interventions I considered, there are quite many farmed animal interventions that I would prioritize over reducing aquatic noise. I still think there is about a 15% chance that someone will find a direct WAW intervention in the next ten years that is more promising than the marginal farmed animal welfare intervention. I discuss direct short-term WAW interventions in more detail here. Influencing governments Even though WAW work doesn’t seem as promising as farmed animal welfare in terms of immediate impact, some people have suggested that we should do some non-controversial WAW interventions anyway, in order to promote the wild animal welfare field and show that WAW is tractable. But then I questioned: is it tractable? And what is the plan after we do these interventions? I started asking people who work on WAW about the theory of change for the movement. Some people said that the ultimate aim is to influence governments decades into the future to improve WAW on a large scale. But influence them to do what exactly? Any goals I could come up with didn’t seem as promising and unambiguously positive as I expected. The argument for the importance of WAW rests on the enormous numbers of small wild animals. 
But it’s difficult to imagine politicians and voters wanting to spend taxpayer money on improving wild insect welfare, especially in a scope-sensitive way. It also seems very difficult to find out and agree on what interventions are good for overall wild animal welfare when all things are considered. See here for further discussion of the goals of lobbying governments to improve WAW, and obstacles to doing this. Long-term future Others have argued that what matters most in WAW is moral circle exp...
Feb 15, 2023
EA - EA Organization Updates: January 2023 by Lizka
18:33
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Organization Updates: January 2023, published by Lizka on February 14, 2023 on The Effective Altruism Forum. These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity. Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have pressing deadlines. You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Notice that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series. The organizations are in alphabetical order.
Job listings
Consider also exploring jobs listed on “Job listing (open).”
Against Malaria Foundation: Senior Operations Manager (Remote, £50,000 - £60,000)
Effective Institutions Project: Chief of Staff/Chief Operating Officer (Remote, $75,000+)
Charity Entrepreneurship: Nonprofit Founder in Biosecurity and Large-Scale Global Health (Remote Program/ 2 weeks in London/ Stipends + up to $200,000 in seed funding, apply to the Incubation Program by 12 March)
GiveWell: Senior Researcher (Remote / Oakland, CA, $181,400 - $199,800); Senior Research Associate (Remote / Oakland, CA, $127,000 - $139,900); Content Editor (Remote / Oakland, CA, $83,500 - $91,900)
Open Philanthropy: Cause Prioritization Interns (Summer 2023) (Remote, $1,900 / week, apply by 26 February); Senior Program Associate - Forecasting (Remote, $134,526, apply by 5 March); Associated roles in operations and finance (Most roles remote but require US working hours, $84,303 - $104,132)
Rethink Priorities: Expression of Interest - Project lead/co-lead for a Longtermist Incubator (Remote, flexible, $67,000 - $115,000, apply by 28 February)
Organizational updates
80,000 Hours
EVF’s recent update - announcing interim CEOs of EVF - highlights changes to 80,000 Hours’ organisation structure. 80,000 Hours’ CEO, Howie Lempel, moved to Interim CEO of Effective Ventures Foundation in the UK in November, and Brenton Mayer has been acting as Interim CEO of 80k in his absence. This month on The 80,000 Hours Podcast, Rob Wiblin interviewed Athena Aktipis on cancer, cooperation, and the apocalypse. 80,000 Hours also shared several blog posts, including Michelle Hutchinson’s writing on My thoughts on parenting and having an impactful career.
Anima International
Anima International’s Bulgarian team, Nevidimi Zhivotni, recently released a whistleblower video interviewing a former fur farmworker. As a result, part of the video was shown on Bulgaria’s national evening news programme and a member of the team was interviewed live on air. This represents the first time that fur has been covered in depth as a topic on prime-time television in the country. Further north in Poland, Anima International’s Polish team Otwarte Klatki launched one of its biggest projects as part of the Fur Free Europe campaign. In the video, we see celebrities reacting to being shown footage from fur farms. It’s worth remembering that Poland is one of the world’s top fur producers. Finally, the team in Norway is gearing up for the Anima Advocacy Conference (Dyrevernkonferansen) 2023. The conference is dedicated to effective animal advocacy and was the first with such a focus when it launched a few years ago.
For more information and to get tickets, you can go here.
Berkeley Existential Risk Initiative (BERI)
Elizabeth Cooper joins BERI as Deputy Director starting March 1. She’ll help run BERI’s university collaborations program, as well as launch new BERI programs in the future.
Centre for Effective Altruism
Forum team
You can see an update from the Forum team here: “Community” posts have their own section, subforums are closing, and more (F...
Feb 15, 2023
EA - Philanthropy to the Right of Boom [Founders Pledge] by christian.r
49:33
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Philanthropy to the Right of Boom [Founders Pledge], published by christian.r on February 14, 2023 on The Effective Altruism Forum. Background and Acknowledgements: This write-up represents part of an ongoing Founders Pledge research project to understand the landscape of nuclear risk and philanthropic support of nuclear risk reduction measures. It is in some respects a work in progress and can be viewed as a Google Document here and on Founders Pledge's website here. With thanks to James Acton, Conor Barnes, Tom Barnes, Patty-Jane Geller, Matthew Gentzel, Matt Lerner, Jeffrey Lewis, Ankit Panda, Andrew Reddie, and Carl Robichaud for reviewing this document and for their thoughtful comments and suggestions. “The Nuclear Equivalent of Mosquito Nets” In philanthropy, the term “impact multipliers” refers to features of the world that make one funding opportunity relatively more effective than another. Stacking these multipliers makes effectiveness a “conjunction of multipliers;” understanding this conjunction can in turn help guide philanthropists seeking to maximize impact under high uncertainty. Not all impact multipliers are created equal, however. To systematically engage in effective giving, philanthropists must understand the largest impact multipliers — “critical multipliers” — those features that most dramatically cleave more effective interventions from less effective interventions. In global health and development, for example, one critical multiplier is simply to focus on the world’s poorest people. Because of large inequalities in wealth and the decreasing marginal utility of money, helping people living in extreme poverty rather than people in the Global North is a critical multiplier that winnows the field of possible interventions more than many other possible multipliers. Additional considerations — the prevalence of mosquito-borne illnesses, the low cost and scalability of bednet distribution, and more — ultimately point philanthropists in global health and development to one of the most effective interventions to reduce suffering in the near term: funding the distribution of insecticide-treated bednets. This write-up represents an attempt to identify a defensible critical multiplier in nuclear philanthropy, and potentially to move one step closer to finding “the nuclear equivalent of mosquito nets.” Impact Multipliers in Nuclear Philanthropy There are many potential impact multipliers in nuclear philanthropy. For example, focusing on states with large nuclear arsenals may be more impactful than focusing on nuclear terrorism. Nuclear terrorism would be horrific and a single attack in a city (e.g. with a dirty bomb) could kill thousands of people, injure many more, and cause long-lasting damage to the physical and mental health of millions. All-out nuclear war between the United States and Russia, however, would be many times worse. Hundreds of millions of people would likely die from the direct effects of a war. If we believe nuclear winter modeling, moreover, there may be many more deaths from climate effects and famine. In the worst case, civilization could collapse. Simplifying these effects, suppose for the sake of argument that a nuclear terrorist attack could kill 100,000 people, and an all-out nuclear war could kill 1 billion people. 
All else equal, in this scenario it would be 10,000 times more effective to focus on preventing all-out war than it is to focus on nuclear terrorism. Generalizing this pattern, philanthropists ought to prioritize the largest nuclear wars (again, all else equal) when thinking about additional resources at the margin. This can be operationalized with real numbers — nuclear arsenal size, military spending, and other measures can serve as proxy variables for the severity of nuclear war, yielding rough multipliers. This w...
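To make the arithmetic in the passage above concrete, here is a minimal Python sketch of the rough scale multiplier the post describes. The fatality figures are the post's own illustrative assumptions, not estimates of mine, and the proxy-based refinement mentioned at the end is only gestured at here:

```python
# Illustrative fatality figures taken from the post's hypothetical scenario.
terrorism_deaths = 100_000            # a single nuclear terrorist attack (e.g. a dirty bomb in a city)
all_out_war_deaths = 1_000_000_000    # all-out nuclear war between the United States and Russia

# All else equal, the impact multiplier is just the ratio of the two severities.
impact_multiplier = all_out_war_deaths / terrorism_deaths
print(f"Preventing all-out war is ~{impact_multiplier:,.0f}x more impactful per event averted")
# -> Preventing all-out war is ~10,000x more impactful per event averted
```

In practice, as the post notes, the severity side of this ratio would be proxied by observable variables such as arsenal size and military spending rather than assumed outright.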
Feb 14, 2023
EA - Diversity takes by quinn
10:30
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Diversity takes, published by quinn on February 14, 2023 on The Effective Altruism Forum. Richard Ren gets points for the part of diversity's value prop that has to do with frame disruption; I also realized a merit of standpoint epistemology that I'd previously neglected while I was ranting about something to him. Written quickly over not at all, based on my assessment of the importance of not letting the chase after that next karma hit prevent me from doing actual work, but also wanting to permanently communicate something that I've ranted about on discord and meatspace numerous times. Ozy really crushed it recently, and in this post I'm kind of expanding on their takedown of Doing EA Better's homogeneity takes. A premise of this post is that diversity is separable into demographic and intellectual parts. Demographic diversity is when you enumerate all the sources of variance in things like race, gender, faith, birthplace, et al., and intellectual diversity is where you talk about sources of variance in ideas, philosophy, ideology, priors, et al. In this post, I'm going to celebrate intellectual exclusion, then explore an objection to see how much it weakens my argument. I'm going to be mostly boring and agreeable about EA's room for improvement on demographic diversity, but I'm going to distinguish between demographic diversity's true value prop and its false value prop. The false value prop will lead me to highlight standpoint epistemology, talk about why I think rejection of it in general is deeply EA but also defend keeping it in the Overton window, and outline what I think are the right and wrong ways to use it. I will distinguish solidarity from altruism as the two basic strategies for improving the world, and claim that EA oughtn't try too hard to cultivate solidarity.
Definition 1: standpoint epistemology
Twice, philosophers have informed me that standpoint epistemology isn't a real tradition. That it has signaling functions on a kind of ideology level in and between philosophy departments, ready to be used as a cudgel or flag, but there's no granular characterization of what it is and how it works that epistemologists agree on. This is probably cultural: critical theorists and analytical philosophers have different standards of what it means to "characterize" an "epistemology", but I don't in fact care about philosophy department politics; I care about a coherent phenomenon that I've observed in the things people say. So, without any appeal to the flowery language of the credentialed, and assuming the anticipation-constraint definition of "belief" as a premise, I will define standpoint epistemology as expecting people with skin in a game to know more about that game than someone who's not directly exposed.
Take 1: poor people are cool and smart...
I'm singling out poor people instead of some other niche group because I spent several years under $10k/year. (I was a leftist who cared a lot about art, which means I was what leftists and artists affectionately call "downwardly mobile", not like earning similarly to my parents or anything close to that). I think those years directly cultivated a model of how the world works, and separately gave me a permanent ability to see between the staples of society all the resilience, bravery, and pain that exists out there.
Here's a factoid that some of you might not know, in spite of eating in restaurants a lot: back-of-house is actually operationally marvelous. Kitchens are like orchestras. I sometimes tell people that they should pick up some back of house shifts just so they can be a part of it. There's kind of a scope sensitivity thing going on where you can abstractly communicate about single mothers relying on the bus to get to minimum wage jobs, but it really hits different when you've worked with them. Here's anot...
Feb 14, 2023
EA - Plans for investigating and improving the experience of women, non-binary and trans people in EA by Catherine Low
02:17
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Plans for investigating and improving the experience of women, non-binary and trans people in EA, published by Catherine Low on February 14, 2023 on The Effective Altruism Forum. I’m Catherine from CEA’s Community Health and Special Projects Team. I’ve been frustrated and angered by some of the experiences some women and gender minorities have had in this community, ranging from feeling uncomfortable being in an extreme minority at a meeting through to sexual harassment and much worse. And I’ve been saddened by the lost impact that resulted from these experiences. I’ve tried to make things a bit better (including via co-founding Magnify Mentoring before I started at CEA), and I hope to do more this year. In December 2022, after a couple of very sad posts by women on the EA Forum, Anu Oak and I started working on a project to get a better understanding of the experiences of women and gender minorities in the EA community. Łukasz Grabowski is now also helping out. Hopefully this information will help us form effective strategies to improve the EA movement. I don’t really know what we’re going to find, and I’m very uncertain about what actions we’ll want to take at the end of this. We’re open to the possibility that things are really bad and that improving the experiences of women and gender minorities should be a major priority for our team. But we’re also open to finding out that things aren’t – on the whole – all that bad, or aren’t all that tractable, and there are no significant changes we want to prioritise. We are still in the early stages of our project. The things we are doing now are: Gathering together and analysing existing data (EA Survey data, EAG(x) event feedback forms, incoming reports to the Community Health team, data from EA community subgroups, etc). Talking to others in the community who are running related projects, or who have relevant expertise. Planning our next steps. If you have existing data you think would be helpful and that you’d like to share please get in touch by emailing Anu on anubhuti.oak@centreforeffectivealtruism.org. If you’re running a related project, feel free to get in touch if you’d like to explore coordinating in some way (but please don’t feel obligated to). Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 14, 2023
EA - How meat-free meal selection varies with menu options: an exploration by Sagar K Shah
03:28
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How meat-free meal selection varies with menu options: an exploration, published by Sagar K Shah on February 14, 2023 on The Effective Altruism Forum.
Summary
Increasing consumption of meat-free meals can help reduce demand for factory farmed animal products and anthropogenic greenhouse gas emissions. But relatively little research has been done on how meat-free meal selection is influenced by menu options, such as the availability of meat-analogue options or different types of meat. We conducted a preregistered reanalysis of data from a series of hypothetical discrete choice experiments from Brachem et al. (2019). We explored how meat-free meal selection by 1348 respondents (mostly German students) varied across 26 different menus, depending on the number of meat-free options and whether any options contained fish/poultry meat or meat-analogues. Menus consisted of five options (of which two or three were meat-free) and were composed using images and descriptions of actual dishes available at restaurants at the University of Göttingen. While our work was motivated by causal hypotheses, our reanalysis was limited to detecting correlations and not causal effects. Specific limitations include:
Examining hypotheses that the original study was not designed to evaluate.
De facto observational design, despite blinded randomization in the original study.
Possible non-random correlations between the presence of poultry/fish or meat-analogue menu options and the appealingness of other dishes.
Analysis of self-reported, hypothetical meal preferences, rather than actual behavior.
Meat-analogues in menus not reflecting prominent products attracting significant financial investment.
Notwithstanding, our reanalysis found meat-free meal selection odds were:
higher among menus with an extra meat-free option (odds ratio of 2.3, 90% CI [1.8 to 3.0]).
lower among menus featuring poultry or fish options (odds ratio of 0.7, 90% CI [0.6 to 0.9]).
not significantly associated with the presence of meat-analogues on a menu (odds ratio of 1.2, 90% CI [0.9 to 1.6]) under our preregistered meat-analogue definition. Estimates varied across analogue definitions, but were never significantly different from 1.
Despite the many limitations, these findings might slightly update our beliefs to the extent we believe correlations would be expected if causation were occurring. The poultry/fish option correlation highlights the potential for welfare losses from substitution towards small-bodied animals from menu changes as well as shifts in consumer preferences. Given the study didn’t feature very prominent meat analogues, the absence of a correlation in this reanalysis cannot credibly be used to refute a belief that high-quality analogues play an important role in reducing meat consumption. But when coupled with the strong correlation on an additional meat-free option, we think the reanalysis highlights the need for further research on the most effective ways to encourage selection of meat-free meals. It remains an open question whether, at the margin, it would be more cost-effective to advocate for more menu options featuring meat-analogues specifically, or for more meat-free options of any kind. You can read the full post on the Rethink Priorities website, and also see the pre-print and code via the Open Science Framework. Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
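As a rough aid to interpreting the odds ratios quoted in this summary, here is a small Python sketch (my own illustration, not part of the Rethink Priorities reanalysis) showing how an odds ratio maps onto a change in selection probability, assuming a hypothetical baseline meat-free selection rate of 40%:

```python
def apply_odds_ratio(baseline_prob: float, odds_ratio: float) -> float:
    """Scale the baseline odds by an odds ratio and convert back to a probability."""
    baseline_odds = baseline_prob / (1.0 - baseline_prob)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1.0 + new_odds)

baseline = 0.40  # hypothetical share of respondents choosing a meat-free meal

print(f"Extra meat-free option (OR 2.3):   {apply_odds_ratio(baseline, 2.3):.0%}")  # ~61%
print(f"Poultry/fish on the menu (OR 0.7): {apply_odds_ratio(baseline, 0.7):.0%}")  # ~32%
```

Note that the same odds ratio implies different absolute changes in probability at different baselines, so these percentages are purely illustrative.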
Feb 14, 2023
EA - 4 ways to think about democratizing AI [GovAI Linkpost] by Akash
00:29
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 4 ways to think about democratizing AI [GovAI Linkpost], published by Akash on February 13, 2023 on The Effective Altruism Forum. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 14, 2023
EA - New EA Podcast: Critiques of EA by Nick Anyos
01:14
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New EA Podcast: Critiques of EA, published by Nick Anyos on February 13, 2023 on The Effective Altruism Forum. Emotional Status: I started working on this project before the FTX collapse and all subsequent controversies and drama. I notice an internal sense that I am "piling on" or "kicking EA while it's down." This isn't my intention, and I understand if a person reading this feels burned out on EA criticisms and would rather focus on object level forum posts right now. I have just released the first three episodes of a new interview podcast on criticisms of EA: Democratizing Risk and EA with Carla Zoe Cremer and Luke Kemp Expected Value and Critical Rationalism with Vaden Masrani and Ben Chugg Is EA an Ideology? with James Fodor I am in the process of contacting potential guests for future episodes, and would love any suggestions on who I should interview next. Here is an anonymous feedback form that you can use to tell me anything you don't want to write in a comment. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 14, 2023
EA - Valentine's Day fundraiser for FEM by GraceAdams
01:06
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Valentine's Day fundraiser for FEM, published by GraceAdams on February 14, 2023 on The Effective Altruism Forum. Epistemic status: feeling the love
This Valentine's Day, I'm donating the cost of a date night to Family Empowerment Media (FEM) and would love you to join me! 🥰 What better way to celebrate love than promoting family planning to prevent unintended pregnancies? Family Empowerment Media (FEM) is an evidence-driven nonprofit committed to eliminating maternal deaths and other health burdens from unintended pregnancies. FEM produces and airs radio-based social and behavioural change campaigns on family planning to empower women and men who want to delay or prevent pregnancy to consistently use contraception.
Read more about FEM
Visit FEM's website
Join me in donating to FEM here:
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Feb 14, 2023
EA - Elements of Rationalist Discourse by RobBensinger
20:38
Link to original article

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elements of Rationalist Discourse, published by RobBensinger on February 14, 2023 on The Effective Altruism Forum. I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's). Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'all's perspectives. The basics of rationalist discourse, as I understand them:
1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes. Try not to “win” arguments using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all: Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back. Instead, treat arguments like scouts: tools for better understanding reality.
2. Non-Violence: The response to "argument" is "counter-argument". The response to arguments is never bullets. The response to arguments is never doxxing, or death threats, or coercion.
3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models. Additionally, try to avoid things that will (on average) mislead readers as a side-effect of some other thing you're trying to do. Where possible, avoid saying things that you expect to lower the net belief accuracy of the average person you're communicating with; or failing that, at least flag that you're worried about this happening. As a corolla